How SMBs impacted by natural disasters gain new credit thanks to a finance matchmaker app

The next BriefingsDirect digital business innovation panel discussion explores how a finance matchmaker application assists small businesses impacted by natural disasters in the United States. 

By leveraging the data and trust inherent in established business networks, Apparent Financing by SAP creates digital handshakes between lenders and businesses in urgent need of working capital financing.

The solution’s participants — all in the SAP Ariba Network — are putting the innovative model to good use by initially assisting businesses impacted directly by natural disasters such as forest fires and hurricanes, or indirectly through the supply chain disruptions they cause.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To learn how data-driven supplier ecosystems enable new kinds of matchmaker finance relationships that work rapidly and at low risk, we are joined by our panel: Vishal Shah, Co-Founder and General Manager of Apparent Financing by SAP; Alan Cohen, Senior Vice President and General Manager of Payments and Financing at SAP Ariba; and Winslow Garnier, President of Garnier Group Technology Solutions, LLC in San Diego, California. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Vishal, what’s unique about this point in time that allows organizations like Apparent Financing to play matchmaker between lenders and businesses?

Shah: The historical problem that limited small businesses from accessing financial services with ease was a lack of trust and transparency. It’s also popularly known as the information asymmetry problem.

At this point in time there are three emerging trends and forces that are transforming the small business finance industry.

The first is the digitalization of small businesses, driven by digital bookkeeping systems that are becoming more affordable and accessible — even to the smallest of businesses globally.

The second force is financial industry innovation. The financial crisis of 2008 actually unlocked new opportunities and gave rise to a new industry called FinTech. This industry’s strong focus on delivering a frictionless customer experience is the key enabler.

And the third force is technological innovation. This includes cloud computing, mobility, and application programming interfaces (APIs). They combine to make it economically feasible to gain access to financial information about small businesses that is stored in today’s digital bookkeeping systems and e-commerce platforms. It’s the confluence of these three forces that solves the information asymmetry problem, leading to both reduced risk and lower cost to serve small businesses.

Gardner: Alan Cohen, why is this new business climate for small- to medium-sized businesses (SMBs) a perfect fit for something like the SAP Ariba Network? Tell us how your business model and business network are helping Apparent Financing with its task.

Cohen: Think about it in two ways. First, think differently about combining the physical and the financial supply chains. Historically, the Ariba Network has been focused on connecting buyers with their suppliers. Now we are taking the next step in this evolution to better connect the physical with the financial supply chain to provide choice and value to suppliers about access to capital. 

The second piece of it is in leveraging the data. There’s a ton of excitement in this world for artificial intelligence (AI) and machine learning (ML), and I am a big proponent of all of that. These are going to be awesome technologies that will help society and businesses as they evolve. It’s super important to keep in mind that the strength of the Ariba Network is not just its size — $2.1 trillion in annual spend, 3.4 million buyers and suppliers — it’s in the data. The intelligence drawn from this transactional data will enable lenders to make risk-adjusted lending decisions. 

And that real data value goes beyond just traditional lending. It also helps lenders assess risk differently. This will help transform how lending is done to small and medium-sized businesses as time evolves. 

Gardner: Some of these trends have been in the works for 20 or 30 years but are now coming together in a way that can help real people benefit in real situations. Winslow, please tell us about Garnier Group Technology Solutions and how you have been able to benefit from this new confluence of financing, data, and business platforms.

Rapid recovery resources

Garnier: Garnier Group Technology Solutions provides intrusion detection, installation services, security cameras, and Wi-Fi installation primarily for corporations and municipalities. We are a supplier and an installer with consistent requirements for working capital to keep our business functioning correctly. 

A major challenge showed up for us in late 2017 when the Southern California fires took place. We had already ordered product for several installation sites. Because of the fires, those sites actually burned down. The time needed to recover from already having spent the capital, plus the fact that the business was no longer coming our way, created a real need for us. 

We previously looked at working capital lines and other resources. The challenge, though, is that it is fairly complex. Our company is really good at what we do, but we are not good at finding financing and taking the time to interview multiple banks, multiple lenders. The process to just find the right type of lender to work with us — that in itself could take four to six months. 

In this case, we did not have the time or the manpower to do the due diligence necessary to make that all happen for us. Also, on a day-to-day basis, in dealing with large corporations, we can hope to get paid in 30 days, but in reality that doesn’t happen. But we still need to pay our suppliers to maintain our credit terms and get delivery when required by making sure they get paid on the terms that we have agreed to. 

We were fortunate to then be introduced to Vishal [Shah at Apparent Financing]. From that point on, he turned into a one-stop shop for us. He took what we had and worked with it, under the SAP guidance. That helped us to have confidence that we were working with a credible source, and that they would deliver on what we agreed to. 

Gardner: We see that SMBs can be easily disrupted, they are vulnerable, and they have lag times between when they can get paid and when they have to pay their own suppliers. And they make up a huge part of the overall economy.

Vishal, this seems like a big market opportunity and addressable market. Yet traditional finance organizations mostly ignore this segment. Why is that? Why has bringing finance options to companies like Garnier Group been problematic in the past? 

Bank shies, Network tries

Shah: Going back to early 2008, when the global financial crisis started, there was a lot of supply in the market, and small businesses did not have to struggle as much to get access to capital.

Since then, banks have been faced with increasing regulatory burdens, as well as the fact that the cost to serve SMBs became much larger. Therefore the mainstream banks have shied away from lending to and serving this market. That has been one of the big factors.

The second is that banks have not truly embraced the power of technology. They haven’t focused on delivering customer-centric propositions. Most of the banks today are very product-centric organizations, and very siloed in their approach to serving customers.

The fundamental problems were, one, the structure of the banks and the way they were incentivized to serve this market. And secondly, the turn of events that happened post the financial crisis, which effectively resulted in the traditional lenders just backing out from this market, significantly reducing the supply side of the equation.

Gardner: Alan, it’s a great opportunity to show how this model can work by coming to the rescue of SMB organizations impacted by natural disasters. But it seems to me that this is a bellwether for a future wave of business services because of the transparency, data-driven intelligence, security, and mission-critical nature of SAP and SAP Ariba’s networks.

Do you see this as I do, as an opening inning in a longer game? Should we be thinking newly about how business networks and data-driven intelligence foster entirely new markets and new business models?

SMB access to financing evolves

Cohen: Absolutely. I see this as the early stages of an evolution. There are a few reasons. One is ease. Winslow talked about it. It can be very hard for small businesses to access different banks or lenders to get financing. They need an easier way to do it. We have seen transformation in consumer banking, but that transformation has not followed through into business banking. So I think one opportunity is in bringing ease to the process transformation.

Another piece is trust. What I mean by that is the data from SAP and SAP Ariba is high-quality data that lenders can trust. And being able to trust that information is a big part of this process.

Finally, like with any network, being able to connect businesses with lenders has to evolve — just as Ariba has connected buyers with suppliers to transact. This is a natural evolution of the SAP Ariba Network. 

I am very excited. And while we are still early in a longer journey, this process will fundamentally change how business banking is done.

Gardner: Winslow, you had an hour of need, certainly due to circumstances beyond your control. You heard from Vishal. What happened next? How were they able to match you up with financing, and what was the outcome?

Garnier: The really unique thing here is that we were able to submit a single application and receive offers from more than one lender. We decided it made sense to select Fundation as the lender of choice. All the lenders were competitive, but Fundation had a couple of features that were specific to our business and worked better for us.

I have to tell you, at first I was skeptical that we would get this done soon enough. At the same time, we had confidence — having worked through the SAP Ariba Network previously. Once we submitted the application, we stopped looking for other resources because we felt that this would work for us. Fortunately, it did end up that way.

Within 30 days we were talking with lenders. We received a term sheet to understand what would be available for us. That gave us time internally to make decisions on what would work best. We closed on the transaction and it’s been a good working relationship between us and Fundation ever since.

Gardner: Is this going to be more than a one-shot deal, a new business operating model for you all? Are you going to be able to take a revolving line of credit and thereby have a more secure approach to business? This may even allow you to increase the risk you are willing to take to find new clients. So is this a one-shot band-aid — or is this something that has changed your business model?

Not just reparations, relationships 

Garnier: Oh, absolutely. Having a revolving line of credit has become a staple for us because it’s a way to maximize our cash flow within our business. We can add clients now and take on new jobs that we might otherwise have had to push out later in time.

We are able to deliver our services faster at this point in time. And so it is the absolute right solution for what we needed and what we will continue to use over time.

Gardner: Vishal, it’s clear that organizations like Garnier Group are benefiting from this new model. It’s clear that SAP and SAP Ariba have the platform, the data, and the integrity and trust to deliver on it.

But another big component here is to make sure that the financing organizations are comfortable, eager, and are gaining the right information to make their lending decisions. Tell us about that side of the equation. How do organizations like Fundation and others view this, and how do you keep them eager to find new credit opportunities?

Shah: If you think of Fundation, they are not a typical bank. They are willing to look at any e-commerce platform and any technology service provider as a new distribution channel through which they can access new markets and a new customer base.

Beyond that, they are using these channels as a way to market their own products and solutions. They have much bigger reasons to look at these ecosystems that we have developed over the years. 

In my view, traditional banks and lending institutions look at businesses like Garnier Group using what I call the rearview mirror. What I mean by that is lenders mostly base their lending decisions or credit decisions by obtaining information from credit bureaus, which they believe is an indicator of past performance. And that good indicator of their past performance is also taken as an indicator of good future performance, which, yes, does work in some cases — but not in all. 

By working with us, lenders like Fundation can not only look at traditional data sources like credit bureaus, they are able to also assess the financial health and the risk of lending to a business through alternative data sources like the one Alan mentioned, which is the SAP Ariba supply chain data. This provides them an increased degree of confidence before they make prudent lending decisions. 

The data in itself doesn’t create the value. When processed in an appropriate manner — and when we learn from the insights the data provides — our lending partner gains a precise view of the historical business performance and a realistic view of the future cash flow position of a small business. That is an incredibly powerful proposition for our lending partners to comfortably and confidently lend to businesses such as Garnier Group.
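
To make the idea concrete, here is a minimal, purely illustrative sketch of how a lender might blend a backward-looking bureau score with forward-looking network signals. It is not Apparent Financing's or Fundation's actual model; all signal names, weights, and thresholds are assumptions for the example.

```python
# Illustrative sketch only -- not the actual Apparent Financing or Fundation scoring model.
# All signal names, weights, and thresholds are assumptions for the example.
from dataclasses import dataclass

@dataclass
class SupplierSignals:
    bureau_score: float           # traditional credit bureau score, 300-850 (the "rearview mirror")
    on_time_payment_rate: float   # share of network invoices paid on time, 0-1
    po_volume_trend: float        # year-over-year purchase-order growth, e.g. +0.12 = 12%

def blended_risk_score(s: SupplierSignals) -> float:
    """Blend backward-looking bureau data with forward-looking supply chain signals."""
    bureau = (s.bureau_score - 300) / 550                  # normalize bureau score to 0-1
    trend = max(min(s.po_volume_trend + 0.5, 1.0), 0.0)   # clamp growth trend into 0-1
    network = 0.6 * s.on_time_payment_rate + 0.4 * trend
    return round(0.4 * bureau + 0.6 * network, 3)          # weight network data more heavily

supplier = SupplierSignals(bureau_score=690, on_time_payment_rate=0.92, po_volume_trend=0.12)
print(blended_risk_score(supplier))  # ~0.76, compared against an assumed lender cut-off
```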

Gardner: This appears to be a win, win, win. So far, everybody seems to be benefiting. Yet this could not have happened until the innovation of the model was recognized, and then executed on.

So how did this come about, Alan? How did such payments and financing innovation get started? SAP.iO Venture Studio got involved with Apparent Financing. How did SAP, SAP Ariba, and Apparent Financing come together to allow this sort of innovation to take place — and not just remain in theory?

Data serves to simplify commerce 

Cohen: Like anything, it begins with the marketplace and looking at a problem. At the end of the day, financing is very inefficient and expensive for both suppliers and lenders. 

From a supplier perspective, we saw this as an overly complex process. And it’s not always the most competitive because people don’t have the time. From a lender perspective, originating loans and mitigating risk are very important. Yet this process hasn’t gone through a transformation.

We looked at it all and said, “Gosh, how can we better leverage the Ariba Network and the data involved in it to help solve this problem?”

SAP.iO is the venture arm of SAP that incubates new businesses. About a year-and-a-half ago, we began bringing this to market to challenge how things had been done and to open up new opportunities. It’s a very innovative approach to challenge the status quo, to get businesses and lenders to think and look at this differently and seize opportunities.

And if you think about what the SAP Ariba Network is, we run commerce. And we want the lenders to fund commerce. We are simply helping to bring these two together, leveraging some incredible data insights along with the security and trust of the SAP and SAP Ariba brands.

Gardner: Of course, it’s important to have the underlying infrastructure in place to provide such data availability, trust, integrity, and support of the mission-critical nature. But in more and more cases nowadays, the user experience and simplicity elements are terribly important.

Winslow, when it came to how you interacted with the process, did you find it simple? Did you find it direct? How important was that for you as an SMB to be able to take advantage of this?

Garnier: We found it very straightforward. It didn’t require us going outside of the data we have internally. We didn’t have to bring in our outside accounting firm or a legal firm to begin the process. We were able to interface by e-mail and simple phone calls. It was so simple. I’m still surprised that, based on our previous experiences, we were able to get this to happen as quickly as it did.

Gardner: Vishal, how do you account for the ability to make this simple and direct for both sides of the equation? Is there something about the investments SAP has made over the years in technology and the importance of the user experience?

How do you attribute getting from what could be a very complex process to something that’s boiled down to its essential simplicity?

Transparent transactions build trust 

Shah: A lot of people misunderstand the user experience and correlate it with developing a very nice front end, creating an online experience, and making it seamless and easy to use. I think that is only a part of the truth, and part of the story.

What goes on behind that nice-looking user interface is really eliminating what I call the friction points in a customer’s journey. And a lot of those friction points are actually introduced because of manual processes behind those nice-looking screens.

Secondly, there are a lot of exceptions — business exceptions — when you’re trying to facilitate a complex transaction like a financial credit transaction.

You must overcome these challenges. You must ensure that customers and borrowers have a seamless customer experience. We provide a transparent process, accessible to them so they know at every single point in time where they stand with their credit process: Are they approved, are they declined, are they waiting on certain decisions, or are they negotiating the deal with the partner?

That is one element: we bring an increased level of transparency and openness to the process. Traditionally these services have been opaque. Historically, businesses submit applications to banks and literally wait for weeks to get a decision. They don’t know what’s going on inside the four walls of the bank for those many weeks.

The second thing we did is to help our partners understand the exceptions that they traditionally encounter in their credit decision process. As a result, they can reduce those manual exceptions or completely eliminate them with the help of technology.

Again, the insights we generated from the data that we already had about the businesses helped us overcome those challenges and overcome the friction points in the entire interaction on both sides.

Gardner: Alan Cohen, where do you go next with this particular program around financing? Is this a bellwether for other types of business services that depend on the platform, the data integrity, and the simplicity of the process?

Win-win lending scenarios 

Cohen: Simplicity is, I think, first and foremost. Vishal and Winslow talked about it. Just as you can get a consumer loan online, it should be just as simple for a business to get access to capital online. Make that a pleasurable process, not a complex process that takes a long time. Simplicity cannot be underrated to help drive this change.

When it comes to the data, we’ve only scratched the surface of what can be done. We talked about risk-adjusted lending decisions based on transactional information. What we’ll see more of is price elasticity, around both risk and demand, coming into play as banks better manage their portfolios — not with theoretical information but through practical information. They’ll have better insights to manage their portfolios.

Let’s not lose sight of what we’re trying to accomplish: Broaden the capital availability to the community of businesses. There are so many different types of lending scenarios that could happen. You’ll see more of those scenarios become available to businesses over time in a much more efficient, cost-effective, and economic manner.

It’s not just a shifting of cost. It will be an elimination of cost — where both parties win in this process.

Gardner: Winslow, for other SMBs that face credit issues or didn’t pursue revolving credit because of the complexity, what advice can you offer? What recommendations might you have for organizations to rethink their financing now that there are processes like what Apparent Financing provides?

Garnier: If I take a step back, we made the classic mistake of not putting a bank line of credit in place prior to this event happening for us. The challenge was the time needed for the vetting process. We would rather pursue new clients than spend our time having to work with the different lenders.

Financing really is something that I think most small businesses should pursue, but I highly recommend they pursue it under something like what Apparent Financing has arranged. That’s because of the simplicity, the one-stop portal to find what you are looking for, the efficiency of the process, and the quality of the lenders.

All the folks that we ended up speaking to were very capable, and they wanted to do business with us, which was really outstanding. It was very different from the pushback and the, “We’ll let you know within the next 30 to 60 days or so.” That is very challenging.

We have not only added new clients since we put in the revolving credit, but our DUNS score has improved, and our credit rating has continued to improve. It’s low risk for an SMB to look at a platform like Apparent Financing to see if this could be useful to them. I highly recommend it. It’s been nothing but a positive experience for us.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: SAP Ariba.


The Open Group panel explores ways to help smart cities initiatives overcome public sector obstacles

Smart city graphic (Credit: Wikimedia Commons)

The next BriefingsDirect thought leadership panel discussion focuses on how The Open Group is spearheading ways to make smart cities initiatives more effective.

Many of the latest technologies — such as Internet of Things (IoT) platforms, big data analytics, and cloud computing — are making data-driven and efficiency-focused digital transformation more powerful. But exploiting these advances to improve municipal services for cities and urban government agencies faces unique obstacles. Challenges range from a lack of common data sharing frameworks, to immature governance over multi-agency projects, to the need to find investment funding amid tight public sector budgets.

The good news is that architectural framework methods, extended enterprise knowledge sharing, and common specifying and purchasing approaches have solved many similar issues in other domains.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

BriefingsDirect recently sat down with a panel to explore how The Open Group is ambitiously seeking to improve the impact of smart cities initiatives by implementing what works organizationally among the most complex projects.

The panel consists of Dr. Chris Harding, Chief Executive Officer at Lacibus; Dr. Pallab Saha, Chief Architect at The Open Group; Don Brancato, Chief Strategy Architect at Boeing; Don Sunderland, Deputy Commissioner, Data Management and Integration, New York City Department of IT and Telecommunications; and Dr. Anders Lisdorf, Enterprise Architect for Data Services for the City of New York. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Chris, why are urban and regional government projects different from other complex digital transformation initiatives?

Harding: Municipal projects have both differences and similarities compared with corporate enterprise projects. The most fundamental difference is in the motivation. If you are in a commercial enterprise, your bottom line motivation is money, to make a profit and a return on investment for the shareholders. If you are in a municipality, your chief driving force should be the good of the citizens — and money is just a means to achieving that end.

This is bound to affect the ways one approaches problems and solves problems. A lot of the underlying issues are the same as corporate enterprises face.

Bottom-up blueprint approach

Brancato: Within big companies, we expect that the chief executive officer (CEO) leads from the top of a hierarchy that looks like a triangle. This CEO can do a cause-and-effect analysis by looking at instrumentation, global markets, drivers, and so on to affect strategy. What the organization then does flows top-down.

In a city, often it’s the voters, the masses of people, who empower the leaders. And the triangle goes upside down. The flat part of the triangle is now on the top. This is where the voters are. And so it’s not simply making the city a mirror of our big corporations. We have to deliver value differently.

There are three levels to that. One is instrumentation, so installing sensors and delivering data. Second is data crunching, the ability to turn the data into meaningful information. And lastly, urban informatics that tie back to the voters, who then keep the leaders in power. We have to observe these in order to understand the smart city.

Saha: Two things make smart city projects more complex. First, typically large countries have multilevel governments. One at the federal level, another at a provincial or state level, and then city-level government, too.

This creates complexity because cities have to align to the state they belong to, and also to the national level. Digital transformation initiatives and architecture-led initiatives need to help.

Secondly, in many countries around the world, cities are typically headed by mayors who have merely ceremonial positions. They have very little authority in how the city runs, because the city may belong to a state and the state might have a chief minister or a premier, for example. And at the national level, you could have a president or a prime minister. This overall governance hierarchy needs to be factored in when smart city projects are undertaken.

These two factors bring in complexity and differentiation in how smart city projects are planned and implemented.

Sunderland: I agree with everything that’s been said so far. In the particular case of New York City — and with a lot of cities in the US — cities are fairly autonomous. They aren’t bound to the states. They have an opportunity to go in the direction they set.

The problem is, of course, the idea of long-term planning in a political context. Corporations can choose to create multiyear plans and depend on the scale of the products they procure. But within cities, there is a forced changeover of management every few years. Sometimes it’s difficult to implement a meaningful long-term approach. So, they have to be more reactive.

Create demand to drive demand

Greater continuity can nonetheless come from creating ongoing demand around the services that smart cities produce. Under [former New York City mayor] Michael Bloomberg, for example, when he launched 311 and nyc.gov, he had a basic philosophy, which was: you should implement change that can’t be undone.

If you do something like offer people the ability to reduce 10,000 [city access] phone numbers to three digits, that’s going to be hard to reverse. And the same thing is true if you offer a simple URL, where citizens can go to begin the process of facilitating whatever city services they need.

In like fashion, you have to come up with a killer app with which you habituate the residents. They then drive demand for further services on the basis of it. But trying to plan delivery of services in the abstract — without somehow having demand developed by the user base — is pretty difficult.

By definition, cities and governments have a captive audience. They don’t have to pander to learn their demands. But whereas the private sector goes out of business if they don’t respond to the demands of their client base, that’s not the case in the public sector.

The public sector has to focus on providing products and tools that generate demand, and keep it growing in order to create the political impetus to deliver yet more demand.

Gardner: Anders, it sounds like there is a chicken and an egg here. You want a killer app that draws attention and makes more people call for services. But you have to put in the infrastructure and data frameworks to create that killer app. How does one overcome that chicken-and-egg relationship between required technical resources and highly visible applications?

Lisdorf: The biggest challenge, especially when working in governments, is you don’t have one place to go. You have several different agencies with different agendas and separate preferences for how they like their data and how they like to share it.

This is a challenge for any Enterprise Architecture (EA) because you can’t work from the top down; you can’t simply specify your architecture roadmap. You have to pick the ways it’s convenient to do a project that fits into your larger picture, and so on.

It’s very different working in an enterprise and putting all these data structures in place than in a city government, especially in New York City.

Gardner: Dr. Harding, how can we move past that chicken and egg tension? What needs to change for increasing the capability for technology to be used to its potential early in smart cities initiatives?

Framework for a common foundation 

Harding: As Anders brought up, there are lots of different parts of city government responsible for implementing IT systems. They are acting independently and autonomously — and I suspect that this is actually a problem that cities share with corporate enterprises.

Very large corporate enterprises may have central functions, but often that is small in comparison with the large divisions that it has to coordinate with. Those divisions often act with autonomy. In both cases, the challenge is that you have a set of independent governance domains — and they need to share data. What’s needed is some kind of framework to allow data sharing to happen.

This framework has to be at two levels. It has to be at a policy level — and that is going to vary from city to city or from enterprise to enterprise. It also has to be at a technical level. There should be a supporting technical framework that helps the enterprises, or the cities, achieve data sharing between their independent governance domains.
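
As a rough illustration of that two-layer split, the sketch below declares a data-sharing policy as plain data and enforces it in a small, shared technical layer. The agencies, datasets, and rules are invented for the example; a real framework would be far richer.

```python
# Hypothetical sketch of the two-layer framework: a policy layer declared as data,
# and a technical layer that enforces it when one agency requests another's dataset.
# Agencies, datasets, and rules below are invented for illustration.
POLICY = {
    ("transport", "ridership_counts"): {"shareable_with": {"planning", "environment"}, "pii": False},
    ("health", "clinic_visits"):       {"shareable_with": {"health"},                  "pii": True},
}

def may_share(owner: str, dataset: str, requester: str) -> bool:
    """Technical layer: one reusable enforcement point across independent governance domains."""
    rule = POLICY.get((owner, dataset))
    if rule is None:
        return False                      # default-deny anything the policy layer doesn't cover
    if rule["pii"] and requester != owner:
        return False                      # personal data stays within the owning agency
    return requester in rule["shareable_with"]

print(may_share("transport", "ridership_counts", "planning"))  # True
print(may_share("health", "clinic_visits", "planning"))        # False
```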

Gardner: Dr. Saha, do you agree that a common data framework approach is a necessary step to improve things?

Saha: Yes, definitely. Having common data standards across different agencies and having a framework to support that interoperability between agencies is a first step. But as Dr. Anders mentioned, it’s not easy to get agencies to collaborate with one another or share data. This is not a technical problem. Obviously, as Chris was saying, we need policy-level integration both vertically and horizontally across different agencies.

One way I have seen that work in cities is they set up urban labs. If the city architect thinks certain services are important for citizens, those services are launched as a proof of concept (POC) in these urban labs. You can then make an assessment on whether the demand and supply are aligned.

Obviously, it is a chicken-and-egg problem. We need to go beyond frameworks and policies to get to where citizens can try out certain services. When I use the word “services” I am looking at integrated services across different agencies or service providers.

The fundamental principle here for the citizens of the city is that there is no wrong door: he or she can approach any department or any agency of the city and get a service. The citizen, in my view, is approaching the city as a singular authority — not a specific agency or department of the city.

Gardner: Don Brancato, if citizens in their private lives can, at an e-commerce cloud, order almost anything and have it show up in two days, there might be higher expectations for better city services.

Is that a way for us to get to improvement in smart cities, that people start calling for city and municipal services to be on par with what they can do in the private sector?

Public- and private-sector parity

Brancato: You are exactly right, Dana. That’s what’s driven the do it yourself (DIY) movement. If you use a cell phone at home, for example, you expect that you should be able to integrate that same cell phone in a secure way at work. And so that transitivity is expected. If I can go to Amazon and get a service, why can’t I go to my office or to the city and get a service?

This forms some of the tactical reasons for better using frameworks, to be able to deliver such value. A citizen is going to express their displeasure with their vote, or by moving to some other place and then no longer working or living there.

Traceability is also important. If I use some service, it’s traceable to some city strategy and to some data that goes with it. So the traceability model, in its abstract form, is the idea that if I collect data, it should trace back to some service. And it allows me to build a body of metrics that show continuously how services are getting better. Because data, after all, is the enablement of the city, and it proves that by demonstrating metrics that show that value.

So, in your e-commerce catalog idea, absolutely, citizens should be able to exercise the catalog. There should be data that shows its value, repeatability, and the reuse of that service for all the participants in the city.

Gardner: Don Sunderland, if citizens perceive a gap between what they can do in the private sector and public — and if we know a common data framework is important — why don’t we just legislate a common data framework? Why don’t we just put in place common approaches to IT?

Sunderland: There have been some fairly successful legislative actions vis-à-vis making data available and more common. The Open Data Law, which New York City passed back in 2012, is an excellent example. However, the ability to pass a law does not guarantee the ability to solve the problems to actually execute it.

In the case of the service levels you get on Amazon, that implies a uniformity not only of standards but oftentimes of [hyperscale] platform. And that just doesn’t exist [in the public sector]. In New York City, you have 100 different entities, 50 to 60 of them are agencies providing services. They have built vast legacy IT systems that don’t interoperate. It would take a massive investment to make them interoperate. You still have to have a strategy going forward.

The idea of adopting standards and frameworks is one approach. The idea is you will then grow from there. The idea of creating a law that tries to implement uniformity — like an Amazon or Facebook can — would be doomed to failure, because nobody could actually afford to implement it.

Since you can’t do top-down solutions — even if you pass a law — the other way is via bottom-up opportunities. Build standards and governance opportunistically around specific centers of interest that arise. You can identify city agencies that begin to understand that they need each other’s data to get their jobs done effectively in this new age. They can then build interconnectivity, governance, and standards from the bottom-up — as opposed to the top-down.

Gardner: Dr. Harding, when other organizations are siloed, when we can’t force everyone into a common framework or platform, loosely coupled interoperability has come to the rescue. Usually that’s a standardized methodological approach to interoperability. So where are we in terms of gaining increased interoperability in any fashion? And is that part of what The Open Group hopes to accomplish?

Not something to legislate

Harding: It’s certainly part of what The Open Group hopes to accomplish. But Don was absolutely right. It’s not something that you can legislate. Top-down standards have not been very successful, whereas encouraging organic growth and building on opportunities have been successful.

The prime example is the Internet that we all love. It grew organically at a time when governments around the world were trying to legislate for a different technical solution; the Open Systems Interconnection (OSI) model for those that remember it. And that is a fairly common experience. They attempted to say, “Well, we know what the standard has to be. We will legislate, and everyone will do it this way.”

That often falls on its face. But to pick up on something that is demonstrably working and say, “Okay, well, let’s all do it like that,” can become a huge success, as indeed the Internet obviously has. And I hope that we can build on that in the sphere of data management.

It’s interesting that Tim Berners-Lee, who is the inventor of the World Wide Web, is now turning his attention to Solid, a personal online datastore, which may represent a solution or standardization in the data area that we need if we are going to have frameworks to help governments and cities organize.

Gardner: Dr. Lisdorf, do you agree that the organic approach is the way to go, a thousand roof gardens, and then let the best fruit win the day?

Lisdorf: I think that is the only way to go because, as I said earlier, any top-down way of controlling data initiatives in the city is bound to fail.

Gardner: Let’s look at the cost issues that impact smart cities initiatives. In the private sector, you can rely on an operating expenditure budget (OPEX) and also gain capital expenditures (CAPEX). But what is it about the funding process for governments and smart cities initiatives that can be an added challenge?

How to pay for IT?

Brancato: To echo what Dr. Harding suggested, cost and legacy will drive a funnel to our digital world and force us — and the vendors — into a world of interoperability and a common data approach.

Cost and legacy are what compete with transformation within the cities that we work with. What improves that is more interoperability and adoption of data standards. But Don Sunderland has some interesting thoughts on this.

Sunderland: One of the great educations you receive when you work in the public sector, after having worked in the private sector, is that the terms CAPEX and OPEX have quite different meanings in the public sector.

Governments, especially local governments, raise money through the sale of bonds. And within the local government context, CAPEX implies anything that can be funded through the sale of bonds. Usually there is specific legislation around what you are allowed to do with that bond. This is one of those places where we interact strongly with the state, which stipulates specific requirements around what that kind of money can be used for. Traditionally it was for things like building bridges, schools, and fixing highways. Technology infrastructure had been reflected in that, too.

What’s happened is that the CAPEX model has become less usable as we’ve moved to the cloud approach because capital expenditures disappear when you buy services, instead of licenses, on the data center servers that you procure and own.

This creates tension between the new cloud architectures, where most modern data architectures are moving to, and the traditional data center, server-centric licenses, which are more easily funded as capital expenditures.

The rules around CAPEX in the public sector have to evolve to embrace data as an easily identifiable asset [regardless of where it resides]. You can’t say it has no value when there are whole business models being built around the valuation of the data that’s being collected.

There is great hope for us being able to evolve. But for the time being, there is tension between creating the newer beneficial architectures and figuring out how to pay for them. And that comes down to paying for [cloud-based operating models] with bonds, which is politically volatile. What you pay for through operating expenses comes out of the taxes to the people, and that tax is extremely hard to come by and contentious.

So traditionally it’s been a lot easier to build new IT infrastructure and create new projects using capital assets rather than via ongoing expenses directly through taxes.

Gardner: If you can outsource the infrastructure and find a way to pay for it, why won’t municipalities just simply go with the cloud entirely?

Cities in the cloud, but services grounded

Saha: Across the world, many governments — not just local governments but even state and central governments — are moving to the cloud. But one thing we have to keep in mind is that at the city level, it is not necessary that all the services be provided by an agency of the city.

It could be a public/private partnership model where the city agency collaborates with a private party who provides part of the service or process. And therefore, the private party is funded, or allowed to raise money, in terms of only what part of service it provides.

Many cities are addressing the problem of funding by taking the ecosystem approach because many cities have realized it is not essential that all services be provided by a government entity. This is one way that cities are trying to address the constraint of limited funding.

Gardner: Dr. Lisdorf, in a city like New York, is a public cloud model a silver bullet, or is the devil in the details? Or is there a hybrid or private cloud model that should be considered?

Lisdorf: I don’t think it’s a silver bullet. It’s certainly convenient, but since this is new technology there are lot of things we need to clear up. This is a transition, and there are a lot of issues surrounding that.

One is the funding. The city still runs in a certain way, where you buy the IT infrastructure yourself. If it is to change, they must reprioritize the budgets to allow new types of funding for different initiatives. But you also have issues like the culture because it’s different working in a cloud environment. The way of thinking has to change. There is a cultural inertia in how you design and implement IT solutions that does not work in the cloud.

There is still the perception that the cloud is considered something dangerous or not safe. Another view is that the cloud is a lot safer in terms of having resilient solutions and the data is safe.

This is all a big thing to turn around. It’s not a simple silver bullet. For the foreseeable future, we will look at hybrid architectures, for sure. We will offload some use cases to the cloud, and we will gradually build on those successes to move more into the cloud.

Gardner: We’ve talked about the public sector digital transformation challenges, but let’s now look at what The Open Group brings to the table.

Dr. Saha, what can The Open Group do? Is it similar to past initiatives around TOGAF as an architectural framework? Or looking at DoDAF, in the defense sector, when they had similar problems, are there solutions there to learn from?

Smart city success strategies

Saha: At The Open Group, as part of the architecture forum, we recently set up a Government Enterprise Architecture Work Group. This working group may develop a reference architecture for smart cities. That would be essential to establish a standardization journey around smart cities.

One of the reasons smart city projects don’t succeed is because they are typically taken on as an IT initiative, which they are not. We all know that digital technology is an important element of smart cities, but it is also about bringing in policy-level intervention. It means having a framework, bringing cultural change, and enabling a change management across the whole ecosystem.

At The Open Group work group level, we would like to develop a reference architecture. At a more practical level, we would like to support that reference architecture with implementation use cases. We all agree that we are not going to look at a top-down approach; no city will have the resources or even the political will to do a top-down approach.

Given that we are looking at a bottom-up, or a middle-out, approach we need to identify use cases that are more relevant and successful for smart cities within the Government Enterprise Architecture Work Group. But this thinking will also evolve as the work group develops a reference architecture under a framework.

Gardner: Dr. Harding, how will work extend from other activities of The Open Group to smart cities initiatives?

Collective, crystal-clear standards 

Harding: For many years, I was a staff member, but I left The Open Group staff at the end of last year. In terms of how The Open Group can contribute, it’s an excellent body for developing and understanding complex situations. It has participants from many vendors, as well as IT users, and from the academic side, too.

Such a mix of participants, backgrounds, and experience creates a great place to develop an understanding of what is needed and what is possible. As that understanding develops, it becomes possible to define standards. Personally, I see standardization as kind of a crystallization process in which something solid and structured appears from a liquid with no structure. I think that the key role The Open Group plays in this process is as a catalyst, and I think we can do that in this area, too.

Gardner: Don Brancato, same question; where do you see The Open Group initiatives benefitting a positive evolution for smart cities?

Brancato: Tactically, we have a data exchange model, the Open Data Element Framework, that continues to grow within a number of IoT and industrial IoT patterns. That all ties together with an open platform, and into Enterprise Architecture in general, and specifically with models like DODAF, MODAF, and TOGAF.

We have a really nice collection of patterns that recognize that the data is the mechanism that ties it together. I would have a look at the open platform and the work they are doing to tie in the service catalog, which is a collection of activities that human systems or machines need in order to fulfill their roles and capabilities.

The notion of data catalogs, which are the children of these service catalogs, provides proof of the activities of human systems, machines, and sensors in fulfilling their capabilities, and is then traceable up to the strategy.

I think we have a nice collection of standards and a global collection of folks who are delivering on that idea today.
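
To make the catalog-and-traceability idea above more tangible, here is a hypothetical sketch of the relationships described: data catalog entries as children of service catalog entries, each service tied to a strategy, and a trace from any piece of data back up to the strategy it supports. The structures and names are invented for illustration and are not the Open Data Element Framework specification.

```python
# Hypothetical sketch of the traceability chain described above -- not the actual
# Open Data Element Framework. All names and structures are invented for illustration.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DataCatalogEntry:              # the "children" of a service catalog entry
    name: str
    source: str                      # the human, system, machine, or sensor producing it

@dataclass
class ServiceCatalogEntry:
    name: str
    strategy: str                    # the city strategy this service fulfills
    data: List[DataCatalogEntry] = field(default_factory=list)

def trace(datum: str, services: List[ServiceCatalogEntry]) -> List[str]:
    """Trace a piece of data up through its service to the strategy it supports."""
    return [f"{d.name} -> {s.name} -> {s.strategy}"
            for s in services for d in s.data if d.name == datum]

services = [ServiceCatalogEntry(
    name="311 service requests",
    strategy="Responsive city services",
    data=[DataCatalogEntry("pothole_reports", "citizen mobile app"),
          DataCatalogEntry("response_times", "dispatch system")])]

print(trace("response_times", services))
# ['response_times -> 311 service requests -> Responsive city services']
```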

Gardner: What would you like to see as a consumer, on the receiving end, if you will, of organizations like The Open Group when it comes to improving your ability to deliver smart city initiatives?

Use-case consumer value

Sunderland: I like the idea of reference architectures attached to use cases because — for better or worse — when folks engage around these issues — even in large entities like New York City — they are going to be engaging for specific needs.

Reference architectures are really great because they give you an intuitive view of how things fit. But the real meat is the use case, which is applied against the reference architecture. I like the idea of developing workgroups around a handful of reference architectures that address specific use cases. That then allows a catalog of use cases for those who facilitate solutions against those reference architectures. They can look for cases similar to ones that they are attempting to resolve. It’s a good, consumer-friendly way to provide value for the work you are doing.

Gardner: I’m sure there will be a lot more information available along those lines at www.opengroup.org.

When you improve frameworks, interoperability, and standardization of data frameworks, what success factors emerge that help propel the efforts forward? Let’s identify attractive drivers of future smart city initiatives. Let’s start with Dr. Lisdorf. What do you see as a potential use case, application, or service that could be a catalyst to drive even more smart cities activities?

Lisdorf: Right now, smart cities initiatives are out of control. They are usually done on an ad-hoc basis. One important way to get standardization enforced — or at least considered for new implementations — is to integrate the effort as a necessary step in the established procurement and security governance processes.

Whenever new smart cities initiatives are implemented, you would run them through governance tied to the funding and the security clearance of a solution. That’s the only way we can gain some sort of control.

This approach would also push standardization toward vendors because today they don’t care about standards; they all have their own. If we included in our procurement and our security requirements that they need to comply with certain standards, they would have to build according to those standards. That would increase the overall interoperability of smart cities technologies. I think that is the only way we can begin to gain control.

Gardner: Dr. Harding, what do you see driving further improvement in smart cities undertakings?

Prioritize policy and people 

Harding: The focus should be on the policy around data sharing. As I mentioned, I see two layers of a framework: A policy layer and a technical layer. The understanding of the policy layer has to come first because the technical layer supports it.

That means developing policy around data sharing — or specifically around personal data sharing, because this is a hot topic. Everyone is concerned with what happens to their personal data. It’s something that cities are particularly concerned with because they hold a lot of data about their citizens.

Gardner: Dr. Saha, same question to you.

Saha: I look at it in two ways. One is for cities to adopt smart city approaches. Identify very-high-demand use cases that pertain to the environment, mobility, the economy, or health — or whatever the priority is for that city.

Identifying such high-demand use cases is important because the impact is directly seen by the people. The benefits of having a smarter city need to be visible to the people using those services, number one.

The other part, that we have not spoken about, is we are assuming that the city already exists, and we are retrofitting it to become a smart city. There are places where countries are building entirely new cities. And these brand-new cities are perfect examples of where these technologies can be tried out. They don’t yet have the complexities of existing cities.

It becomes a very good lab, if you will, a real-life lab. It’s not a controlled lab, it’s a real-life lab where the services can be rolled out as the new city is built and developed. These are the two things I think will improve the adoption of smart city technology across the globe.

Gardner: Don Brancato, any ideas on catalysts to gain standardization and improved smart city approaches?

City smarts and safety first 

Brancato: I like Dr. Harding’s idea on focusing on personal data. That’s a good way to take a group of people and build a tactical pattern, and then grow and reuse that.

In terms of the broader city, I’ve seen a number of cities successfully introduce programs that use the notion of a safe city as a subset of other smart city initiatives. This plays out well with the public. There’s a lot of reuse involved. It enables the city to reuse a lot of their capabilities and demonstrate they can deliver value to average citizens.

In order to keep cities involved and energetic, we should not lose track of the fact that people move to cities because of all of the cultural things they can be involved with. That comes from education, safety, and the commoditization of price and value benefits. Being able to deliver safety is critical. And I suggest the idea of traceability of personal data patterns has a connection to a safe city.

Traceability in the Enterprise Architecture world should be a standard artifact for assuring that the programs we have trace to citizen value and to business value. Such traceability and a model link those initiatives and strategies through to the service — all the way down to the data, so that eventually data can be tied back to the roles.

For example, if I am an individual, data can be assigned to me. If I am in some role within the city, data can be assigned to me. The beauty of that is we automate the role of the human. It extends even to the notion that the capabilities in the city are carried out by humans, systems, machines, and sensors that are getting increasingly smarter. So all of the data can be traceable to these sensors.

Gardner: Don Sunderland, what have you seen that works, and what should we doing more of?

Mobile-app appeal

Sunderland: I am still fixated on the idea of creating direct demand. We can’t generate it. It’s there on many levels, but a kind of guerrilla tactic would be to tap into that demand to create location-aware applications, mobile apps, that are freely available to citizens.

The apps can use existing data rather than trying to go out and solve all the data sharing problems for a municipality. Instead, create a value-added app that feeds people location-aware information about where they are — whether it comes from within the city or without. They can then become habituated to the idea that they can avail themselves of information and services directly, from their pocket, when they need to. You then begin adding layers of additional information as it becomes available. But creating the demand is what’s key.

When 311 was created in New York, it became apparent that it was a brand. The idea of getting all those services by just dialing those three digits was not going to go away. Everybody wanted to add their services to 311. This kind of guerrilla approach to a location-aware app made available to the citizens is a way to drive even more demand from even more people.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: The Open Group.


The new procurement advantage: How business networks generate multi-party ecosystem solutions

The next BriefingsDirect intelligent enterprise discussion explores new opportunities for innovation and value creation inside of business networks and among their powerful ecosystems of third-party service providers.

We now explore how business and technology platforms have evolved into data insights networks, and why third-party businesses and modern analytics solutions are joining forces to create entirely new breeds of digital commerce and supply chain knowledge benefits.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To explain how business ecosystems are becoming incubators for value-added services for both business buyers and sellers, we welcome Sean Thompson, Senior Vice President and Global Head of Business Development and Ecosystem at SAP Ariba. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Why is now the right time to highlight collaboration inside of business ecosystems?

Thompson: It's a fascinating time to be alive when you look at the largest companies on this planet, the five most valuable companies: Apple, Amazon, Google, Microsoft, and Facebook — they all share something in common, and that is that they have built and hosted very rich ecosystems.

Ecosystems enrich the economy

These platforms represent wonderful economics for the companies themselves. But the members of the ecosystems also enjoy a very profitable place to do business. This includes the end-users profiting from the network effect that Facebook provides in terms of keeping in touch with friends, etc., as well as the advertisers who get value from the specific targeting of Facebook users based on end-user interests and values.


Thompson

So, it's an interesting time to look at where these companies have taken us in the overall economy. It's also an indication for other parts of the technology world that ecosystems in the cloud era are becoming more important. In the cloud era you have multitenancy, where the hosts of these applications, like SAP Ariba, run multitenant platforms. No longer are these applications delivered on-premises.

Now, it’s a cloud application enjoyed by more than 3.5 million organizations around the world. It’s hosted by SAP Ariba in the cloud. As a result, you have a wonderful ecosystem that evolved around a particular audience to which you can provide new value. For us, at SAP Ariba, the opportunity is to have an open mindset, much like the companies that I mentioned.

It is a very interesting time because business ecosystems now matter more than ever in the technology world, and it’s mainly due to cloud computing.

Gardner: These platforms create escalating value. Everybody involved is a winner, and the more they play, the more winnings there are for all. Participation grows the pie and builds a virtuous adoption cycle.

Is that how you view business ecosystems, as an ongoing value-added creation mechanism? How do you define a business ecosystem, and how is that different from five years ago?

Thompson: I say this to folks I work with every day — not only inside of SAP Ariba, but also to members of our partner community, our ecosystem: "We are privileged in that not every company can talk about an ecosystem, mainly because you have to have relevance in order for such an ecosystem to develop."

I wrote an article recently wherein I was reminded of growing up in Montana. I'm a big fly fisherman; I grew up with a fly rod in my hand. It didn't dawn on me until later in my professional life that I used to talk about ecosystems as a kid. We used to talk about the various bug hatches that would happen and how that would make the trout go crazy.

I was taught by my dad about the certain ecosystems that supported different bugs and the different life that the trout feed on. In order to have an ecosystem — whether it was fly-fishing as a kid in the natural environment or business ecosystems built today in the cloud — it starts with relevance. Do you have relevance, much like Microsoft had relevance back in the personal computer (PC) era?

Power of relevance

Apple created the PC era, but Microsoft decided to license the PC operating system (OS) to many and thus became relevant to all the third-party app developers. The Mac was closed. The strategy that Apple had in the beginning was to control this closed environment. That led to a wonderful user experience. But it didn’t lead to a place where third-party developers could build applications and get them sold.

Windows and a Windows-compatible PC environment created a profitable place that had relevance. More PC manufacturers used Windows as a standard, so third-party app developers could build and sell applications through a much broader distribution network. That was Microsoft's relevance in the early days of the PC.

Other ecosystems have to have relevance, too. There have to be the right conditions for third parties to be attracted, and ultimately — in the business world — it’s all about, if you will, profit. Can I enjoy a profitable existence by joining the ecosystem?


At SAP Ariba, I always say, we are privileged because we do have relevance.

Salesforce.com also had relevance in its early days when it distributed its customer relationship management (CRM) app widely and efficiently. They pioneered the notion of needing only a username, a password, and a credit card to distribute and consume a CRM app. Once that Sales Force Automation app was widely distributed, all of a sudden you had an ecosystem that began to pay attention because of the relevancy that Salesforce had. Salesforce was able to turn the relevancy of the app into an ecosystem based on a platform, and it introduced Force.com and the AppExchange for third parties to extend the value of the applications and the platform.

It’s very similar to what we have here at SAP Ariba. The relevance in the ecosystem is supported by market relevance from the network. So it’s a fascinating time.

Gardner: What exactly is the relevance with the SAP Ariba platform? You’re in an auspicious place — between buyers and sellers at the massive scale that the cloud allows. And increasingly the currency now is data, analytics, and insights.

Global ERP efficiency

Thompson: It’s very simple. I first got to know Ariba professionally back in the 1990s. I was at Deloitte, where I was one of those classic re-engineering consultants in the mid-90s. Then during the Y2K era, companies were getting rid of the old mainframes because they thought the code would fail when the calendar turned over to the year 2000. That was a wonderful perfect storm in the industry and led to the first major wave of consuming enterprise resource planning (ERP) technology and software.

Ariba was born out of that same era, with an eye toward procurement and helping the procurement organization within companies better manage spend.

ERP was about making spend more efficient, too, and making the organization more efficient overall. It was not just about reducing waste inherent within the silos of an organization. It was also about the waste in how companies spent money, managed suppliers, and managed spend against contracts that they had with those suppliers.

And so, Ariba — not unlike Salesforce and other business applications that became relevant — was the first to focus on the buyer, in particular the buyer within the procurement organization. The focus was on using a software application to help companies make better decisions around who they are sourcing from, their supply chain, and driving end-users to buy based on contracts that can be negotiated. It became an end-to-end way of thinking about your source-to-settle process. That was very much an application-led approach that SAP Ariba has had for the better part of 20 years.

When SAP bought Ariba in 2012, it included Ariba naturally within the portfolio of the largest ERP provider, SAP. But instead of thinking of it as a separate application, now Ariba is within SAP, enabling what we call the intelligent enterprise. The focus remains on making the enterprise more intelligent.

Pioneers in the cloud

SAP Ariba was also one of the first to pioneer moving from an on-premises world into the cloud. And by doing so, Ariba created a business network. It was very early in pioneering the concept of a network where, by delighting the buyer and the procurement organization, that organization also brought its suppliers in with it.

Ariba early on had the concept of, “Let’s create a network where it’s not just one-to-one between a buyer and a supplier. Rather let’s think about it as a network — as a marketplace — where suppliers can make connections with many buyers.”

And so, very early on, SAP Ariba created a business network. That network today is made up of 3.5 million buyers and sellers doing $2.2 trillion annually in commerce through the Ariba Network.

Now, as you pointed out, the currency is all about data. Because we are in the cloud, on a network, and multitenant, our data model is structured in a way that is far better than in an on-premises world. We now live within a cloud environment with a consistent data structure. Everybody is operating within the same environment, with the same code base. So now the data we have within SAP Ariba — within that digital commerce data set — becomes incredibly valuable to third parties. They can think about how they can enhance that value.


As an example, we are working with banks today that are very interested in using data to inform new underwriting models. A supplier will soon be able to log in to the SAP Ariba Network and see banks offering them loans based on data available in the network: new loans at better rates, because of the data value that the SAP Ariba Network provides. The notion of an ecosystem is now extending to very interesting places like banking, with financial services providers becoming part of a business network and ecosystem.
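To make that concrete, here is a toy example of how network transaction history could feed an underwriting signal. This is purely illustrative: the field names and scoring features are invented for this sketch and are not SAP Ariba's or any bank's actual model.

```python
from datetime import date
from statistics import mean

# Each record is a settled invoice for one supplier (illustrative values only).
invoices = [
    {"amount": 12000, "issued": date(2018, 1, 5),  "paid": date(2018, 2, 1)},
    {"amount": 8000,  "issued": date(2018, 3, 10), "paid": date(2018, 3, 30)},
    {"amount": 15000, "issued": date(2018, 5, 2),  "paid": date(2018, 6, 20)},
]

def underwriting_signal(invoices, terms_days=30):
    """Return simple features a lender might look at: trading volume, average
    days to payment, and the share of invoices paid within terms."""
    days_to_pay = [(inv["paid"] - inv["issued"]).days for inv in invoices]
    on_time = sum(1 for d in days_to_pay if d <= terms_days) / len(invoices)
    return {
        "trading_volume": sum(inv["amount"] for inv in invoices),
        "avg_days_to_payment": mean(days_to_pay),
        "on_time_ratio": on_time,
    }

print(underwriting_signal(invoices))
```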

We are going beyond the traditional old applications — what we used to call independent software vendors (ISVs). We’re now bringing in service providers and data services providers. It’s very interesting to see the variety of different business models joining today’s ecosystems.

Gardner: Another catalyst to the power and value of the network and the platform is that many of these third parties are digital organizations. They’re sharing their value and adding value as pure services so that the integration pain points have been slashed. It’s much easier for a collaborative solution to come together.

Can you provide any other examples, Sean, of how third parties enter into a platform-network ecosystem and add value through digital transformation and innovation?

Relationships rule 

Thompson: Yes. Looking back at my career, 25 years ago I met SAP for the first time when I was with Deloitte. And Deloitte is still a very strong partner of SAP, a very strong player within the technology industry as a systems integrator (SI) and consulting organization.

We have enjoyed relationships with Deloitte, Accenture, IBM, Capgemini, and many other organizations. Today they play a role — as they did in the past — of delivering value to the end customer by providing expertise, human capital, and the intellectual property inherent in their many methodologies, such as change management and business process change methodologies. And there's still a valuable role for these professional services organizations, consultants, and SIs today.

But their role has evolved, and it's a fascinating evolution. It's no longer about customizing on-premises software. Back in the day, when I was at Deloitte, we made a lot of money by helping companies adopt an application like an SAP or an Oracle ERP and customizing it. But you ended up customizing for one customer, building an isolated single-family home, if you will. You ended up forking the code, so you had a very difficult time upgrading, because you had customized the code so much that you fell behind.

Now, in the cloud, the SI is no longer customizing on-premises software; it is configuring cloud environments. That configuration of cloud environments means not only that the customer is never left behind — a wonderful value for the industry in general — but also that the SI can play a new role.

That role is now a hybrid of consulting and of helping companies understand how to adopt and change their multicloud processes to become more efficient. The SIs are also becoming [cloud service providers] themselves because, instead of customizing on-premises software as they used to, they are now building extensions to clouds and among clouds.

They can create extensions of a solution like SAP Ariba for certain industries, like oil and gas, for example. You will see SAP continue to evolve its relationships with these service providers so that those services companies begin to look more like hybrid business models — where they enjoy some intellectual property and extensions to cloud environments, as well as monetizing their methodologies as they have in the past.

This is a fascinating evolution that’s profitable for those companies because they go from a transactional business model — where they have to sell one client at a time and one implementation at a time — to monetizing based on a subscription model, much like we in the ISV world have done.

There are many other new and interesting ways within the SAP Ariba ecosystem and network of buyers and suppliers for third-party ecosystem participants to gather additional data about suppliers — and sometimes about buyers. For example, they help both suppliers and buyers better manage financial risk and supply chain disruption, ensure there isn't slave labor in a supply chain, or verify that there is sufficient diversity in a supply chain.

The supplier risk category for us is very important. It requires an ecosystem of provider data that enriches the supplier profile. And that can then become an enhancement to the overall value of the business network.

We are now able to reach out and offer ways in which third parties can contribute their intellectual property — be it a methodology, data, analytics, or financial services. And that’s why it’s a really exciting time to be in the environment we are today.

Gardner: This network effect certainly relates to solution sets like financial services and risk management. You mentioned also that it pertains to vertical industries such as oil and gas, pharmaceutical, life sciences, and finance. Does it also extend to geographies and a localization-solution benefit? And does it pertain to going downstream to small- to medium-sized businesses (SMBs) that might not have been able to afford or accommodate this high-level collaboration?

Reach around the world

Thompson: Absolutely, and it's a great question. I remember the first wave of ERP; it marked a major consumption of technology to improve business, and it led to a tremendous amount of productivity gains. Business productivity through technology investment has driven a tremendous amount of growth in the world economy.

Now, you ask, “Does this extend?” And that’s what’s so fascinating about cloud and when you combine cloud with the concept of ecosystem — because everybody enjoys a benefit from that.

As an example, you mentioned localization. Within SAP Ariba, we are all about intelligent business commerce, and how can we make business commerce more efficient all around the world. That’s what we are about.

In some countries, business commerce involves the good old-fashioned invoicing, orders, and taxation tasks. At Ariba, we don't want to solve all of that so-called last mile of the tax data and process needed for invoices in, say, Mexico.


We want to work with members of the ecosystem that do that. An example is Thomson Reuters, whose business is in part about managing a database of local tax data that is relevant to what’s needed in these different geographies.

By having one relationship with a large provider of that data, and being able to distribute that data to the end users (companies in places like Mexico and Korea that need a solution), they are going to be compliant with the local authorities and regulations thanks to up-to-date tax data.

That's an example of an extremely efficient way for us to distribute around the globe, based on the cloud and an ecosystem in which Thomson Reuters provides that localized and accurate tax data.
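As a highly simplified sketch of that localization pattern: a production system would pull current rules from a provider such as Thomson Reuters, whereas the rates and country codes below are placeholders invented for illustration, not real tax data.

```python
# Placeholder rules for illustration only -- NOT real or current tax data.
TAX_RULES = {
    "MX": {"name": "IVA", "rate": 0.16},   # placeholder value
    "KR": {"name": "VAT", "rate": 0.10},   # placeholder value
}

def invoice_tax(country_code: str, net_amount: float) -> dict:
    """Look up the local rule for a country and apply it to an invoice line."""
    rule = TAX_RULES.get(country_code)
    if rule is None:
        raise ValueError(f"No tax rule loaded for {country_code}")
    tax = round(net_amount * rule["rate"], 2)
    return {"tax_name": rule["name"], "tax": tax, "gross": net_amount + tax}

print(invoice_tax("MX", 1000.0))
```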

Support for all sizes

You also asked about SMBs. Prior to being at SAP Ariba, I was part of an SMB support organization with the portfolio of Business ByDesign and Business One, which are smaller ERP applications designed for SMBs. And one of them, Business ByDesign, is a cloud-based offering.

In the past, the things that large companies were able to do were often too expensive for SMBs. That’s because they required on-premises data centers, with servers, software consultants, and all of the things that large enterprises could afford to drive innovation in the pre-cloud world. This was all just too expensive for SMBs.

Now the distribution model is represented by the cloud and the multitenant nature of these solutions, which allow for configuration — as opposed to costly and brittle customization. SMBs now have an easy upgrade path and all the wonderful benefits of the cloud model. And when you combine that with a business solutions ecosystem, then you can fully support SMBs.

For example, within SAP Ariba, we have an SMB consulting organization focused on helping midsize companies adopt solutions in an agile way, so that it's not a big bang. It's not an expensive consulting service; instead it's prescriptive about how you should begin small and grow as you adopt cloud solutions.

Such an SMB mindset has enabled us to take the same SAP Ariba advantage of no code, to just preconfigure it, and start small. As we like to say at SAP Ariba, it’s a T-shirt size implementation: small, medium, and large.

That’s an example of how the SMB business segment really benefits from this era of cloud and ecosystem that drives efficiency for all of us.

Gardner: Given that the value of any business network and ecosystem increases with the number of participants – including buyers, sellers, and third-party service providers — what should they be thinking to get in the best position to take advantage of these new trends, Sean? What should you be thinking in order to begin leveraging and exploiting this overall ecosystem approach and its benefits?

Thompson: I’m about to get on an airplane to go to South Korea. In some of these geographies where we do business, the majority of businesses are SMBs.

And I am still shocked that some of these companies have not prioritized technology adoption. I’m still surprised that there are a lot of industries, and a lot of companies in different segments, that are still very much analog. They are doing business the way they’ve been doing business for many years, and they have been resistant to change because their cottage industry has allowed them to maintain, if you will, Excel spreadsheet approaches to business and process.

I spent a decade of my life at Microsoft, and when we looked at the different ways Excel was used we were fascinated by the fact that Excel in many ways was used as a business system. Oftentimes, that was very precarious because you can’t manage a business on Excel. But I still see that within companies today.

The number one thing that every business owner needs to understand is that we are in an exponential time of transformation. Transformation we used to expect to be linear is now exponential. Disruption of industries is happening in real time and rapidly. If you're not prioritizing and investing in technology — and not thinking of your business as a technology business — then you will get left behind.

Never underestimate the impact that technology can have to drive topline growth. But technology also preserves the option value for your company in the future because disruption is happening. It’s exponential and cloud is driving that.

Get professional advice

You also have to appreciate the value of getting good advice. There are good companies that are looking to help. We have many of those within our ecosystem, from the large SIs to midsize companies focused on helping SMBs.

As I mentioned before, I grew up fly fishing. When anybody comes to me and says, "Hey, I'd love to go learn how to fly fish," I say, "Start by hiring a professional guide. Spend a day on a river with a professional guide, because they will show you how to do things." I honestly think the same advice applies to consuming cloud software services: find a professional guide who can help you understand how to do it.

And that professional guide fee is not going to be as much as it was in the past. So I would say get professional help to start.

Gardner: I'd like to close out with a look to the future. It seems that for third-party organizations that want to find a home in an ecosystem, there has never been a better time to innovate and to find new business models and new ways of collaborating.

You mentioned risk management and financial improvements and efficiency. What are some of the other areas for new business models within ecosystems? Where are we going to see some new and innovative business models cropping up, especially within the SAP Ariba network ecosystem?

Thompson: You mentioned it earlier in the conversation. The future is about data. The future is about insights that we gather from the data.

We’re still early in a very interesting future. We’re still understanding how to gather insights from data. At SAP Ariba we have a treasure trove of data from $2.1 trillion in commerce among 3.5 million members in the Ariba Network.

I started a company in the natural language processing world. I spent five years of my life understanding how to drive a new type of user experience by using voice. It’s about natural language and understanding how to drive domain-specific knowledge of what people want through a natural user interface.

I’ve played on the edge of where we are in terms of artificial intelligence (AI) within that natural language processing. But we’re still fiddling in many respects. We still fiddle in the business software arena, talking about chatbots, talking about natural user interfaces.


The future is data driven

There are so many data insights available on contracts and supplier profiles alone. So the future is about being able to harvest insights from that data. It’s now very exciting to be able to leverage the right infrastructure like the S/4 HANA data platform.

But we have a lot of work to do still to clean data and ensure the structure, privacy, and security of the data. The future certainly is bright. It will be magical in how we will be able to be proactive in making recommendations based on understanding all the data.

Buyers will be proactively alerted that something is going on in the supply chain. We will be able to predict and be prescriptive in the way the business operates. So it is a fascinating future that we have ahead of us. It's very exciting to be a part of it.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: SAP Ariba.



Regional dental firm Great Expressions protects distributed data with lower complexity thanks to amalgam of Nutanix HCI and Bitdefender security

Modern dentistry depends on more than good care. It also demands rapid access to data and applications.

For a rapidly growing dental services company — consisting of hundreds of dental offices spread across 10 US states — the task of managing all of its data availability, privacy, and security needs grew complex and costly.

The next BriefingsDirect security innovations discussion examines how Great Expressions Dental Centers found a solution by combining hyperconverged infrastructure (HCI) with advanced security products.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to share the story of how to best balance data compliance and availability requirements via modern IT infrastructure is Kevin Schokora, Director of IT Operations at Great Expressions Dental Centers in Southfield, Michigan. The interview is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What makes Great Expressions Dental Centers unique? How does that impact your ability to deliver data wherever your dentists, staff, and patients need it with the required security?


Schokora

Schokora: Our model is based on being dispersed in multiple states. Across those sites, we have many software packages that we have to support on our infrastructure. Based on those requirements, we were afforded an excellent opportunity to come up with new solutions on how to meet our patients’, doctors’, and customers’ needs.

Gardner: You have been in business since 1982, but you have really expanded a lot in the past few years. Tell me about what’s happened to your company recently.

Schokora: We found our model was ripe for success. So we have experienced tremendous growth, expanding to 275-plus sites. And going forward, we expect to expand by 62 to 100 new sites every year. That is our goal. We can do that because of the unique offerings we have, specifically around patient care and our unique software.

Gardner: Not only do you have many sites, but you allow your patients to pick and choose among different sites if they need to cross a state border or move around for any reason. That wide access requires you to support data mobility.

Snowbird-driven software

Schokora: It does. This all came about because, while we were founded in Michigan, some of our customers go to Florida for the winter. Because we also have a dental office presence in Florida, they were coming to our offices there and asking for the same dental care they had received in Michigan.

So, we expanded our software’s capabilities so that when a patient has an appointment in another state, the doctor there will have access to that patient’s records. They can treat them knowing everything in the patient’s history.

Gardner: Who knew that snowbirds were going to put you to the test in IT? But you have come up with a solution.

Schokora: We did. And I think we did well. Our patients are extremely happy with us because they have that flexibility.

Gardner: In developing your solution, you leveraged HCI that is integrated with security software. The combination provides not only high availability and high efficiency, but also increased management automation. And, of course, you’re able to therefore adhere to the many privacy and other compliance rules that we have nowadays.

Tell us about your decision on infrastructure, because, it seems to me, that’s really had an impact on the end-solution.


Schokora: It did, and the goal was always to set ourselves up for success so that we can have a model that would allow growth easily, without having huge upticks in cost.

When we first got here, growing so fast, we had a “duct tape solution” of putting infrastructure in place and doing spot buys every year to just meet the demands and accommodate the projected growth. We changed that approach by putting a resource plan together. We did a huge test and found that hyperconverged would work extremely well for our environment.

Given that, we were able to go from five server racks in a co-location facility down to one – all while providing a more consistent services delivery model. Our offices have been able to grow so that the company can pursue its plans without having to check back and ask, “Can the IT infrastructure support it?”

This is now a continuous model. It is part of our growth acquisition strategy. It’s just one more check-box where we don’t have to worry about the IT side. We can focus on the business side, and how that directly relates to the patients.

Gardner: Tell us about the variety of data and applications you are supporting for all 275 sites.

Aligning business and patient records

Schokora: We have the primary dentistry applications, and that includes x-rays, patient records, treatment plans, and all of the various clinical applications that we need. But we also have cumbersome processes – in many cases still manual – for ensuring that all of our patients' insurance carriers are billed properly. We have to ensure that patients get their full benefits.

Anywhere we can, we are targeting more provider-payer process automation, to ensure that any time we bill for services or care, it is automatically processed. That level of automatic payment eliminates touch points that we would otherwise handle manually or through a patient.

And such automation allows us, as we scale and grow, to not have to add as many full-time employees. Our processes can scale in many cases by leveraging the technology.

Gardner: Another big part of the service puzzle is addressing privacy and compliance issues around patient information. You have to adhere to the Health Insurance Portability and Accountability Act of 1996 (HIPAA) and the Payment Card Industry Data Security Standard (PCI DSS) nowadays. What were your concerns when it came to balancing the availability of data with these compliance requirements?

Schokora: We had to ensure from an infrastructure perspective that we give all of our customers — including the software applications development team — a platform they can have confidence in, and we had to earn their trust. To that end, the HCI approach gave us the capability to use encryption at rest, which is a huge component of compliance with HIPAA, PCI, and requirements of that nature.
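Encryption at rest in an HCI stack is handled by the platform and its key management rather than by application code, but the underlying idea can be shown in a few lines. The snippet below is a conceptual sketch using the widely available Python cryptography package, with a dummy record; it is not how Nutanix or Bitdefender implement the feature.

```python
from cryptography.fernet import Fernet

# In a real deployment the key lives in a key management system, never on
# the same disk as the data it protects.
key = Fernet.generate_key()
cipher = Fernet(key)

# Dummy record for illustration only.
record = b"patient_id=12345;treatment=crown;carrier=ExampleInsurer"

# What gets written to disk is the ciphertext, so a stolen drive or backup
# exposes nothing readable.
stored = cipher.encrypt(record)

# Authorized reads decrypt on the way back out.
assert cipher.decrypt(stored) == record
```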

The other benefit was moving our entire environment — what I call a forklift of our entire data center. That allowed us to review what I would call the sins of our past and ensure that the cobbled-together infrastructure was rebuilt with the security needed to meet all of the requirements of the customer. We can now plan on a top-down basis.

We just completed this project and we have made a lot of changes to that model to support a better and more secure infrastructure.

Gardner: Before, you had a Swiss army knife approach to security. What was the problem with that approach? And what kind of performance tax came with it?

HCI scalability adds value

Schokora: To meet the needs of the business at the time, the Swiss army knife approach took us far. But as we ramped up our acquisition strategy and expanded Great Expressions, we found that it was not scalable enough to meet our new business needs.

We needed to look at a couple of key pieces. One was automation, and two was revolutionizing how we do things. Once we looked at HCI and saw the difference from how we used to do things, it was an easy decision.

We put our new plan through a proof of concept (POC) test. I had some people who were heavily invested in our former technology, but they begged for this new technology. They wanted to use it. They saw how it translated into a value-add for the customers.

Gardner: What was the story behind the partners and technology you chose?


Schokora: We looked at three different vendors. We were an existing VMware customer, so we looked at their solution. We looked at Hewlett Packard Enterprise (HPE) SimpliVity, and we looked at Nutanix. They were all very similar in their approach, they all had their strengths.

The one thing that really stood out for us with Nutanix was their customer approach, their engagement, and how they ensured that they are a partner with us. They showcased this through the POC process, throughout testing the equipment and environment. They were there, hand-in-hand with us, responding to our questions — almost ad nauseam. They ensured that customer experience for us, just to make sure that we were comfortable with it.

They also have their own hypervisor, which all their virtual machines rest on, the same way VMware has its own. There were some benefits in moving to that, and it also aligned with our backup strategy and the product we use called Rubrik.

So given all of this, as a complete package, we felt it was an opportunity that could not be passed up. When we wrote the business case — and that was the easy part at that point, showcasing the benefits over five years — this solution easily won out from a cost perspective and aligned with the business requirement of growth. That alignment supported our whole business, not just IT. That was also critical.

Gardner: How quickly were you able to do the migration? How did it go across 275 sites and 4,000-plus workstations, laptops, and other client devices?

Well-managed migration

Schokora: This required a lot of testing. It was about going through the planning and the test migrations, and working with our users on maintenance windows, so that once we did move we could execute a fully developed test plan and have our customers sign off: "Okay, yes, this works for me, this meets my requirements." I thought that was key as well.

Going through it, we did experience some hiccups, things that impacted project events, and so we had to adjust our timelines. We still finished it before we thought we would. We were on a pace to beat our timelines by half.

Gardner: Wow.

Schokora: Yeah. It was great. We were moving at this rapid pace, and then we discovered some issues and errors happening in some of our virtual servers, including some of the rather big ones. This is where the support from Nutanix really showed.

So we had Nutanix on the phone. They were with us every step of the way. They took our logs, evaluated them, and quickly issued patches to address some of the things they noticed could be better within their migration tool. So we had a positive effect on Nutanix as well, identifying some of their opportunities and having them quickly addressed.

Once we implemented this new tool that was provided to us, we were able to move some of our extremely large systems over without impacting the customer outside of our maintenance windows. And we are talking, not necessarily petabytes, but very close to it, with database servers and customer entry points into our dental software.

Gardner: And this is for 2,400 employees, but you only have an IT staff of 30 or so people?

Schokora: Correct. And you will hear the A word a lot: Automation. While we had some late nights, given the tools and some of the automation techniques that the vendors use, specifically Nutanix, we were able to get this done with limited staff and with the result of our board of directors personally thanking us, which was great.

Gardner: Not only did you consolidate and modernize your infrastructure, but you in a sense consolidated and modernized your approach to security, too. How did the tag team between Nutanix and your security vendor help?

A secure solution

Schokora: From a security perspective, we chose — after a lengthy process of evaluation — a Bitdefender solution. We wanted to attack our endpoints and make sure that they were protected, as well as our servers. In addition to having a standardized methodology of delivering patches to both endpoints and to servers, we wanted an organization that integrated with Nutanix. Bitdefender checked off all of those boxes for us.

So far the results have been fairly positive to overwhelmingly positive. One thing that was a positive — and had been a showstopper with our last vendor — was our file server, which was so big. We needed to resolve that. We couldn't run our antivirus or anti-malware security software on that file server because it made it too slow. It would bog down, and even as we worked with the vendor at the time, we could not get it to a "green" state.

With Bitdefender, during our POC, we put it on the [file server] just to test it, and there was no impact on our users. There were no impacting events, and we were now protected against our biggest threats on our file server. That was one of the clear highlights of moving to a Bitdefender solution.

Gardner: And how important was Bitdefender’s integration and certification with Nutanix?


Schokora: It was one of the strengths listed in the business case. The integration between Nutanix and Bitdefender was not a key decision point, but it was one of those points that, had it been close between two vendors, would have put Bitdefender ahead. It just so happened, based on the key decision points, that Bitdefender was already ahead. This was just another nice thing to have.

Gardner: By deploying Bitdefender, you also gained full-disk encryption. And you extended it to your Windows 10 endpoints. How easy or difficult was it?

Schokora: Leveraging encryption at rest was a huge win for us from a compliance standpoint. The other thing about the workstations and endpoints was that our previous solution was unable to successfully encrypt Windows 10 devices, specifically the mobile ones, which we wanted to target as soon as possible.

The Bitdefender solution worked right out of the box. And I was able to have my desktop support team run that project, instead of my network operations team, which was hugely critical for me in leveraging labor and resources. One team is more designed for that kind of “keep the lights on” activity, and not necessarily project-based. So I was able to leverage the project-based resources in a more efficient and valuable way.

Gardner: It sounds like you have accomplished a lot in a short amount of time. Let’s look at some of the paybacks, the things that allowed you to get the congratulations from your board of directors. What were the top metrics of success?

Timing is everything

Schokora: The metrics were definitely based on timing. We wanted to be wrapped up by the end of June [2018] in support of our new enterprise resource planning (ERP) system. Our new ERP system was going through testing and development, and it was concluding at the end of June. We were going for a full roll-out for our Michigan region at that time. The timing was critical.

We also wanted to make sure there were no customer-impacting events. We wanted to ensure that all of our offices would be able to provide patient care without impact from the project, which was deployed only during scheduled maintenance hours.

We were able to achieve the June timeframe. Everything was up and running on our new Nutanix solution by the third week of June. So we even came in a week early, and I thought that was great.

We had no large customer-impacting events. The one thing we will own up to is that during our IT deployment and maintenance window, the applications development team had some nightly processes that were impacted — but they recovered. All cards on the table, we did impact them from a nightly standpoint. Luckily, we did not impact the offices or our patients when they wanted to receive care.

Gardner: Now that you have accomplished this major migration, are there any ongoing operational paybacks that you can point to? How does this shakeout so far on operational efficiency measurements?

Schokora: We now have had several months of measurements, and the greatest success story that we’ve had on this new solution has been a 66 percent cut in the time it takes to identify and resolve incidents when they happen.

If we have slow server performance, or an impacting event for one of our applications, this new infrastructure affords us the information we need to quickly troubleshoot and get to the root cause so we can resolve it and ensure our customers are no longer impacted.

That has occurred at least five times that I can recall, where the information provided by this hyperconverged solution and Bitdefender has given us the ability to get our customers back on track sooner than we could on our old systems.

Gardner: And this is doing it all with fewer physical racks and fewer virtual servers?

Schokora: Yes. We went from five racks to one, saving $4,000 a month. And for us that's real money. We also do not have to expand personnel on my network operations team, which is also part of the infrastructure support piece.

Now, as we're preparing for even more expansion in 2019, I'm not going to have to ask for any additional IT personnel resources. We are now attacking things on our to-do lists that had always been pushed back. Before, the "keep the lights on" activities always took priority. Now, we have time back in our days to proactively go after the things our customers request from us.

Gardner: Because you have moved from that Swiss army knife approach, are there benefits from having a single pane of glass for management?

Know who and what’s needed 

Schokora: Based on having that single pane of glass, we are able to do better resource evaluations and forecasting. We are better able to forecast availability.

So when the business comes back with projects — such as improved document management, which is currently being discussed, or a new learning management system from our training department — we are able to forecast what they will demand from our systems and give them a better cost model.


From an automation standpoint, we are now able to get new virtualized servers up within seconds, whereas it used to take days. And from a legacy systems standpoint, now that we have a window into more metrics, we are in a better place as we migrate off of them. We are not having lingering issues as we move to our new ERP system.
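As a generic illustration of what servers-in-seconds automation looks like in practice, the sketch below calls a provisioning API over REST; the endpoint, payload, and response shape are hypothetical placeholders, not the Nutanix API.

```python
import requests

# Hypothetical provisioning endpoint; real platforms expose their own REST
# APIs or infrastructure-as-code providers for the same pattern.
PROVISION_URL = "https://hci.example.internal/api/vms"

def create_vm(name: str, vcpus: int, memory_gb: int, image: str) -> str:
    """Request a new virtual machine from a template and return its ID."""
    resp = requests.post(
        PROVISION_URL,
        json={"name": name, "vcpus": vcpus, "memory_gb": memory_gb, "image": image},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["vm_id"]  # assumed response shape

# Example: spin up a test server for an ERP rollout (names are illustrative).
# vm_id = create_vm("erp-test-01", vcpus=4, memory_gb=16, image="win2016-base")
```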

All of these things have been the benefits that we have reaped, and that’s just been in two months.

Gardner: Looking to the future, with a welcome change in emphasis away from IT firefighting to being more proactive, what do you see coming next?

Schokora: This is going to directly translate into our improved disaster recovery (DR) and business continuity (BC) strategies. With our older ERP system and that Swiss army knife approach, we had DR, but it was very cumbersome. If we ever had a high-impact event, it would have been a mad scramble.

This new solution allows us to promise our customers a set schedule: that everything will be up in a certain number of days or hours, and that all critical systems will be online to meet their requirements. We never really had that before. It was hopes and prayers, without concrete data behind how long we would need to get back up.

From a business continuity standpoint, the hyperconverged solution affords us the flexibility to leverage a hybrid cloud, or a secondary data center, in a way that my technicians feel, based on their testing, will be easier than our older approach.

Now, we haven’t done this yet. This is more for the future, but it is something that they are excited about, and they feel is going to directly translate into a better customer experience.

Being able to have Bitdefender provide us that single pane of glass for patching and to get critical patches out quickly also affords us the confidence in our compliance. For the latest assessment we had, we passed with flying colors.

There are some gaps we have to address, but there are significantly fewer gaps than last year. And other than some policies and procedures, the only thing we changed was Bitdefender. So that is where that value-add was.

Gardner: You have been through a really significant transition: a wholesale migration of your infrastructure, security, and encryption, a new ERP system, and a move to a better DR posture. What advice do you have for other folks who are thinking of biting off so much at once?

Smooth transition tips

Schokora: Pick your partners carefully. Engage in a test, in a POC, or a test plan. Ensure that your technicians are allowed to see, hear, touch and feel every bit of the technology in advance.

Do yourself a favor and evaluate at least three different solutions or vendors, just so that you can see what else is out there.

Also, have a good relationship with your business and its representatives. Understand the requirements, how they want to accomplish things, and how you can enable them – because, at the end of the day, we can come up with the best and most secure technical solutions, but if we don't have that business buy-in, IT will fail.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Bitdefender



The Open Group digital practitioner effort eases the people path to digital business transformation


The next BriefingsDirect panel discussion explores the creation of new guidance on how digital business professionals should approach their expanding responsibilities.

Perhaps more than at any time in the history of business and IT, those tasked with planning, implementation, and best use of digital business tools are being transformed into a new breed of  digital practitioner.

This discussion focuses on how The Open Group is ambitiously seeking to close the gap between IT education, business methods, and what it will take to truly succeed at such work over the next decades.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to explain what it will take to prepare the next generation of enterprise leadership is our panel, Venkat Nambiyur, Director of Business Transformation, Enterprise, and Cloud Architecture at Oracle; Sriram Sabesan, Consulting Partner and Digital Transformation Practice Lead at Conexiam; Michael Fulton, Associate Vice President of IT Strategy and Innovation at Nationwide and Co-Chair of The Open Group IT4IT™ Forum; and David Lounsbury, Chief Technical Officer at The Open Group. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: David, why is this the right time to be defining new guidance on how IT and digital professionals should approach their responsibilities?

Lounsbury: We had a presentation by a couple of Forrester analysts about a year ago at a San Francisco meeting of The Open Group. They identified a change in the market.


Lounsbury

We were seeing a convergence of forces around the success of Agile as a product management methodology at the edge, the increased importance of customer experience, and the fact that we have radically new and less expensive IT infrastructure and IT management approaches, which make this all happen more at the edge.

And they saw this change coming together into a new kind of person who’s ready to use digital tools to actually deliver value to their businesses. They saw this as a new part of transformation. The Open Group looked at that challenge and stepped up to define this activity, and we created the Digital Practitioners Work Group to bring together all of the necessary factors.

Those include an emphasis on customer experience, on managing digital delivery, on managing digital products, and on the ability to manage digital delivery teams together. We want to build one body of knowledge for how to actually be such a digital practitioner, and what it means for individuals to do that. So the people on this podcast have been working in that group toward that objective since then.

Gardner: Is this digital practitioner position an expansion of an earlier category, such as enterprise architect, chief information officer (CIO), or chief technology officer (CTO)? Or is it something new? Are we transitioning, or are we starting fresh?


Sabesan

Sabesan: We are in the middle of transitioning, as well as creating something fresh. Through the last few decades of computing change, we had been chasing corporate-efficiency improvement, which brought in a level of rigidity. Now, we are chasing individual productivity.

Companies will have to rethink their products. That means a change will have to happen in the thinking of the CIO, the chief financial officer (CFO), the chief marketing officer (CMO), and across the full suite of chief executives. Many companies have dabbled with the new roles of Chief Digital Officer (CDO) and Chief Data Officer (CDO), but there has been a struggle with monetization and with connecting to customers, because loyalties are not as [strong as] they used to be.

We are creating guidance to help people transition from the old, typical CIO and CFO roles into thinking about connecting more with the customer, improving revenue potential by associating closely with the productivity of customers, and then improving their own productivity levels.

Lead with experience

Nambiyur: This is about leadership. I work with Oracle Digital, and we have worked with a lot of companies focused on delivering products and services in what I call the digital market.


Nambiyur

They are all about experiences. That's a fundamental shift from addressing a specific process or a specific capability requirement in organizations. Most of the small- to medium-sized business (SMB) space is now focused on experiences, and that essentially changes the nature of the dialogue from a holistic one to, "Here's what I can do for you."

The nature of these roles has changed from a CIO, a developer, or a consumer to a digital practitioner of different interactions. So, from my perspective at Oracle, this practitioner work group becomes extremely important because now we are talking in a completely different language as the market evolves. There are different expectations in the market.

Fulton: There are a couple of key shifts going on here in the operating model that are driving the changes we’re seeing.

First and foremost is the rapid pace of change and what’s happening in organizations and the marketplace with this shift to a customer focus. Businesses require a lot more speed and agility.

Historically, businesses asked IT to provide efficiency and stability. But now we are undergoing the shift to more outcomes around speed and agility. We are seeing organizations fundamentally change their operating models, individual skills, and processes to keep up with this significant shift.

The other extremely interesting thing we're seeing is the emerging technologies that are now coming to bear. We're seeing brand-new what's-possible scenarios that affect how we provide business benefits to our customers in new and interesting ways.

We are getting to a much higher bar in the context of user experience (UX). We call that the Apple- or Amazon-ification of UX. Organizations have to keep up with that.

The technologies that have come up over the last few years, such as cloud computing, as well as near-term horizon technologies like quantum computing and 5G, are shifting us from a world of technology scarcity to a world of technology abundance.

Dave has talked quite a bit about this shift. Maybe he can add how he thinks about this shift from scarcity to abundance when it comes to technology and how that impacts a digital practitioner.

From scarcity to abundance 

Lounsbury: We all see this, right? We all see the fact that you can get a cloud account, either with a credit card or for free. There has been this explosion in the number of tools and frameworks we have to produce new software.

The old model – of having to be very careful about aligning scarce, precious IT resources with business strategies — is less important these days. The bar to roll out IT value has migrated very close to the edge of the organization. That in turn has enabled this customer focus, with “software eating the world,” and an emphasis on digital-first experiences.

The result is all of these new business skills emerging. And the people who were previously in the business realm need to understand all of these digital skills in order to live in this new world. That is a very important point.

Dana, you introduced this podcast as being on what IT people need to know. I would broaden that out quite a bit. This is about what business people need to know about digital delivery. They are going to have to get some IT on their hands to do that. Fortunately, it’s much, much easier now due to the technology abundance that Michael noted.


Fulton

Fulton: The shift we are undergoing — from a world of physical to information-based — has led to companies embedding technology into the products that they sell.

The importance of digital is, to Dave’s point, moving from an IT functional world to a world where digital practitioners are embedded into every part of the business, and into every part of the products that the vast majority of companies take to market.

This includes companies that historically have been very physical, like aircraft engines at GE or oil refineries at Shell, or any number of areas where physical products are becoming digital. These products now provide much more information to consume, and much more technology rolls into what companies sell. It creates a new world that highlights the importance of the digital practitioner.

Limitless digital possibilities 

Nambiyur: The traditional sacred cows of the old are no longer sacred cows. Nobody is willing to just take a technologist’s word that something is doable or not. Nobody is willing to take a process expert’s word that something is doable or not.

In this new world, possibility is transparent, meaning everybody thinks that everything is possible. Michael said that businesses need to have a digital practitioner in their lines of business and in many areas of work. My experience from the last four years of working here is that every participant in any organization is a digital practitioner. They are both a service provider and a service consumer simultaneously, irrespective of where they stand in an organization.

It becomes critical that everybody recognizes the impact of this digital market force, and then recognizes how their particular role has evolved or expanded to include a digital component, both in how they deliver value and in how they receive value.


That is the core of what they are accomplishing as practitioners, to allow people to define and expand their roles from the perspective of a digital practitioner. They need to ask, “What does that really mean? How do I recognize the market? How do I recognize my ecosystem? How do I evolve to deliver that?”

Sabesan: I will provide a couple of examples on how this impacts existing roles and new roles.

For example, we have intelligent refrigerators and intelligent cooking ovens and ranges that can provide insights to the manufacturer about the customers’ behaviors, which they never had before. The designers used to operate on a business-to-business (B2B) sales process, but now they have insights into the customer. They can directly get to the customer’s behaviors and can fine-tune the product accordingly.

Yet enterprises never had to build the skill sets to use that data and create innovative new variations of their product set. So that's one gap that we are seeing in the market. That's what this digital practitioner guidebook is trying to address, number one.

Number two, IT personnel now have to deal with a much wider canvas of things to bring together, and various data sets to integrate.

Because of the sensors, what was once thought of as operational technology has become part of the IT network as well. Accelerometers, temperature sensors, and pressure sensors are all now part of the same network.

A typical software developer now will have to understand the hardware behaviors happening in the field, so the mindset will have to change. The canvas is wider. And people will have to think about an integrated execution model.

That is fundamental for any digital practitioner: to think about putting [an integrated execution model] into practice and to have an architectural mindset for approaching and delivering improved experiences to the customer. At the end of the day, if you don't deliver experiences to the customer, there is no new revenue for the company. Your thinking has to pivot from operational efficiency or performance milestones to the delivery of an experience and outcome for the customer.

Gardner: It certainly looks like the digital practitioner role is applicable to large enterprises, as well as SMBs, and cuts across industries and geographies.

In putting together a set of guidelines, is there a standardization effort under way? How important is it to make digital practitioners at all these different types of organizations standardized? Or is that not the goal? Is this role instead individual, organization by organization?

Setting the standards

Nambiyur: It’s a great question. In my view, before we begin creating standards, we need the body of knowledge and to define what the practitioner is looking to do. We have to collect all of the different experiences, different viewpoints, and define the things that work. That source of experience, if you will, can eventually evolve into standards.

Do I personally think that standards are coming? I believe so. What defines that standard? It depends on the amount of experience we are able to collect. Are we able to agree on some of the best practices, and some of the standards that we need to follow, so that any person functioning in the physical ecosystem can successfully deliver repeatable outcomes?

I think this can potentially evolve into a standard, but the starting point is to first collect knowledge, collect experience from different folks, use cases, and points of use so that we are reasonably able to determine what needs to evolve further.

Gardner: What would a standard approach to be a digital practitioner look like?

Sabesan: There are certain things, such as a basic analysis approach and a decomposition and execution model, that are proven to be repeatable. Those we can put forward as standards and start documenting right now.

We are looking for some sort of standardization of the analysis, decomposition, and execution models, yet providing guidance.

However, the way we apply the analysis approach to a financial management problem versus a manufacturing problem is a little different. Those differences will have to be highlighted. So when Venkat was talking about building toward a body of knowledge, we are trying to paint the canvas. How you apply these analysis methods differently in different contexts is important.

If you think about Amazon, it is a banking company as well as a retail company as well as an IT service provider company. So, people who are operating within or delivering services within Amazon have to have multiple mindsets and multiple approaches to be presented to them so that they can be efficient in their jobs.

Right now, we are looking at some form of standardization of the analysis, decomposition, and execution models, while still providing guidance for the variances that exist in each of the domains. Can each of the domains by itself standardize? Definitely, yes, but we are miles away from achieving that.

Lounsbury: This kind of digital delivery — that customer-focused, outside-in mindset — happens at organizations of all different scales. There are things that are necessary for a successful digital delivery, that decomposition that Sriram mentioned, that might not occur in a small organization but would occur in a large organization.

And as we think about standardization of skills, we want to focus on what’s relevant for an organization at various stages of growth, engagement, and moving to a digital-first view of their markets. We still want to provide that body of knowledge Venkat mentioned that says, “As you evolve in your organization contextually, as you grow, as your organization gets to be more complex in terms of the number of teams doing the delivery, here’s what you need to know at each stage along the way.”

The focus initially is on "what" and not "how." Knowing what principles you have to have in order for your customer experiences to work, that you have to manage teams, that you have to treat your digital assets in certain ways; those things are the leading practices. But the tools you will use to do them, the actual bits and the bytes, are going to evolve very quickly. We want to make sure we are at the right level of guidance for the practitioner, and not so much into the hard-core tools and techniques that you use to do that delivery.

Organizational practices that evolve 

Fulton: One of the interesting things that Dave mentions is the way that the Digital Practitioner Body of Knowledge™ (DPBoK) is constructed. There are a couple of key things worth noting there.

One, right now we are viewing it as a perspective on the leading practices, not necessarily as standards yet, when it comes to how to be a digital practitioner. But number two, and this is a fairly unique one, the Digital Practitioner Body of Knowledge does not take a standard structure for the content. It takes an approach based on organizational evolution. I have been in the IT industry for longer than I would care to admit, and I have never seen a standard or a body of knowledge take this kind of approach.

Typically, bodies of knowledge and standards are targeted at the large enterprise, and they lay out what you need to do: all the things you need to do when you are doing everything perfectly at full scale. What the Digital Practitioner Body of Knowledge does is walk you through the organizational evolution, from starting as an individual or the founder of a startup (two people in a garage), through when you have built that startup into a team and have to start putting more capabilities around that team, up to when the team becomes a team of teams.

You are starting to get bigger and bigger, until you evolve into a full enterprise perspective, where you are a larger company that needs more of the full capabilities.

By taking this organizational maturity, evolution, and emergence approach to thinking about a leading practice, it allows an individual to learn and grow as they step through in a standard way. It helps us fit the content to you, where you are as an individual, and where your organization is in its level of maturity.

Taking this organizational maturity, evolution, and emergence approach to thinking about a leading practice allows an individual to learn and grow in a standard way.

It’s a unique approach, walking people through the content. The content is still full and comprehensive, but it’s an interesting way to help people understand how things are put together in that bigger picture. It helps people understand when you need to care about something and when you don’t.

If you are two people in a garage, you don’t need to care about enterprise architecture; you can do the enterprise architecture for your entire company in your head. You don’t need to write it down. You don’t need to do models. You don’t need to do all those things.

If you are a 500,000-person Amazon, you probably need to have some thought around the enterprise architecture for your company, because there’s no way anybody can keep that in their mind and keep that straight. You absolutely have to, as your company grows and matures, layer in additional capabilities. And this Body of Knowledge is a really good map on what to layer in and when.

Gardner: It sounds as if those taking advantage of the Body of Knowledge as digital practitioners are going to be essential at accelerating the maturity of organizations into fully digital businesses.

Given the importance of that undertaking, where do these people come from? What are some typical backgrounds and skill sets? Where do you find these folks?

Who runs the digital future?

Sabesan: You find them everywhere. Let's go through different categories of people. Take today's Millennials, for example: kids who are just out of school, or still in school, are dabbling with products and hardware. They are making things, connecting them to the Internet, and trying to create different experiences for people.

Those ideas should not be stifled; we need to expand them and help convert these ideas and solutions into operable, executable, sustainable business models. That's one side.

At the far other end, we have very mature people who are running businesses right now, but who have been presented with the challenge of a newcomer to the market threatening them and questioning their fundamental business models. So, we need to be talking to both ends, and providing different perspectives.

As Mike was talking about, what this particular Body of Knowledge provides is a way to help the new kids think about the big picture, not just one product version out. In the industry right now, between V1 and V2, you could potentially see three different competitors for your own functionality and the product that you are bringing to market. These newcomers need to think about getting ahead of the competition in a structured way.

And on the other hand, enterprises are sitting on loads of cash, but are not sure where to invest, what to exploit, or how to thwart a disruption. So that's the other end of the spectrum we need to talk to. And the tone and the messaging are completely different. We find the practitioners everywhere, but the messaging is different.

Gardner: How is this different from a cross-functional team? It sounds quite similar.

Beyond cross-functionality 

Sabesan: Even if you have a cross-functional team, the execution model is where most of them fail. Think about the challenge of what Square is trying to become: it is no longer just a payments technology company; it is a hardware company, and it is also a website development company trying to solve the problem for a small business.

So, unless you create a structure that is able to bring people from multiple business units and multiple verticals together to focus on a single customer vertical problem, the current cross-functional teams will not be able to deliver. You need a risk-mitigation mindset. You need to remove the single-team ownership mindset. Normally corporations hold one person accountable for managing the spend; now we need one person accountable for managing experiences and outcomes. Unless you bring that shift together, the traditional cross-functional teams are not going to work in this new world.

Nambiyur: I agree with Sriram, and I have a perspective from where we are building our organization at Oracle, so that’s a good example.

Now, obviously, we have a huge program where we hire folks right out of college. They come in with a great understanding of — and they represent — this digital world. They represent the market forces. They are the folks who live it every single day. They have a very good understanding of what the different technologies bring to the table.

We have a huge program where we hire right out of college. They represent the digital world, the market forces, and they are living it every day.

But one key thing that they do, and I find it more and more often, is appreciate the context in which they are operating. Meaning, if I join Oracle, I need to understand what Oracle as a company is trying to accomplish at the end of the day, right? Adding that perspective cannot just be done by having a cross-functional team, because everybody comes in and tries to stay in their comfort zone. If they bring in an experienced enterprise architect, the tendency is to stay in the comfort zone of models and structures, and how they have been doing things.

The way that we develop digital practitioners is to give them a structure that tells them to add a particular perspective. Just as with the Millennials, you need to understand what the company is trying to accomplish so that you don't let your imagination run all over the place. Likewise, for a mature enterprise architect, it is, "Hey, you know what? You need to incorporate these changes so that your experience stays continuously relevant."

I even look at some of the folks who are non-technologists, folks who are trying to understand why they should work with IT and why they need an enterprise architect. To help them answer these questions, we show them what value they can bring in light of the market forces they face.

That’s the key way. Cross-functional teams work in certain conditions, but we have to set the change, as in organizational change and organizational mindset change, at every level. That allows folks to change from a developer to a digital practitioner, from an enterprise architect to a digital practitioner, from a CFO to a digital practitioner.

That’s really the huge value that the Body of Knowledge is going to bring to the table.

Fulton: It’s important to understand that today it’s not acceptable for business leaders or business members in an organization to simply write off technology and say that it’s for the IT people to take care of.

Technology is now embedded throughout everything that we do in our work lives. We all need to understand technology. We all need to be able to understand the new ways of working that that technology brings. We all need to understand these new opportunities for us to move more quickly and to react to customer wants and needs in new and exciting ways; ways that are going to add distinct value.

To me the exciting piece about this is that it's not just IT folks who have to change into digital practitioners. It's business folks across every single organization who also have to change, bringing both sides closer together.

IT everywhere, all the time, for everyone

Lounsbury: Yes, that’s a really important point, because this word “digital” gets stuck to everything these days. You might call it digital washing, right?

In fact, you put your finger on the fundamental transformation. When an organization realizes that it's going to interact with its customers through either of the digital twins, digital access to physical products and services or truly digital delivery, then it has pieces of information, or data, that it can present to the customer.

The customer's interactions through that data, and the customer's experience of it, then bring value to the business. A first focus, then, is to shift from the old model of, "Well, we will figure out what our business is, and then we will throw some requirements down the IT channel, and sooner or later it will emerge." As we have said, that's not going to cut it anymore.

You need to have that ability to deliver through digital means right at the edge with your product decisions.

Gardner: David, you mentioned earlier the concept of an abundance of technology. And, Michael, you mentioned the gorilla in the room, which is the new tools around artificial intelligence (AI), machine learning (ML), and more data-driven analysis.

To become savvier about how to take advantage of the abundance of technology and analytics requires a cultural and organizational shift that permeates the entire organization.

To what degree does a digital practitioner have to be responsible for changing the culture and character of their organization?

Lounsbury: I want to quote something I heard at the most recent Center for Information Systems Research conference at the MIT Sloan School, from an article published by Jeanne Ross, who said that the time for digitization, for getting your digital processes in place and getting your data digitized, has passed. What's important now is that the people who understand the ability to use digital to deliver value actually begin acting as the agents of change in an organization.

To me, all of what Sriram said about strategy (helping your organization realize what can happen, giving it leading practices and a Body of Knowledge as a framework for making decisions, lowering the barrier between the historical technologists and business people, and seeing them as an integrated team) is the fundamental transition that we need to be leading people toward in their organizations.

Sabesan: Earlier we said that the mindset has been, “This is some other team’s responsibility. We will wait for them to do their thing, and we will start from where they left off.”

Now, with the latest technology, we are able to permeate across organizational boundaries. The person to bring out that cultural change should simply ask the question, “Why should I wait for you? If you are not looking out for me, then I will take over, complete the job, and then let you manage and run with it.”

We want people to be able to question the status quo and show a sample of what could be a better way. Those will drive the cultural shifts.

There are two sides of the equation. We have the DevOps model of, "I build, and I own." The other one is, "I build it for you, you own it, and keep pace with me." So basically we want people to be able to question the status quo and show a sample of what could be a better way. Those will drive the cultural shifts and push leaders beyond the comfort zone that Venkat was talking about, to accept different ways of working: Show and then lead.

Talent, all ages, needed for cultural change 

Nambiyur: I can give a great example. There is nothing more effective than watching your own company go through that, and it builds on the point about bringing Millennials into the organization. There is an organization we call the Solutions Hub at Oracle that is entirely staffed by college-plus-two folks. And they are working day in and day out on realizing the art of what's possible with the technology. In a huge way, this complements the work of senior resources on both the pre-sales and the product sides. This has had a cumulative, multiplier effect on how Oracle is able to present what it can do for its customers.

We are able to see the native digital-generation folks understanding their roles as digital practitioners and bringing that strength into play. And that not only seamlessly complements the existing work, it elevates how the senior folks who have been in the business for 10 or 20 years are able to function. As an organization, we are now able to deliver a credible solution to the market more effectively, especially as Oracle is moving to cloud.

That's a great example of how each player, whether they are a college-plus-two or a 20-year person, can be a huge part of changing the organizational culture. The digital practitioner is fundamental, and this is a great example of how an organization has accomplished that.

Fulton: This is hard work, right? Changing the culture of any organization is hard work. That's why guidance like what we are putting together with the Digital Practitioner Body of Knowledge is invaluable. It gives us as individuals a starting point to work from to lead the change. And it gives us a place to go back to and continue to learn and grow ourselves. We can point our peers to it as we try to change the culture of an organization.

It's one of the reasons I like what's being put together with the Digital Practitioner Body of Knowledge and its use in enterprises like Nationwide Insurance. It's a really good tool to help us spend our time focused on what's most important. In Nationwide's case, that means being on the side of the members that we serve, but also focusing on how we transform the culture to deliver against those business objectives more quickly and with agility.

Lounsbury: Culture change takes time. One thing everybody should do when you think about your digital practitioners is to go look at any app store. See the number of programming tutorials targeted at grade-school kids. Think about how you are going to be able to effectively manage that incoming generation of digitally savvy people. The organizations that can do that, that can manage that workforce effectively, are going to be the ones that succeed going forward.

Gardner: What stage within the Body of Knowledge process are we at? What and how should people be thinking about contributing? Is there a timeline, and milestones, for what comes next as you move toward your definitions and guidelines for being a digital practitioner?

Contributions welcome

Lounsbury: This group has been tremendously productive. The Digital Practitioner Body of Knowledge is, in fact, out and available for anyone to download at The Open Group Bookstore. If you look for the Digital Practitioner Body of Knowledge, publication S185, you will find it. We are very open to public comments on that snapshot as we finish the Body of Knowledge.

Of course, the best way to contribute to any activity at The Open Group is come down and join us. If you go to www.opengroup.org, you will see ways to do that.

Gardner: What comes next, David, in the maturation of this digital practitioner effort, Body of Knowledge and then what?

Lounsbury: Long-term, we have already begun discussing how we work with academia to bring this into curricula to train people who are entering the workforce. We are also thinking, in these early days, about how we identify Digital Practitioners with some sort of certification, badging, or something similar. Those will be things we discuss in 2019.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: The Open Group.


Better management of multicloud IaaS proves accelerant to developer productivity for European gaming leader Magellan Robotech

The next BriefingsDirect Voice of the Customer use case discussion explores how a European gaming company adopted new cloud management and governance capabilities with developer productivity as the prime motivator.

We’ll now learn how Magellan Robotech puts an emphasis on cloud management and control as a means to best exploit hybrid cloud services to rapidly bring desired tools and app building resources to its developers.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

Here to reveal the journey to responsible cloud adoption with impressive payoffs is Graham Banner, Head of IT Operations at Magellan Robotech in Liverpool, England, and Raj Mistry, Go-to-Market Lead for OneSphere at Hewlett Packard Enterprise (HPE), based in Manchester, England. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What are the drivers in your organization for attaining faster cloud adoption, and how do you keep that from spinning out of control, Graham?

Graham Banner

Banner: That’s a great question. It’s been a challenge for us. One of the main problems we have as a business is the aggressive marketplace in Europe. It’s essential that we deliver services rapidly. Now some of our competitors might be able to deliver something in a month. We need to undercut them because the competition is so fierce.

Going from on premises into virtualization and on-premises cloud was our first step, but it wasn’t enough. We needed to do more.

Gardner: Speed is essential, but if you move too fast there can be risks. What are some of the risks that you try to avoid?

Banner: We want to avoid shadow IT. We’ve adopted capabilities before where the infrastructure team wasn’t able to provision services that supported our developers fast enough. We learned that the developers were then doing their own thing: There was no governance, there was no control over what they were doing, and that was a risk for us.

Gardner: Given that speed is essential, how do you bring quality control issues to bear when faced with hybrid cloud activities?

Banner: That's been a challenge for us as well. Traditionally there hasn't been a central pane of management across multiple cloud interfaces to ensure that the correct policies are being applied to those services.

We needed a product that ensures we deliver quality services to all of our customers across Europe.

Gardner: Raj, is this developer focus for cloud adoption maturity a main driver as HPE OneSphere is being evaluated?

Raj Mistry

Mistry: Yes, absolutely. The reason OneSphere is so good for the developers is we enable them to use the tools and frameworks they are accustomed to — but under the safety and governance of IT operations. They can deploy with speed and have safe, secure access to the resources they need when they need them.

Gardner: Developers probably want self-service more than anyone. Is it possible to give them self-service but using tools that keep them under a governance model?

Mistry: Some developers like the self-service element, and some developers might use APIs. That's the beauty of HPE OneSphere: it addresses both of those requirements for the developers. So they can use native tools or the self-service capabilities.

Gardner: We also have to consider the IT operators. When we think about where those new applications end up, it could be on-premises or in some number of clouds, even multiple clouds.

Learn More About

Simplified Hybrid Cloud Management

Mistry: HPE OneSphere is very much application-centric, with the capability to manage the right workload, in the right cloud, at the right cost. Through the ability to understand what the workload is doing — and based on the data and insights we collect — we can then make informed decisions on what best to do next.

Gardner: Graham, you are on the operations’ side and you have to keep your developers happy. What is it about HPE OneSphere that’s been beneficial for both?

Effective feedback 

Banner: It provides great insights and reporting features into our estate. When we deployed it, the feedback was almost instantaneous. We could see where our environments were, we could see the workloads, we could see the costs, and this is something that we did not have before. We didn't have this visibility function.

And this was a very simple install procedure. Once it was up and running, everything rolled out smoothly in a matter of hours. We have never seen a product do this before.

Gardner: Has having this management and monitoring capability given you the confidence to adopt multicloud in ways that you may not have been willing to before?

Banner: Yes, absolutely. One of the challenges we faced before was that we were traditionally on-premises for the entire estate. The developers had wanted to use and leverage functions that were available only in public clouds.

One of the challenges we faced before was that we were traditionally on-premises for the entire estate. But the developers wanted to use and leverage functions only available in the public clouds.

But we have a small operations team. We were wary about spending too much on training our staff across the multiple public cloud platforms. HPE OneSphere enabled us to onboard multiple clouds in a very smooth way. And people could use it with very little training. The user interface (UI) was fantastic to use, and very intuitive. Line-of-business people, stack managers, compliance staff, and directors could all go on and run reports straight away. It ticked all the boxes that we needed it to.

Gardner: Getting the trains to run on time is important, but the cost of the trip is also important. Have you been able to gain better control over your own destiny when it comes to the comparative costs across these different cloud providers?

Banner: One of the great features that OneSphere has is the capability to input values about how much your on-premise resources cost. Now, we have had OPEX and CAPEX models for our spend, but we didn’t have real-time feedback on what the different environments we are using cost across our shared infrastructures.

Getting this information back from HPE OneSphere was essential for us. We can now look at some products and say, “You know what? This is actually costing x amount of money. If we move it onto another platform, or to another service provider, we’d actually save costs.” These are the kind of insights that are generated now that we did not have before.
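
To make the kind of comparison Banner describes concrete, here is a minimal sketch of how a workload's monthly cost might be tallied across environments. The hourly rates and the workload profile are invented for illustration only; they are not HPE OneSphere outputs or Magellan Robotech figures.

```python
# Minimal sketch: compare a workload's estimated monthly cost across environments.
# All hourly rates and the workload profile are hypothetical, for illustration only;
# these are not HPE OneSphere outputs or Magellan Robotech figures.

HOURS_PER_MONTH = 730  # average hours in a calendar month

# Hypothetical hourly rates per vCPU and per GB of RAM for each environment
rates = {
    "on-premises":    {"vcpu": 0.045, "ram_gb": 0.006},
    "public-cloud-a": {"vcpu": 0.052, "ram_gb": 0.007},
    "public-cloud-b": {"vcpu": 0.048, "ram_gb": 0.009},
}

# A hypothetical workload profile
workload = {"name": "odds-engine", "vcpus": 8, "ram_gb": 32}

def monthly_cost(rate, wl):
    """Estimate a month of cost from vCPU and RAM consumption at the given rates."""
    hourly = wl["vcpus"] * rate["vcpu"] + wl["ram_gb"] * rate["ram_gb"]
    return hourly * HOURS_PER_MONTH

if __name__ == "__main__":
    costs = {env: monthly_cost(rate, workload) for env, rate in rates.items()}
    for env, cost in sorted(costs.items(), key=lambda kv: kv[1]):
        print(f"{workload['name']} on {env}: ~${cost:,.2f}/month")
    cheapest = min(costs, key=costs.get)
    print(f"Cheapest placement under these assumed rates: {cheapest}")
```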

Gardner: I think that economics trumps technology, because ultimately, it’s the people paying the bills who have the final say. If economics trumps technology, are you demonstrating a return on investment (ROI) with HPE OneSphere?

Mistry: One of the aims for OneSphere is the “what-if” analysis. If I have a cloud workload, what are its characteristics, its requirements, and where should I best place it? What’s best for that actual thing? And then having the capability to determine which hyperscale cloud provider — or even the private cloud — has the correct set of features for that application. So that will come in the not too distant future.

Gardner: Tell us more about Magellan Robotech and why application quality, speed, and operational integrity are so important.

Game On

Banner: We operate across Europe. We offer virtual gaming, sports, terminals, and casino products, and we have integration to other providers, which is unique for a bookmaking company. A lot of gaming providers operate just retail platforms, or maybe have an online presence. We do everything.

Because we compete with so many others, it’s essential that our applications are stable, scalable, and have zero downtime. If we don’t meet these requirements, we’re not going to be able to compete and our customers are going to move elsewhere.

As a service provider we sell all of these products to other vendors. We have to make sure that our customers are pleasing their own customers. We want to make sure that our customers have these value-adds as well. And this is where HPE OneSphere comes into play for us.

Learn More About

Simplified Hybrid Cloud Management

Gardner: As the third largest gaming enterprise in Europe, you're in multiple markets, but that means multiple jurisdictions, with multiple laws about privacy. Tell us about your security and compliance needs and how HPE OneSphere helps manage complexity across these different jurisdictions.

Banner: We deal with several regulatory bodies across Europe. Nearly all of them have different compliance standards that have to be applied to products. It’s unreasonable for us to expect the developers to know which standards have to be applied.

The current process is manual. We have to submit applications and spin-up machines on-premises. They have to be audited by a third-party, and by a government body from each country. This process can take months. It’s a long, arduous process for us to just release a product.

We needed a tool that provides us an overview of what is available out there, and what policies need to be applied to all of our services. We need to know how long it’s going to take to solve the problems before we can release services.

With HPE OneSphere, we are gaining great insights into what’s coming with regards to better managing compliance and policies. There will be governance panes, and the capability for line-of-business staff members to come in and assign policies to various different cloud providers.

And we can take this information to the developers and they can decide, "You know what? For us to go live in this particular country, we have to assign these various policies, and so we are going to need to change our code." And this means that our time-to-market and time-to-value are going to be much better.

Gardner: Raj, how important is this capability to go into different jurisdictions? I know there is another part of HPE called Cloud28+ and they are getting into different discrete markets and working with an ecosystem of providers. How much of a requirement is it to deal with multiple jurisdictions?

Guided compliance, vigilance

Mistry: It's very complex. One of the evolving challenges that customers face as they adopt a hybrid or a multicloud strategy is how to maintain their risk posture and compliance. So the intellectual property (IP) that we have built into OneSphere, which has been available from August 2018 onward, allows customers to look at the typical frameworks: FIPS, HIPAA, GDPR, FCA, etc.

They will be able to understand, not just from a process perspective, but from a coding perspective, what needs to occur. Guidelines are provided to the developers. Applications can be deployed based on those, and then we will continually monitor the application.

If there is a change in the framework that they need to comply with, the line-of-business teams and the IT operations teams will get a note from the system saying, “Something has happened here, and if you are okay, please continue.” Or, “There is a risk, you have been made aware of it and now you need to take some action to resolve it.” And that’s really key. I don’t think anybody else in the market can do that.
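
Mistry's description of continuous compliance monitoring amounts to a simple control loop: compare what an application implements against what each framework currently requires, and alert the responsible teams on any gap. The sketch below is a generic, hypothetical illustration of that loop; the framework rules, application name, and controls are assumptions, and this is not HPE OneSphere code.

```python
# Hypothetical sketch of a compliance-drift check; the frameworks, controls, and
# application below are assumptions for illustration, not HPE OneSphere code.

# Required controls per framework; in practice these change as regulations evolve.
framework_requirements = {
    "GDPR": {"encrypt_at_rest", "data_residency_eu", "breach_notification_72h"},
    "HIPAA": {"encrypt_at_rest", "audit_logging", "access_controls"},
}

# Controls each application currently implements (hypothetical inventory).
app_controls = {
    "player-account-service": {"encrypt_at_rest", "audit_logging", "data_residency_eu"},
}

def check_compliance(app, frameworks):
    """Return the missing controls per framework for the given application."""
    gaps = {}
    for fw in frameworks:
        missing = framework_requirements[fw] - app_controls[app]
        if missing:
            gaps[fw] = sorted(missing)
    return gaps

def notify(app, gaps):
    """Stand-in for alerting line-of-business and IT operations teams."""
    if not gaps:
        print(f"{app}: no gaps against the monitored frameworks.")
        return
    for fw, missing in gaps.items():
        print(f"ALERT {app}: {fw} requirements not met; missing controls: {missing}")

if __name__ == "__main__":
    gaps = check_compliance("player-account-service", ["GDPR", "HIPAA"])
    notify("player-account-service", gaps)
```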

Gardner: Graham, it sounds like you are going to be moving to wider adoption for HPE OneSphere. Is it too soon to get a sense of some of the paybacks, some of the metrics of success?

Guidelines are provided to the developers. Applications can only be deployed based on those, and we will continuously monitor the applications in production.

Banner: Fortunately, during the proof of concept we managed to get some metrics back. We had set some guidelines, and some aims for us to achieve during this process. I can give you an example. Traditionally we had a very old-fashioned ticket system for developers and our other customers.

They turned in a ticket, and they could wait for up to five days for that service to become available, so the developer or the customer could begin using that particular service.

With HPE OneSphere, and the self-service function which we provided, we found out that the time was no longer measured in days, it was no longer hours — it was minutes. This enabled the developers to quickly spin up machines. They can do iterative testing and get their products live, functioning, and bug-free faster. It frees up operational time so that we can concentrate on upgrading our platform and focus on various other projects.

We have already seen massive value in this product. When we spoke to the line of business about this, they have been pleased. They have already seen the benefits.

Gardner: Raj, what gets the most traction in the market? What is it that people perk up to when it comes to what OneSphere can do?

The data-insight advantage

Mistry: It’s the cost analytics and governance element. Deployment is a thing of the past. But once you have deployed it, how do you know what’s going on? How do you know what to do next? That’s the challenge we are trying to resolve. And that’s what’s resonating well with customers. It’s about, “Let me give you insights. Let’s get you the data so you can do something about it and take action.” That’s the biggest thing about it.

Learn More About

Simplified Hybrid Cloud Management

Gardner: What is it about the combination of product and support services and methodologies that are also helping to bring this to market?

Mistry: It’s about the guidance on application transformation. As people go digital, writing the new cloud-native stuff is easy. But like with Graham’s organization, and many organizations we talk to, they have a cloud-hosted, cloud-aware application that they need to be able to transform to make it more digitally friendly.

From a services perspective, we can guide customers in terms of what they should do and how they should introduce microservices and more cloud-native ways of working. Beyond that, it’s helping with cultural stuff. So, the beginnings of Agile development, leading to DevOps in the not too distant future.

The other side of it is the capability to build minimum viable clouds, both in the private and the public clouds with the IP that we have. So, the cloud thing can be had, but our effort is really to make it very easy.

Gardner: That strikes me as a huge next chapter, the minimum viable cloud. Is that attractive to you at Magellan Robotech?

Banner: Absolutely, yes. From an on-premise perspective, we want to go forward into the public cloud. We know we can leverage its services. But one thing we are very wary of is the cost. Traditionally, it has been expensive. Things have changed. We want to make sure we are not provisioning services that aren’t being used. Having these metrics is going to allow us to make the right choices in the future.

Gardner: Let’s look into the crystal ball. Going to the future, Graham, as a consumer, what would you like to see in HPE OneSphere next?

Public core and private cloud together

Banner: We already have the single pane of glass with OneSphere, so we can look at all our different clouds at once. We don't have to go into multiple consoles and spend time learning and training on how to get these reports from three or four different providers. So, we have the core; the core is there. We know that the public cloud and private cloud have different functionalities.

On-premises infrastructure can do certain things extremely well; it can handle all our current workloads. Public cloud can do this, too, and there are loads of additional features available. What we would like to see is a transition where some of these core functionalities of the public cloud are taken, managed, and applied to our private cloud as well.

There are compliance reasons why we can’t move all of our products into the public cloud. But by merging them together, you get a much more agnostic point of view of where are you going to best deploy your services and what features you should have.

Gardner: Ultimately, it may even be invisible to you whether it's in a public or private cloud architecture. You want your requirements met, you want your compliance and security issues addressed, and you let the automation of the underlying tool take over.

Learn More About

Simplified Hybrid Cloud Management

Banner: Absolutely, yes. We would like to abstract away the location completely from our developers and our application guys. So, when they deploy, it gets put in the right place automatically, it has the right policies assigned to it. It’s in the right location. It can provide the services needed. It can scale. It can auto-bounce — all of this stuff. The end-user, our applications team, they won’t need to know which cloud it’s in. They just want to be able to use it and use the best available services.

Gardner: Raj, you just heard what the market is asking for. What do you see next for providers of cloud monitoring and management capabilities?

Mistry: Our focus will be around customizable cloud reporting, so the capability to report back on specific things from across all of the providers. Moving forward, we will have trending capabilities, the what-if forecasting capability from an analytics and insights perspective. Then we will build more on the compliance and governance. That’s where we are heading in the not-too-distant future. If our own developers do well, we will have that by the end of the year.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


How Norway’s Fatland beat back ransomware thanks to a rapid backup and recovery data protection stack

The next BriefingsDirect strategic storage and business continuity case study discussion explores how Norway's venerable meat processing business, Fatland, relied on rapid backup and recovery solutions to successfully defend against a nasty ransomware attack.

The comprehensive backup and recovery stack allowed Fatland's production processing systems to snap back into use after only a few hours, but the value of intelligent and increasingly hybrid storage approaches goes much further to assure the ongoing integrity of both systems and business outcomes.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to explain how vertically integrated IT infrastructure and mirrored data strategies can prevent data loss and business downtime are Terje Wester, the CEO at Fatland, based in Norway, and Patrick Osborne, Vice President and General Manager of Big Data and Secondary Storage at Hewlett Packard Enterprise (HPE). The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Terje, getting all of your systems back up in a few hours after an aggressive ransomware attack in 2017 probably wasn’t what first drove you to have a comprehensive backup and recovery capability. What were the early drivers that led you to put in a more modern approach to data lifecycle management?

Terje Wester

Wester

Wester: First of all, we have HPE end-to-end at Fatland. We have four production sites. At one production site we have our servers. We are running a meat business, doing everything from slaughtering to processing and packing. We deal with the farmers; we deal with the end customers. It’s really important to have good IT systems, also safe systems.

When we last invested in these HPE systems, we wanted something that was in front of the line, which was safe, because the uptime in the company is so important. Our IT people had the freedom to choose what they thought was the best solution for us. And HPE was the answer. We tested that really hard on this ransomware episode we had in September.

Gardner: Patrick, are you finding in the marketplace that people have primary reasons for getting into a comprehensive data protection mode? It can become a gift that keeps giving.

Osborne: A lot of our customers are now focusing on security. It’s definitely top of mind. What we are trying to provide is more of an integrated approach, so it’s not a secondary or an afterthought that you bolt on.

Patrick Osborne

Osborne

Whether it's our server products, with silicon root of trust, or our storage products, with capabilities like what we have done for Fatland with Recovery Manager Central (RMC), or our integrated offerings such as our hyper-converged infrastructure (HCI) product line, the theme is the same. What we are trying to weave through this is that data protection and availability are an endemic piece of the architecture. You get it on day one when you move to a modernized architecture, as opposed to running into a ransomware or an availability issue and then having to re-architect after the fact.

What we are trying to do with a number of customers is, from day one, when you renew your infrastructure, it has all of this availability and security built in. That’s one of the biggest things that we see, that’s helpful for customers these days.

Learn How HPE BladeSystem

Speeds Delivery of Business Outcomes 

Gardner: Data and security integration are, in fact, part of the architecture. Security is not a separate category or a chunk that you bolt on later.

Osborne: Exactly.

Gardner: Terje, tell us about the .NM4 crypto virus. In 2017, this hit a lot of people. Some were out for days. What happened when it hit your organization?

Rapid response, recovery

Wester: These people were trying to attack us. They started to visit our servers and got in on a Thursday. They worked until that Friday night and found an opening. This was something that happened in the middle of the night and they closed down the servers. They put in this ransomware, so that closed down everything.

On Saturday, we had no production. So, Saturday and Sunday for us were the days to work on and solve the problem. We contacted HPE for consultants, to determine what to do. They came over from Oslo on Sunday, and from Sunday afternoon to early Monday morning we recovered everything.

On Monday morning we started up, I think, only about 30 minutes behind schedule and the business was running. That was extremely important for us. We have live animals coming in on Sunday to be slaughtered on Monday. We have rapid processing. Christmas was around the corner and everything that we produce is important every day. The quick recovery was really important for us.

Gardner: You are an older, family-run organization, dating back to 1892. So, you have a very strong brand to protect.

On Monday morning we started up only 30 minutes behind schedule and the business was running. That was extremely important to us. The quick recovery was really important.

Wester: That’s right, yes.

Gardner: You don’t want to erode that brand. People want to continue to hold the trust they have had in you for 125 years.

Wester: They do. The farmers have been calling us for slaughtering of their cattle for generations. We have the typical supermarket chains in Norway as our main customers. We have a big daily turnover, especially in September through October, when all the lambs are coming in. It’s just a busy period and everybody trusts that we should work for them every day, and that’s our goal, too.

Gardner: Patrick, what was it about the HPE approach, the Recovery Manager Central and StoreOnce, that prevented the ransomware attack, in this case, from causing the significant downtime that we saw in other organizations?

Osborne: One of the important things to focus on is that, in the case of Fatland, it's not so much the money they would have had to pay as ransom, it's the downtime. That is key.

Using our architecture, you can take application or data-specific point-in-time copies of the data that’s critical — either mission-critical or business-critical — at a very granular level. You can orchestrate that, and then send that all off to a secondary system. That way you have an additional layer of security.

What we announced in November 2017 at Discover in Madrid is the ability to go even further beyond that and send an additional copy to the cloud. At all layers of the infrastructure, you will be able to encrypt that data. We designed the system around not so much backup — but to be able to restore quickly.

The goal is to provide a very aggressive recovery time objective (RTO) and a very granular recovery point objective (RPO). So, when a team like Terje's at Fatland recognizes that they have a breach, they can mitigate that, essentially staunch the issue, and rapidly recover from a well-known set of data that wasn't compromised.

For us it's all about architecting to rapidly recover, making that RTO as short as possible. And we see a lot of older architectures where you have a primary storage solution that has all of your data on it, and then not a really good backup infrastructure.

What turned into two days of disruption for Fatland could have been many more days, if not weeks, with older infrastructure. We are really focused on minimizing RTO.

Learn How HPE BladeSystem

Speeds Delivery of Business Outcomes 

Gardner: In the case of the cryptovirus, did the virus not encrypt the data at all, or was it encrypted but you were able to snap back to the encryption-free copies of the data fast?

Osborne: When we do this at the storage layer, we are able to take copies of that data and then move it off to a secondary system, or even a tertiary system. You then have a well-known copy of that data before it’s been encrypted. You are able to roll back to a point in time in your infrastructure before that data has been compromised, and then we can actually go a step further.

Some of the techniques allow you to have encryption on your primary storage. That usually helps if you are changing disk drives and whatnot. It’s from a security perspective. Then we are actually able to encrypt again at the data level on secondary storage. In that case, you have a secure piece of the infrastructure with data that’s already been encrypted at a well-known point in time, and you are able to recover. That really helps out a lot.
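
The recovery logic Osborne outlines comes down to finding the most recent protected copy taken before the compromise and measuring how much change you would give up by rolling back to it. Here is a minimal, generic sketch of that selection; the snapshot schedule and breach time are made up for illustration, and this is not HPE Recovery Manager Central or StoreOnce code.

```python
# Minimal sketch: choose the newest snapshot taken before a known compromise time
# and report the effective recovery point. All timestamps are made up; this is not
# HPE Recovery Manager Central or StoreOnce code.

from datetime import datetime

# Hypothetical point-in-time copies held on secondary storage (every six hours).
snapshots = [datetime(2017, 9, 1, h) for h in (0, 6, 12, 18)] + \
            [datetime(2017, 9, 2, h) for h in (0, 6, 12, 18)]

compromise_detected_at = datetime(2017, 9, 2, 14, 30)  # hypothetical breach time

def last_known_good(snaps, breach):
    """Return the newest snapshot taken strictly before the breach time."""
    candidates = [s for s in snaps if s < breach]
    if not candidates:
        raise RuntimeError("No clean recovery point available")
    return max(candidates)

if __name__ == "__main__":
    restore_point = last_known_good(snapshots, compromise_detected_at)
    loss_window = compromise_detected_at - restore_point
    print(f"Restore from the snapshot taken at {restore_point}")
    print(f"Effective recovery point: {loss_window} of changes to replay or accept as lost")
```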

Gardner: So, their encryption couldn’t get past your encryption?

Osborne: Yes.

Gardner: The other nice thing about this rapid recovery approach is that it doesn’t have to be a ransomware or a virus or even a security issue. It could be a natural disaster; it could be some human error. What’s important is the business continuity.

Now that you have been through the ransomware attack, how is your confidence in always being up and running and staying in business in general, Terje?

Business continuity bonus

Wester: We had been discussing this quite a lot before this ransomware issue. We established better backup systems, but now we are looking into extending them even more, to have another system that can run from the minute the main servers are down. We have a robotized system picking out meat for the supermarket chains 24×7, and when their main server stops, something should be able to take over and run the business. So, within a very short time we will also have that solution in place, with good help from HPE.

Gardner: Patrick, not that long ago the technology to do this may have been there, but the costs were prohibitive. The network and latency issues were prohibitive. What's happened in the past several years that allows you to go to a company such as Fatland and basically get them close to 99.9999 percent availability across the board?

Osborne: In the past, you had customers with a preferred vendor for servers, a preferred vendor for networking, and another preferred vendor for storage. That azimuth is changing to a vertically oriented stack. So, when Terje has a set of applications or business needs, we are able to, as a portfolio company, bring together that whole stack.

In the past, the customer was the integrator, and the cost was in bringing many, many different disparate solutions together. They would act as the integrator. That was probably the largest cost back in the day.

We’re now bringing together something that’s vertically oriented and has security and data protection availability throughout the stack. At the end of the day it’s a business enabler for a business of any size.

Now, we're bringing together something that's more vertically oriented and that has security and data protection availability throughout the stack. We're making these techniques and levels of availability accessible to customers of any size, where IT is not really their core competency. At the end of the day, it's a business enabler, right?

Wester: Right, absolutely.

Osborne: The second piece, from a networking perspective, is that very large, low-cost bandwidth has definitely changed the game in terms of being able to move and replicate data from on-premises, and even off-premises to the cloud. That's certainly been an enabler as well.

Gardner: We are seeing mirroring of entire data centers in amazing amounts of time.

Also, you have an integrated stack approach, with HPE focused on security engineered in, across the board, from the silicon up. What are some of the newer technologies that we can expect to see that further increase availability, lower risk, and lower cost?

Shared signature knowledge 

Osborne: Terje’s team had cryptovirus on-premise, a breach with a number of different signatures. We are now focusing on artificial intelligence (AI) for the data center. So, taking the human factor out of it to help recognize the problems faster.

So, if they have a breach, and that has certain signatures found in the infrastructure, we can take that and apply that knowledge to other customers. And likewise, they may have some things that happened to them that can benefit Fatland as well.

Using machine learning techniques, we have a number of things that we have brought to the table for what we call predictive analytics in the data center. So HPE Aruba on the networking side has a number of capabilities, too.

We are bringing InfoSight, which is our predictive analytics for storage, and extending that to other parts of the infrastructure. So, servers, networking, and storage. You can start to see signatures in more places.

The General Data Protection Regulation (GDPR) has been implemented, and there are some high fines. You have to report within 72 hours. So, anything you can do to take the human factor out of this, from a technology perspective is a win for everyone, and we have a big investment in that.

Learn How HPE BladeSystem

Speeds Delivery of Business Outcomes 

Gardner: And that gets back to the idea that strategic data protection is the gift that keeps giving. As more systems are integrated, more data analysis can be done, signature patterns can be shared with other organizations, and you can ultimately become predictive rather than reactive.

Terje, the level of confidence that you have seems to be high, it’s perhaps going to get higher. What other recommendations might you have for other organizations that are thinking about this? Did it turn out to be a good investment, and what sort of precautions might you have for others if they haven’t done this already?

Communication is key

Wester: Data itself is not part of our core business. But communication is. It is extremely important for us to communicate internally and externally all the time.

In every organization, IT people need to talk to the management and the board about these safety issues. I think that should be brought to the table before these problems come up.

We have good systems, HPE end-to-end. Of course, one thing that is important is to have modern technology in place, so we could have a quick recovery, and that was a good thing.

Most important for us was that the IT management had the trust from us — the management and the board — to invest in what they thought was the best solution. We still saw some operational breaches and we need to do better. This is a big focus with us. Every organization should invest time to look into the infrastructure to see what to do to make it safer for quick recovery, which is important for any company. Bring it on to the table for the board, for the management, for a really good discussion — it’s worth that.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


How hybrid cloud deployments gain traction via Equinix datacenter adjacency coupled with the Cloud28+ ecosystem

The next BriefingsDirect hybrid cloud advancement interview explores how the triumvirate of a global data center hosting company, a hybrid cloud platform provider, and a global cloud community are solving some of the most vexing problems for bringing high-performance clouds to more regions around the globe.

We will now explore how Equinix, Microsoft Azure Stack, and Hewlett Packard Enterprise (HPE)’s Cloud28+ are helping managed service providers (MSPs) and businesses alike obtain world-class hybrid cloud services.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to explain more about new breeds of hybrid cloud solutions are David Anderson, Global Alliance Director at Equinix for its Microsoft alliance, and Xavier Poisson, Vice-President of Worldwide Services Providers Business and Cloud28+ at HPE. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: There seems to be a paradox when it comes to hybrid cloud — that it works best in close proximity technologically yet has the most business payoff when you distribute it far and wide. So how are Equinix, Microsoft, and HPE together helping to solve this paradox of proximity and distribution?

Anderson

Anderson: That’s a great question. You are right that hybrid cloud does tend to work better when there is proximity between the hybrid installation and the actual public cloud you are connecting to. That proximity can effectively be extended with what we call interconnectedness.

Interconnectedness is really business-to-business (B2B) and business-to-cloud private network Ethernet connections. Equinix is positioned with more than 200 data centers worldwide, the most interconnections by far around the world. Every network provider is in our data centers. We also work with cloud providers like Microsoft. The Equinix Cloud Exchange connects businesses and enterprises to those clouds through our Equinix Cloud Exchange Fabric. It’s a simple one-port virtual connection, using software-defined networking (SDN), up to the public clouds.

That provides low-latency and high-performance connections — up to 10 Gigabit network links. So you can now run a hybrid application and it’s performing as if it’s sitting in your corporate data center not far away.
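The Cloud Exchange Fabric described here is software-defined and API-driven, though the exact interfaces aren’t covered in this discussion. Purely as an illustration, ordering a virtual connection over a single physical port might look something like the sketch below — the endpoint URL, payload fields, and token are hypothetical placeholders, not Equinix’s documented API.

```python
# Hypothetical sketch of provisioning a virtual connection over one
# physical port to a cloud on-ramp. Endpoint and payload fields are
# illustrative placeholders, not Equinix's documented API.
import requests

API = "https://api.example-fabric.net/v1/connections"  # placeholder URL
TOKEN = "..."  # placeholder credential

order = {
    "name": "azure-expressroute-paris",
    "portId": "port-123",            # the single physical port already in place
    "bandwidthMbps": 1000,           # resized in software, no new cabling
    "destination": {"cloud": "azure", "metro": "PA"},
}

resp = requests.post(API, json=order,
                     headers={"Authorization": f"Bearer {TOKEN}"}, timeout=30)
resp.raise_for_status()
print(resp.json().get("status"))
```

The design point is that the physical cross-connect is done once; new connections to clouds or partners are then spun up and resized in software.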

The idea is to be hybrid and to be more dispersed. That dispersion takes place through the breadth of our reach at Equinix with more than 200 data centers in 45 metro areas all over the world — and so, interconnected all over.

Plus, there are more than 50 Microsoft Azure regions. We’re working closely with Microsoft so that we can get the cloud out to the customers fairly easily using the network service providers in our facilities. There are very few places on Earth where a customer can’t get from where they are to where we are, to a cloud – and with a really high-quality network link.

Gardner: Xavier, why is what we just heard a good fit for Cloud28+? How do you fit in to make hybrid clouds possible across many different regions?

Poisson

Poisson: HPE has invested a lot in intellectual property in building our own HPE and Microsoft Azure Stack solution. It’s designed to provide the experience of a private cloud while using Microsoft as the underlying technology.

Our customers want two things. The first is to be able to run clouds on-premises, but also to connect to wider public clouds. This is enabled by what we are doing with a partner like Equinix. We can jump from on-premises to off-premises for an end-user customer.

The second is that, when a customer decides to go to a new architecture around hybrid cloud, they may need to extend their reach — and that reach is difficult to achieve today.

So, how can we support partners to find the right place, and the right partners, at the right moment, in the right geographies, with the right service-level agreements (SLAs) to meet their business needs?

The fact that we have Equinix inside of Cloud28+ as a very solid partner is helping our customers and partners to find the right route. If I am an enterprise customer in Australia and I want to reach into Europe, or reach into Japan, I can, through Cloud28+, find the right service providers to operate the service for me. But I will also be hosted by a very compelling co-location company like Equinix, with the right SLAs. And this is the benefit for every single customer.

This has a lot of benefits for our MSPs. Why? Because our MSPs are evolving their technologies, evolving their go-to-market strategies, and they need to adapt. They need to jump from one country to another country, and they need to have a sustainable network to make it all happen. That’s what Equinix is providing.

Learn How Cloud28+ Accelerates Cloud Adoption Around the Globe

We not only help the end-user customers, but we also help our MSPs to build out their capabilities. Why? We know that with interconnectedness, as was just mentioned, that they can deliver direct cloud connectivity to all of their end users.

Together we can provide choice for partners and end-user customers in one place, which is Cloud28+. It’s really amazing.

Gardner: What are some of the compelling new use cases, David? What are you seeing that demonstrates where this works best? Who should be thinking about this now as a solution?

Data distribution solutions 

Anderson: The solution — especially combined with Microsoft Azure Stack — is suited to those regions that have had data sovereignty and regulatory compliance issues. In other words, they can’t actually put their data into the public cloud, but they want to be able to use the power, elasticity, and the compute potential of the public cloud for big data analytics, or whatever else they want to do with that data. And so they need to have that data adjacent to the cloud.

Same for an Azure Stack solution. Oftentimes it will be in situations where they want to do DevOps. The developers might want to develop in the cloud, but they are going to bring it down to a private Azure Stack installation because they want to manage the hardware themselves. Or they actually might want to run that cloud in a place where public Azure may not yet have an availability zone. That could be sub-Saharan Africa, or wherever it might be — even on a cruise ship in the middle of the ocean.

Another use case that we are driving hard right now with Microsoft, HPE, and Cloud28+ is on the idea of an enterprise cage, where there is a lot of legacy hardware out there. The need is for applications to run to some degree on a cloud, but the hardware can’t be virtualized. But these workloads could be moved to an Equinix data center and connected to the cloud. They can then use the cloud for the compute part, and all of a sudden they are still getting value out of that legacy hardware, in a cloud environment, in a distributed environment.

Other areas where this is of value include a [data migration] appliance that is shipped out to a customer. We’ve worked a lot with Microsoft on this. The customer will put up to 100 TB of data on the appliance. It then gets shipped to one of our data centers where it’s hooked up through high-speed connection to Azure and the data can be ingested into Azure.

Now, that’s a one-time thing, but it gives us and our service providers on Cloud28+ the opportunity to talk to customers about what they are going to do in the cloud and what sort of help they might need.
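Some back-of-the-envelope arithmetic shows why shipping an appliance can beat the wire for a one-time 100 TB ingest; the link speeds and utilization below are illustrative assumptions, not figures from the discussion.

```python
# Rough transfer-time estimate for a 100 TB one-time ingest.
# Link speeds and the 80% effective-utilization figure are assumptions.
TB = 10**12  # bytes (decimal terabyte)
data_bytes = 100 * TB

for gbps in (1, 10):
    effective_bps = gbps * 10**9 * 0.8 / 8   # bytes/second at 80% utilization
    days = data_bytes / effective_bps / 86_400
    print(f"{gbps:>2} Gbps link: ~{days:.1f} days")

# Roughly 11.6 days at 1 Gbps and 1.2 days at 10 Gbps -- before any
# retries or contention -- versus a courier shipment of the appliance.
```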

Scenarios like that provide an opportunity to learn more about what enterprises are actually trying to do in the cloud. It allows us to then match the service providers in our ecosystem — which is what we use Cloud28+ for — with enterprise customers who need help.

Gardner: Xavier, it seems like this solution democratizes the use of hybrid clouds for smaller organizations — smaller MSPs with a niche, a geographic focus, or a vertical industry specialty. How does this go down-market to allow more types of organizations to take advantage of the greatest power of hybrid cloud?

Hybrid cloud power packaged

Poisson: We have packaged the solutions together with Equinix by default. That means that MSPs can just cherry-pick to provide new cloud offerings very quickly.

Also, as I often say, the IT value chain has not changed that much. It means that if you are a small enterprise, let’s say in the United States, and you want to shape your new generation of IT, do you go directly to a big cloud provider? No, because you still believe in your systems integrator (SI), and in your value-added reseller (VAR).

Interestingly, when we package this with Equinix and Microsoft, having this enterprise cage, the VARs can take the bull by the horns. When the customer comes to them and says, “Okay, what should I do, where should I put my data, how can I do the public cloud but also a private cloud?” the VAR can guide them because they have an answer immediately — even for small- to medium-sized businesses (SMBs).

Our purpose at Cloud28+ is to explain all of this through thought leadership articles that we publish — explaining the trends in the market, explaining that the solutions are there. You know, not a lot of people know about Equinix. There are still people who don’t know that they can have global reach.

If you are a start-up, for example, you have a new business, and you need to find MSPs everywhere on the globe. How do you do that? If you go to Cloud28+ you can see that there are networks of service providers, or learn what we have done with Equinix. That can empower you in just a few clicks.

We give access to partners who have published more than 900 articles in less than six months on topics such as security, big data, interconnection, globalization, artificial intelligence (AI), and even the EU’s General Data Protection Regulation (GDPR). Readers learn and they find offerings, because the articles are connected directly to the related services, and they can get in touch.

We are easing the process — from the thought leadership, to the offerings with explanations. What we are seeing is that the VARs and the SIs are still playing an enormous role.

So, it’s not only Microsoft, with HPE, and with the data centers of Equinix — we also put the VARs into the middle of the conversation. Why? Because they are near the SMBs. We could make everything as simple as putting in your credit card and going, and that’s fair enough for some kinds of workloads.

But in most cases, enterprises still go to their SIs and their VARs because they are all part of the ecosystem. And then, when they have the discussion with their customers, they can have the solution very, very quickly.

Gardner: It seems to me that for VARs and SIs, the cloud was very disruptive. This gives them a new lease on life — a middle ground to take advantage of cloud while also preserving the value they have already been delivering.

Take the middle path 

Poisson: Absolutely. Integration services are key, application migrations are key, and security topics are very, very important. You also have new areas such as AI and blockchain technologies.

For example, in Asia-Pacific and in Europe, the Middle East and Africa (EMEA), we have more and more tier-two service providers that are not only delivering their best services but are now investing in practices around AI or blockchain — or combining them with security — to upgrade their value propositions in the market.

For VARs and for SIs, it is all benefit, because they know that solutions exist and they can accompany their customers through the transition. For them, this is also a new stream of revenue.

Gardner: As we get the word out that these distributed hybrid cloud solutions are possible and available, we should help people understand which applications are the right fit. What are the applications that work well in this solution?

Anderson: The interesting thing is that applications don’t have to be architected in a specific way, based on the way we do hybrid solutions. Obviously, the apps have to be modern.

I go back to my engineering days 25 years ago, when we were separating data and compute and things like that. If they want to write a front-end and everything in platform-as-a-service (PaaS) on Azure and then connect that down to legacy data, it will work. It just works.

The hybrid situation gives SIs, service providers, and enterprises more flexibility than if they try and move an application, whatever it is, completely into the cloud, because that actually takes a lot more work.

Some service providers believe that hybrid is a transitory stage — that enterprises go hybrid just to buy time until they go fully public cloud. I don’t believe Microsoft thinks that way, and we certainly don’t think that way. I think there is a permanent place for hybrid cloud.

In fact, one of the interesting things when I first got to Equinix was that we had our own sellers saying, “I don’t want to talk to the cloud guys. I don’t want them in our data centers because they are just going to take my customers and move them to the cloud.”

The truth of the matter is that demand for our data centers has increased right along with the increase in public cloud consumption. So it’s a complementary thing, not a substitution thing. They need our data centers. What they are trying to do now is to close their own enterprise data centers.

And they are getting into Equinix and discovering the connectivity possibilities. Especially among the Global 2000 enterprises, nobody wants cloud vendor lock-in. They are all multicloud. Our Equinix Cloud Exchange Fabric solution is a great way to get in at one point and be able to connect to multiple cloud providers from right there.

It gives them more flexibility in how they design their apps, and also more flexibility in where they run their apps.

Gardner: Do you have any examples of organizations that have already done this? What demonstrates the payoffs? When you do this well, what do you get for it?

Cloudify your networks

Anderson: We have worked with customers in these situations where they have come in initially for a connection to Microsoft, let’s say. Then we brought them together with a service provider and worked with them on network transformations, to the point where they have taken their old networks — a lot of Multiprotocol Label Switching (MPLS) and everything else that was really very costly and didn’t perform that well — and been able to rework them. We like to say they cloudify their networks, because a lot of enterprise networks aren’t really ready for the heavy load of getting out to the cloud.

And we ended up increasing their performance by up to 10, 15, 20 times — and at the same time cut their networking costs in half. Then they can turn around and reinvest that in applications. They can also then begin to spin up cloud apps, and just provision them, and not have to worry about managing the infrastructure.

They want the same thing in a hybrid world, which is where those service providers that we find on Cloud28+ and that we amplify, come in. They can build those managed services, whether it’s a managed Azure Stack offering or anything else. That enables the enterprise IT shops to essentially do the same thing with hybrid that they are doing with public cloud – they can buy it on a consumption model. They are not managing the hardware because they are offloading that to someone else.

They are buying all of their stuff in the same model — whether it’s considered on-premises, a third-party facility like ours, or a totally public cloud. It’s the same purchasing model, which is making their procurement departments happy, too.

Gardner: Xavier, we have talked about SIs, VARs, and MSPs. It seems to me that for what we used to call independent software vendors (ISVs) — the former packaged software providers — this hybrid cloud model also offers a new lease on life. Does this work for the application providers, too?

Extend your reach 

Poisson: Yes, absolutely. And we have many, many examples in the past 12 months of ISVs, software companies, coming to Cloud28+ because we give them the reach.

Lequa AB, a Swedish company, for example, has been doing identity management, which is a very hot topic in digital transformation. In a digital transformation, you have one role when you speak to me, but in your other associations you have other roles. Handling these roles digitally needs to be managed, and Lequa has done that.

And by partnering with Cloud28+, they have been able to extend their reach in ways they never would have otherwise. In the past six months alone, they have been in touch with more than 30 service providers across the world. They have already closed deals.

On one side of the equation for ISVs, there is a very big benefit — to be able to reach ready-to-be-used service providers, powered by Equinix in many cases. For the service providers, there is also an enormous benefit.

If I am only providing baseline managed information services, how can I differentiate from the hyperscale cloud providers? How can I differentiate from even my own competitors? What we have seen is that the MSPs are now caring more about the application makers, the former ISVs, in order for them to differentiate in the market.

So, yes, this is a big trend, and we welcome more and more ISVs into Cloud28+ every week.

Gardner: David, another concern that organizations have is as they are distributing globally, as there are more moving parts in a hybrid environment, things become more complex. Is there something that HPE is doing with new products like OneSphere that will help? How do we allow people to gain confidence that they can manage even something that’s a globally distributed hybrid set of applications?

Confident connections in global clouds 

Anderson: There are a number of ways we are partnering with HPE, Microsoft, and others to do that. But one of the keys is the Equinix Cloud Exchange Fabric, where now they only have to manage one wire or fiber connection in a switching fabric. That allows them to spin up connections to virtually all of the cloud providers, and span those connections across multiple locations. And so that makes it easier to manage.

The APIs that drive the Equinix Cloud Exchange Fabric can be consumed and viewed with tools such as HPE OneSphere to be able to manage everything across the solution. The MSPs are also having to take on more and be the ones that provide management.

As the huge, multinational enterprises disperse their hybrid clouds, they will tend to view those in silos. But they will need one place to go, one view to look at, to know what’s in each set of data centers.

At Equinix, our three pillars are the ideas of being able to reach everywhere, interconnect everything, and integrate everything. That means we need to be the place where this all comes together — on top of HPE, with the service providers — because that gives you one place that reaches those multiple clouds, and one set of solid, known, trusted advisors in HPE and the service providers certified through Cloud28+. So now we have built this trusted community to really serve the enterprises in a new world.

Gardner: Before we close out, let’s take a look into the crystal ball. Xavier, what should we expect next? Is this going to extend to the edge with the Internet of Things (IoT), more machine learning (ML)-as-a-service built into the data cloud? What comes next?

The future is at the Edge

Poisson: Today we have 810 partners in Cloud28+. We cover more than 560 data centers in more than 34 countries. We have published nearly 30,000 cloud services in only two years. You can see how fast it has been growing.

What do we expect in the future? You named it: Edge is a very hot topic for us and for Equinix. We plan to develop new offerings in this area, even new data center technology. It will be necessary to have new thinking around what the data center of tomorrow is, how it will consume energy, and what we can do with it together.

We are already engaged in conversations between Equinix, ourselves, and another company within the Cloud28+ community to discuss what the future data center could be.

A huge benefit of having this community is that by default we innovate. We have new ideas because it’s coming through all of the partners. Yes, edge computing is definitely a very hot spot.

For the platform itself, I believe that even though we do not monetize it — which is one of the defining principles of Cloud28+ — the revenues at the edge are for the partners, and this is also by design.

Nonetheless, we are thinking of new things such as smart contracting around IoT and other topics, too. You need to have a combination of offerings to make a project. You need to have confidentiality between players. At the same time, you need to deliver one solution. So next it may be solutions for better ways of contracting. And we believe that blockchain can add a lot of value there, too.

Cloud28+ is a community and a digital business platform. By the way, we are very happy to have been recognized as such by Gartner in several research notes since September 2017. We want to start to include these new functions around smart contracting and blockchain.

The other part of the equation is how we help our members to generate more business. Today we have a module that is integrated into the platform to amplify partner articles and their offerings through social media. We also have a lead-generation engine, which is working quite well.

We want to launch an electronic lead-generation capability through our thought leadership articles. We believe that if we can give feedback to the people filling in these forms — on how they compare with their peers and with what the industry analysts say — they will be very eager to engage with us.

And the last piece is that we need to examine more around using ML across all of these services and interactions between people. We need to dive deep into this to find what value we can bring out of all this traffic, because we have so much traffic now inside Cloud28+ that trends are becoming clear.

For instance, I can tell any partner that if they publish an article on what is happening in the public sector today, it will have a yield several times that of one published at an earlier date. We have all of this intelligence. So what we are packaging now is a way to give intelligence back to our members so they can capture trends very quickly and publish more of what is most interesting to people.

But in a nutshell, these are the different things that we see.

Gardner: And I know that evangelism and education are a big part of what you do at Cloud28+. What are some great places that people can go to learn more?

Poisson: Absolutely. You can read not only what the partners publish, but examine how they think, which gives you the direction on how they operate. So this is building trust.

For me, at the end of the day, for an end-user customer, they need to have that trust to know what they will get out of their investments.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise

How data analytics-rich business networks help close the digital transformation gap

The next BriefingsDirect thought leadership discussion explores how intelligence gleaned from business applications, data, and networks provides the best new hope for closing the digital transformation gap at many companies.

A recent global survey of procurement officers shows a major gap between where companies are and where they want to be when it comes to digital transformation. While 82 percent surveyed see digital transformation as having a major impact on processes — only five percent so far see significant automation across their processes.

How can business networks and the cloud-based applications underlying them better help companies reach a more strategic level of business intelligence and automation?

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. 

To find out, BriefingsDirect recently visited SAP in Palo Alto, Calif. to sit down with Darren Koch, Chief Product Officer at SAP Ariba. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What’s holding companies back when it comes to becoming more strategic in their processes? They don’t seem to be able to leverage intelligence and automation to allow people to rise to a higher level of productivity.

Koch

Koch: I think a lot of it is inertia. The ingrained systems and processes that exist at companies impact a lot of people. The ability for those companies to run their core operations relies on people and technology working together. The change management required by our customers as they deploy solutions — particularly in the move from on-premises to the cloud — is a major inhibitor.

But it’s not just the capabilities and the change in the new technology. It’s really re-looking at — and reimagining — the processes, the things that existed in the highly customized on-premises world, and the way those things change in a digital-centric cloud world. They are fundamentally different.

Gardner: It’s always hard to change behavior. It seems like you have to give people a huge incentive to move past that inertia. Maybe that’s what we are all thinking about when we bring new data analytics capabilities to bear. Is that what you are looking at — incentivization — or how do we get that gap closed?

Reimagining change in the cloud

Koch: You are seeing more thought leadership on the executive side. You are seeing companies more willing to look holistically at their processes and saying, “Is this something that truly differentiates my company and adds sustainable competitive advantage?” And the answer on some processes is, “No.”

And so, we see more moving away from the complex, on-premises deployments that were built in a world where a truckload of consultants would show up and configure your software to do exactly what you wanted. Instead, we’re moving to a data-centric best-practices type of world that gives scale, where everybody operates in the same general business fabric. You see the emergence of things like business networks.

Gardner: And why the procurement and supply chain management folks? Why are they in an advantageous position to leverage these holistic benefits, and then evangelize them?

Koch: There’s been a ton of talk and innovation on the selling side, on the customer relationship management (CRM) side, such as our announcement of C/4HANA at Sapphire 2018 and the success in the cloud generally in the CRM space. What most people overlook is that for every seller there’s a buyer. We represent the buy side, the supply chain, the purchasing departments. And now from that buy side we have the opportunity to follow the same thought processes as on the sell side.

The beauty at SAP Ariba is that we have the world’s biggest business network. We have over $2 trillion of buy-side spend, and our ability to take that spend and find real insights and real actionable change drives value at the intersection of buyers and sellers. This is where we’re headed.

Gardner: It seems like we are moving rapidly beyond buying and selling as just transactional, and toward deeper partnerships, visibility, and understanding of the processes on both sides of the equation. That can then bring about a whole greater than the sum of the parts.

Understanding partners 

Koch: Exactly. I spent 10 years working in the consumer travel space, and my team in particular was working on how consumers choose hotels. It’s a very complex purchasing decision.

There are location aspects, quality aspects, amenities, room size, obviously price — and there are a lot of non-price factors that go into the purchase decision, too. When you look at what a procurement audience is doing, what a company is doing, there are a lot of such non-price factors. It’s exactly the same problem.

The investments that we are making inside of SAP Ariba get at allowing you to see things like supplier risk. You are seeing things like the Ariba Network handling direct materials. You are seeing time, quality, and risk factors — and these other non-price dimensions — coming in, in the same way that consumers do when choosing a hotel. Nobody chooses the cheapest one, or very few people do. Usually it’s a proper balance of all of these factors and how they best meet the total needs. We are seeing the same thing on the business procurement side.
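As a simple illustration of weighing such non-price dimensions against price, here is a minimal weighted-scoring sketch. The factors, weights, and supplier scores are invented for illustration and are not SAP Ariba’s actual model.

```python
# Illustrative multi-factor supplier scoring; weights and data are invented.
# Every factor is normalized to 0..1 where higher is better (price is
# already inverted, so a cheap supplier scores high on "price").
WEIGHTS = {"price": 0.35, "quality": 0.25, "lead_time": 0.20, "risk": 0.20}

suppliers = {
    "Supplier A": {"price": 0.70, "quality": 0.90, "lead_time": 0.60, "risk": 0.85},
    "Supplier B": {"price": 0.95, "quality": 0.60, "lead_time": 0.80, "risk": 0.40},
}

def score(factors: dict) -> float:
    """Weighted sum of normalized factor scores."""
    return sum(WEIGHTS[name] * value for name, value in factors.items())

for name, factors in sorted(suppliers.items(), key=lambda kv: -score(kv[1])):
    print(f"{name}: {score(factors):.2f}")
# The cheapest supplier is not necessarily the best overall choice --
# exactly the hotel-selection dynamic Koch describes.
```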

Gardner: As consumers we have information at our fingertips — so we can be savvy and smart, probably more than at any other time in history. But that doesn’t always translate to larger business-to-business (B2B) decisions.

What sort of insights do you think businesses will want when it comes to that broader visibility?

Koch: It starts with the basics. It starts with, “How do I know my suppliers? How do I add scale? Is this supplier General Data Protection Regulation (GDPR)-compliant? Do they have slavery or forced labor in their supply chain? Where are they sourcing their materials?” All of these aspects around supplier risk are the basics; knowing your supplier well is the basic element.

Then when you go beyond that, it’s about things like, “Well how do I weigh geographic risk? How do I weigh supply chain risk?” And all the things that the practitioners of those disciplines have been screaming about for the rest of their companies to pay attention to.

That’s the new value they are providing. It’s that progression and looking at the huge opportunity to see the way companies collaborate and share data strategically to drive efficiency into processes. That can drive efficiency ultimately into the whole value chain that leads to a better customer experience at the end.

Gardner: Customer experience is so important across the board. It must be a big challenge for you on the product side to be able to contextually bring the right information and options to the end-user at the right time. Otherwise they are overwhelmed, or they don’t get the benefit of what the technology and the business networks can do.

What are you doing at SAP Ariba to help bring that right decision-making — almost anticipating where the user needs to go — into the actual applications and services?

Intelligent enterprise

Koch: That begins with our investments in re-platforming to SAP HANA. That feeds into the broader story about the intelligent enterprise. Purchasing is one facet, supply-chain management is a facet, sales is a facet, and production — all of these components are elements of a broader story of how you synthesize data into a means where you have a digital twin of the whole enterprise.

Then you can start doing things like leveraging the in-memory capabilities of HANA around scenario planning, and around, “What are the implications of making this decision?”

What happens when a hurricane hits Puerto Rico and your supply chain is dramatically disrupted? Does that extend to my suppliers’ suppliers?  Who are my people on the ground there, and how are they disrupted? How should my business respond in an intelligent way to these world events that happen all the time?
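One way to answer “does that extend to my suppliers’ suppliers?” is a simple traversal over a multi-tier supplier graph. The toy data below is invented and is not network data; it only illustrates the shape of the query a digital twin of the supply chain would answer.

```python
# Toy sketch: find upstream suppliers exposed to a disrupted region.
# The graph and region tags are invented for illustration.
from collections import deque

# Each key sources from the suppliers in its list (tier by tier).
supply_graph = {
    "ACME Corp": ["Supplier 1", "Supplier 2"],
    "Supplier 1": ["Sub-supplier A"],
    "Supplier 2": ["Sub-supplier B", "Sub-supplier C"],
}
region = {"Supplier 1": "US", "Supplier 2": "US",
          "Sub-supplier A": "DE", "Sub-supplier B": "PR", "Sub-supplier C": "US"}

def exposed(root: str, disrupted_region: str) -> set:
    """Breadth-first walk of the supplier tiers below `root`."""
    hits, seen = set(), set()
    queue = deque(supply_graph.get(root, []))
    while queue:
        node = queue.popleft()
        if node in seen:
            continue
        seen.add(node)
        if region.get(node) == disrupted_region:
            hits.add(node)
        queue.extend(supply_graph.get(node, []))
    return hits

print(exposed("ACME Corp", "PR"))  # -> {'Sub-supplier B'}
```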

Gardner: We have talked about the intelligent enterprise. Let’s hypothetically say that when one or two — or a dozen — enterprises become intelligent that they gain certain advantages, which compels the rest of their marketplace to follow suit.

When we get to the point where we have a critical mass of intelligent enterprises, how does that elevate to an intelligent economy? What can we do when everyone is behaving with this insight, of having tools like SAP Ariba at their disposal?

Koch: You hit on a really valuable and important point. Way back, I was an economics major and there was a core thing that I took away 20 years ago from my intro to macroeconomics class. The core of it was that everything is either value or waste. Every bit of effort, everything that’s produced around the world, all goods or services are either valuable or a waste. There is nothing in between.

The question then as we look at value chains, when we look at these webs of value, is how much of that is transaction cost? How much of that is information asymmetry? How much of that is basic barriers that get in the way of ultimately providing value to the end consumer? Where is all of that waste?

When you look at complex value chains, at all of the inventory sitting in warehouses, the things that go unsold, the mismatches between supply and demand across a value chain — whether you are talking about direct materials or about pens and paper sitting in a supply closet — it really doesn’t matter.

It’s all about how much of that goes to actually delivering on what your customers, your employees, and your stakeholders value — and how much of it is waste. As we link these data sets together — the real production planning, understanding end-user demand, and all the way back through the supply chain — we can develop new transparency that brings a ton of value. And when ultimately everyone in the value chain understands what consumers actually value, then they can innovate in the right ways.

So, I see this all changing dramatically as you link these intelligent companies together. As companies move into a sharing mindset, the sharing economy uses physical resources far more efficiently — in the same way that we use our data resources more efficiently.

Gardner: This also dovetails well with being purposeful as a business. If many organizations are encouraging higher productivity, which reduces inefficiencies and helps raise wages, it can lead to better standards of life. So, the stakes here are pretty high.

We’re not just talking about adding some dollars to the bottom and top lines. We’re also talking about a better economy that raises all boats.

Purposeful interconnections

Koch: Yes, absolutely. You see companies like Johnson & Johnson, which from their founding have held the importance of their community as a core principle. You see it in companies like Ford and their long heritage. Those ideals are really coming back — from the 1980s, the decade when “greed was good,” to a more holistic understanding of the interconnectedness of all of this.

And it’s good as humans. It’s also good from the business perspective because of the need to attract and retain the talent required to run a modern enterprise. And building the brands that our consumers are demanding, and holding companies accountable, they all go hand-in-hand.

And so, the purpose aspect really addresses the broader stakeholder aspects of creating a sustainable planet, a sustainable business, sustainable employment, and things like that.

Gardner: When we think about attaining this level of efficiency through insights and predictive analytics — taking advantage of business networks and applications and services — we are also on the cusp of getting even better tools.

We’re seeing a lot more information about machine learning (ML). We’re starting to tease out the benefits of artificial intelligence (AI). When these technologies are maturing and available, you need to be in a position to take advantage of them.

So, moving toward the intelligent enterprise and digital transformation are not just good or nice to have, they are essential because of what’s going to come next in just a few years.

Efficiency in the digital future 

Koch: Yes, you see this very tactically in the chief procurement officers (CPOs) that I’ve talked with as I’ve entered this role. I have yet to run across any business leader who says, “I have so many resources, I don’t know what to do.” That’s not usually what I hear. Usually, it’s the opposite. It’s, “I’m being asked to do more with less.”

When you look at the core of AI, and the core of ML, it’s how do you increase efficiency? And that’s whether it’s all the way on the full process automation side, or it’s along the spectrum of bringing the right intelligence and insights to streamline processes to make better decisions.

All of that is an effort to up-level the work that people do, so that raises wages, it raises productivity, all of those things. We have an example inside of our team. I was meeting with the head of our customer value organization, Chris Haydon, over dinner last night.  Chris was talking about how we were applying ML to enhance our capability to onboard new customers.

And he said the work that we’ve done has allowed him to redeploy 80 people in his team on to higher productivity use cases. All of those people became more valuable in the company because they were working on things that were at the next level of creating new solutions and better customer experiences, instead of turning the crank in the proverbial factory of deploying software.

Gardner: I happen to personally believe that a lot of the talk about robots taking over people’s jobs is hooey. And that, in fact, what’s more likely is this elevation of people to do what they can do best and uniquely. Then let the machines do what they do best and uniquely.

How is that translating both into SAP Ariba products and services, and also into the synergy between SAP and SAP Ariba?

Koch: We are at a really exciting time inside of our products and services. We’re just getting through a major re-platforming to S/4 HANA, and that’s really exciting because of HANA’s maturity and scale. It’s moving beyond basic infrastructure in the way that [SAP Co-Founder] Hasso Plattner had envisioned it.

We’re really getting to the point of not replicating data. We are using the ML algorithms and applying them, building them once and applying them at large. And so, the company’s investments in HANA and in Leonardo are helping to create a toolkit of capabilities that applications like SAP Ariba can leverage. Like with any good infrastructure investment, when you have the right foundation you see scale and innovation happen quickly.

You’ll see a lot more of how we leverage the data that we have both inside the company as well as across the network to drive intelligence into our process. You will just see that come through more as we move from the infrastructure foundation setting stage to building the capabilities on top of that.

Gardner: Getting back to that concept of closing the transformation gap for companies, what should they be thinking about when these services and technologies become available? How can they help close their own technology gap by becoming acquainted with these advances and taking the initiative to make the best use of these new tools?

Digital transformation leadership 

Koch: The companies that are forward-leading on digital transformation are the ones that made the cloud move early. The next big move for them is to tap into business networks. How can they start sharing across their value chains and drive higher efficiency? I think you’ll see from that the shift from tactical procurement to strategic procurement.

The relationships need to move from transactional to a true partnership: How do we create value together? That change involves rethinking the way you look at data and how you share data across value chains.

Gardner: Let’s also think about spend management conceptually. Congratulations, by the way, on your recent Gartner Magic Quadrant positioning for procure-to-pay processes. How does spend management also become more strategic?

Koch: The building blocks for spend management always come down to what is our tactical spend and where should we focus our efforts for strategic spend? Whether that is in the services area, travel, direct materials, or indirect, what customers are asking SAP for is, how do all of these pieces fit together?

What’s the difference between a request for proposal (RFP) for a hotel in New York City versus an RFP for chemicals in Southeast Asia? They’re both a series of business processes of selecting the right vendor that balances all of the critical dimensions: Price and everything else that makes for a good decision and that has longevity.

We see a lot of shared elements in the way you interact with your suppliers. We see a lot of shared elements in the way that you deploy applications inside of your company. We’re exploring how well the different facets of the applications can work together, how seamless the user experience is, and how well all of these tie together for all the stakeholders.

Ultimately, each element of the team, each element of the company, has a role to play. That includes the finance organization’s desire to ensure that value is being created in a way that the company can afford. It means that the shareholders, employees, management, and end-users are all on the same page.

This is the core of spend management – and the intelligent enterprise as a whole. It means being able to see everything, by bringing it all together, so the company can manage its full operations and how they create value.

Gardner: The vision is very compelling. I can certainly see where this is not going to be just a small change — but a step-change — in terms of how companies can benefit in productivity.

As you were alluding to earlier, architecture is destiny when it comes to making this possible. By re-architecting around S/4 HANA and taking advantage of business networks, you are well on the way to delivering this. Let’s talk about the platform changes that grease the skids toward the larger holistic benefits.

Shifting to the cloud 

Koch: It’s firmly our belief that the world is moving to mega-platforms. SAP has a long history of bringing the ecosystem along, whether the ecosystem is delivering process innovation or is building capabilities on top of other capabilities embedded deeply into the products.

What we’re now seeing is the shift from the on-premises world to a cloud world where it’s API-first, business events driven, and where you see a decoupling of the various components. Underneath the covers it doesn’t matter what technology stack things are built on. It doesn’t matter how quickly they evolve. It’s the assumption that we have this API contract between two different pieces of technology: An SAP Ariba piece of technology, an SAP S/4 Cloud piece of technology, or a partner ecosystem piece of technology.
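To illustrate what an API-first, event-driven contract between decoupled components might look like, here is a hedged sketch of a business event payload. The schema, field names, and event type are invented for illustration; this is not an actual SAP Ariba or S/4HANA interface.

```python
# Illustrative event contract between decoupled components.
# The schema is invented; it is not an SAP Ariba or S/4HANA interface.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class PurchaseOrderCreated:
    """Business event a buying application might publish; any subscriber
    (an ERP, an analytics service, a partner extension) consumes it
    without caring which technology stack produced it."""
    event_type: str
    po_number: str
    supplier_id: str
    total_amount: float
    currency: str
    occurred_at: str

event = PurchaseOrderCreated(
    event_type="purchase_order.created",
    po_number="PO-2018-000123",
    supplier_id="SUP-42",
    total_amount=12500.00,
    currency="EUR",
    occurred_at=datetime.now(timezone.utc).isoformat(),
)

print(json.dumps(asdict(event), indent=2))  # the payload crossing the API boundary
```

The contract — not the implementation behind it — is what the two sides agree on, which is why the underlying stacks can evolve independently.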

For example, a company like Solenis was recently up on stage with us at Ariba Live in Amsterdam. That’s one of the fastest-growing companies. They have raised a B round at $1 billion valuation. Having companies that are driving innovation like that in partnership with an SAP platform brings not just near-term value for us and our customers, it brings future-proofing. It brings extensibility when there is a specific requirement that comes in for a specific industry or geography. It provides a way a customer can differentiate. You can just plug-in.

[SAP business unit] Concur has been down this path for a long time. The president of SAP Ariba, Barry Padgett, actually started the initiative of opening up the Concur platform. So deep at our core — in our roots — we believe that networks, ecosystems, and openness will ensure that our customers get the most value out of their solutions.

Gardner: Because SAP is an early adopter of multicloud, SAP can be everywhere at the most efficient level given what the hyperscale cloud providers are providing with global reach and efficiency. This approach also allows you to service small- to medium-sized businesses (SMBs), for example, essentially anywhere in the world.

Tell me why this long-term vision of a hyperscale-, multicloud-supported future benefits SAP, SAP Ariba, and its customers.

A hyperscale, multicloud landscape

Koch: When you look across the landscape of the hyperscalers, and you look at the pace of innovation and the level of scale that they are able to deliver, our lead time is slashed. We can also scale up and down as required. The cloud benefits apply to speed — compared to having boxes installed in data centers — as well as to handling workload variability, whether it’s test variability or our ability to run ML training models.

The idea that we still suffer multi-month lead times to get our physical boxes installed in our data centers is something that we just can’t afford. Our customers demand more.

Thankfully there are multiple solutions around the world that solve these problems while at the same time giving us things like world-class security, geographic footprints, and localized expertise. When a server fails halfway around the world and the expert is somewhere else, the hyperscalers provide a solution to that problem.

They have somebody who walks through every data center and makes sure that the routers are upgraded, and the switches and load balancers are working the way they should. They determine whether data correctly rests inside of a Chinese firewall or inside of Europe [due to compliance requirements]. They are responsible for how those systems interact.

We still need to invest in the applications tier and in working with our customers to handle all of the needed changes in the landscape around data and security.

But the hyperscalers give us a base-level of infrastructure so we don’t need to think about things like, “Is our air conditioner capacity inside of the data center sufficient to run the latest technology for the computing power?” We don’t worry about that. We worry about delivering value on top of that base-level of infrastructure and so that takes our applications to the next level.

In the same way we were talking earlier about ML and AI freeing up our resources to work on higher-value things, [the multicloud approach] allows us to stop thinking about these base-level things that are still critical for the delivery of our service. It allows us to focus on the innovation aspects of what we need to do.

Gardner: It really is about driving value higher and higher and then making use of that in a way that’s a most impactful to the consumers — and ultimately the whole economy.

Koch: You got it.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: SAP Ariba.

Ryder Cup provides extreme use case for managing the digital edge for 250K mobile golf fans

The next BriefingsDirect extreme IT-in-sports use case examines how an edge-computing Gordian Knot is being sliced through innovation and pluck at a prestigious live golfing event.

We will now explore how the 2018 Ryder Cup match between European and US golf players places a unique combination of requirements on its operators and suppliers. As a result, the IT solutions needed to make the Ryder Cup better than ever for its 250,000 live spectators and sponsors will set a new benchmark for future mobile sports events.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to describe the challenges and solutions for making the latest networks and applications operate in a highly distributed environment is Michael Cole, Chief Technology Officer for the European Tour and Ryder Cup. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What is the Ryder Cup, set for September 2018 near Paris, for those who might not know?

Cole

Cole: The Ryder Cup is a biennial golf event, contested by teams representing Europe and the United States. It is without doubt the most prestigious team event in golf, and arguably the most compelling sporting contest in the world.

As such, it really is our blue-ribbon event, and it requires a huge temporary infrastructure to serve 250,000 spectators — over 50,000 super fans every day of the event — as well as media, journalists, players, and their entourages.

Gardner: Why do you refer this as blue-ribbon? What is it about the US versus Europe aspect that makes it so special?

Cole: It’s special for the players, really. These professionals play the majority of their schedule in the season as individuals. The Ryder Cup gives them the opportunity to play as a team — and that is special for the players. You can see that in the passion of representing either the United States or Europe.

Gardner: What makes the Ryder Cup such a difficult problem from this digital delivery and support perspective? Why are the requirements for a tournament-wide digital architecture so extreme?

Cole: Technology deployment in golf is very challenging. We have to bear in mind that every course is essentially a greenfield site. We very rarely return to the same course twice. Therefore, deploying technology in an environment that is 150 acres — the equivalent of 85 football pitches — is challenging. And we must do that as a temporary overlay for four days of operation, or three days for the Ryder Cup: leading in operationally, deploying our technology, and then bumping out very quickly to the next event.

We typically deploy up to five different infrastructures: one for television; another for tournament TV and the big digital screens in the fan zones on the course; one for the scoring network; one for the public Wi-Fi; and, of course, the back-of-house operational IT infrastructure as well. It’s a unique challenge in terms of scale and complexity.

Gardner: It also exemplifies the need for core data capabilities that are deeply integrated with two-way, high-volume networks and edge devices. How do you tie the edge and the core together effectively?

Data delivery leads the way

Cole: The technology has a critical role to play for us. We at the European Tour lead the transformation in global golf — very much putting data at the heart of our sport to create the right level of content and insight for our key stakeholders. This is critical.

For us this is about adopting the Hewlett Packard Enterprise (HPE) Intelligent Edge network and approach, which ensures the processing of data, location-based services, and the distribution of content that all takes place at the point of interaction with our key stakeholders, i.e., at the edge and on the golf course.

Learn More About HPE Media and Entertainment Solutions

Gardner: What do you mean by location services as pertains to the Ryder Cup? How challenging is that to manage?

Cole: One of the key benefits that the infrastructure will provide is an understanding of people and their behavior. So, we will be able to track the crowds around the course. We will be able to use that insight in terms of behaviors to create value — both for ourselves in terms of operational delivery, but also for our sponsors by delivering a better understanding of spectators and how they can convert those spectators into customers.
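As an illustration of the kind of location insight Cole describes, counting unique device associations per Wi-Fi zone gives a rough picture of where crowds are concentrating. The association log and zone names below are invented, not event data.

```python
# Toy sketch: estimate crowd concentration from Wi-Fi association counts.
# The association log and zone names are invented for illustration.
from collections import Counter

# (device_id, access_point_zone) association events over some time window
associations = [
    ("d1", "hole-16-grandstand"), ("d2", "hole-16-grandstand"),
    ("d3", "fan-zone"), ("d4", "hole-16-grandstand"),
    ("d5", "first-tee"), ("d3", "fan-zone"),  # repeat association ignored below
]

devices_per_zone = Counter()
seen = set()
for device, zone in associations:
    if (device, zone) not in seen:       # count each device once per zone
        seen.add((device, zone))
        devices_per_zone[zone] += 1

for zone, count in devices_per_zone.most_common():
    print(f"{zone:>20}: {count} devices")
```

Aggregated over time, counts like these are what would feed operational decisions (staffing, catering, signage) and sponsor analytics.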

Big BYOD challenges 

Gardner: This is also a great example of how to support a bring-your-own-device (BYOD) challenge. Spectators may prefer to use their cellular networks, but those aren’t always available in these particular locations. What is it about the types of devices that these fans are using that also provides a challenge?

Cole: One of the interesting things that we recently found is the correlation between devices and people. So whilst we are expecting more than 51,000 people per day at the Ryder Cup, the number of devices could easily be double or triple that.

Typically, people these days will have two to three devices. So when we consider the Ryder Cup week [in September] and the fact that we will have more than 250,000 people attending – it’s even more devices. This is arguably the biggest BYOD environment on the planet this year, and that’s a challenge.

Gardner: What are you putting in place so that the end user experience is what they expect?

Cole: I use the term frictionless. I want the experience to be frictionless. The way they onboard, the way they access the Wi-Fi — I want it to be seamless and easy. It’s critical for us to maximize the number of spectators using the Wi-Fi infrastructure. It equally becomes a source of data and is useful for marketing purposes. So the more people we can get onto the Wi-Fi, convert into registering, and then receiving promotional activity — for both us and our partners — the better. That’s a key measure of success.

Gardner: What you accomplish at the Ryder Cup will set the standard for going further for the broader European Tour. Tell us about the European Tour and how this sets the stage for extending your success across a greater distribution of golfing events.

Cole: This is without doubt the biggest investment that the European Tour has made in technology, and particularly for the Ryder Cup. So it is critical for us that the investment becomes our legacy as well. I am very much looking forward to having an adoption of technology that will serve our purposes, not only for the Ryder Cup, not only for this year, but in fact for the next four years, until the next Ryder Cup cycle.

For me it’s about an investment in a quadrennial period, and serving those 47 tournaments each year, and making sure that we can provide a consistency and quality beyond the Ryder Cup for each of our tournaments across the European Tour schedule.

Gardner: And how many are there?

Cole: We will run 47 tournaments in 30 countries, across five continents. Our down season is just three days. So we are operationally on the go every day, every week of the year.

Gardner: Many of our listeners and readers tend to be technologists, so let’s dig into the geek stuff. Tell us about the solution. How do you solve these scale problems?

Golf in a private cloud 

Cole: One of the critical aspects is to ensure that data is very much at the heart of everything we do. We need to make sure that we have the topology right, and that topology clearly is underpinned by the technological platform. We will be adopting a classic core distribution and access approach.

For the Ryder Cup, we will have more than 130 switches. In order to provide network ubiquity and overcome one of our greatest challenges of near 100 percent Wi-Fi coverage across the course, we will need 700 access points. So this has scale and scope, but it doesn’t stop there.

We will essentially be creating our own private cloud. We will be utilizing the VMware virtual platform. We will have a number of on-premises servers, and that will be configured across two network operations centers, with full resiliency and duplication between the two.

Having 100 percent availability is critical for my industry and the delivery of golf across the operational period of three days for the Ryder Cup or four days of a traditional golf tournament. We cannot afford any downtime — even five minutes is five minutes too much.

Gardner: Just to dwell on the edge technology, what is it about the Aruba technology from HPE that is satisfying your needs, given this extreme situation of hundreds of acres and hilly terrain and lots of obstacles?

Cole: Golf is unique because it’s a greenfield site, with a unique set of challenges. No two golf courses are the same in the world. The technology platform gives us a modular approach. It gives us the agility to deploy what is necessary where and when we need.

And we can do this with the HPE Aruba platform in a way that gives us true integration, true service management, and a stack of applications that can better enable us to manage that entire environment. That includes everything from basic management of the infrastructure to security and on-boarding for the largest BYOD requirement on the planet this year. And it’s for a range of services that we will integrate into our spectator app to deliver better value and smarter insights for our commercial family.

Gardner: Tell us about Michael Cole. How did your background prepare you for such a daunting undertaking?

Cole: My background has always been in technology. I spent some 20 years with British Telecom (BT). More recently I moved into the area of sports and technology, following the London 2012 Olympics. I then worked for technology companies for the Rio 2016 Olympic Games. I have supported technology companies for the PyeongChang [South Korea] 2018 Winter Games, and also for the upcoming Tokyo 2020 Games, as well as the Pan American Games.

So I have always been passionate about technology, but increasingly passionate about the use of technology in sports. What I bring to the European Tour is the broader insight around multinational global sports and events and bringing that insight into golf.

Gardner: Where is the Ryder Cup this year?

Cole: It’s being held just outside Paris at Versailles, at Le Golf National. And there are a couple of things I want to say on this. It’s the first time since 1997, at Valderrama in Spain, that the Ryder Cup has been held in Europe outside of the United Kingdom.

The other interesting aspect, thinking about my background around the Olympics, is that Le Golf National is also the venue for the 2024 Paris Olympic Games; in fact, it is where the golf event will be held. So, one of my key objectives is to create a compelling and sustainable legacy for those games in 2024.

Gardner: Let’s fast-forward to the third week of September 2018. What will a typical day in the life of Michael Cole be like as you are preparing and then actually executing on this?

Test-driven tech performance 

Cole: Well, there is no typical day. Every day is very different, and we still have a heavy schedule on our European Tour, but what is critical is the implementation phase and the run in to the Ryder Cup.

My team was on site to start the planning and early deployment some six months ago, in February. The activity now increases significantly. In the month of June, we took delivery of the equipment on site and initiated the Technology Operations Center, and in fact, the Wi-Fi is now live.

We also will adopt one of the principles from the Olympics in terms of test events, so we will utilize the French Open as a test event for the Ryder Cup. And this is an important aspect to the methodology.

I am very pleased with the way we are working with our partner, HPE, and its range of technology partners.

But equally, I am very pleased with the way that we are working with our partner, HPE, and its range of technology partners. In fact, we have adopted an eight-phase approach through staging, design, and configuration, both off site and on site. We do tech rehearsals.

So, the whole thing is very structured and methodical in terms of the approach as we get closer to the Ryder Cup in September.

Gardner: We have looked at this through the lens of technology uniqueness and challenge. Let’s look at this through the lens of business. How will you know you have succeeded through the eyes of your sponsors and your organization? It seems to me that you are going to be charting new ground when it comes to business models around location, sporting, spectators. What are some of the new opportunities you hope to uncover from a business model perspective?

Connect, capture, create

Cole: The platform has three key aspects to it, in my mind. The first one is the ability to create the concept of a connected golf course, a truly connected course, with near 100 percent connectivity at all times.

The second element is the ability to capture data, and that data will drive insights and help us to understand behavioral patterns of spectators on the course.

The third aspect, which is really the answer to your question, is how we utilize that intelligence and that insight to create real value for our sponsors. The days of sponsors thinking activation was branding and the hospitality program are long gone. They are now far more sophisticated in their approach and their expectations are taken to a new level. And as a rights holder we have an obligation to help them be successful in that activation and achieve their return on investment (ROI).

Moving from a spectator to a lead, from a lead to a customer, and from a customer to an advocate is critical for them. I believe that our choice of technology for the Ryder Cup and for the European Tour will help in that journey. So it’s critical in terms of the value that we can now deliver to those sponsors and not just meet their expectations — but exceed them.

Gardner: Being a New Englander, I remember well in 1999 when the Ryder Cup was in Brookline, Massachusetts at The Country Club. I was impressed not only by the teams from each continent competing, but it also seemed like the corporations were competing for prestige, trying to outdo one another from either side of the pond in how they could demonstrate their value and be part of the pageantry.

Are the corporations also competing, and does that give them a great platform to take advantage of your technology?

Collaborate and compete

Cole: Well, healthy competition is good, and if they all want to excel and compete with each other that can only be good news for us in terms of the experience that we create. But it has to be exceptional for the fans as well.

So collaboration and competition, I think, are critical. I believe that any suite of sponsors needs to operate both as a family, but also in terms of that healthy competition.

Gardner: When you do your postmortem on the platform and the technology, what will be the metrics that you will examine to determine how well you succeeded in reaching and exceeding their expectations? What are those key metrics that you are going to look for when it’s over?

The technology platform now gives us the capability to go far. Critical to the success will be the satisfaction of the spectators, players, and our commercial family.

Cole: As you would expect, we have a series of financial measurements around merchandizing, ticket revenues, sponsorship revenue, et cetera. But the technology platform now gives us the capability to go far beyond that. Critical to success will be the satisfaction; the satisfaction of spectators, the satisfaction of players, and the satisfaction of our commercial family.

Statistical scorecard

Gardner: Let’s look to the future. Four years from now, as we know the march of technology continues — and it’s a rapid pace — more is being done with machine learning (ML), with utilizing data to its extreme. What might be different in four years at the next Ryder Cup technologically that will even further the goals in terms of the user experience for the players, for the spectators, and for the sponsors?

Cole: Every Ryder Cup brings new opportunities, and technology is moving at a rapid pace. It’s very difficult for me to sit here and have a crystal ball in terms of the future and what it may bring, but what I do know is that data is becoming increasingly more fundamental to us.

Historically, we have always captured scoring for an event, and that equates to about 20,000 data points for a given tournament. We have recently extended it. We now capture seven times the amount of data – including weather conditions, golf club types, the lie of the ball, and yardage to the hole. That all equates to 140,000 data points per tournament.

Over a schedule, that’s 5.5 million data points. When we look at the statistical derivatives, we are looking at more than 2 billion statistics from a given tournament. And this is changing all of the time. We can now utilize Internet of Things (IoT) technologies to put sensors in anything that moves. If it moves, it can be tracked. If everything is connected, then anything is possible.
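As a rough back-of-the-envelope check on those figures (illustrative only; the per-tournament counts are the ones Cole cites, the rest is simple multiplication):

```python
# Back-of-the-envelope check on the data-point figures cited above (illustrative only).
scoring_points = 20_000            # historical scoring-only capture per tournament
expansion_factor = 7               # weather, club type, lie of the ball, yardage, etc.
per_tournament = scoring_points * expansion_factor
print(per_tournament)              # 140,000 data points per tournament

tournaments = 47
print(f"{per_tournament * tournaments:,}")  # ~6.6 million if every event captured the full set
# Cole quotes roughly 5.5 million over a schedule, which suggests not every
# tournament yet captures the expanded set; the order of magnitude is consistent.
```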

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

SAP Ariba’s chief data scientist on how ML and dynamic processes build an intelligent enterprise

The next BriefingsDirect digital business innovation interview explores how the powerful combination of deep analytics and the procurement function makes businesses smarter and more efficient.

When the latest data science techniques are applied to more data sets that impact supply chains and optimize procurement, a new strategic breed of corporate efficiency and best practices emerge.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn how data-driven methods and powerful new tools are transforming procurement into an impactful intelligence asset, BriefingsDirect recently sat down with David Herman, Chief Data Scientist for Strategic Procurement at SAP Ariba. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Why is procurement such a good place to apply the insights that we get from data science and machine learning (ML) capabilities?

David Herman

Herman

Herman: Procurement is the central hub for so many corporate activities. We have documents that range from vendor proposals to purchase orders and invoices to contracts, and requests for proposal (RFPs). Lots and lots of data happens here.

So the procurement process is rich in data, but the information historically has been difficult to use. It’s been locked away inside of servers where it really couldn’t be beneficial. Now we can take that information in its unstructured format, marry it with other data – from other systems or from big data sources like the news — and turn it into really interesting insights and predictions.

Gardner: And the payoffs are significant when you’re able to use analysis to cut waste or improve decisions within procurement, spend management, and supply chains.

Procurement analysis pays 

Herman: The very nature of spend analysis is changing. We implemented a neural network last year. Its purpose was to expedite the time it takes to do spend analysis. We dropped that time by 99 percent so that things that used to take days and weeks can now be done in mere hours and minutes.

Because of the technology that is available today, we can approach spend analysis differently and do it more frequently. You don’t really have to wait for a quarterly report. Now, you can look at spend performance as often as you want and be really responsive to the board, who these days are looking at digital dashboard applications with real-time information.
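To make the idea concrete, here is a minimal, illustrative sketch of automated spend classification (not SAP Ariba's actual neural network, just a toy text classifier) showing how free-text line items can be mapped to spend categories, so spend analysis can run on demand rather than waiting on manual coding:

```python
# Illustrative sketch of automated spend classification (not SAP Ariba's model):
# map free-text invoice line items to spend categories so analysis can run on demand.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy historical line items that were already coded by category.
line_items = [
    "laptop docking station", "server memory upgrade",
    "outside counsel retainer", "contract legal review services",
    "office chairs and desks", "standing desk order",
]
categories = ["IT hardware", "IT hardware",
              "Legal services", "Legal services",
              "Office furniture", "Office furniture"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(line_items, categories)

# New, uncoded spend flows straight into dashboard-ready categories.
print(model.predict(["additional server memory", "legal review of supplier contract"]))
```

In practice the training data would be years of coded spend and a far richer model, but the workflow is the same: classify as the transactions arrive instead of batching the analysis once a quarter.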

Gardner: How is this now about more than merely buying and selling? It seems to me that when you combine these analytic benefits, it becomes about more than a transaction. The impact can go much deeper and wider.

Herman: It’s strategic — and that’s a new high plateau. Instead of answering historic questions about cost savings, which are still very important, we’re able to look forward and ask “what-if” kinds of questions. What is the best scenario for optimizing my inventory, for example?

That’s not a conversation that procurement would normally be involved in. But in these environments and with this kind of data, procurement can help to forecast demand. They can forecast what would happen to price sensitivity. There are a lot of things that can happen with this data that have not been done so far.

Gardner: It’s a two-way street. Not only does information percolate up so that procurement can be a resource, procurement is also able to execute, to act based on the data.

Herman: Right, and that’s scary, too. Let’s face it. We’re talking about people’s livelihoods. Between now and 2025, things are going to change fundamentally. In the next two to three years alone, we are going to see positions [disappear], and then we’re going to have a whole new grouping of people who are more focused on analysis.

The reality is that any kind of innovation — any kind of productivity gain — follows the same curve. I am not actually making this prediction because it’s the result of ML or artificial intelligence (AI). I am telling you every great increase in productivity has followed the same curve. Initially it impacts some jobs, and then there are new jobs.

And that’s what we’re looking at here, except that now it’s happening so much faster. If you think about it, a five-year period to completely reshape and transform procurement is a very short period of time.

Gardner: Speaking of a period of time, your title, Chief Data Scientist for Strategic Procurement, may not have even made much sense four years ago.

Herman: That’s true. In fact, while I have been doing what I’m doing now for close to 30 years, it has had different names. Sometimes, it’s been in the area of content specialist or content lead. Other times, it’s been focused on how we are managing content in developing new products.

And so, really, this title is new. Yet it’s the most exciting position that I’ve ever had because things are moving so much faster and there is such great opportunity.

Gardner: I’m sure that the data scientists have studied and learned a lot about procurement. But what should the procurement people know about data science?

Curiosity leads the way

Herman: When I interview people to be data scientists, one of the primary characteristics I look for is curiosity. It’s not a technical thing. It’s somebody who just wants to understand why something has happened and then leverage it.

Procurement professionals in the future are going to have much more available to them because of the new analytics. And much of the analytics will not require that you know math. It will be something that you can simply look at.

For example, SAP Ariba’s solutions provide you with ML outcomes. All you do is navigate through them. That’s a great thing. If you’re trying to identify a trend, if you’re trying to look at whether you should substitute one product for another — those analytic capabilities are there.

SAP Ariba’s solutions provide you with ML outcomes. All you do is navigate through them. That’s a great thing.

As for a use case, I was recently talking to the buyer responsible for staffing at one of SAP’s data centers. He is also responsible for equipping it. When they buy the large servers that run S/4HANA, they have different generations of hardware that they leverage. They know the server types and they know what the chip lifecycles look like.

But they’ve never been able to actually examine their own data to understand when and why they fail. And with the kinds of things we’re talking about, now they can actually look to see what’s going on with different chipsets and their lifecycles — and make much more effective IT deployment decisions.

Gardner: That’s a fascinating example. If you extrapolate from that to other types of buying, you are now able to look at more of your suppliers’ critical variables. You can make deductions better than they can because they don’t have access to all of the data.

Tell us how procurement people should now think differently when it comes to those “what-if” scenarios. Now that the tools are available, what are some of the characteristics of how the thinking of a procurement person should shift to take advantage of them?

Get smart

Herman: Anyone who’s negotiated a contract walks away, glad to be done. But you always think in the back of your head, “What did I leave on the table? Perhaps soon the prices will go up, perhaps the prices will go down. What can I do about that?”

We introduced a product feature just recently in our contracts solution that allows anyone to not only fix the price for a line item, but also make it dynamic and have it tied to an external benchmark.

We can examine the underlying commodities associated with what you are buying. If the commodities change by a certain amount – and you specify what that amount is — you can then renegotiate with your vendor. Setting up dynamic pricing means that you’re done. You have a contract that doesn’t leave those “what-ifs” on the table anymore.

That’s a fundamental shift. That’s how contracts get smart — a smart contract with dynamic pricing clauses.
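A minimal sketch of the mechanic Herman describes, with hypothetical field names and a made-up 5 percent trigger, purely to illustrate how a line-item price can be tied to an external commodity benchmark:

```python
# Hypothetical sketch of a dynamic pricing clause tied to a commodity benchmark.
# Field names and the trigger threshold are illustrative, not SAP Ariba's schema.
from dataclasses import dataclass

@dataclass
class DynamicPriceClause:
    base_price: float          # agreed line-item price at signing
    base_index: float          # commodity benchmark value at signing
    trigger_pct: float = 0.05  # act only if the benchmark moves +/- 5%

    def evaluate(self, current_index: float) -> float:
        """Return the adjusted price if the benchmark moved past the trigger."""
        change = (current_index - self.base_index) / self.base_index
        if abs(change) >= self.trigger_pct:
            return round(self.base_price * (1 + change), 2)
        return self.base_price

clause = DynamicPriceClause(base_price=120.00, base_index=1000.0)
print(clause.evaluate(1080.0))  # benchmark up 8% -> price adjusts to 129.60
print(clause.evaluate(1020.0))  # within the 5% band -> price holds at 120.00
```

Whether the clause auto-adjusts the price or simply flags the contract for renegotiation is a policy choice; the point is that the "what-if" is encoded in the contract rather than left on the table.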

Gardner: These dynamic concepts may have been very much at home in the City of London or on Wall Street when it comes to the buying and selling of financial instruments. But now we’re able to apply this much more broadly, more democratically. It’s very powerful — but at a cost that’s much more acceptable.

Is that a good analogy? Should we look to what Wall Street did five to 10 years ago for what is now happening in procurement?

Herman: Sure. Look, for example, at arbitrage. In supplier risk, we take that concept and apply it. When trying to understand supplier risk, you begin with inherent risk. From that inherent risk, we try to reduce the overall risk by putting various practices in place.

Sometimes it might be an actual insurance policy. It could also be a financial instrument. Sometimes it’s where we keep the goods. Maybe they are on consignment or in a warehouse.

There are a whole host of new interesting ways that we can learn from the positives and negatives of financial services — and apply them to procurement. Arbitrage is the first and most obvious one. I have talked to 100 customers who are implementing arbitrage in various forms, and they are all a little bit different. Each individual company has their own goal.

For example, take someone in procurement who deals with currency fluctuations. That kind of role is going to expand. It’s not going to be just currency — it is also going to be all assets. It is ways to shift and extend risk out over a period of time. Or it could even be reeling in exposure after you have signed a contract. That’s also possible.

Gardner: It seems silly to think of procurement as a cost center anymore. It seems so obvious now — when you think about these implications — that the amount of impact to the top line and bottom line that procurement and supply chain management can accomplish is substantial. Are there still people out there who see procurement as a cost center, and why would they?

From cost to opportunity 

Herman: First of all, it’s very comfortable. We can demonstrate value by saving money, and it goes right to the bottom line. This is where it matters the most. The cost is always going to be a factor here.

As one chief procurement officer (CPO) recently told me, this has been a kind of a shell game because he can’t actually prove how much his organization has really saved. We can only put together a theoretical model that shows how much you saved.

As we move forward, we are going to find that cost remains part of the equation — I think it will always be part of the equation – yet the opportunity side of the equation with the ability to work more effectively with sales and marketing is going to happen. It’s actually happening now. So you will see more and more of it over the next three to five years.

We can demonstrate value by saving money, and it goes right to the bottom line. This is where it matters the most. The cost is always going to be a factor here.

Gardner: How are analytics being embedded into your products in such a way that it is in the context of such a value-enhancing process? How are you creating a user experience around analytics that allows for new ways to approach procurement?

Herman: Again, supplier risk is a very good example. When a customer adopts the SAP Ariba Supplier Risk solution, they most often come with a risk policy in place. In other words, they already know how to measure risk.

The challenges with measuring risk are commonly around access to the data. Integration is really hard. When we went about building this product we focused first on integration. Then we came up with a model. We take the historical data and come up with a reference model. We also really worked hard to make sure that any customer can change any aspect of that model according to their policy or according to whatever scenario they might be looking at.

If, for example, you have just acquired a company, you don’t know what the risks look like. You need to develop a good look at the information, and then migrate over time. With supplier risk management, both the predictive and descriptive models are completely under the control of our customers. They can decide what data flows in and becomes a feature of that model, how much it is weighted, what the impacts are, and how to interpret the impact when it’s finished.

We also have to recognize, when you’re talking about data outside of the organization that is now flowing in via big data, that this is an unknown. It’s not uncommon for somebody to look at the risk platform and say, “Turn off that external stuff so I can get my feet under the table to understand it — and then turn on this data that’s flowing through and let me figure out how to combine them.”

At SAP Ariba, that’s what we are doing. We are giving our customers the tools to build workflow, to build models, to measure them, and now, with the advent of the SAP Analytics Cloud, to integrate that into S/4HANA.
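As a simplified, hypothetical illustration of the customer-controlled model Herman describes (the feature names, weights, and scores below are invented), a risk score can be thought of as a weighted blend of data sources that a customer can re-weight, or switch on and off, to match its own policy:

```python
# Simplified, illustrative supplier risk score with customer-controlled weights.
# Feature names, weights, and scales are invented; the real models are configurable
# per customer policy, as described above.
def risk_score(features: dict, weights: dict, enabled: set) -> float:
    """Weighted average of the enabled risk features, each normalized to 0-100."""
    active = [f for f in weights if f in enabled and f in features]
    total_weight = sum(weights[f] for f in active)
    return sum(features[f] * weights[f] for f in active) / total_weight

supplier = {"financial_health": 35, "geo_political": 60, "news_sentiment": 80}
policy_weights = {"financial_health": 0.5, "geo_political": 0.3, "news_sentiment": 0.2}

# Start with internal data only, then switch the external news feed on later.
print(risk_score(supplier, policy_weights, {"financial_health", "geo_political"}))
print(risk_score(supplier, policy_weights, {"financial_health", "geo_political", "news_sentiment"}))
```

Turning the external feed on changes the score, which mirrors the "get my feet under the table first, then add the big data" approach Herman quotes.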

Gardner: When we think about this as a high-productivity benefit within an individual company, it seems to me that as more individual companies begin doing this that there is a higher level of value. As more organizations in a supply chain or ecosystem share information they gain mutual productivity.

Do you have examples yet of where that’s happening, of where the data analytics sharing is creating a step-change of broader productivity?

Shared data, shared productivity 

Herman: Sure, two examples. The first is that we provide a benchmarking program. The benchmarking program is completely free.  As long as you are willing to share data, we share the benchmarks.

The data is aggregated, it’s anonymous, and we make sure that the information cannot be re-identified. We take the proper precautions. Then, as a trusted party and a trusted host we provide information so that any company can benchmark various aspects of their specific performance.

You can, for example, get a very good idea of how long it takes to process a purchase order, the volumes of purchase orders, and how much spend is not managed because you don’t have a purchase order in place. Those kinds of insights are great.
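A minimal, invented-numbers sketch of the kind of aggregation such a benchmarking program implies; the point is that only aggregate, anonymized statistics leave the pool:

```python
# Illustrative benchmark aggregation: only anonymous, aggregate statistics are shared.
# The numbers and field names are invented for the example.
from statistics import median

# Per-company purchase-order cycle times (days), pooled without identifying contributors.
pooled_cycle_times = [2.1, 3.4, 1.8, 5.0, 2.7, 4.2, 3.1]

benchmark = {
    "po_cycle_time_median_days": median(pooled_cycle_times),
    "po_cycle_time_p90_days": sorted(pooled_cycle_times)[int(0.9 * len(pooled_cycle_times))],
    "sample_size": len(pooled_cycle_times),
}
print(benchmark)  # each participant compares its own figure against the pool
```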

When we look at analytics across industries we find that most supply chains have become brittle. As all of us become leaner organizations, ultimately we find that industries end up relying on one or two critical suppliers.

For example, black pigment for the automotive industry was provisioned for all of the major manufacturers by just one supplier. When that supplier had a plant fire and had to shut down their plant for three months it was a crisis because there was no inventory in the supply chain and because there was only one supplier. We actually saw that in our supplier risk product before it happened.

The industry had to come together and work with one another to solve that problem, to share their knowledge, just like they did during the 2008-2009 financial crisis.

In the financial crisis, we found that it was necessary to effectively help other companies’ suppliers. Traditionally that would be called collusion, but it was done with complete transparency with the government.

When you look at such ways that information can be shared — and how industries can benefit collectively — that’s the kind of thing we see as emerging in areas like sustainability. With sustainability we are looking for ways to reduce the use of forced labor, for example.

In the fishing industry, shrimping companies have just gone through their industry association to introduce a new model that collectively works to reduce the tremendous use of forced labor in that industry today. There are other examples. This is definitely happening.

Gardner: What comes next in terms of capabilities that build on data science brought to the procurement process?

Contract evaluations 

Herman: One of the most exciting things we’re doing is around contracts. Customers this quarter are now able to evaluate different outcomes across all of their contracts. A prominent use case is that perhaps you have a cash flow shortage at the end of the year and it’s necessary to curtail spend. Maybe that’s by terminating contracts, maybe it’s by cutting back on marketing.

We picked an area like marketing so that we can drill down to evaluate rights and obligations and assess the potential impact to the company canceling those contracts. There is no way to do this today at scale other than manually.

If the chief financial officer (CFO) were to approach someone in procurement and ask this question about cash flow, they would bring in your paralegals and lawyers to begin reading the contracts. That’s the only way today.

Customers are now able to evaluate different outcomes across all of their contracts. We are teaching machines to interpret the data, to evaluate cause and effect and then classify the impact so decision makers can act quickly.

What we are doing right now is teaching machines to interpret that data, to evaluate the cause and effect — and then classify the impact so that the decision makers can take action quickly.

Gardner: You are able to move beyond blunt instruments into a more surgical understanding — and also execution?

Herman: Right, and it redefines context. We are now talking about context in ways that we can’t do today. You will be able to evaluate different scenarios, such as terminating relationships, pushing out delivery, or maybe renegotiating a specific clause in a contract.

These are just the very beginnings of great use cases where procurement becomes much more strategic and able to respond to the scenarios that help shape the health of the organization.

Gardner: We spoke before about how this used to be in the purview of Wall Street. They had essentially unlimited resources to devote to ML and data science. But now we are making this level of analysis as-a-service within an operating expense subscription model.

It seems to me that we are democratizing analysis so that small- to medium-size businesses (SMBs) can do what they never used to have the resources to do. Are we now bringing some very powerful tools to people who just wouldn’t have been able to get them before?

Power tools to the people

Herman: Yes. The cloud providers create all kinds of opportunities, especially for SMBs, because they are able to buy on demand. That’s what it is. I am able to buy what I need on demand, to negotiate the price based on whether it’s on peak or off peak and get to the answers that I need much more quickly.

SAP Ariba made that transition to a cloud model in 2008, and this is just the next generation. We know a lot about how to do it.

Gardner: For those SMBs that now have access to such cloud-based analytics services, what sort of skills and organizational adjustments should they make in order to take advantage of it?

Herman: It’s interesting. When I talk to schools, to undergraduates and graduate students, I find that many of those folks are coming out of school with the right skill sets. They have already learned Python, for example, and they have already built models. There is no mystery, there is no voodoo about this. They have built the models in the classroom.

Just like any other business decision, we want to hire the best people. So, you will want to maybe slip in a couple of questions about data science during your interviews, because it’s the kind of thing that a product manager, an analyst, and an IT leader need to know in the near future.

With the transition of the baby boomers into retirement, Millennials are coming up as this new group which is extremely talented. They have those skill sets and they are driven by opportunity. As you continue to challenge them with opportunities, my experience is that they continue to shine.

Gardner: David, we have talked about this largely through the lens of the buyers. What about the sellers? Is there an opportunity for people to use data in business networks to better position themselves, get new business, and satisfy their markets?

Discover new business together

Herman: We need a good platform to discover these kinds of opportunities. Having been a small business owner myself, I find that the ability for me to identify opportunities that trigger business is really essential. You really want to be able to share information with your customers and understand how you can generalize those.

I recently spoke to a small business owner who uses Google Sheets. At the end of every call, everybody on this team writes down what they had learned about the industry so they could share it among themselves. They would write down the new opportunities that they heard in a separate section of the sheet, in a separate tab. What were the opportunities they saw coming up next in their industry? That’s where they would focus their time in building a funnel, in building a pipeline around it.

When looking at it from that perspective, it’s really useful. Use the tools we have to get into these new areas of access — and you win.

Gardner: What should people expect in the not too distant future when it comes to the technologies that support data science? Are there any examples of organizations at the vanguard of their use? Can they show us what others should expect?

We now have to look at it differently. We need to look at how to use ML to validate your risks and assumptions and then concentrate investments. ML is going to help you find your answers faster.

Herman: Here’s the way I look at it: If we are going to think about how much money you could invest and bet on the future, maybe we have 7 percent of operating income to play with, and that’s about it. That has been common in the past: to spread that spending across four, five, or six different bets.

I think now we have to look at it differently. We need to look at how to use ML to validate your risks and assumptions, to validate your market, and then concentrate investments. We can take that 7 percent and get more out of it. That’s how ML is going to help; it’s going to help you find your answers faster.

Gardner: How should organizations get themselves ready? What should organizations that want to become more intelligent — to attain the level of an intelligent enterprise, an intelligent SMB — what do you recommend that they do in order to be in a best position to take advantage of these tools?

Collaborate to compete 

Herman: Historically we asked, “What is your competitive advantage?” That’s something that we talked about in the 1980s, and then we later described learning as your core competency. Now in this time, it’s who you know. It’s your partnerships.

Going back to what Google learned, Google learned how to connect content together and make money. Facebook one-upped them by learning about the relationships, and they learned how to make money based on those relationships.

Going forward, customer networks and supply chains are your differentiation. To plan for that future, we need to make sure that we have clear ways to collaborate. We can work to make the partners strategic, and to focus our energy and bets on those partners who we believe are going to make us effective.

When you look at what are the key enablers, it’s going to be technology. It’s going to be analytics. To me that’s a given in these situations. We want to find someone who is investing, looking forward, and who brings in these new capabilities — whether it’s bitcoin or something else that is transformative in how we make companies more network-driven.

Gardner: So perhaps a variation on the theme of Metcalfe’s Law — that the larger the network, the more valuable it is. Maybe it’s now the more collaboration — and the richer the sharing and mutually assured productivity — the more likely you are to succeed.

Herman: I don’t think Metcalfe’s Law is over yet. We are going to find that between now and 2020, that’s where this is at.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: SAP Ariba.

HPE and Citrix team up to make hybrid cloud-enabled workspaces simpler to deploy

As businesses of every stripe seek to bring more virtual desktop infrastructure (VDI) to their end users, hyperconverged infrastructure (HCI) is proving a deployment back-end architecture of choice.

Indeed, HCI and VDI are combining to make one of the more traditionally challenging workloads far easier to deploy, optimize, and operate.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

The next BriefingsDirect hybrid IT solutions ecosystem profile examines how the benefits of HCI are being taken to managed cloud, hybrid cloud, and appliance deployment models for VDI as well.

To learn more about the future of VDI powered by HCI and hybrid cloud, we are joined by executives from two key players behind the solutions, Bernie Hannon, Strategic Alliances Director for Cloud Services at Citrix, and Phil Sailer, Director of the Software Defined and Cloud Group Partner Solutions at Hewlett Packard Enterprise (HPE). The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Phil, what trends and drivers are making hybrid cloud so popular, and why does it fit so well into workspaces and mobility solutions?

Philip Sailer

Sailer

Sailer: People are coming to realize that the world is going to be hybrid for some time when you look at the IT landscape. There are attractive attributes to public cloud, but there are many customers that are not ready for it or are unable to move there because of where their data needs to be. Perhaps, too, the economics don’t really work out for them.

There is also a lot of opportunity to improve on what we do in private data centers or in private cloud. Private cloud implies bringing the benefits of cloud into the location of a private data center. As our executives at HPE say, cloud is not a destination — it’s a way to get things done and how you consume IT.

Gardner: Bernie, how does hybrid cloud contribute to both opportunity and complexity?

Hannon: The premise of cloud has been to simplify everything. But in reality everybody knows that things are getting more and more complicated. A lot of that has to do with the fact that there’s an overwhelming need to access applications. The average enterprise has deployed more than 100 applications.

And users — who are increasingly mobile and remote, and are trying to access all of these applications on all kinds of devices — have different ways of accessing the apps and different log-in requirements. When they do get in, there are all sorts of different performance expectations. It has become more and more complicated.

Why hybrid cloud?

For the IT organization, they are dealing with securing all those applications – whether those apps are up in clouds or on premises. There are just so many different kinds of distributed organizations. And the more distribution, the more endpoints that have to be secured. It creates complexity — and complexity equals cost.

Our goal is to simplify things for real by helping IT securely deliver apps and for users to be able to have simpler work experiences, so they can get what they need — simply and easily from anywhere, on whatever device they happen to be carrying. And then lock everything down within what we call a secure digital perimeter.

Gardner: Before we look at VDI in a hybrid cloud environment, maybe we should explain what the difference is between a hybrid cloud and a private cloud.

Sailer: Let’s start with private cloud, which is simpler. Private clouds are within the company’s four walls, within their data centers, within their control. But when you say private cloud, you’re implying the benefits of cloud: The simplicity of operation, the capability to provision things very easily, even tear down and reconstruct your infrastructure, and consume resources on a pay-per-use basis. It’s a different financial model as well.

So the usage and financial models are different, but it is still private. You also have some benefits around security and different economic benefits, depending on a variety of parameters.

Hybrid cloud, on the other hand, is a mix between taking advantage of the economics and the flexibility you get with a public cloud provider. If you need to spin up some additional instances and resources for a short period of time, a very bursty requirement, for example, you may want a public cloud option.

In these environments you may have a mix of both hybrid and private clouds, because your workloads will have different requirements – a balance between the need for burstiness and for security, for example. So we see hybrid as being the most prevalent situation.

Gardner: And why is having that hybrid mix and choice a good thing when it comes to addressing the full desktop experience of VDI?

Bernie Hannon

Hannon

Hannon: Cloud is not one-size-fits-all. A lot of companies that originally started down the path of using a single public infrastructure-as-a-service (IaaS) cloud have quickly come to realize that they are going to need a lot of cloud, and that’s why multi-cloud is really the emerging strategy, too.

The ability to seamlessly allow companies to move their workloads where they need to — whether that’s driven by regulation requirements, governance, data sovereignty, whatever — gives users a seamless work experience through their workspace. They don’t need to know where those apps are. They just need to know that they can find the tools they need to be productive easily. They don’t have to navigate to figure out where stuff is, because that’s a constant battle and that just lessens productivity.

Gardner: Let’s dig into how HPE and Citrix specifically are working together. HPE and Citrix have talked about using the HPE SimpliVity HCI platform along with Citrix Cloud Services. What is it about your products — and your cloud approach — that delivers a whole greater than the sum of the parts?

Master cloud complexity

Hannon: HCI for the last several years has been adding a huge amount of value to customers that are deploying VDI. They have simplified the entire management process down to a single management stack, reducing all that complexity. So hyperconverged means you don’t need to have as much specialization on your IT staff to deploy VDI as you did in the past. So that’s great.

So that addresses the infrastructure side. Now we are dealing with the app delivery side, and that has historically been very complicated. To address that, we have packaged the control plane elements used to run Citrix and put them in a cloud, and we manage it as-a-service.

So now we have Citrix-as-a-service up in the cloud. We call that Citrix Cloud Services. We have HPE SimpliVity HCI on the on-premises side. And now we can bring them together. This is the secret sauce that has come together with SimpliVity.

We have built scripting and tools that automate the process for customers who are ready to use Citrix Cloud Services. With just a few clicks, they get the whole process initiated and start to deploy Citrix from the cloud onto SimpliVity infrastructure. It really makes it simple, fast, and easy for customers to deploy the whole stack.

Gardner: We have seen new applications designed of, by, and for the cloud in a hybrid environment. But there are an awful lot of organizations that would like to lift and shift legacy apps and take advantage of this model, too. Is what you are doing together something that could lead to more apps benefiting from a hybrid deployment model?

Making hybrid music together 

Sailer: I give Citrix a lot of credit for the vision that they have painted around hybrid cloud. By taking that management plane and that complexity away from the customer – that is singing right off our song sheet when it comes to HPE SimpliVity.

We want to remove the legacy complexity that our customers have seen and get them to where they need to go much faster. Then Citrix takes over and gets them the apps that they need.

As far as which apps, there aren’t any restrictions on what you can serve up.

Gardner: Citrix has been the bellwether on allowing apps to be delivered over the wire in a way that’s functional. This goes back some 20 years. Are we taking that same value that you pioneered from a client-server history and now extended to the hybrid cloud?

Hannon: One of the nice things about Citrix Cloud Services is that after we have established the relationship between the cloud service up in the cloud and the SimpliVity HCI on-premises — everything is pretty much as it was before. We are not really changing the dynamics about how desktops and applications are being delivered. The real difference is how customers deploy and manage it.

That said, customers are still responsible for managing their apps. They need to modernize their apps and prepare them for delivery via Citrix, because that is a huge challenge for customers, and it always will be. Historically, everything needs to be brought forward.

We have tools like App Layering that help automate the process of taking applications that are traditionally premises-based — not virtualized, and not available through app delivery — and package them for virtual app and desktop delivery. It really amplifies the value of Citrix by being able to do so.

Gardner: I want to go back to my earlier question: What kinds of apps may or may not be the right fit here?

ROI with the right apps 

Sailer: Bernie, can you basically turn a traditional app into a SaaS app that’s delivered through the cloud, in a sense, though not a traditional SaaS app, like a Salesforce or Asana or something like that? What are your thoughts?

Hannon: This is really something that is customer-driven. Our job is to make sure that when they want to make a traditional legacy application available either as a server-based app or as a virtual app on a virtual desktop — that it is possible for them to do that with Citrix and to provide the tools to make that as easy as possible to do.

Which apps exactly are the best ones to do? That’s really looking at best practices. And there are a lot of forums out there that discuss which apps are better than others. I am not personally an expert on trying to advise customers on whether you should do this app versus that app.

Our job is to make a traditional legacy application available either as a server-based app or as a virtual app on a virtual desktop, and to make that as easy as possible.

But we have a lot of partners in our ecosystem that work with customers to help them package their apps and get them ready to be delivered. They can help them understand where the benefits are going to be, and whether there is a return on investment (ROI) for doing certain apps versus others.

Gardner: That’s still quite an increase from what we hear from some of the other cloud providers, to be honest. The public clouds make promises about moving certain legacy apps and app modernization, but when the rubber hits the road … not so much. You are at least moving that needle quite a bit forward in terms of letting the customer decide which way to go.

Hannon: Well, at the end of the day just because you can, doesn’t always mean you should, right?

Gardner: Let’s look at this through the lens of use cases. It seems to me a killer app for these app delivery capabilities would be the whole desktop, VDI. Let’s start there. Where does this fit in? Perhaps Windows 10 migration? What are the other areas where you want to use hybrid cloud, with HPE SimpliVity on private and Citrix Cloud on hybrid, to get your whole desktop rationalization process juiced up?

Desktop migration pathways

Hannon: The tip of the spear is definitely Windows 10 migration. There are still tens of millions of desktops out there in need of being upgraded. Customers are at a real pivot point in terms of making a decision: Do they continue down the path that they have been on maintaining and supporting these physical desktops with all of the issues and risks that we hear about every day? Do they try and meet the needs of users, who frankly like their laptops and take them with them everywhere they go?

We need to make sure that we get the right balance — of giving IT departments the ability to deliver those Windows 10 desktops, and also giving users the seamless experience that makes them feel as if they haven’t lost anything in the process.

So delivering Windows 10 well is at the top of the list, absolutely. And the graphics requirements that go with Windows 10 — being able to deliver those as part of the user experience — are very, very important. This is where HPE SimpliVity comes in, along with partners like NVIDIA who help us virtualize those capabilities, keeping the end users happy however they get their Windows 10 desktop.

Gardner: To dwell just for a moment on Windows 10 migration, cost is always a big factor. When you have something like HPE SimpliVity — with its compression, with its de-dupe, with its very efficient use of flash drives and so forth — is there a total cost of ownership (TCO) story here that people should be aware of when it comes to using HCI to accomplish Windows 10 migrations?

Sailer: Yes, absolutely. When you look at HCI you have to do a TCO analysis. When I talk to our sellers and our customers and ask them, “Why did you choose SimpliVity, honestly, tell me?” It’s overwhelmingly the ones that really take a close look at TCO that move to a SimpliVity stack when considering HCI.

Keeping the cost down, keeping the management cost down as well, and then having the ability to scale the infrastructure up and down the way they need — and protect the data — all within the same virtualized framework — that pays off quite well for most customers.
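As a hedged back-of-the-envelope illustration of why data efficiency shows up in a VDI TCO analysis (the ratios and costs below are invented for the example, not HPE figures):

```python
# Back-of-the-envelope VDI capacity/TCO sketch. The dedup/compression ratio, desktop
# image size, and cost figures are invented for illustration -- not HPE numbers.
desktops = 1_000
image_size_gb = 40                      # logical footprint per virtual desktop
data_efficiency = 10                    # combined dedup + compression ratio (e.g. 10:1)

logical_tb = desktops * image_size_gb / 1_000
physical_tb = logical_tb / data_efficiency
print(f"logical: {logical_tb:.1f} TB, physical after efficiency: {physical_tb:.1f} TB")

cost_per_physical_tb = 400.0            # illustrative flash cost per TB
print(f"storage cost per desktop: ${physical_tb * cost_per_physical_tb / desktops:.2f}")
```

VDI tends to deduplicate well because thousands of desktops share nearly identical golden images, which is why the efficiency ratio drives so much of the per-desktop storage cost in this kind of analysis.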

Gardner: We talked about protecting data, so let’s turn to security impacts. What are some other use cases where you can retain control over desktops, control over intellectual property (IP), and with centralized and policy-driven management over assets? Tell us how hybrid cloud, private cloud, HPE SimpliVity, and Citrix Cloud work together in regard to privacy and security.

How much security is enough?

Hannon: The world is going remote, and users want to access their workspaces on whatever device they are most comfortable with. And IT is responsible for managing the policies – of who is using what on whatever devices. What’s needed, and what we deliver at Citrix, is the ability for these users to come in on any device that they have and uniformly be able to provide the same level of security.

Because how much security is enough security? The answer is there is never enough. Security is a huge driver for adoption of this hybrid cloud app delivery model. It allows you to keep your apps and data under lock and key, where you need them; on-premises is usually the answer we get.

But put the management up in the cloud because that’s where the ease of delivering everything is going to occur. Then provide all of the great tools that come through a combination of Citrix, together with HPE SimpliVity, and our partners to be able to deliver that great user experience. This way the security is there, and the users don’t feel like they are giving up anything in order for that security to happen.

Gardner: Let’s pursue another hybrid cloud use case. If you’re modernizing an entire data center, it might be easier to take everything and move it up into a public cloud, keep it there for a while, re-architect what you have on-premises and then bring it back down to have either private or hybrid production deployments.

Is there a hybrid benefit from the HPE and Citrix alliance that allows a larger migration of infrastructure or a refresh of infrastructure?

Opportunities outside the box

Hannon: We know that a lot of customers are still using traditional infrastructure, especially where VDI is concerned. Hyperconverged has been around for a few years, but not that many customers have adopted it yet.

As the infrastructure that they have deployed VDI on today begins to come to end of life, they are starting to make some decisions about whether or not they keep the traditional types of infrastructure that they have — or move to hyperconverged.

And more and more we are seeing our customers adopt hyperconverged. At the same time, we are presenting the opportunity for them to think out of the box and consider using a hybrid cloud model. This gets them the best of both — the hyperconverged simplicity, plus relieving the IT department of having to manage the Citrix environment, of constantly doing updates, patches, and watching over operations. They let Citrix do that, and let the customers get back to managing the things that are really important — and that’s their applications, data, and security.

Gardner: Speaking of management, we are seeing the need as complexity builds around hybrid models for better holistic management capabilities across multi-cloud and hybrid cloud environments. We have heard lately from HPE about OneSphere and even OneSphere-as-a-service, so HPE GreenLake Hybrid Cloud.

There is probably no end to the things that are possible after this. We are going to start mapping out a roadmap of where we want to go.

Is this an area where the requirements of your joint customers can benefit, around a higher-order cloud management capability?

Hannon: We have just stuck our toe in the water when it comes to hybrid cloud, VDI, and the relationship that we have with HPE as we deploy this workspace appliance capability. But there is probably no end to the things that are possible after this.

We are going to start mapping out a roadmap of where we want to go. We have to start looking at the capabilities that are inside of HPE that are untapped in this model — and there are a lot of them.

Take, for example, HPE’s recent acquisition of Plexxi. Now, software-defined networking has the potential to bring an enormous amount of benefit to this model. We have to sit down and think about how we can apply that and then work together to enable that in this hybrid cloud model. So I think there is a lot of opportunity there.

More to come

Gardner: So we should be looking for more to come along those lines?

Hannon: Watch this space.

Gardner: Before we sign off, there was some news at the recent Citrix Synergy show and there has been news at recent HPE shows, too. What are the specific products in the workspaces appliances space? What has been engineered that helps leverage HPE SimpliVity and takes advantage of Citrix?

Sailer: The Citrix Workspace Appliance Program enables customers to connect to the Citrix Cloud Services environment as easily as possible. We stuck with our traditional mantra that the interface should live where the administrator lives, and that’s within System Center Virtual Machine Manager, or within vSphere, depending on what your hypervisor choice is.

So in both locations we place a nice Citrix connector button right next to our own SimpliVity button. Within a few clicks, you are connected up into the cloud, and we just maintain that level of simplicity. Even through the process of setting all of this up, it’s a very easygoing on-ramp to get connected into the cloud. And that ease of management continues right through the cloud services that Citrix provides.

We had this available in tech preview at the recent HPE Discover show, and we will be releasing the plug-ins later in the year.

Gardner: Bernie, tell us about your vision for how this appliance approach can be a go-to-market benefit. How should people be thinking about such ease in deployments?

Your journey to the cloud, at your pace 

Hannon: At the end of the day, customers are looking for options. They don’t want to be locked in. They want to know that their journey to the cloud, as Phil said, is not a destination; it’s a journey. But they are going to go at their own pace on how they adopt cloud. In some cases they will do it wholesale, and others they will do it in small, little steps.

These kinds of appliance capabilities add features that help customers make choices when they get to a fork in the road. They ask, “If I go hybrid cloud now, do I have to abandon all the infrastructure that I have?”

No, your infrastructure is going to take you on that journey to the cloud, and that’s already built in. We will continue to make those capabilities integrated and built-in, to make it possible for customers to just elect to go in that direction when they are ready. The infrastructure will be simplified and enable that to happen.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise

SAP Ariba’s President Barry Padgett on building the intelligent enterprise

The next BriefingsDirect digital business innovation interview explores how critical business functions provide unique vantage points from which to derive and act on enterprise intelligence.

These classic functions — like procurement and supply chain management — are proven catalysts for new value as businesses seek better ways to make sense of all of their data and to build more intelligent processes.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. 

Here to help us explore how businesses can best operate internally — and across extended networks — to extract insights and accelerate decision-making is Barry Padgett, President of SAP Ariba. The interview is conducted by BriefingsDirect’s Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Businesses want to make more sense of the oceans of data they create and encounter, and even to automate more of what will make them increasingly intelligent. Why are traditional core processes — like procurement, spend management, and supply chain management — newly advantageous when it comes to allowing businesses to transform? 

Padgett: We have had a really great run over the last several years in terms of bringing intelligence and modernization to the front ends of our businesses. We have focused on customer experience, customer engagement, and linking our business activities directly to the customer. This has also created transparency. Now, we’re seeing a sea change in terms of putting the magnifying glass on some of the traditional back-end activities.

Padgett

In particular, when we think about an intelligent enterprise, better connecting to that front-end customer is super important. But we're also seeing real demand for connected supply chains. Part of the reason is that when you look at the statistics — depending on the industry and geography — about 60 to 70 percent of the value of any company is in their supply chain. It's in the materials they procure to ultimately produce the goods that they sell. Or it's in procuring the services and the people to deliver the end-customer services they provide.

There is an opportunity now to link that 60 to 70 percent of the value of the company back into the intelligent enterprise. That begins to drive lots of efficiency, lots of modernization, and to gain some huge business benefits.

Gardner: As we appreciate the potential for such end-to-end visibility, it offers the opportunity to act in new ways. And it seems like becoming such an intelligent enterprise – of gaining that holistic view — must become a core competency for companies.

How does SAP Ariba specifically help companies gain this capability?

Padgett: When we think about connected supply chains and collaboration across our different business units at the companies that we all represent, it's super important that we think about scale. One of the things that SAP Ariba has focused on over the last couple of decades is really building scale as it relates to supply chains and supplier networks.

To give you a sense of the size, we currently have about 3.5 million suppliers transacting on the Ariba Network, and the volume that they put across that network every year is about $2.1 trillion. To put that into context, if you add up all of the global volume going through Alibaba, eBay, and Amazon combined — it's a little over $800 billion. And so the Ariba Network generates almost three times the business of those other three platforms combined.

When we think about the sheer volume of events, transactions, and insights available to us on a network of that size, and then combine that with cloud-based applications — and with SAP's capabilities and their new S/4HANA core — you can unlock some real value.

Delivering this intelligence around context and processes — as it relates to everything from sourcing, to managing your vendors and their contracts, and to managing risk – ultimately drives a ton of cost savings, efficiency, and transparency through to all those buyers and suppliers.

Gardner: It's a new set of capabilities that companies can't afford to ignore. If their competitors do it better than they do, they could lose out on the new efficiency and innovation.

Let’s step back and take a look at some of the trends that are making this a unique time for the concept of an intelligent enterprise. What makes this the opportune time for companies to place the emphasis on being more intelligent?

The third wave 

Padgett: We are in the third wave of intelligence, insomuch as I think the first wave was when everyone recognized the analogies that were used, like data is the new currency, data is the new oil, and big data. We had this unbelievable excitement around being able to unlock and gain visibility into the massive repositories of data that sit around our businesses, and around the applications that we use in our businesses.

Then we had the second wave, which was the realization that this huge amount of data — this vast set of attributes that we had to go and gain intelligence from — was maybe a little bit more challenging than first met the eye in terms of how we get access to it. Some of it was structured, some of it was unstructured, some of it was in one database, another was in a different database, and so we began creating data lakes and data warehouses.

Now we are in a third wave, which is that — even recognizing that we finally have the data together in some sort of consumable format — we really need outcomes. And so we are looking to our vendors that we use and the application suites that we use at our companies to help us to drive new outcomes.

It's less about, "Show me all the data!" And it's less about, "Help me, I can't get my arms around the data!" It's now more around, "How can we use some of these latest technologies that we keep hearing about — artificial intelligence (AI), neural networks, machine learning (ML), blockchain — to start to actually drive business outcomes?"

We are getting fatigued around the actual words themselves: big data, AI, ML. And now we are driving more toward the actual business outcomes.

I liken it to any kind of new technology that comes along. We get very excited about it, but then, ultimately, we begin talking about what impact it can have for our businesses. In the initial wave of moving to the cloud — taking advantage of things like HTML or .NET in the early days — we talked a lot about the technology. And now that cloud transformation is fairly mature and robust, we really don't talk about the technology beneath it anymore. Now we talk about the advantages that cloud offers our business in terms of actionable insights, real-time data, and the cost benefits.

We are now seeing that same kind of maturation cycle as it relates to the intelligent enterprise, and certainly the data that powers it.

Gardner: Allowing more people to take advantage of this intelligence in their work processes, that seems to be where SAP Ariba is headed for the procurement professionals, and for those evaluating supply chains. It brings that intelligence right into their applications, into their workflow.

What is required for enterprises to better bring group intelligence into their business processes?

Collaboration time 

Padgett: You hit the nail on the head. It's now less about integration, and more about collaboration. Where we see our customers collaborating across their businesses, it drives real benefits across their organizations. That's certainly system-to-system, so both with other SAP assets as well as non-SAP assets, in a heterogeneous environment.

But it also means engaging the various business units and organizations. We are seeing a lot of companies move procurement and the chief procurement officer (CPO) to become much more of the hub of broad collaboration.

We like to say that our CPOs are now becoming our chief collaboration officers, because with the transformation we see across supply chains and procurement, we gain the opportunity to bring every component of our business together and begin to have a dialogue around where we can drive new value.

Whether that's in the marketing team, or the sales team, or the operations team, or whatever it happens to be — we end up procuring a lot of goods and services and aligning whatever it is that we're procuring to the outcomes we are looking to drive. That can be customer adoption and retention, or innovation, or whatever core mission we have at our company. It could be around purpose and ethical supply chains and business practices. It really all comes back to this central hub of how we are spending our money, who we are spending it with, and how we can leverage it better to do even more with it.

Gardner: In order to empower that CPO to become more the collaboration officer, an avalanche of data isn’t going to do it. Out-of-context intelligence isn’t going to do it.

What is it that SAP Ariba uniquely brings that allows for a contextual injection, if you will, of the right intelligence at the right time that empowers these people, but does not overwhelm them?

Deep transparency

Padgett: First and foremost, it's transparency. There is a very good chance that a lot of our prospects — and certainly a lot of your listeners — won't be able to put their finger on exactly what they spend, who they spend it with, and whether it's aligned to the outcomes they are trying to drive.

Some of that is a first-line defense of, “Let’s actually look at our suppliers. Are we completely and fully automated with those suppliers so that we can transact with them electronically and cut out a lot of the manual process and some of the errors and redundancy that exists at our organizations?” There are some cost savings there. For sure, there is some risk management.

And then, when we go a step deeper, it’s, “How do we make sure that the suppliers that we are doing business with are who they say they are? Do they support the kinds of attributes and characteristics that we want within our suppliers?”

Then we can go deeper, looking at the suppliers of those suppliers. As we go two, three, four, five rungs deep into the supply chain, we can make sure that we are marrying, if you like, the money that we are spending with the outcomes we are trying to drive for our companies.
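
To make the idea of going several tiers deep concrete, here is a minimal, hypothetical sketch of walking a supplier graph and flagging risky suppliers a few rungs down. The data shape, field names, and risk threshold are illustrative assumptions, not SAP Ariba's actual data model or API.

```python
from collections import deque

# Illustrative, hand-built supplier graph: each supplier lists its own suppliers.
# In practice this data would come from a business network; the shape shown here
# is an assumption made for the sketch.
SUPPLIERS = {
    "acme-mfg":     {"risk_score": 0.2, "sub_suppliers": ["metal-co", "plastics-inc"]},
    "metal-co":     {"risk_score": 0.7, "sub_suppliers": ["ore-traders"]},
    "plastics-inc": {"risk_score": 0.3, "sub_suppliers": []},
    "ore-traders":  {"risk_score": 0.9, "sub_suppliers": []},
}

def flag_risky_suppliers(root, max_depth=5, threshold=0.6):
    """Breadth-first walk of the supplier graph, up to max_depth tiers,
    returning (supplier, tier) pairs whose risk score exceeds the threshold."""
    flagged, seen = [], {root}
    queue = deque([(root, 0)])
    while queue:
        name, depth = queue.popleft()
        info = SUPPLIERS.get(name, {})
        if info.get("risk_score", 0) > threshold:
            flagged.append((name, depth))
        if depth < max_depth:
            for sub in info.get("sub_suppliers", []):
                if sub not in seen:
                    seen.add(sub)
                    queue.append((sub, depth + 1))
    return flagged

print(flag_risky_suppliers("acme-mfg"))  # [('metal-co', 1), ('ore-traders', 2)]
```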

That's what the Ariba Network does, not only on the supply side — to make it easy for suppliers to do business with their customers — but also for our buy-side customers, the procurement customers, to make sure that they are getting transparency into everything. And that extends from their contracts, to making sure that they are administering and managing those contracts effectively, to also ensuring that they performance-manage those suppliers. They are then able to take risk out of their businesses, and ultimately link the dollars or the Euros they spend as a company with the core mission and purpose that they have.

Those missions can be ensuring that they have the right kinds of environmental sustainability and impact or looking to drive forced labor and slave labor out of their supply chains. Or they simply could be trying to ensure a diverse supplier base, including empowering minority and female-owned businesses or LGBT businesses. There’s really an opportunity there, but it all comes back to that very first point I made around first creating the transparency. Then you can go unleash the opportunities and innovation that you seek, once you have the transparency.

Gardner: Let's go to some examples or use cases as we define the outcomes that are possible when you combine these cultural changes and attributes with the powerful tools and insights from an organization like SAP Ariba.

I recently saw some information about a digital manufacturing capability, using both the Ariba Network as well as SAP services. Is this a good example of bringing more intelligence and showing collaboration across an entire manufacturing ecosystem?

Share creativity, innovation 

Padgett: One of the best manufacturing networks out there is the SAP Manufacturing Network. It’s connected to the Ariba Network. There are about 30,000 discrete suppliers connected to that network, specifically focused on manufacturing. And again, when you open up this kind of collaborative community on a network, we can start to do really neat things.

Let’s say you’re trying to create a new product, or you want a new part manufactured. With this kind of collaborative network, you can throw up a 3-D drawing, collaborate in real-time with whatever subset of those 30,000 discrete suppliers you want, and start to drive innovations that you wouldn’t have been able to do on your own.

It’s about how to harness the creative genius that exists outside of the four walls of your business when you are embarking on new projects. It means having a network available to you that operates in real-time to change the paradigm and the way you think about innovation at your company.

You can find vendors very quickly. You get to manage those vendors in completely new ways. You can collaborate in real-time, which allows you to do more in less time. It provides an edge when you think about competitive differentiation. This is no longer, "How do we make our back-end more efficient?" It's more about how to drive competitive differentiation across an industry, to be agile, and to do things — particularly in the manufacturing network — that you haven't been able to do before. That means such things as linking the operations centers on a factory floor to the supply chain in real-time, as well as to your warehouses, across the globe.

There are a lot of really great examples in all industries, but manufacturing has some particular opportunities given that we are making such a quantum leap from how we used to do things. It’s a new paradigm, an intelligent enterprise.

Gardner: Manufacturing capabilities and efficiencies also shine light on why having a mission-critical network is important. Because you are dealing with intellectual property — such as designs of new products and sharing of secrets — if you don’t do that in a secure way, with compliance built-in, then you could certainly run into trouble.

Why is having this in the right network – one built for compliance and security — so important?

Mission-critical manufacturing

Padgett: Yes, you mentioned the idea of mission critical. A lot of what we think of traditionally as back-of-the-house processes around procurement may have been looked at as business critical.

But we need to think about them, too, as mission critical. We need to think differently because of things like manufacturing networks, the intelligence available to us via the Internet of things (IoT) on our factory floors, and the urgent requirement for parts when there is a failure. We need to be ready when a failure happens or is about to happen.

We need to link immediately in real-time to our supply chains, our suppliers, and our warehouses around the world. We can now keep those machines up and running much more efficiently without downtime, which drives competitive differentiation and top-line revenue growth for the company. This is a really good example of the difference between business critical and mission critical.
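
As a small illustration of that real-time link, here is a hypothetical sketch of an event handler that reacts to a predicted machine failure by checking warehouse stock for the needed part and raising an order. The machine, part, and warehouse names and the failure-probability threshold are all assumptions made for the example.

```python
# Hypothetical sketch only: names, stock levels, and the threshold are assumptions.
WAREHOUSE_STOCK = {"bearing-204": {"hamburg": 3, "dallas": 0}}

def on_sensor_event(machine_id, part_id, failure_probability, threshold=0.8):
    """React to factory-floor telemetry: if a failure looks likely, find the part
    in a warehouse and ship it, or fall back to raising a purchase order."""
    if failure_probability < threshold:
        return None  # nothing urgent yet
    for warehouse, qty in WAREHOUSE_STOCK.get(part_id, {}).items():
        if qty > 0:
            return {"action": "ship", "part": part_id, "from": warehouse, "to": machine_id}
    return {"action": "purchase-order", "part": part_id, "for": machine_id}

print(on_sensor_event("press-07", "bearing-204", 0.93))
# {'action': 'ship', 'part': 'bearing-204', 'from': 'hamburg', 'to': 'press-07'}
```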

Gardner: How does the intelligent enterprise help engender a richer ecosystem of partners and alliances? How do third-parties now become an accelerant or a force-multiplier to how businesses react in their markets?

Padgett: The whole paradigm around a network fundamentally has a requirement that all the parties are participating. There has to be value for all parties, otherwise it falls apart and it doesn’t work. If it’s too heavily buy-side focused, you don’t have suppliers there. If it’s too heavily supply-side then you don’t attract the buyers. So it’s like a flywheel — and all aspects have to be in balance, meaning that everybody is winning.

When you look at the intelligent enterprise, it has to extend to both sellers as well as buyers. The cool thing is that in these networks, sellers can use the same technologies. They get to analyze data from millions of sources, and they get a 360-degree view of buyers and of their health. They get embedded into their customers' demand chains, and not just the supply chain.

You are a far better supplier by being able to work with your buyers and get fundamentally more visibility and transparency into their planning and buy cycles, and ultimately be able to anticipate in real-time the kinds of demand your customer is having or will have.

This allows you to plan and ensure that you can meet their requirements, and hopefully exceed them. And that’s new. That’s not the kind of collaboration that existed in the past. This is an evenly weighted, balanced scorecard in terms of making sure buyers and sellers all see value and a reason to participate.

Other examples would be a seller quickly and easily getting simple information like a change-in-payment status, updates on a decline in sales, changes in leadership, pricing fluctuations around commodities or supply, and being able to look at those in real-time and cross-reference them. They can analyze that, not only with things they’ve done in the past, but also what’s happening in the marketplace overall.

There is a lot of value here. Being able to tap into these opportunities is super important. So, suppliers should also want to participate. Would they see this as a tax, or just something else that we are asking suppliers to do in order to get more business?

The 3.5 million suppliers active on the Ariba Network see the opportunity for new business and for discovery. They join these networks because it’s not only an opportunity to service their existing customers in a better and more modern way, but because there’s an opportunity to attract new customers.

It speaks to collaboration and it speaks to the discovery process available to buyers so they source a really diverse and rich set of suppliers for their community.

Gardner: As procurement professionals elevate themselves to a more strategic level and add value via collaboration and intelligence, they are clearly less of a cost center. Are we at a pivot point where the notion of procurement as a cost center needs to be reevaluated?

Profitable procurement goals

Padgett: That's certainly the ambition, the goal, and the aspiration. The best business case an organization has for driving savings is the procurement business case.

We're finding that a ton of the digital transformation projects happening right now around the world are led via a procurement project. You start with modernizing and creating intelligence in your supply chain and in your procurement processes. The savings that come out of those projects, which are materially 4 to 8 percent of what a company spends in total, form the driving force that then helps fund the rest of the digital transformation.

Certainly there is an opportunity for the CPO to move from being a cost center to being a value center. But the thing that gets this off the ground and funded is the fact that there are a ton of efficiency, process, and cost-savings opportunities within procurement. That's kind of table stakes, and the blocking and tackling of getting started.

But once you get started, your observation is right on. Once we've saved a huge amount of money and optimized the process and transparency in our businesses, we can extend that and create more value and differentiation for our organizations on the basis that we now have a ton of new tools and transparency available to us.

Gardner: There is still more to come, of course, in terms of what new technologies can provide. What should people be thinking about in terms of products that will soon enable this intelligence to become more practical?

Insightful intelligence evaluates risk

Padgett: We recently launched products like the Ariba Supplier Risk capability, which allows our customers to go in and evaluate their supply chain and look for areas where they have risk or exposure. That can use our data, the customer’s data, or third parties connected to the Ariba Network, such as Verisk Maplecroft or EcoVadis.

They basically deliver insights into environmental and sustainability risk factors. Another third-party connected to the network is Made in a Free World, and they score and detect forced labor in your supply chains. There are really interesting opportunities in terms of managing risk.

Then there are more meat-and-potatoes kinds of opportunities. We’re partnering with IBM and utilizing their Watson capabilities as well as the SAP Leonardo intelligence suite to do things like drive smarter contracts and build out more powerful intelligence capabilities within the ecosystem.

That could be simple things like making sure we don't have duplicate payments across our businesses, or looking at the hundreds or potentially thousands of contracts that we manage in our organizations and applying intelligence so we're notified proactively if there are risk factors. Maybe there is an exchange-rate clause, for example, in some of the contracts that we manage; perhaps some action is required, or a threshold activates a different clause in the contract.

We can't expect a contract manager to remember all of those across the thousands of contracts that we manage. And since they're usually in different formats and archived in different locations, we can use intelligence to drive efficiency, manage risk, and ultimately contribute to the bottom line, which helps us reinvest those bottom-line savings into top-line initiatives.
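
To show what this kind of proactive intelligence could look like in code, here is a minimal, hypothetical sketch of two checks: one that raises an alert when an exchange-rate clause threshold is crossed, and one that flags likely duplicate payments. The record fields, thresholds, and rules are assumptions for illustration, not SAP Ariba, Watson, or Leonardo functionality.

```python
from datetime import date

# Illustrative contract and payment records; in practice these would be extracted
# from repositories in many formats. All field names here are assumptions.
CONTRACTS = [
    {"id": "C-1001", "fx_clause": True,  "fx_trigger_rate": 1.10, "currency_pair": "EUR/USD"},
    {"id": "C-2042", "fx_clause": False, "fx_trigger_rate": None, "currency_pair": None},
]

def fx_clause_alerts(contracts, current_rates):
    """Return contract IDs whose exchange-rate clause threshold has been crossed,
    so a contract manager can be notified proactively."""
    alerts = []
    for c in contracts:
        if c["fx_clause"] and c["currency_pair"] in current_rates:
            if current_rates[c["currency_pair"]] >= c["fx_trigger_rate"]:
                alerts.append(c["id"])
    return alerts

def duplicate_payments(payments):
    """Flag payments that share supplier, amount, and date (a simple heuristic)."""
    seen, dupes = set(), []
    for p in payments:
        key = (p["supplier"], p["amount"], p["date"])
        if key in seen:
            dupes.append(p["id"])
        else:
            seen.add(key)
    return dupes

print(fx_clause_alerts(CONTRACTS, {"EUR/USD": 1.12}))  # ['C-1001']
print(duplicate_payments([
    {"id": "P-1", "supplier": "acme", "amount": 500.0, "date": date(2018, 6, 1)},
    {"id": "P-2", "supplier": "acme", "amount": 500.0, "date": date(2018, 6, 1)},
]))  # ['P-2']
```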

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: SAP Ariba.

How one MSP picked Bitdefender’s layered security approach for cloud-to-client protection

Solving security concerns has risen to the top of the requirements list for just about everyone, yet service providers in particular have had to step up their game.

Just as cloud models and outsourcing of more data center functions have become more popular, the security needs across those cloud models have grown more fast-paced and pressing.

And as small- to medium-sized businesses (SMBs) have turned to such managed service providers (MSPs) to be in effect their IT departments, they are also seeking those MSPs to serve as their best defenses against myriad and changing security risks.

The latest BriefingsDirect security insights discussion examines how MSPs in particular are building better security postures for their networks, data centers, and clients’ operations.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to discuss the role of the latest security technology in making MSPs more like security services providers are Brian Luckey, Director of Managed Services, and Jeremy Wiginton, Applications Administrator, both at All Covered, IT Services from Konica Minolta, in Ramsey, New Jersey. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What are some of the trends that have been driving the need for MSPs like yourselves to provide even more and better security solutions?

Luckey: As MSPs, we are expected, especially for SMBs, to cover the entire gamut when it comes to managing or overseeing an organization’s IT function. And with IT functions come those security services.

It's just an expectation at this point when we are managing the IT services for our clients. They also expect that we are overseeing the security part of that function as well.

Gardner: How has this changed from three to five years ago? What has changed that makes you more of a security services provider?

Luckey: A major driver has been the awareness of needing heightened security. All of the news about security breach events — especially over the last 12 months, let alone the last couple of years — such as WannaCry and Petya, has raised that awareness.

Luckey

Now, not only companies but also their owners and executives are more in tune with the risks. This has sparked more interest in making sure that they are protected and feel like they are protected. This has all definitely increased the need for MSPs to provide these IT services.

Gardner: As we have had more awareness, more concerns, and more publicity, then the expectations are higher.

Jeremy, what are some of the technical advances that you feel are most responsible for allowing you as an MSP to react better to these risks?

Wiginton: The capability for the fast analytics, the fast reporting, and the fast responses that we can get out of the products and solutions that we use helps tremendously in making sure that our clients are well protected and that any security issues that might pop up are mitigated very, very quickly.

Gardner: The role of compliance requirements has also risen. Are your clients also seeking more security and privacy control around such things as the Health Insurance Portability and Accountability Act (HIPAA) or Payment Card Industry Data Security Standard (PCI DSS) and nowadays the General Data Protection Regulation (GDPR)?

Tools and regulations

Luckey: Oh, absolutely. We provide IT services to a variety of different industries like the financial industry, insurance, and health care. In those industries, they have high regulations and compliance needs, which are growing more and more. But there are also companies that might not fall into those typical industries yet have compliance needs, too — like PCI and GDPR, as you mentioned.

Gardner: As Jeremy mentioned, speed is critical here. What’s been a challenge for organizations like yours to react to some of these issues — be it security or compliance?

Luckey: That’s a great question. There are a couple of things. One is technology; having the right technology that fits not only our business needs, but also our clients’ needs. That’s a huge impact for us — being able to provide the right service and the right fit.

But integration points are also important. Disparate systems that don't integrate or work well together can make it difficult for us to service our clients appropriately. If we have internal chaos, how can we provide a great service to our clients?

The proper progression and adoption of services and solutions is also key. We are in a technological world, for sure, and as technology progresses it only gets better, faster, cheaper, and smarter. We need to be able to use those solutions and then pass along the benefits to our clients.

Gardner: Jeremy, we have been speaking generally about data center services, IT services, but what about the applications themselves? Is there something specific to applications hosting that helps you react more quickly when it comes to security?

Quick reactions

Wiginton: Most assuredly. A lot of things have been getting bloated. And when things get bloated, they get slowed down, and clients don’t want them on their machines — regardless of how it impacts their security.

So being able to deliver a modern security product that is lightweight, and so fast that the clients don’t even notice it, is essential. These solutions have come a long way compared to when you were constantly doing multiple things just to keep the client happy and having to compromise things that you may not have wanted to compromise.

Gardner: It wasn’t that long ago that we had to make some tough trade-offs between getting security, but without degrading the performance. How far do you think we have come on that? Is that something that’s essentially solved nowadays?

Wiginton: For the most part, yes. We have a comprehensive solution, where one product is doing the job of many, and the clients still don’t notice.

Gardner: Tell us more, Brian, about All Covered. You have 1,200 employees and have been doing this since 1997. What makes you differentiated in the MSP market?

Longevity makes a difference 

Luckey: We have been around a long time. I think our partnership and acquisition by Konica Minolta many years ago has definitely been a huge differentiator for us. Being focused on the office workplace of the future and being able to have multiple different technologies that serve an organization’s needs is definitely critical for us and the differentiating factor.

We have been providing computing and networking services, and fulfilling different application needs across multiple vertical industries for a long time, so it makes us one of the major MSP and IT players.

Gardner: But, of course Konica Minolta is a global company. So you have sister properties, if you will, around the globe?

Luckey: That is correct, yes.

Gardner: Let’s find out what you did to solve your security and performance issues and take advantage of the latest technology.

Luckey: We set out to find a new endpoint security vendor that would meet the high demands of not only our clients, but also our internal needs as well to service those clients appropriately.

We looked at more than a dozen different solutions covering the endpoint security marketplace. Over about six months we narrowed it down to the final three and began initial testing and discussions around what these three endpoint security vendors would do for us and what the success factors would look like as we tested them.

We eventually chose Bitdefender Cloud Security for MSPs.

Gardner: As an MSP, you are concerned not only with passing along great security services, but you have to operate on a margin basis, and take into consideration how to cut your total cost over time. Was there anything about the Bitdefender approach that’s allowed you to reduce man hours or incidents? What has been impactful from an economic standpoint, not just a security posture standpoint?

A streamlined security solution 

Luckey: Bitdefender definitely helped us with that. Our original endpoint security solution involved three different solutions, including an anti-malware solution. And so just being able to condense those into one — but still providing the best protection that we could find — was important to us. That’s what we found with Bitdefender. That definitely saved us some costs from the reduction of overall number of solutions.

But we did recognize other things in choosing Bitdefender, like the reduction of incidents; I think we reduced them by about 70 percent. That translated into a reduction of people and manpower needed to address issues. That, too, was a big win for us. And having such a wide diversity of clients — and also a large endpoint base — those were big wins for us when it came down to choosing Bitdefender.

Gardner: Jeremy, we're talking about endpoint security, and so that means the movement of software. It means delivery of patches and updates. It means management of those processes. What was it about Bitdefender that helped with the logistical elements of getting and keeping the security in place?

Wiginton: Having everything managed, a single pane of glass interface for the endpoint security side, that has saved a ton of time. We are not having to go look in three different places. We are not having to deal with some of our automated things that are going on. We are not having to deal with two or three different APIs to try and get the same information or to try and populate the same information.

We have one consistent product to work with, a product that, as Brian said, has cut down on the number of things that come across our desks by at least 70 percent. The incidents still occur, but they are getting resolved faster and on a more automated basis with Bitdefender than they were in the past with our other products.

Gardner: Brian, where you are in your journey of this adoption? Are you well into production?

Luckey: We are well into the journey. We chose Bitdefender in mid-2016, and we were deployed in January 2017. It’s been about a year-and-a-half now, and still growing.

We have grown our endpoints by about 30 percent from the time that we originally went live. Our business is growing, and Bitdefender is growing with us. We have continued to have success and we feel like we have very good protection for our clients when it comes to endpoint security.

Gardner: And now that you have had that opportunity to really evaluate and measure this in business terms, what about things like help desk, remote patch management, reporting? Are these things that have changed your culture and your business around security?

Reporting reaps rewards

Luckey: Yes, absolutely. We have been able to reduce our incidents, and that’s obviously been a positive reflection on the service desk and help desk on taking calls and those type of issues.

For patching, we have a low patch remediation rate, which is great. I'm sure that Bitdefender has contributed strongly to that.

And for reporting, it’s big for us. Not only do we have more in-depth and detailed reporting for our clients, but we also have the capability to give access to our clients to manage their own endpoints, as well as to gain reports on their own endpoints.

Gardner: You’re able to provide a hybrid approach, let them customize — slice and dice it the way they want for those larger enterprise clients. Tell us how Bitdefender has helped you to be a total solution provider to your SMB clients?

Luckey: Endpoint security has become a commodity business. It's one of those things you just have to do. It's like a standard requirement. And not having to worry about our standard offerings, like endpoint security — we just know it works, we know how it works, we are very comfortable on how it works, and we know it inside and out. All of that makes life easier for us to focus on the other things, such as the non-commodity businesses or the more advanced items like security information management (SIM) and managed unified threat management (UTM).

Gardner: What can you do now with such value-added services that you could not do before?

Luckey: We can focus more on providing the advanced types of services. For example, we recently acquired a [managed security services and compliance consulting] company, VioPoint, that focuses solely on security offerings. Being able to focus on those is definitely key for us.

Gardner: Jeremy, looking at this through the applications lens again, what do you see as the new level of value-added services that you can provide?

Fewer fires to extinguish 

Wiginton: We are bringing in and evaluating Bitdefender technologies such as Full Disk Encryption. It has been a nice little product. I have done some testing with it, they let me in on their beta of it, which was really nice. It’s really easy to use.

Also, [with Bitdefender], because there’s a lot less remediation needed on security incidents, we have seen a great drop in things like ransomware. As a result, I am able to focus more on making sure that our clients are well protected and making sure that the applications are working as intended — as opposed to having to put out a fire because the old solution let something in that it shouldn’t have.

Gardner: It’s been great to talk about this in the abstract, but it’s very powerful too if we can get more concrete examples.

Do you have any use cases for your MSP endpoint security and management capabilities that you can point to?

Luckey: The one that comes to mind, and always sticks with me, is a legal client of ours. When we rolled out Bitdefender to replace the older security solutions they had, their business stopped. And the reason their business stopped is there was malware being detected, and we couldn’t find out where it was coming from.

After additional research, we found that their main application for managing their clients and billing — basically for running their business — had malware in its executable file, the same file they would copy and install on every desktop.

The previous solutions didn’t catch that. Every time they were deploying this application to new users, or if they had to redeploy it, they were putting malware on every machine, every time. We weren’t able to detect it until we had Bitdefender deployed. Once Bitdefender detected it, it stopped the business, which is not good. The better part was that we were able to detect the malware that was being spread across the different machines.

That’s one example that I always remember because that was a big deal, obviously by stopping the business. But the most important part was that we were able to detect malware and protect that company better than they had been protected before.

Gardner: The worst kind of problem is not knowing what you don’t know.

Luckey: Exactly! Another example is a large client that has many remote offices for its dental services, all across the US. Some offices had spotty Internet access, so deploying Bitdefender was challenging until we used Bitdefender Relay. And Relay allowed us to deploy it once to the company and then deploy most of the devices with one deployment, instead of having to deploy one agent at a time.

And so that was a big benefit that we didn’t have in the past. Being able to deploy it once and then have all the other machines utilize that Relay for the deployments made it a lot easier and a lot faster due to the low bandwidth that was available in those locations.

Wiginton: We had a similar issue at a company where they would not allow their servers to have any Internet access whatsoever. We were able to set up a desktop as the Relay and get the servers connected to the Relay on the desktop to be able to make sure that their security software was up-to-date and checking in. It was still able to do what it was supposed to, as opposed to just sitting there and then alerting whenever its definitions became out of date because it didn’t have Internet access.
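
The relay pattern described here can be sketched generically: one internet-connected machine fetches content once and redistributes it on the local network, so low-bandwidth or offline machines never go out to the internet themselves. The following is an illustrative sketch only; it is not Bitdefender Relay's actual mechanism, and the update URL is a made-up placeholder.

```python
# Generic sketch of the relay idea: fetch once over the WAN, serve many times on
# the LAN. Illustrative only; not Bitdefender Relay's actual implementation.
import http.server
import socketserver
import urllib.request

UPSTREAM_URL = "https://updates.example.com/definitions.bin"  # hypothetical URL
LOCAL_COPY = "definitions.bin"
PORT = 8080

def fetch_once():
    """Download the update a single time over the internet link."""
    urllib.request.urlretrieve(UPSTREAM_URL, LOCAL_COPY)

def serve_to_lan():
    """Serve the cached file to endpoints on the local network."""
    handler = http.server.SimpleHTTPRequestHandler
    with socketserver.TCPServer(("", PORT), handler) as httpd:
        httpd.serve_forever()

if __name__ == "__main__":
    fetch_once()    # one download over the constrained link ...
    serve_to_lan()  # ... then many local downloads from http://<relay-ip>:8080/definitions.bin
```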

Gardner: Let's look to the future and what comes next. We have heard a lot about encryption, as you mentioned, Jeremy. There's also a lot of research and development being done into things like machine learning (ML) to help reduce the time to remediation and allow the security technology to become more prescriptive, to head things off before they become a problem.

Brian, what are you looking for next when it comes to what suppliers like Bitdefender can do to help you do your job?

Future flexibility and functionality 

Luckey: We have already begun testing some of the newer functionality being released to the Bitdefender MSP Cloud Security suite this month. We are looking into the advanced security and ML features, and some new functionality they are releasing. That's definitely our next approach when it comes to the next generation of the Bitdefender agent and console.

And in addition to that, outside of Bitdefender, we are also expanding the services from our new security acquisition, VioPoint, and consolidating those to provide best-in-class security offerings to our clients.

Gardner: Jeremy, what entices you about what’s coming down the pike when it comes to helping to do your job better?

Wiginton: I’m really looking forward to Bitdefender’s Cloud, which allows us a lot more flexibility because we are not having to allocate our own internal resources to try and do the analytics. So their Sandbox Analyzer and things that are coming soon really do interest me a lot. I am hoping that that will further chop down the number of security incidents that come across our desk.

Gardner: What would you suggest in hindsight, now that you have made a big transition from multiple security providers to more of a consolidated comprehensive approach? What have you learned that you could share with others who are maybe not quite as far along in the journey as you?

Testing, testing

Luckey: Number one is testing. We did a pretty good job of testing. We took a three-pronged approach of internal, external, and then semi-internal, so our help desk folks. Make sure that you have a comprehensive test plan to test how well threats are being blocked, what kinds of malware are being caught, and the overall functionality. That's the big one … test, test, and test some more.

Choosing the right partner and the right vendor, if you will, is key. I believe in having partners instead of just vendors; vendors just supply products, but partners work together to be successful.

It's kind of like dating: date partners until you find the right one — and Bitdefender has definitely been a great partner for us.

Otherwise, have your requirements set up for what success looks like; those are all important. But the testing — and making sure you find the right partner — those were key for us. Once we knew what we wanted, the rest of it fell into place.

Gardner: Jeremy, from your perspective, what advice could you give others who are just starting out?

Wiginton: Make sure that you are as thorough as possible in your testing, and get it done sooner rather than later. The longer you wait, the more advanced threats are going to be out there and the less likely you are going to catch them on an older solution. Do your homework, and be on the ball with it.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Bitdefender.

Citrix and HPE team to bring simplicity to the hybrid core-cloud-edge architecture

One of the key elements of digital transformation is aligning core, cloud, and edge computing using the latest architectures and efficiencies. Yet new levels of simplicity are needed to satisfy all the requirements for both end users and IT operators.

The next BriefingsDirect IT solutions ecosystem profile interview examines how Citrix and Hewlett Packard Enterprise (HPE) are specifically aligned to help bring such digital transformation benefits to a broad, global market.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn more about the venerable and always-innovative Citrix-HPE partnership, BriefingsDirect sat down with executives Jim Luna, Senior Director for Global Alliances at Citrix, and Jeff Carlat, Senior Director of Global Alliances at HPE. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Jim, what trends are driving the need for more innovation around mobile workspaces?

Luna

Luna: As customers embark on digital transformation, they still need to access their apps, data, and desktops from anywhere. With the advent of 5G wireless and new network connectivity, we need to allow customers to get their data and apps from any device as well. So we see a transformation in the marketplace.

Carlat: We are also looking at a new workforce coming in, the millennials, and they realize the traditional way of going to a job is totally changing. Being able to be at work anytime, anyplace, anywhere — and removing the barriers of where work is — that is driving us to co-innovate. We are delivering solutions that allow the freedom to be more efficient anywhere.

Gardner: There’s a chicken-and-egg relationship. On one hand, the core, cloud, and edge can work in tandem to allow for safe, secure, and compliant data and applications sharing activities. And that encourages people to change their work behaviors, to become more productive. It’s hard to decide, which is leading which?

Work anywhere, anytime on any device 

Luna: Traditionally, people had a desktop with applications, and they wanted that particular image replicated throughout their environment. But with the advent of software-as-a-service (SaaS) applications that are web-delivered, they now need more of a management workspace, if you will, that allows them to work with any type of application — whether it’s being delivered locally, on-premises, or through a cloud-based SaaS application. Delivering a unified workspace anywhere becomes critical for them.

Carlat: We also have requirements around security — increasing the security of data and personal files. This forces us to work together, to take that workspace but not have it sitting in a vulnerable laptop left in a Starbucks. Instead that desktop sits back in the comfort and safety of a locked-up data center.

Luna: People want a simple experience. They don’t want a complicated experience as they access their apps and data. So, simplicity becomes key. User experience (UX) becomes key. And choice becomes key as well.

Carlat: On expectations of simplicity and UX, if I find it hard to log in to SharePoint I may just give up and say, "Well, I'm not going to be using those services." It's so easy to just move to the next item on your list.

Carlat

Like I said, with millennials, that’s the expectation. It’s a mandatory requirement. If we can’t deliver that ease of experience to them, others will.

Gardner: User expectations are higher. They want flexibility. They want to be more productive anywhere. We know the technologies are available to accomplish this.

What’s holding back organizations from executing? How are Citrix and HPE together forming a whole greater than the sum of the parts to help businesses execute on this vision?

Collaborate to simplify 

Luna: Traditionally it’s been the complexity of the deployment of the architecture — both on the hardware side, as well as on the software side. The things that we are doing together are simplifying that process from a deployment perspective, from a manageability perspective, from a support perspective, as well as the other features of experience, security, and choice.

We are working to simplify the experience — not just in terms of managing and deploying, but also to make sure that that end-user experience is simplified as well.

Gardner: Virtual desktop infrastructure (VDI) has been around for some time, but earlier there were issues around network capacity, and certain media formats lagged. These performance issues have been largely put to rest. How does that factor into accelerating mobile workspaces adoption?

Carlat: In the 22 years of my IT experience at Compaq and HPE, I’ve seen the processor compute power increase significantly. The network, storage, and other inhibitors, from a technology standpoint, are pretty much gone now.

It moves the problem away from the infrastructure to the complexity issue: How do you unleash that potential in a manner that is easy to consume? That's the next level.

Luna: One of the other things our partnership allows is more choice. With HPE infrastructure, we have a variety of different choices available to customers, according to their unique requirements. There is now choice in terms of the architecture that better suits their deployment requirements.

Gardner: We’ve heard about hyperconverged infrastructure (HCI) helping people on deployments. We’ve heard about appliance models. Are these part of that new choice?

Carlat: Yes, that’s why we have come together. We are delivering workspace appliances with Citrix on top of our HPE SimpliVity HCI portfolio.

Not only is a customer going to capture the benefits of everything that’s gone into our SimpliVity HCI platform, but we marry it with the world that Citrix provides for VDI, virtual applications, and mobile desktops.

Luna: On one hand, we’re making it easier for established customers to manage their Citrix environments through a simplified management plane with Citrix Cloud Services. But by having the security of that data sitting locally on a SimpliVity appliance — that’s really good for customers in terms of data governance, data control, and data security.

But there are other architectures for other segments, like in the financial services industry, where we have trader workstations that provide multi-monitor support and high graphics capabilities. So, choice is key.

Carlat: Yes, as these traders are executing trades, any latency is going to eliminate your technology from being used. So there are very, very strict requirements around latency and performance, as well as security. There are also benefits on total cost, space, and being able to deliver a very rich media environment. Sometimes it's upward of six monitors that have to be patched into this, too.

Through the capabilities we bring together — bridging our leading infrastructure with the Citrix portfolio — we get a magical combination that can be easily deployed, and it just works.

Gardner: As I mentioned, we want to provide more simplicity for IT operators. One of the things that Citrix has been working on for years is intelligent network capabilities. How is Citrix addressing simplicity around these requirements?

Cloud-control solutions

Luna: Citrix is moving to a cloud service model where these technologies are available through a cloud-control plane, whether that's VDI, or gateway-as-a-service, or a load-balancer-as-a-service. All of those things can be provisioned from a central plane, on-premises or on a customer's device. And those are solutions we can deliver whether it is on a standard HPE ProLiant DL380 server, or whether it's SimpliVity HCI, or whether that's on HPE Moonshot or a Synergy composable infrastructure environment. Those architectures can simply be delivered and managed through a cloud service onto HPE infrastructure.
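
As a rough illustration of driving such a cloud control plane, here is a hedged, hypothetical sketch in which one API call shape provisions different service types onto named on-premises targets. The endpoint, payload fields, and token handling are assumptions for the example, not Citrix Cloud's real API.

```python
# Hypothetical sketch: the same request shape provisions VDI, a gateway, or a
# load balancer onto whichever on-premises infrastructure hosts the workload.
# Endpoint, fields, and token are assumptions, not Citrix Cloud's actual API.
import json
import urllib.request

CONTROL_PLANE = "https://controlplane.example.com/api/v1/services"  # hypothetical
API_TOKEN = "replace-with-a-real-token"

def provision(service_type, target_infrastructure):
    payload = json.dumps({
        "type": service_type,             # e.g. "vdi", "gateway", "load-balancer"
        "target": target_infrastructure,  # e.g. "simplivity-cluster-01"
    }).encode("utf-8")
    req = urllib.request.Request(
        CONTROL_PLANE,
        data=payload,
        headers={"Authorization": "Bearer " + API_TOKEN,
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# provision("vdi", "simplivity-cluster-01")
# provision("load-balancer", "proliant-dl380-rack-2")
```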

Gardner: We've also been hearing about the complexity of hybrid IT models. Not only are we asking folks to implement things like VDI in workspaces, but now they have to make choices about private cloud, public cloud, or some combination.

How does the Citrix and HPE alliance help manage the gap between public and private cloud?

Carlat: We are aligned, HPE and Citrix, in our view of how IT and consumers are going to bridge and use public cloud resources. We believe there is a right mix, a hybrid approach, where you are going to have both on-premises and the cloud.

At HPE we have several tools to help the brokering of applications between on-premises to off-premises. And we provide that flexibility and choice in an agnostic manner.

Luna: We’ve recognized that the world is a hybrid cloud world. The public cloud has a lot more complexity due to the number and choice of public cloud providers. So we are not only driving hybrid cloud solutions — we also have value-added services such as HPE Pointnext that allows customers to incrementally define their architecture, better deploy that architecture, and better manage those services to allow for a better customer experience overall.

Gardner: We are also thinking nowadays about the edge for many kinds of devices, such as sensors on a factory floor. Is this an area where the alliance between Citrix and HPE can be brought to bear? How does the Internet of things (IoT) relate to what you do?

Explosion at the edge

Carlat: We see exploding growth at the edge. And we define the edge as anything not in the data center. Increasingly more-and-more of the analytics and the insights will be derived at the edge. We are already doing a lot with Citrix.

A major financial institution with hundreds of thousands of clients is using the edge and our HPE and Citrix technologies together. This market is only going to grow — and the requirements increase from scalability to usability.

The edge can also be grimy; it can be a very difficult physical environment. We take all of that into account across the whole solution stack to ensure that we are providing the expected experience.

Luna: Performance is key. As we look at the core to edge, we have a distributed model that allows for data to stay as close as possible to that end-customer — and therefore provide the best performance and experience. And the best analytics.

We must consider, can we grab the data necessary that’s being accessed at that particular endpoint and transmit that data back? Can we provide telemetry to the customer for managing that environment and making that environment even better for the customer?

In our case, the Citrix Analytics Service is part of our offering. To pull that data and serve that up to the customer in a manner that they are able to manage in that environment is a plus.
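
A generic sketch of that edge-telemetry idea: collect metrics close to the endpoint, then periodically push a compact batch back for central analytics rather than streaming every raw event. The metric names and ingest URL below are illustrative assumptions, not the Citrix Analytics Service API.

```python
# Illustrative sketch of edge telemetry batching; names and URL are assumptions.
import json
import time
import urllib.request

ANALYTICS_URL = "https://analytics.example.com/ingest"  # hypothetical endpoint

def collect_sample():
    """Stand-in for reading local metrics (sessions, latency, and so on)."""
    return {"timestamp": time.time(), "active_sessions": 42, "avg_latency_ms": 18.5}

def push_batch(samples):
    """Send a compact batch upstream instead of streaming every raw event."""
    body = json.dumps({"samples": samples}).encode("utf-8")
    req = urllib.request.Request(ANALYTICS_URL, data=body,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

def run(batch_size=10, interval_s=60):
    batch = []
    while True:
        batch.append(collect_sample())
        if len(batch) >= batch_size:
            push_batch(batch)
            batch = []
        time.sleep(interval_s)
```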

Analytics offer insight

Gardner: Analytics certainly becomes an important requirement. We have analytics at the edge; we have analytics in the cloud. We are not just talking about delivering apps; we are talking about first managing data — and then taking that data and making it actionable. How does the data and the data analysis factor into what you are doing?

Carlat: Increasingly we see the shift to a consumption-based delivery of IT. Our HPE GreenLake services provide capabilities for customers to not be mired in maintaining and monitoring all the infrastructure — but actually just consume it on an as-needed basis. So that's one key element.

Luna: Citrix is coming out with a Citrix Analytics Service, and we started that with VDI. But now that is expanding across the entire set of product portfolios from ShareFile, to NetScaler, Gateways, Load Balancers, et cetera. The idea is to unify all that data so that it is seamless to the customer. Now, that combines with all the analytics data coming out of the infrastructure to provide the customer with a one-pane-of-glass view.

Carlat: Using the data and analytics allows you to derive insights, and more accurate insights. We want to give a competitive leg up to our customers, clients, and partners. Those who have a leg up win more, make more money, are more efficient, and have happier clients. Therefore it all comes down to taking advantage, if you will, of the technology and the progress we have made, pushing the edge of that envelope, and bringing it into a package that delivers insights and business benefit without jacking up the complexity that acts as a barrier to adoption.

Luna: You’re really empowering the customer to have better knowledge about their environment. And with better knowledge comes better performance in their manageability overall.

Gardner: Where are organizations leveraging the HPE-Citrix alliance in such a way that we can point to them and say, this is how it works?

Real-world success stories 

Carlat: One example is in engineering design. Imagine the amount of horsepower it takes in workstations to do computer-aided design (CAD) and computer-aided manufacturing (CAM). There’s solids modeling and major computational design elements. To purchase the infrastructure and have it at your desk can be quite expensive, and it can increase security risk.

Citrix and HPE have offerings, combined with our Edgeline and HCI systems, that provide the right experience, and really rich graphics and content. And we are able to provide that securely, with the data contained in a structured environment.

Luna: Another segment is healthcare. Because of HIPAA regulations, Citrix VDI is consumed in many healthcare organizations today, whether it’s large hospitals or clinics. That’s one of the environments where we see an opportunity to deliver on the power of both HPE and Citrix, by allowing that data to be secured and managed centrally yet providing the performance and the access on any device — whether it’s the patient room, or the doctor’s clinic, or anywhere.

Gardner: Let’s look to the future. As we seek to head off complexity, how will HPE OneSphere bolster your alliance?

Trusted together over time

Luna: We are always looking at innovating together. We are looking at the possibilities for joint work and development. HPE OneSphere presents an opportunity to provide a single-pane-of-glass view for customers as they look to deploy Citrix workloads. That could be through a central management plane, like OneSphere, or going onto a public cloud and being able to compare pricing and workloads.

It can also be about managing a hybrid cloud on HPE infrastructure, and managing all of that seamlessly, whether it’s a private or hybrid cloud environment or a public cloud, while providing analytics. So we are continuing to look at solutions that provide innovation for our customers.

Gardner: Jeff, it seems that managing a multi-cloud world is certainly an attractive opportunity for you, going out to alliance partners like Citrix.

Learn More About the HPE-Citrix Strategic Alliance

Carlat: Yes, exactly. That’s an expectation of what consumers will be moving to in the future. It’s not a one-stop shop. We need to be agnostic. To me, HPE and Citrix are totally aligned on where we see the future going with regard to hybrid cloud. And by first having that commonality of strategy and vision, it just makes it easy to snap our stuff together and create these solutions that delight our customers.

Luna: I think at the end of the day our mission is to make the Citrix hybrid cloud run as well as possible on HPE gear and infrastructure, and that’s what we aim to deliver for our customers.

Gardner: And I suppose it’s important for us to point out that this isn’t a Johnny-come-lately relationship. You have been working together for some time. A great deal of the installed base for Citrix is on HPE kit.

Carlat: Yes, our relationship is built on 22 years of history between us. We’ve been blessed by customers desiring to land their infrastructure on HPE.

We have an installed base out there of customers who have chosen us in the past and continue to use us. For those customers, we want to provide a seamless transformation to a new generation of architectures. The natural evolution is there for us to harvest; we just have to do it in ways that meet expectations around usability and experience.

Luna: A large portion of our customers today run their Citrix VDI environments on HPE infrastructure. That’s just a testament to the trust and the collaboration within the partnership. We have had innovation together over the years. That’s been collaboration between our teams, as well as the leadership, in bringing new platforms and new solutions out to the marketplace. It’s been a good partnership.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise

New strategies emerge to stem the costly downside of complex cloud choices

The next BriefingsDirect hybrid IT management strategies interview explores how jerry-rigged approaches to cloud adoption at many organizations have spawned complexity amid spiraling — and even unknown — costs.

We’ll hear now from an IT industry analyst about what causes unwieldy cloud use, and how new tools, processes, and methods are bringing insights and actionable analysis to regain control over hybrid IT sprawl.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. 

Here to help us explore new breeds of hybrid and multicloud management solutions is Rhett Dillingham, Vice President and Senior Analyst at Moor Insights and Strategy. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What makes hybrid and multicloud adoption so complex?

Dillingham: Regardless of how an enterprise has invested in public and private cloud use for the last decade, a lot of them ended up in a similar situation. They have a footprint on at least one or multiple public clouds. This is in addition to their private infrastructure, to whatever degree that private infrastructure has been cloud-enabled and made available to their developers through cloud APIs.

Dillingham

They have this footprint then across the hybrid infrastructure and multiple public clouds. Therefore, they need to decide how they are going to orchestrate on those various infrastructures — and how they are going to manage them in terms of controlling costs, security, and compliance. They are operating cloud-by-cloud, versus operating as a consolidated group of infrastructures that use common tooling. This is the real wrestling point for a lot of them, regardless of how they got here.

Gardner: Where are we in this as an evolution? Are things going to get worse before they get better in terms of these levels of complexity and heterogeneity?

Dillingham: We’re now at the point where this is so commonly recognized that we are well into the late majority of adopters of public cloud. The vast majority of the market is in this situation. It’s going to get worse from an enterprise market perspective.

We are also at the inflection point of requiring orchestration tooling, particularly with the advent of containers. Container orchestration is getting more mature in a way that is ready for broad adoption and trust by enterprises, so they can make bets on that technology and the platforms based on them.

Control issues

On the control side, we’re still in the process of sorting out the tooling. You have a number of vendors innovating in the space, and there have been a number of startup efforts. Now, we’re seeing more of the historical infrastructure providers invest in the software capabilities and turning those into services — whether it’s Hewlett Packard Enterprise (HPE), VMware, or Cisco, they are all making serious investments into the control aspect of hybrid IT. That’s because their value is private cloud but extends to public cloud with the same need for control.

Gardner: You mentioned containers, and they provide a common denominator approach so that you can apply them across different clouds, with less arduous and specific work than deploying without containerization. The attractiveness of containers comes because the private cloud people aren’t going to help you deal with your public cloud deployment issues. And the public clouds aren’t necessarily going to help you deal with other public clouds or private clouds. Is that why containers are so popular?

Learn More About HPE OneSphere

Dillingham: If you go back to the fundamental basis of adoption of cloud and the value proposition, it was first and foremost about agility — more so than cost efficiency. Containers are a way of extending that value, and getting much deeper into speed of development, time to market, and for innovation and experimentation.

Containerization is an improvement geared around that agility value that furthers cloud adoption. It is not a stark difference from virtual machines (VMs), in the sense of how the vendors support and view it.

So, I think a different angle on that would be that the use of VMs in public cloud was step one, containers was a significant step two that comes with an improved path to the agility and speed value. The value the vendor ecosystem is bringing with the platforms — and how that works in a portable way across hybrid infrastructures and multi-cloud — is more easily delivered with containers.

There’s going to be an enterprise world where orchestration runs specific to cloud infrastructure, public versus private, but different on various public clouds. And then there is going to be more commonality with containers by virtue of the Kubernetes project and Cloud Native Computing Foundation (CNCF) portfolio.

That’s going to deliver for new applications — and those lifted and shifted into containers — much more seamless use across these hybrid infrastructures, at least from the control perspective.
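
As a minimal sketch of that common-tooling point (assuming the official Kubernetes Python client and hypothetical kubeconfig context names, one per cluster), the same API call works whether a cluster sits on private infrastructure or on any public cloud:

```python
from kubernetes import client, config  # official Kubernetes Python client

# Hypothetical kubeconfig context names: one per cluster, wherever it runs.
contexts = ["onprem-cluster", "aws-cluster", "azure-cluster"]

for ctx in contexts:
    # Build an API client for this context from the local kubeconfig file.
    api_client = config.new_client_from_config(context=ctx)
    apps = client.AppsV1Api(api_client)

    # The call is identical regardless of which cloud hosts the cluster.
    deployments = apps.list_deployment_for_all_namespaces()
    print(f"{ctx}: {len(deployments.items)} deployments")
    for d in deployments.items:
        print(f"  {d.metadata.namespace}/{d.metadata.name} replicas={d.spec.replicas}")
```

That uniformity is what makes container platforms the closest thing to a portable control layer across hybrid infrastructure, even though the surrounding services still differ by provider.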

Gardner: We seem to be at a point where the number of cloud options has outstripped the ability to manage them. In a sense, the cart is in front of the horse; the horse being hybrid cloud management. But we are beginning to see more such management come to the fore. What does this mean in terms of previous approaches to management?

In other words, a lot of organizations already have management for solving a variety of systems heterogeneity issues. How should the new forms of management for cloud have a relationship with these older management tools for legacy IT?

Dillingham: That is a big question for enterprises. How much can they extend their existing toolsets to public cloud?

A lot of the vendors from the private [infrastructure] sector invested in delivering new management capabilities, but that isn’t where many started. I think the rush to adoption of public cloud — and the focus on agility over cost-efficiency — has driven a predominance of the culture of, “We are going to provide visibility and report and guide, but we are not going to control because of the business value of that agility.”

And the tools have grown up as a delivery on that visibility, versus the control of the typical enterprise private infrastructure approach, which is set up for a disruptive orientation to the software and not continuity. That is an advantage to vendors in those different spheres. I see that continuing.

Gardner: You mentioned both agility and cost as motivators for going to hybrid cloud, but do we get to the point where the complexity and heterogeneity spawn a lack of insight and control? Do we get to the point where we are no longer increasing agility? And that means we are probably not getting our best costs either.

Are we at a point where the complexity is subverting our agility and our ability to have predictable total costs?

Growing up in the cloud 

Dillingham: We are still a long way from maturity in effective use of cloud infrastructure. We are still at a point where just understanding what is optimal is pretty difficult across the various purchase and consumption options of public cloud by provider and in comparing that to an accurate cost model for private infrastructure. So, the tooling needs to be in place to support this.

There has been a lot of discussion recently about HPE OneSphere from Hewlett Packard Enterprise, where they have invested in delivering some of this comparability and the analytics to enable better decision-making. I see a lot of innovation in that space — and that’s just the tooling.

There is also the management of the services, where the cloud managed service provider market is continuing to develop beyond just a brokering orientation. There is more value now in optimizing an enterprise’s footprint across various cloud infrastructures on the basis of optimal agility. And also creating value from services that can differentiate among different infrastructures – be it Amazon Web Services (AWS) versus Azure, and Google, and so forth – and provide the cost comparisons.

Gardner: Given that it’s important to show automation and ongoing IT productivity, are these new management tools including new levels of analytics, maybe even predictive insights, into how workloads and data can best become fungible — and moved across different clouds — based on the right performance and/or cost metrics?

Is that part of the attractiveness to a multi- and cross-cloud management capability? Does hybrid cloud management become a slippery slope toward impressive analytics and/or performance-oriented automation?

Dillingham: We’ve had investment in the tooling from the cloud providers, the software providers, and the infrastructure providers. Yet the insights have come more from the professional services’ realm than they have from the tooling realm. That’s provided a feedback loop that can now be applied across hybrid- and multi-cloud in a way that hasn’t come from the public cloud provider tools themselves.

Learn More About HPE OneSphere

So, where I see the most innovation is from the providers that are trying to address multi-cloud environments and best feed innovation from their customer engagements from professional services. I like the opportunity HPE has to benefit from their acquisitions of Cloud Technology Partners and RedPixie, and then feeding those insights back into [product development]. I’ve seen a lot of examples about the work they’re doing in HPE OneSphere in moving those insights into action for customers through analytics.

Gardner: I was also thinking about the Nimble acquisition, and with InfoSight, and the opportunity for that intellectual property to come to bear on this, too.

Dillingham: Yes, which is really harvesting the value of the control and insights of the private infrastructure and the software-defined orientation of private infrastructure in comparison to the public cloud options.

Gardner: Tell us about Rhett Dillingham. You haven’t been an IT industry analyst forever. Please tell us a bit about your background.

Dillingham: I’ve been a longtime product management leader. I started in hardware, at AMD, and moved into software. Before the cloud days, I was at Microsoft. Next I was building out the early capabilities at AWS, such as Elastic Compute Cloud (EC2) and Elastic Block Store (EBS). Then I went into a portfolio of services at Rackspace, building those out at the platform level and the overall Rackspace public cloud. As the value of OpenStack matured into private use, I worked with a number of enterprises on private OpenStack cloud deployments.

As an analyst, I support product management-oriented, consultative, and go-to-market positioning of our clients.

Gardner: Let’s dwell on the product management side for a bit. Given that the market is still immature, given what you know customers are seeking for a hybrid IT end-state, what should vendors such as HPE be doing in order to put together the right set of functions, processes, and simplicity — and ultimately, analytics and automation — to solve the mess among cloud adoption patterns and sprawl?

Clean up the cloud mess 

Dillingham: We talked about automation and orchestration, talked about control of cost, security, and compliance. I think that there is a tooling and services spectrum to be delivered on those. The third element that needs to be brought into the process is the control structure of each enterprise, of what their strategy is across the different infrastructures.

Where are they optimizing on cost based on what they can do in private infrastructure? Where are they setting up decision processes? What incremental services should be adopted? What incremental clouds should be adopted, such as what an Oracle and an IBM are positioning their cloud offerings to be for adoption beyond what’s already been adopted by a client in AWS, Google, and Azure?

I think there’s a synergy to be had across those needs. This spans from the software and services tooling, into the services and managed services, and in some cases when the enterprise is looking for an operational partner.

Gardner: One of the things that I struggle with, Rhett, is not just the process, the technology and the opportunity, but the people. Who in a typical enterprise IT organization should be tasked with such hybrid IT oversight and management? It involves more than just IT.

To me, it’s economics, it’s procurement, it’s contracts. It involves a bit more than red light, green light … on speed. Tell me about who or how organizations need to change to get the right people in charge of these new tools.

Who’s in charge?

Dillingham: More than the individuals, I think this is about the recognition of the need for partnerships between the business units, the development organizations, and the operational IT organization’s arm of the enterprise.

The focus on agility for business value had a lot of the cloud adoption led by the business units and the application development organizations. As the focus on maturity mixes in the control across security and compliance, those are traditional realms of the IT operational organization.

Now there’s the need for decision structure around sourcing — where how they value incremental capabilities from more clouds and cloud providers is a decision of tradeoffs and complexity. As you were mentioning, of weighing between the incremental value of an additional provider and an incremental service, and portability across those.

What I am seeing in the most mature setups are partnerships across the orientations of those organizations. That includes the acknowledgment and reconciliation of those tradeoffs in long-term portability of applications across infrastructures – against the value of adoption of proprietary capabilities, such as deeper cognitive machine learning (ML) automation and Internet of Things (IoT) capabilities, which are some of the drivers of the more specific public cloud platform uses.

Gardner: So with adopting cloud, you need to think about the organizational implications and refactor how your business operates. This is not just bolting on a cloud capability. You have to rethink how you are doing business across the board in order to take full advantage.

Dillingham: There is wide recognition of that theme. It gets into the nuts and bolts as you adopt a platform and you determine exactly how the operations function and roles are going to be defined. It means determining who is going to handle what, such as how much you are going to empower developers to do things themselves. With the accountability that results, more tradeoffs are there for them in their roles. But there is almost an over-rotation toward that level of detail, and not enough weight given to the more senior-level decision-making about what the cloud strategy actually is.

Learn More About HPE OneSphere

I hear a lot of cloud strategies that are as simple as, “Yes, we are allowing and empowering adoption of cloud by our development teams,” without the second-level recognition of the need to have a strategy for what the guidelines are for that adoption – not in the sense of just controlling costs, but in the sense of: How do you view the value of long-term portability? How do you value strategic sourcing and the ability to negotiate across these providers long-term with evidence and demonstrable portability of your application portfolio?

Gardner: In order to make those proper calls on where you want to go with cloud and to what degree, across which provider, organizations like HPE are coming up with new tools.

So we have heard about HPE OneSphere. We are now seeing HPE’s GreenLake Hybrid Cloud, which is a use of HPE OneSphere management as a service. Is that the way to go? Should we think of cloud management oversight and optimization as a set of services, rather than a product or a tool? It seems to me that a set of services, with an ecosystem behind them, is pretty powerful.

A three-layer cloud

Dillingham: I think there are three layers to that. One is the tool, whether that is consumed as software or as a service.

Second is the professional consultative services around that, to the degree that you as an enterprise need help getting up to speed in how your organization needs to adjust to benefit from the tools and the capabilities the tools are wrangling.

And then third is a decision on whether you need an operational partner from a managed service provider perspective, and that’s where HPE is stepping up and saying we will handle all three of these. We will deliver your tools in various consumption models on through to a software-as-a-service (SaaS) delivery model, for example, with HPE OneSphere. And we will operate the services for you beyond that SaaS control portal into your infrastructure management, across a hybrid footprint, with the HPE GreenLake Hybrid Cloud offering. It is very compelling.

HPE is stepping up with OneSphere and saying they will handle delivery of tools, SaaS models, and managed cloud services — all through a control portal.

Gardner: With so many moving parts, it seems that we need certain things to converge, which is always tricky. So to use the analogy of properly intercepting a hockey puck, the skater is the vendor trying to provide these services, the hockey puck is the end-user organization that has complexity problems, and the ice is a wide-open market. We would like to have them all come together productively at some point in the future.

We have talked about the vendors; we understand the market pretty well. But what should the end-user organizations be starting to do and think in order for them to be prepared to take advantage of these tools? What should be happening inside your development, your DevOps, and that larger overview of process and organization in order to say, “Okay, we’re going to take advantage of that hockey player when they are ready, so that we can really come together and be proficient as a cloud-first organization?”

Commit to an action plan

Dillingham: You need to have a plan in place for each element we have talked about. There needs to be a plan in place for how you are maturing your toolset in cloud-native development… how you are supporting that on the development side from a continuous integration (CI) and continuous delivery (CD) perspective; how you are reconciling that with the operational toolset and the culture of operating in a DevOps model with whatever degree of iterative development you want to enable.

Is the tooling in place from an orchestration, development, and operations perspective, which may or may not involve containers? That gets into container orchestration and the cloud management platforms. Then there is the control aspect: what tooling are you going to apply there, how are you going to consume it, and how much do you want provided as a consultative offering? How much do you want those options managed for you by an operational partner? And how are you going to set up your decision-making structure internally?

Every element of that is where you need to be maturing your capabilities. A lot of the starting baseline for the consultative value of a professional services partner is walking you through the decision-making that is common to every organization on each of those fronts, and then enabling a deep discussion of where you want to be in 3, 5, or 10 years, and deciding proactively.

More importantly than anything, what is the goal? There is a lot of oversimplification of what the goal is – such as adoption of cloud and picking of best-of-breed tools — without a vision yet for where you want the organization to be and how much it benefits from the agility and speed value, and the cost efficiency opportunity.

Gardner: It’s clear that those organizations that can take that holistic view, that have the long-term picture in mind, and can actually execute on it, have a significant advantage in whatever market they are in. Is that fair?

Learn More About HPE OneSphere

Dillingham: It is. And one thing that I think we tend to gloss over — but does exist — is a dynamic where some of the decision-makers are not necessarily incentivized to think and consider these options on a long-term basis.

The folks who are in role, often for one to three years before moving to a different role or a different enterprise, are going to consider these options differently than someone who has been in role for 5 or 10 years and intends to be there through this full cycle and outcome. I see those decisions made differently, and I think sometimes the executives watching this transpire are missing that dynamic and allowing some decisions to be made that are more short-term oriented than long-term.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise

Huge waste in public cloud spend sets stage for next wave of total cloud governance solutions, says 451’s Fellows

IT architects and operators face an increasingly complex mix of identifying and automating for both the best performance and the best price points across their cloud options.

The modern IT services procurement task is made more difficult by the vast choices public cloud providers offer — literally, hundreds of thousands of service options.

New tools to help optimize cloud economics are arriving, but in the meantime, unchecked waste is rampant across the total spend for cloud computing, research shows.

The next BriefingsDirect Voice of the Analyst hybrid IT and multicloud management discussion explores the causes of unwieldy cloud use and how new tools, processes, and methods are bringing insights and actionable analysis to gain control over hybrid IT sprawl.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. 

Here to help explain the latest breed of cloud governance solutions is William Fellows, Founder and Research Vice President at 451 Research. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: How much waste is really out there when it comes to enterprises buying and using public cloud services?

Fellows: Well, a lot — and it’s growing daily. Specifically this is because buyers are now spending thousands, tens of thousands, and even in some cases, millions of dollars a month on their cloud services. So, the amount of waste goes up as the bill goes up.

Fellows

As anyone who works in the field can tell you, by using some cost optimization and resource optimization tools you can save the average organization about 30 percent of the cost on their monthly bill.

If your monthly bill is $100, that’s one amount, but if your monthly bill is a million dollars, then that’s another amount. That’s the kind of wastage, in percentage terms, that is being seen out there.

What we are really talking about here is the process, and how it comes to be that there is such a waste of cloud resources. Much of that waste stems from things that can be reined in fairly easily through better management.

Gardner: What are the top reasons for this lack of efficiency and optimization? Are these just growing pains, that people adopted cloud so rapidly that they lost control over it? Or is there more to it?

Fellows: There are a couple of reasons. At a high level, there is massive organizational dysfunction around cloud and IT. This is driven primarily because cloud, as we know, is usually purchased in a decentralized way at large organizations. That means there is often a variety of different groups and departments using cloud. There is no single, central, and logical way of controlling cost.

Secondly, there is the sheer number of available services, and the resulting complexity of trying to deal with all of the different nuances with regard to different image sizes, keeping tabs on who is doing what, and so on. That also underpins this resource wastage.

There isn’t one single reason. And, quite frankly, these things are moving forward so quickly that some users want to get on to the next service advance before they are used to using what they already have.

For organizations fearful of runaway costs, this amounts to a drunken sailor effect, where an individual group within an organization just starts using cloud services without regard to any kind of cost-management or economic insight.

Learn More About HPE OneSphere

In those cases, cloud costs can spiral dramatically. That, of course, is the fear for the chief information officer (CIO), especially as they are trying to build a business case for accelerating the conversion to cloud at an organization.

Yet the actual mechanisms by which organizations are able to better control and eliminate waste are fairly simple. Even Amazon Web Services (AWS) has a mantra on this: Simply turn things off when they are no longer needed. Make sure you are using the right size of instance, for example, for what you are trying to achieve, and make sure that you work with tools that can turn things off as well as turn things on. In other words, employ services that are flexible.
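
As one hedged, minimal sketch of that turn-it-off advice (assuming AWS, the boto3 SDK with credentials already configured, and a simple CPU-utilization threshold as the idleness test; the threshold and lookback window are illustrative, not a recommendation), an automation job might look something like this:

```python
import datetime
import boto3  # AWS SDK for Python; credentials assumed to be configured

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")

# Illustrative thresholds -- tune them to your own workloads.
CPU_IDLE_THRESHOLD = 5.0             # average CPU percent below which we call an instance idle
LOOKBACK = datetime.timedelta(days=7)

now = datetime.datetime.utcnow()
reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

for reservation in reservations:
    for instance in reservation["Instances"]:
        instance_id = instance["InstanceId"]
        stats = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
            StartTime=now - LOOKBACK,
            EndTime=now,
            Period=3600,              # hourly data points
            Statistics=["Average"],
        )
        datapoints = stats["Datapoints"]
        if not datapoints:
            continue
        avg_cpu = sum(dp["Average"] for dp in datapoints) / len(datapoints)
        if avg_cpu < CPU_IDLE_THRESHOLD:
            print(f"{instance_id}: avg CPU {avg_cpu:.1f}% over the last week -- candidate to stop")
            # ec2.stop_instances(InstanceIds=[instance_id])  # left commented out on purpose
```

The pattern here (pull utilization telemetry, compare it against a policy, then act) is not specific to AWS; applying it consistently across every provider is exactly where multi-cloud governance tooling earns its keep.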

Gardner: We are also seeing more organizations using multiple clouds in multiple ways. So even if AWS, for example, gives you more insight and clarity into your spend with them, and allows you to know better when to turn things off — that doesn’t carry across the hybrid environment people are facing. The complexity is ramping up at the same time as spiraling costs.

If there were 30 percent waste occurring in other aspects of the enterprise, the chief financial officer (CFO) would probably get involved. The chief procurement officer (CPO) would be called in to do some centralized purchasing here, right?

Why don’t we see the business side of these enterprises come in and take over when it comes to fixing this cloud use waste problem?

It’s costly not to track cloud costs

Fellows: You are right. In defense of the hyperscale cloud providers, they are now doing a much better job of providing tools for doing cost reporting on their services. But of course, they are only interested in really managing the cost on their own services and not on third-party services. As we transition to a hybrid world, and multicloud, those approaches are deficient.

There has recently been a consolidation around the cloud cost reporting and monitoring technologies, leading to the next wave of more forensic resource optimization services, to gain the ability to do this over multiple cloud services.

Coming back to why this isn’t managed centrally, it’s because much of the use and purchasing is so decentralized. There is no single version of the economic truth, if you like, that’s being used to plan, manage, and budget.

For most organizations, they have one foot in the new world and still a foot in the old world. They are working in old procurement models, in the old ways of accounting, budgeting, and cost reporting, which are unlikely to work in a cloud context.

That’s why we are seeing the rise of new approaches. Collectively these things were called cloud management services or cloud management platforms, but the language the industry is using now is cloud governance. And that implies that it’s not only the optimization of resources, infrastructure, and workloads — but it’s also governance in terms of the economics and the cost. And it’s governance when it comes to security and compliance as well.

Again, this is needed because enterprises want a verifiable return on investment (ROI), and they do want to control these costs. Economics is important, but it’s not the only factor. It’s only one dimension of the problem they face in this conversion to cloud.

Gardner: It seems to me that this problem needs to be solved if the waste continues to grow, and if decentralization proves to be a disadvantage over time. It behooves the cloud providers, the enterprises, and certainly the IT organizations to get control over this. The economics is, as you say, a big part — not the only part — but certainly worth focusing on.

Tell me why you have created at 451 Research a Digital Economics Unit and the 451 Cloud Price Index. Do you hope to accelerate movement toward a solution to this waste problem?

Carry a cost-efficient basket

Fellows: Yes, thanks for bringing that into the interview. I created the Digital Economics Unit at 451 about five years ago. We produce a range of pricing indicators that help end-users and vendors understand the cost of doing things in different kinds of hosted environments. The first set of indicators is around cloud. So, the Cloud Price Index acts like a Consumer Price Index, which measures the cost of a basket of consumer goods and services over time.

The Cloud Price Index measures the cost of a basket of cloud goods and services over time to determine where the prices are going. Of course, five years ago we were just at the beginning of the enormous interest in the relative costs of doing things within AWS versus Azure versus Google, or other places, as firms added services.

We’ve assembled a basket of cloud goods and services and priced that in the market. It provides a real average price per basket of goods. We do that by public cloud, and we do it by private cloud. We do it by commercial code, such as Microsoft and others, as well as via open source offerings such as OpenStack. And we do it across global regions.
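
The exact composition and weighting of 451’s basket is not spelled out here, but the mechanics of a basket-style index are straightforward. As an illustrative sketch only, with invented service names, quantities, and unit prices rather than 451’s data, pricing the same basket at each provider and averaging it looks like this:

```python
# Illustrative only: the services, quantities, and unit prices are invented,
# not 451 Research's actual Cloud Price Index methodology or data.

basket = {                       # assumed monthly consumption for a notional workload
    "compute_vm_hours": 2_000,
    "block_storage_gb": 5_000,
    "object_storage_gb": 10_000,
    "egress_bandwidth_gb": 1_000,
    "managed_sql_hours": 720,
}

# Hypothetical unit prices (USD) per provider for the same basket items.
prices = {
    "provider_a": {"compute_vm_hours": 0.10, "block_storage_gb": 0.10,
                   "object_storage_gb": 0.023, "egress_bandwidth_gb": 0.09,
                   "managed_sql_hours": 0.17},
    "provider_b": {"compute_vm_hours": 0.11, "block_storage_gb": 0.12,
                   "object_storage_gb": 0.020, "egress_bandwidth_gb": 0.08,
                   "managed_sql_hours": 0.15},
    "provider_c": {"compute_vm_hours": 0.09, "block_storage_gb": 0.11,
                   "object_storage_gb": 0.026, "egress_bandwidth_gb": 0.12,
                   "managed_sql_hours": 0.19},
}

def basket_cost(price_list: dict) -> float:
    """Total monthly cost of the fixed basket at one provider's unit prices."""
    return sum(qty * price_list[item] for item, qty in basket.items())

per_provider = {name: basket_cost(p) for name, p in prices.items()}
market_average = sum(per_provider.values()) / len(per_provider)

for name, cost in per_provider.items():
    print(f"{name}: ${cost:,.2f} / month")
print(f"market-average basket price: ${market_average:,.2f} / month")
```

Re-pricing that same fixed basket every month and tracking the market-average figure over time is what gives the index its Consumer Price Index-like behavior.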

That has been used by enterprises to understand whether they are getting a good deal from their suppliers, or whether they are paying over the market rates. For vendors, obviously, this helps them with their pricing and packaging strategies.

In the early days, we saw a big shift [downward] in cloud pricing as the vendors introduced new basic infrastructure services. Recently this has fallen off. Although cloud prices are still falling, they are coming down more slowly.

Learn More About HPE OneSphere

I just checked, and the basket of goods that we use has fallen this year by about 4 percent in the US. You can still expect Europe and Asia-Pacific to pay a premium of 10 and 25 percent more, respectively, for the same cloud services in those regions.

We also provide insight into about a dozen services in those baskets of cloud goods, so not only compute but storage, networking, SQL and NoSQL databases, bandwidth, and all kinds of other things.

Now, if you were to choose the provider that offers the cheapest services in each of those — and you did that across the full basket of goods — you would actually make a savings of 75 percent on the market costs of that basket. It shows that there is an awful lot of headroom in the market in terms of pricing.

Gardner: Let me make sure I understand what that 75 percent represents. That means if you had clarity, and you were able to shop with full optimization on price, you could reduce your cloud bill by 75 percent. Is that right?

Fellows: Correct, yes. If you were to choose the cheapest provider of each one of those services, you would save yourself 75 percent of the cost over the average market price.
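
Extending the invented basket and price data from the earlier sketch (the 75 percent figure is Fellows’ finding from real market data; this toy example will not reproduce it), the cherry-picking calculation is simply a per-service minimum across providers:

```python
# Continues the invented basket, prices, and market_average from the earlier sketch.
cheapest_mix_cost = sum(
    qty * min(p[item] for p in prices.values())  # best unit price for each service
    for item, qty in basket.items()
)

savings_vs_market = 1 - cheapest_mix_cost / market_average
print(f"cherry-picked basket: ${cheapest_mix_cost:,.2f} / month "
      f"({savings_vs_market:.0%} below the market-average basket)")
```

In practice nobody runs one service per provider this way; data gravity, egress charges, and operational overhead eat most of the theoretical gain. The point, as Fellows notes, is how much headroom exists for optimization tooling, not that this is a realistic target.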

Gardner: Well, that’s massive. That’s just massive.

Opportunity abounds in cloud space

Fellows: Yes, but by the same token, no one is doing that because it’s way too complex and there is nothing in the market available that allows someone to do that, let alone manage that kind of complexity. The key is that it shows there is a great deal of opportunity and room for innovation in this space.

We feel at 451 Research that the price of cloud compute services may go down further. I think it’s unlikely to reach zero, but what’s much more important now is determining the cost of using basic cloud across all of the vendors as quickly as we can because they are now adding higher-value services on top of the basic infrastructure.

The game is now beyond infrastructure. That’s why we have added 16 managed services to the Cloud Price Index of cloud services. With this you can see what you could expect to be paying in the market for those different services, and by different regions. This is the new battleground and the new opportunity for service providers.

Gardner: Clearly 451 Research has identified a big opportunity for cloud spend improvement. But what’s preventing IT people from doing more on costs? Why is it so difficult to get a handle on the number of cloud services? And what needs to happen next for companies to be able to execute once they have gained more visibility?

Fellows: You are right. One of the things we like to do with the Cloud Price Index is to ask folks, "Just how many different things do you think you can buy from the hyperscale vendors now?" The answer as of last week was more than 500,000 — there are more than 500,000 SKUs available from AWS, Azure, and Google right now.

How can any human keep up with understanding what combination of these things might be most useful within their organization?

The second wave

You need more than a degree in cloud economics to be able to figure that out. And that’s why I talked earlier about a second wave of cloud cost management tools now coming into view. Specifically, these are around resource optimization, and they deliver a forensic view. This is more than just looking at your monthly bill; this is in real time looking at how the services are performing and then recommending actions on that basis to optimize their use from an economic point of view.

Some of these are already beginning to employ more automation based on machine learning (ML).  So, the tools themselves can learn what’s going on and make decisions based upon those.
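
What forensic, near-real-time optimization means varies by vendor, and the commercial tools apply far more sophisticated ML than this. As a hedged, minimal illustration of the underlying idea, here is a sketch that flags days whose spend jumps well outside the recent norm, using nothing fancier than a rolling mean and standard deviation:

```python
import statistics

def flag_spend_anomalies(daily_spend, window=14, threshold=3.0):
    """Flag days whose spend is more than `threshold` standard deviations
    above the trailing `window`-day average. daily_spend is a list of
    (date_string, dollars) tuples in chronological order."""
    anomalies = []
    for i in range(window, len(daily_spend)):
        history = [amount for _, amount in daily_spend[i - window:i]]
        mean = statistics.mean(history)
        stdev = statistics.pstdev(history) or 1e-9  # avoid divide-by-zero on flat spend
        date, amount = daily_spend[i]
        if (amount - mean) / stdev > threshold:
            anomalies.append((date, amount, mean))
    return anomalies

# Invented numbers for illustration: steady ~$1,000/day, then a spike.
spend = [(f"2018-06-{day:02d}", 1000 + (day % 3) * 20) for day in range(1, 21)]
spend.append(("2018-06-21", 2600))  # e.g., someone left a test fleet running

for date, amount, baseline in flag_spend_anomalies(spend):
    print(f"{date}: ${amount:,.0f} vs ~${baseline:,.0f} baseline -- investigate")
```

A real tool would do this per service, per team, and per tag, and would pair each flag with a recommended action, but the feedback loop is the same: watch the spend signal continuously and surface deviations before the monthly bill arrives.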

There is a whole raft of vendors we are covering within our research here. I fully expect that like the initial wave of cloud-cost-reporting tools that have largely been acquired, that these newest tools will probably go the same way. This is because the IT vendors are trying to build out end-to-end cloud governance portfolios, and they are going to need this kind of introspection and optimization as part of their offerings.

Gardner: As we have seen in IT in the past, oftentimes we have new problems, but they have a lot in common with similar waves of problems and solutions from years before. For example, there used to be a lot of difficulty knowing what you had inside of your own internal data centers. IT vendors came to the rescue with IT management tools, agent-based, agentless, crawling across the network, finding all the devices, recognizing certain platforms, and then creating a map, if you will.

So we have been through this before, William. We have seen how IT management has created the means technically to support centralization, management, and governance over complexity and sprawl. Are the same vendors who were behind IT management traditionally now extending their capabilities to the cloud? And who might be some of the top players that are able to do that?

Return of the incumbents

Fellows: You make a very relevant point because although it has taken them some time, the incumbents, the systems management vendors, are rearchitecting, reengineering. And either by organic, in-house development, by partnership, or by acquisition, they are extending and remodeling their environments for the cloud opportunity.

Many of them have now assembled real and meaningful portfolios, whether that’s Cisco, BMC, CA, HPE, or IBM, and so on. Most of these folks now have a good set of tools for doing this, but it has taken them a long time.

Sometimes some of these firms don’t need to do anything for a number of years and they can still come out on top of this market. One of the questions is whether there is room for long-term, profitable, growing, independent firms in this area. That remains to be seen.

The most likely candidates are not necessarily the independent software vendors (ISVs). We might think about RightScale as being one of the longest serving folks in the market. But, instead, I believe it will be solved by the managed service providers (MSPs).

These are the folks providing ways for enterprises to achieve a meaningful conversion to cloud and to multiple cloud services. In order to be able to do that, of course, they need to manage all those resources in a logical way.

There is a new breed of MSPs coming to the market that are essentially born in the cloud, or cloud-native, in their approach — rather than the incumbent vendors, who have bolted this [new set of capabilities] onto their environments.

One of the exceptions is HPE, because of what they have done by selling most of their legacy software business to Micro Focus. They have actually come from a cloud-native starting place for the tooling to do this. They have taken a somewhat differentiated approach to the other folks in the market who have really been assembling things through acquisition.

Learn More About HPE OneSphere

The other folks in the market are the traditional systems integrators. It’s in their DNA to be working with multiple services. That may be Accenture, Capgemini, and DXC, or any of these folks. But, quite frankly, those organizations are only interested in working with the Global 1000 or 2000 companies. And as we know, the conversion to cloud is happening across all industries. There is a tremendous opportunity for folks to work with all kinds of companies as they are moving to the cloud.

Gardner: Again, going back historically in IT, we have recognized that having multiple management points solves only part of the problem. Organizations quickly tend to want to consolidate their management and have a single view, in this case, of not just the data center or private cloud, but all public clouds, so hybrid and multicloud.

It seems to me that having a single point across all of the hybrid IT continuum is going to be an essential attribute. Is that something you are seeing in the market as well?

More is better

Fellows: Yes, it is, although I don’t think there is any one company or one approach that has a leadership position yet. That makes this point in time more interesting but somewhat risky for end users. That is why our counsel to enterprises is to work with vendors who can offer a full and rich set of services.

The more things that you have, the more you are going to be able to undertake and navigate this journey to the cloud — and then support the digital transformation on top.

Working with vendors that have loosely-coupled approaches allows you to take advantage of a core set of native services — but then also use your own tools or third-party services via application programming interfaces (APIs). It may be a platform approach or it may be a software-as-a-service (SaaS) approach.

At this point, I don’t think any of the IT vendor firms have sufficiently joined up these approaches to be able to operate across the hybrid IT environment. But it seems to me that HPE is doing a good job here in terms of bringing, or joining, these things together.

On one side of the HPE portfolio is the mature, well-understood HPE OneView environment, which is now being purposed to provide a software-defined way of provisioning infrastructure. The other piece is the HPE OneSphere environment, which provides API-driven management for applications, services, workloads, and the whole workspace and developer piece as well.

So, one is coming top-down and the other one bottom-up. Once those things become integrated, they will offer a pretty rich way for organizations to manage their hybrid IT environments.

Now, if you are also using HPE’s Synergy composable infrastructure, then you are going to get an exponential benefit from using those other tools. Also, the Cloud Cruiser cost reporting capability is now embedded into HPE OneSphere. And HPE has a leading position in this new kind of hardware consumption model — for using new hardware services payment models — via its HPE GreenLake Hybrid Cloud offering.

So, it seems to me that there is enough here to appeal to many interests within an organization, but crucially it will allow IT to retain control at the same time.

Now, HPE is not unique. It seems to me that all of the vendors are working to head in this general direction. But the HPE offering looks like it’s coming together pretty well.

Gardner: So, a great deal of maturity is left to go. Nonetheless, the cloud-governance opportunity appears big enough to drive a truck through. If you can bring together an ecosystem and a platform approach that appeals to those MSPs and systems integrators, works well for the large Global 2000, but also has a direct role for small and medium businesses – that’s a very big market opportunity.

I think businesses and IT operators should begin to avail themselves of learning more about this market, because there is so much to gain when you do it well. As you say, the competition is going to push the vendors forward, so a huge opportunity is brewing out there.

William, what should IT organizations be doing now to get ready for what the vendors and ecosystems bring out around cloud management and optimization? What should you be doing now to get in a position where you can take advantage of what the marketplace is going to provide?

Get your cloud house in order

Fellows: First and foremost, organizations now need to be moving toward a position of cloud-readiness. And what I mean is understanding to what extent applications and workloads are suitable for moving to the cloud. Next comes undertaking the architecting, refactoring, and modernization. That will allow them to move into the cloud without the complexity, cost, and disruption of the first-generation lift-and-shift approaches.

In other words, get your own house in order, so to speak. Prepare for the move to the cloud. It will become apparent that some applications and workloads are suitable for some kind of services deployment, maybe a public cloud. Other types of apps and workloads are going to be more suited to other kinds of environments, maybe a hosted private environment.

You are then also going to have applications that you want to take advantage of on the edge, for the Internet of Things (IoT), and so on. You are going to want a different set of services for that as well.

The challenge is going to be working with providers that can help you with all of that. One thing we do know is that most organizations are accessing cloud services via partners. In fact, in AWS’s case, 90 percent of Fortune 100 companies that are its customers are accessing its services via a partner.

And this comes back to the role and the rise of the MSP who can deliver value-add by enabling an organization to work and use different kinds of cloud services to meet different needs — and to manage those as a logical resource.

That’s the way I think organizations need to approach this whole cloud piece. Although we have been doing this for a while now — AWS has had cloud services for 11 years — the majority of the opportunity is still ahead of us. Up until now, it has really still only been the early adopters who have converted to cloud. That’s why there is such a land grab underway at present to be able to capture the majority of the opportunity.

Learn More About HPE OneSphere

Gardner: I’m sure we can go on for another 30 minutes on just one more aspect to this, which is the skills part. It appears to me there will be a huge need for the skills required to manage cloud adoption across economics and procurement best practices — as well as the technical side. So perhaps a whole new class of people is needed within companies, with backgrounds in economics, procurement, and IT optimization and management methods, as well as a deep understanding of the cloud ecosystem.

Develop your skills

Fellows: You are right. 451’s Voice of the Enterprise data shows that the key barrier to accelerating adoption is not technology — but a skills shortage. Indeed, that’s across operations, architecture, and security.

Again, I think this is another opportunity for the MSPs, to help upskill a customer’s own organization in these areas. That will be a driver for success, because, of course, when we talk about being in the cloud, we are not talking so much about the technology — we are talking about the operating model. That really is the key here.

That operating model is consumption-based, services-driven, and run with a retail model’s discipline. It’s more than a shift from CAPEX to OPEX, and more than moving from hardwired to agile — it’s all of those things, and that really means the transformation of enterprises and organizations. It’s really the most difficult and challenging thing going on here.

Whatever an IT supplier can do to assist end-customers with that, to rotate to that new operating model, is likely to be more successful.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise

GDPR forces rekindling of people-centric approach to marketing

The next BriefingsDirect digital business innovation discussion explores how modern marketing is impacted by the General Data Protection Regulation (GDPR).

Those seeking to know their customers well are finding that this sweeping new European Union (EU) law forces a dramatic shift in how customer data can be gathered, shared, and protected. And it means that low-touch marketing by mass data analysis and inference alone likely will need to revert to the good-old-fashioned handshake and more high-touch trust building approaches that bind people to people, and people to brands.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. 

Here to help sort through a more practical approach to marketing within the new requirements of highly protected data is Tifenn Dano Kwan, Chief Marketing Officer at SAP Ariba. The interview is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Now that GDPR is fully in place, it seems that we’ve had to embrace the concept that good privacy is good business. It also seems that marketers had become too dependent on data-driven and digital means of interacting with their customers and prospects.

Has GDPR done us a favor in marketing — maybe as an unintended consequence — when it comes to bringing the human relationships aspect of business back to the fore?

Dano Kwan: GDPR is giving us the ability to remember what marketing is, and who we are as marketers. I think it is absolutely critical to go back to the foundation of what marketing is. If you think about the role of marketing in an organization, we are a little bit the Picassos of companies — we are the creative souls. We bring the soul back into an organization.

Dano Kwan

Why? Because we control the narrative, we control the storytelling, and we control the brands. Also, in many ways — especially over the past couple of years — we control the data because our focus is understanding the audience and our customers.

With the rise of digital over the past couple of years, data has been the center of a lot of what marketing has been driving. But make no mistake, marketers are creative people. Their passion is in creating amazing stories — to promote and support sales in the selling process, and being, frankly, the voice of the customer.

The GDPR law is simply bringing back to the forefront what the value of marketing is. It’s not just controlling the data. We have to go back to what marketing really brings to the table. And go back to balancing the data with the art, the science with the art, and ensuring that we continue to add value to represent the voice of the customer.

Gardner: It must have been tempting for marketers, with the data approach, to see a lot of scalability — that they could reach a lot more people, with perhaps less money spent. The human touch, the high-touch can be more expensive. It doesn’t necessarily scale as well.

Do you think that we need to revisit cost and scale when it comes to this human and creative aspect of marketing?

Balancing high- and low-touch points 

Dano Kwan: It’s a matter of realigning the touch points and how we consider touch points when we drive marketing strategies. I don’t think that there is one thing that is better than the other. It’s a matter of sequencing and orchestrating the efforts when we run marketing initiatives.

If you think about the value of digital, it’s really focused on the inbound marketing engine that we have been hearing about for so many years now. Every company that wants to scale has to build an inbound engine. But in reality, if you look at the importance of creating inbound, it is a long-term strategy, it doesn’t necessarily provide a short-term gain from the marketing standpoint or pipeline standpoint. It needs to be built upon a long-term strategy around inbound searches, such as paid media search, and so on. Those very much rely on data.

While we need to focus on these low-touch concepts, we also need to recognize that the high-touch initiatives are equally important.

Sometimes marketing can be accused of being completely disconnected from the customers because we don’t have enough face-to-face interactions. Or of creating large events without an understanding of high-touch. GDPR is an opportunity like never before for marketers to deeply connect with customers.

Gardner: Let’s step back and explain more about GDPR and why the use of data has to be reevaluated.

GDPR is from the EU, but any company that deals with supply chains that enter the European Union — one of the largest trading blocs in the world — is impacted. Penalties can be quite high if you don’t treat data properly, or if you don’t alert your customers if their private data has been compromised in any way.

How does this reduce the amount that marketers can do? What’s the direct connection between what GDPR does and why marketers need to change?

Return to the source 

Dano Kwan: It’s a matter of balancing the origins of a sales pipeline. If you look at the sources of pipeline in an organization, whether it’s marketing-led or sales-led, or even ecosystem- or partner-led, everybody is specifically tracking the sources of pipeline.

What we call the marketing mix includes the source of the pipeline and the channels of those sources. When you look at pure inbound strategies, you can see a lot of them coming out of digital properties versus physical properties.

We need to understand the impact [of GDPR] and acknowledge a drop in the typical outbound flow, whether it’s telemarketing, inside sales, or the good-old events, which are very much outbound-driven.

Over the next couple of months there is going to be a direct impact on all sources of pipeline. At the very least, we are going to have to monitor where the opportunities are coming from. Those who are going to succeed are those who are going to shift the sources of the pipeline and understand over time how to anticipate the timing for that new pipeline that we generate.

We are absolutely going to have to make a shift. Like I said, inbound marketing takes more time, so those sources of pipeline are more elongated in time versus outbound strategies. Some readjustment needs to happen, but we also need new forms of opportunities for business.
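
To make that monitoring concrete, here is a minimal sketch in Python of tallying pipeline by source type so the inbound/outbound mix can be watched as GDPR takes effect. The channel names and values are illustrative assumptions, not figures from this discussion.

# Illustrative only: channel names and pipeline values are hypothetical.
from collections import defaultdict

opportunities = [
    {"channel": "organic_search", "type": "inbound",  "value": 40000},
    {"channel": "paid_media",     "type": "inbound",  "value": 25000},
    {"channel": "events",         "type": "outbound", "value": 60000},
    {"channel": "telemarketing",  "type": "outbound", "value": 15000},
]

totals = defaultdict(float)
for opp in opportunities:
    totals[opp["type"]] += opp["value"]

pipeline = sum(totals.values())
for source_type, value in sorted(totals.items()):
    print(f"{source_type}: {value:,.0f} ({value / pipeline:.0%} of pipeline)")

Re-running this kind of tally month over month is one simple way to see the shift in pipeline sources that Dano Kwan describes.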

That could mean going back to old-fashioned direct mail, believe it or not — this is back in fashion, and it is going to happen all over again. But it also means new ways of doing marketing, such as influencer marketing.

If you think about the value of social media and blogs, all those digital influencers in the world are going to have a blast, because today if you want to multiply your impact, and if you want to reach out to your audiences, you can’t do it just by yourself. You have to create an ecosystem and a network of influencers that are going to carry your voice and carry the value for you. Once they do that they tap into their own networks, and those networks capture the audiences that you are looking for. Once those audiences are captured through the network of influencers, you have a chance to send them back to your digital properties and dotcom properties.

We are very excited to see how we can balance the impact of GDPR, but also create new routes and techniques, and experiment with new opportunities. Yes, we are going to see a drop in the traditional sources of pipeline. It's obvious. We are going to have to readjust. But that's exciting; it's going to mean more experimentation, thinking outside of the box, and reinventing ourselves.

Opportunity knocks, outside the box 

Gardner: And how is this going to be different for business-to-consumer (B2C) and business-to-business (B2B)? We are seeing a lot of influencer marketing be effective for consumer and some retail; is it just as effective in the B2B space? How should B2B marketers be thinking differently?

Dano Kwan: I don’t know that it’s that different, to be honest with you, Dana. I think it’s the same thing. I think we are going to have to partner a lot more with what I call an ecosystem of influencers, whether it be partners, analysts, press, bloggers or very strong influencers who are extremely well-networked.

In the consumer world, the idea is to multiply the value. You are going to see a lot more partnerships, such as co-branding initiatives, where two brands come together, combining the power of their messages to reach out to and engage joint customers.

Gardner: As an observer of SAP Ariba over the past several years, it's been very impactful for me to see how the company has embraced the notion of doing good while doing well, in terms of its relationships with customers and the perception of the company. I think your customers have received this very well.

Is there a relationship between this new thinking about marketing and the idea of being a company that's perceived as a good player, a good custodian in its particular ecosystems?

Purpose-driven pipelines

Dano Kwan: It’s a great question, Dana. I think those two things are happening at the same time. We are moving toward being more purposeful because the world simply is moving toward becoming more purposeful. This is a trend we see among buyers in both the B2C world and B2B worlds. They are extremely sensitive to those notions – especially millennials. They look at the news and they truly worry for their future.

GDPR is shifting the focus of marketing within companies to where we are not just seeking data to reach out to audiences — but to be meaningful and purposeful when we reach out to our customers. We must not only provide content; we have to give them something that aligns with their values and ignites their passions.

The end goal here is to remind ourselves that companies are not just here to make a profit — they are here to make a difference.

So, those two things are connected to each other. I think this is going to accelerate the value of purpose and the value of meaningful conversations with our customers — conversations that are truly based not just on profit and data, but on making a difference in the world. And that is a beautiful thing.

Gardner: Do you think, Tifenn, that we are going to see more user conferences — perhaps smaller ones, more regional, more localized — rather than just once a year?

Dano Kwan: I think that we are going to see some readjustments. Big conferences used to happen in Europe and North America, but think about the emerging markets: think about Latin America, think about Asia Pacific and Japan, think about the Middle East. All of those regions are growing, they are getting more connected.

In my organization, I am pushing for it. People don't necessarily want to travel long distances to go to big conferences. They prefer local interaction and messaging. So regionalization and localization – from messaging to marketing activities – are going to become a lot more prominent, in my opinion, in the coming years.

Gardner: Another big trend these days is the power that artificial intelligence (AI) and machine learning (ML) can bring to solve many types of problems. While we might be more cautious about what we do with data – and we might not get the same amount of data under a GDPR regime — the tools for what we can do with the data are much stronger than before.

Is there some way in which we can bring the power of AI and ML into a creative process that allows a better relationship between businesses and consumers and businesses and businesses? How does AI factor into the next few years in a GDPR world?

AI gets customers 

Dano Kwan: AI is going to be a way for us to get more quality control in the understanding of the customer, definitely. I think it is going to allow us to learn about behaviors and do that at scale.

Business technologies and processes are going to be enabled through AI and ML; that is obvious, all of the studies indicate it. It starts with obvious sectors and industries, but it's going to expand drastically because it fuels more curiosity in the understanding of processes and customers.

Gardner: Perhaps a way to look at it would be that aggregated data and anonymized data will be used in an AI environment in order to then allow you to get closer to your customer in that high-touch fashion. Like we are seeing in retail, when somebody walks into a brick-and-mortar environment, a store, you might not know them individually, but you have got enough inference from aggregated data to be able to have a much better user experience.

Dano Kwan: That’s exactly right. I think it’s going to inform the experience in general, whether that experience is communicated through marketing or via face-to-face. At the end of the day, and you are right, the user experience affects everything that we do. Users can get very specific about what they want. They want their experiences to be personal, to be ethical, to be local, and regionalized. They want them to be extremely pointed to their specific needs.

And I do believe that AI is going to allow us to get rapidly attuned to the customer experience and constantly innovate and improve that experience. So in the end, if it’s just the benefit of providing a better experience, then I say, why not? Choose the tools that offer a superior experience for our customers.
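
As a rough illustration of the aggregated, anonymized approach Gardner describes, the Python sketch below pseudonymizes customer identifiers with a salted hash before aggregating behavior by segment, so the inference runs on groups rather than on identifiable individuals. The field names and hashing scheme are assumptions for illustration, not an SAP Ariba method; strictly speaking, salted hashing is pseudonymization rather than full anonymization, so GDPR obligations still apply upstream.

# Illustrative sketch: pseudonymize IDs, then aggregate behavior by segment.
import hashlib
from collections import defaultdict

SALT = "rotate-this-secret"  # assumption: kept separate from the data set

def pseudonymize(customer_id: str) -> str:
    # One-way hash so downstream analytics never sees the raw identifier.
    return hashlib.sha256((SALT + customer_id).encode()).hexdigest()[:12]

visits = [
    {"customer_id": "C-1001", "segment": "returning", "basket_value": 82.0},
    {"customer_id": "C-1002", "segment": "new",       "basket_value": 31.0},
    {"customer_id": "C-1001", "segment": "returning", "basket_value": 47.0},
]

by_segment = defaultdict(list)
for visit in visits:
    visit["customer_id"] = pseudonymize(visit["customer_id"])  # drop the raw ID
    by_segment[visit["segment"]].append(visit["basket_value"])

for segment, values in by_segment.items():
    print(segment, "average basket:", round(sum(values) / len(values), 2))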

I believe that the face-to-face approach — the real touch point that you have — is still going to be needed, especially when you have complex interactions and engagements with customers.

But AI can also help prepare for those types of complex interactions. It really depends on what you sell, what you promote. If you promote a simple solution or thing that can be triggered online, then AI is simply going to accelerate the ability for the customer to click and purchase.

But if you go with very complex sales cycles, for example, that require human interactions, you can use AI to inform a conversation and be prepared for a meeting where you have activated data to present in front of your customer and to support whatever value you want to bring to the customer.

Gardner: We are already seeing that in the help-desk field where people who are fielding calls from customers are much better prepared. It makes the agents themselves far more powerful.

How does this all relate to the vast amount of data and information you have in the Ariba Network, for example? Being in a position of having a lot of data but being aware that you have to be careful about how you use it, seems to me the best of all worlds. How does the Ariba Network and the type of data that you can use safely and appropriately benefit your customers?

Be prepared, stay protected

Dano Kwan: We have done extensive work at the product level within SAP Ariba to prepare for GDPR. In fact, our organization is one of the most prepared from a GDPR standpoint, not only to be compliant ourselves but to offer solutions that enable our customers to become compliant as well.

That’s one of the strengths [that comes] not just from Network, but also [from] the solutions that we bring to the industry and to our customers.

The Ariba Network has a lot of data that is specific to the customer. GDPR is simply reinforcing the fact that data has to be protected, and that all companies, including SAP Ariba — and all supply chain and procurement organizations in the world — have to be prepared for it, to work toward respecting privacy and consent, and to ensure that the data is used in the right way. SAP Ariba is absolutely partnering with all the suppliers and buyers in the network and preparing for this.

Gardner: If you’re a marketing executive and you weren’t necessarily thinking about the full impact of GDPR, do you have some advice now that you have thought this through? What should others who are just beginning that process be mindful of?

Dano Kwan: My single biggest piece of advice is to really focus on knowledge transfer within the organization. GDPR is a collective responsibility. It is not just a marketing responsibility; the sales teams and the customer-facing teams — whether it's support services, presales, or sales — everybody has to be prepared. The knowledge transfer is absolutely critical, and it has to be clear and simple; equipping the field within your organization is essential. So that's number one, internally.

But the positioning with the external contributors to your business is also critical. So ensuring that GDPR is well understood by your external suppliers and agencies — from a marketing standpoint — as well as by all of the partners that you have, is equally important.

Prepare by doing a lot of knowledge transfer on what GDPR is, what its impact is, and what’s in it for each constituent of the business. Also, explore how people can connect and communicate with customers. Learn what they can do, what they can’t do. This has to be explained in a very simple way and has to be explained over and over and over again because what we are seeing is that it’s new for everyone. And one launch is not enough.

Over the next couple of months all companies are going to have to heavily invest in regular knowledge-transfer sessions and training to ensure that all of their customer-facing teams — inside the organization or outside — are very well prepared for GDPR.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: SAP Ariba.

Path to client workspace automation paved with hyperconverged infrastructure for New Jersey college

The next BriefingsDirect hyperconverged infrastructure (HCI) use case discussion explores how a New Jersey college has embarked on the time-saving, virtual desktop infrastructure (VDI) modernization journey.

We will now learn how the combination of HCI and VDI makes the task of deploying and maintaining the latest end-user devices far simpler — and cheaper than ever before.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. 

Here to explore how a new digital and data-driven culture can emerge from uniting the desktop edge with the hyper-efficient core are Tom Gillon, Director of Network and User Services at County College of Morris (CCM) in Randolph, New Jersey; Michael Gilchrist, Assistant Director of Network Systems at County College of Morris (CCM), and Felise Katz, CEO of PKA Technologies, Inc. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What are the trends driving your needs at County College of Morris to modernize and simplify your personal computer (PC) architecture?

Gillon: We need to be flexible and agile in terms of getting software to the students, when they need it, where they need it.

Gillon

With physical infrastructure that really isn’t possible. So we realized that VDI was the solution to meet our goals — to get the software where the students need it, when they need it, and so that’s a top trend that got us to this point.

Gardner: And is the simplicity of VDI deployments something you are looking at universally, or is this more specific to just students?

Gillon: We are looking to deploy VDI all throughout the college: Faculty, staff, and students. We started out with a pilot of 300 units that we mostly put out in labs and in common areas for the students. But now we are replacing older PCs that the faculty and staff use as well.

Gardner: VDI has been around for a while, and for the first few years there was a lot of promise, but there was also some lag from complications in that certain apps and media wouldn’t run properly; there were network degradation issues. We’ve worked through a lot of that, but what are some of your top concerns, Michael, when it comes to some of those higher-order infrastructure performance issues that you have to conquer before you get to the proper payoff from VDI?

Gilchrist: You want to make sure that the user experience is the same as what they would experience on a physical device, otherwise they will not accept it.

Just having the horsepower — nowadays these servers are so powerful, and now you can even get graphics processing units (GPUs) in there — you can run stuff like AutoCAD or Adobe and still give the user the same experience that they would normally have on a physical device. That's what we are finding. Pretty good so far.

Gardner: Felise, as a Hewlett Packard Enterprise (HPE) Platinum Partner, you have been through this journey before, so you know how it was rough-and-tumble there for a while with VDI. How has that changed from your perspective at PKA Technologies?

Katz: When HPE made the acquisition of SimpliVity, that was a defining moment and a huge game-changer, because it enabled us, as a solution provider, to bring the right technology to CCM. That was huge.

Gardner: When you’re starting out on an IT transition, you have to keep the wings on the airplane while you’re changing the engines, or vice versa. You have to keep things going while you are doing change. Tom, how did you manage that? How did you keep your students getting their apps? How have you been able to swap things out in a way that hasn’t been disruptive?

Gillon: The beauty of VDI is that we can switch out a lab completely with thin clients in about an hour. And we didn’t realize that going in. We thought it would take us most of the day. And then when we did it, we were like, “Oh my God, we are done.” We were able to go in there first thing in the morning and knock it out before the students even came in.

That really helped us to get these devices out to where the students need them and not be disruptive to them.

Gardner: Tom, how did it work from your perspective in terms of an orderly process? How was the support from your partners like PKA? Do you get to the point where this becomes routine?

Gillon: PKA has the expertise in this area. We worked with them previously on an Aruba wireless network deployment project, and we knew that’s who we wanted to work with, because they were professional and thorough.

Moving to the thin client systems deployments, we contacted PKA and they put together a solution that worked well for us. We had not been aware of SimpliVity combined with HPE. They determined that this would be the best path for us, and it turned out to be true. They came in and we worked with HPE, setting this up and deploying it. Michael did a lot of that work with HPE. It was very simple to do. We were surprised at how simple it was.

Academic pressure 

Gardner: Felise, as a solution partner that specializes in higher education, what's different about working in a college campus environment versus, say, a small- to medium-sized business (SMB) or another type of enterprise? Is there something specific about a college environment, such as the number of apps, or the need for certain people and groups in the college to have different roles and responsibilities? How did it shake out?

Katz: That’s an interesting question. As a solution provider, as an owner of a business, we always put our best foot forward. It really doesn’t matter whether it’s an academic institution or a commercial customer, it always has to be done in the right way.

Katz

As a matter of fact, in academics it’s even more profound, and a lot more pressured, because you are dealing with students, you are dealing with faculty, and you are dealing with IT staff. Once we are in a “go” mode, we are under a lot of pressure. We have a limited time span between semesters — or vacations and holidays — where we have to be around to help them to get it up and running.

We have to make sure that the customer is enabled. And with these guys at CCM, they were so fabulous to work with. They enabled us to help them to do more with less — and that’s what the solution is all about. It’s all about simplification. It’s all about modernization. It’s all about being more efficient. And as Michael said so eloquently, it’s all about the experience for the students. That’s what we care about.

Gardner: Michael, where are you on your VDI-enablement journey? We heard that you want to go pervasively to VDI. What have you had to put in place — in terms of servers in the HPE SimpliVity HCI case — to make that happen?

Gilchrist: So far, we have six servers in total. Three servers in each of our two data centers that we have on campus, for high redundancy. That’s going to allow us to cover our initial pilot of 300 thin clients that we are putting out there.

As far as the performance of the system goes, we are not even scratching the surface in terms of the computing or RAM available for those first 300 endpoints.

When it comes to getting more thin clients, I think we’re going to be able to initially tack on more thin clients to the initial subset of six servers. And as we grow, the beauty of SimpliVity is that we just buy another server, rack it up, and bolt it in — and that’s it. It’s just plug and play.
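
For a rough sense of the headroom Gilchrist describes, here is a back-of-the-envelope sizing sketch in Python. The per-node density, growth target, and site split are placeholder assumptions rather than CCM figures; real VDI sizing would also weigh workload profiles, GPU needs, and failover targets.

# Back-of-the-envelope VDI capacity sketch (all inputs are illustrative).
servers_total = 6            # three per data center, per the discussion
data_centers = 2
endpoints_now = 300          # initial thin-client pilot

print(f"Current density: {endpoints_now / servers_total:.0f} desktops per node")

assumed_capacity_per_node = 100   # assumption, not a measured SimpliVity limit
headroom = servers_total * assumed_capacity_per_node - endpoints_now
print(f"Headroom before adding nodes: ~{headroom} desktops")

target_endpoints = 1200           # hypothetical campus-wide rollout
nodes_needed = -(-target_endpoints // assumed_capacity_per_node)  # ceiling division
per_site = -(-nodes_needed // data_centers)
print(f"Estimated nodes for {target_endpoints} desktops: {nodes_needed} (~{per_site} per site)")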

Gardner: In order to assess how well this solution is working, let’s learn more about CCM. It’s 50 years old. What’s this college all about?

Data-driven college transformation 

Gillon: We are located in North Central New Jersey. We have an enrollment of about 8,000 students per semester; that’s for credit. We also have a lot of non-credit students coming and going as well.

As you said, we are 50 years old, and I've been there almost 23 years. I was the second person hired in the IT Department.

I have seen a lot come and go, and we actually just last year inaugurated our third college president, just three presidents in 50 years. It’s a very stable environment, and it’s really a great place to work.

Gardner: I understand that you have had with this newest leadership more of a technical and digital transformation focus. Tell us how the culture of the college has changed and how that may have impacted your leaping into some of the more modern infrastructure to support VDI.

Gillon: Our new president is very data-driven. He wants data on everything, and frankly we weren't in a position to provide that.

We also changed CIOs. Our new CIO came in about a year after the new president, and he also has a strong data background. He is more about data than technology. So, with that focus we really knew that we had to get systems in place that are capable of quick transitions, and this HCI system really did the job for us. We are looking to expand further beyond that.

Gardner: Felise, I have heard other people refer to hyperconverged infrastructure architectures like SimpliVity as a gift that keeps giving. Clearly the reason to get into this was to support the VDI, which is a difficult workload. But there are also other benefits.

What have been some of the other benefits that you have been able to demonstrate to CCM that come with HCI? Is it the compression, the data storage savings, or a clear disaster recovery path that they hadn’t had before? What do you see as some of the ancillary benefits?

KatzIt’s all of the above. But to me — and I think to both Tom and Michael — it’s really the simplification, because [HCI] has uncomplicated their capability for growth and for scale.

Look, they are in a very competitive business, okay, attracting students, as Tom said. That's tough; that's where they have to make a difference, when that student arrives on campus with, I don't know, how many devices, right?

One student, five devices 

Gillon: It averages five now, I think.

Katz: Five devices that come on board. How do you contend with that, besides having this huge pipe for all the data and everything else that they have to enable? And then you have new ways of learning that everybody has to step up and enable. It’s not just about a classroom; it’s a whole different world. And when you’re in a rural part of New Jersey, where you’re looking to attract students, you have to make sure you are at the top of your game.

Gardner: Expectations are higher than ever, and the younger people are even more demanding because they haven’t known anything else.

Katz: Yes, just think about their Xbox, their cell phones, and more devices. It's just a huge amount. And it's not only for them, it's also for your college staff.

Gardner: We can’t have a conversation about IT infrastructure without getting into the speeds and feeds a little bit. Tell us about your SimpliVity footprint, energy, maintenance, and operating costs. What has this brought to you at CCM? You have been doing this for 23 years, you know what a high-maintenance server can be like. How has this changed your perspective on keeping a full-fledged infrastructure up and running?

Ease into IT

Gillon: There are tremendous benefits, and we are seeing that. The six servers that we have put in, they are replacing a lot of other devices. If we would have gone with a different solution, we would have had a rack full of servers to contend with. With this solution, we are putting three devices in each of our server rooms to handle the load of our initial 300 VDI deployments — and hopefully more soon.

There are a lot of savings involved, such as power. A lot of our time is being saved because we are not a big shop. Besides Michael and myself, I have a network administrator, and another systems administrator — that’s it, four people. We just don’t have the time to do a lot of things we need to do — and this system solves a lot of those issues.

Gilchrist: From a resources utilization standpoint, the deduplication and compression that the SimpliVity system provides is just insane. I am logically provisioning hundreds of terabytes of information in my VMware system — and only using 1.5 terabytes physically. And just the backup and restore, it’s kind of fire and forget. You put this stuff in place and it really does do what they say. You can restore large virtual machines (VMs) in about one or two seconds and then have it back up and running in case something goes haywire. It just makes my life a lot easier.

I'm no longer having to worry about, "Well, who was my backup vendor? Or who is my storage area network (SAN) vendor?" And then there's trying to combine all of those systems into one. Well, HPE SimpliVity just takes care of all of that. It's a one-stop shop; it's a no-brainer.
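
Gilchrist's efficiency figures reduce to simple arithmetic. The Python snippet below just computes the ratio; the "hundreds of terabytes" is taken here as an assumed 200 TB of logically provisioned data, so the result is illustrative rather than a measured SimpliVity number.

# Data-efficiency arithmetic from the figures quoted above (illustrative).
logical_tb = 200.0    # assumption for "hundreds of terabytes" provisioned
physical_tb = 1.5     # physical capacity actually consumed

efficiency_ratio = logical_tb / physical_tb
savings_pct = (1 - physical_tb / logical_tb) * 100

print(f"Effective dedupe + compression ratio: {efficiency_ratio:.0f}:1")
print(f"Physical capacity avoided: {savings_pct:.1f}%")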

Gardner: All in one, Felise, is that a fair characterization?

Katz: That is a very, very true assessment. My goal, my responsibility, is to bring forward the best solution for my customers, and having HPE in my corner with this is huge. It gives me the advantage to help my clients, and so we are able to put together a really great solution for CCM.

Gardner: There seems to be a natural progression with IT infrastructure adoption patterns. You move from bare metal to virtualization, then you move from virtualization to HCI, and then that puts you on a path to private cloud — and then hybrid cloud. And in doing this modernization, you get used to a programmatic approach to infrastructure, and so to composable infrastructure.

Do you feel that this progression is helping you modernize your organization? And where might that lead to, Tom?

Gillon: I do. With the experience we are gaining with SimpliVity, we see that this can go well beyond VDI, and we are excited about that. We are getting to a point where our current infrastructure is getting a little long in the tooth. We need to make some decisions, and right now the two of us are like, this is the only decision we want to make. This is the way we are going to go.

Gardner: I have also read that VDI is like the New York of IT — if you can do it there, you can do it anywhere. So what workloads do you have in mind next? Is it enterprise resource planning (ERP), is it business apps? What?

Gillon: All of the above. We are definitely looking to put some of our server loads into the VDI world, and just the benefits that SimpliVity gives to us in terms of business continuity and redundancy, it really is a no-brainer for us.

And yes, ERP, we have our ERP system currently virtualized, and the way Michael has things set up now, it’s going to be an easy transition for us when we get to that point.

Gardner: We have talked a lot about the hardware, but we also have to factor in the software. You have been using the VMware Horizon approach to VDI and workspaces, and that’s great, but what about moving toward cloud?

Do you want to have more choice in your hypervisor? Does that set you on another path to make choices about private cloud? What comes next in terms of what you support on such a great HCI platform?

A cloudy future?

Gillon: We have decisions to make when it comes to cloud. We are doing some things in the cloud now, but there are some things we don’t want to do in the cloud. And HPE has a lot of solutions.

We recently attended a discussion with the CEO of HPE [Antonio Neri] about where they are headed, and they say hybrid is the way to go. You are going to have some on-premises workloads, you are going to have some off-premises. And that’s where we see CCM going as well.

Gardner: What advice would you give to other organizations that are maybe later in starting out with VDI? What might save them a step or two?

Gillon: First thing, get yourself a good partner because there are so many things that you don’t know about these systems. And having a good partner like PKA, they brought a lot to the table. They could have easily provided a solution to us that was just a bunch of servers.

Gilchrist: Yes, they brought in the expertise. We didn’t know about SimpliVity, and once they showed us everything that it can do, we were skeptical. But it just does it. We are really happy with it, and I have to say, having a good partner is step number one.

Gardner: Felise, what recommendations do you have for organizations that are just now dipping their toe into workloads like VDI? What is it about HCI in particular that they should consider?

Look to the future 

Katz: If they are looking for flexible architecture, if they are looking for the agility to be able to make those moves down the road — and that's where their minds are – then they really have to do the due diligence. Tom, Michael, and their team did. They were able to understand what their needs are and what the right requirements are for them — not just for today but also going down the road to the future.

When you adopt a new architecture, you are displacing a lot of your older methodologies, too. It’s a different world, a hybrid world. You need to be able to move, and to move the workloads back and forth.

It’s a great time right now. It’s a great place to be because things are working, and they are clicking. We have the reference architectures available now to help, but it’s really first about doing their homework.

CCM is really a great team to work with. It’s really a pleasure, and it’s a lot of fun.

And I would be remiss not to say, I have a great team, from sales to technical: Strategic Account Manager Angie Moncada, Systems Engineer Patrick Shelley, and Vice President of Technology Russ Chow. They were just all-in with them. That makes a huge difference when you also connect with HPE on the right solutions. So that's really been great.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

How HPE and Docker together accelerate and automate hybrid cloud adoption

The next BriefingsDirect hybrid cloud strategies discussion examines how the use of containers has moved from developer infatuation to mainstream enterprise adoption.

As part of the wave of interest in containerization technology, Docker, Inc. has emerged as a leader in the field and has greased the skids for management and ease of use.

Meanwhile, Hewlett Packard Enterprise (HPE) has embraced containers as a way to move beyond legacy virtualization and to provide both developers and IT operators more choice and efficiency as they seek new hybrid cloud deployment scenarios.

Like the proverbial chocolate and peanut butter coming together — or as I like to say, with Docker and HPE, fish and chips — the two make a highly productive alliance and cloud ecosystem tag team.

Listen to the podcast. Find it on iTunes. Get the mobile app.  Read a full transcript or download a copy. 

Here to describe exactly how the Docker and HPE alliance accelerates modern and agile hybrid architectures, we are joined by two executives, Betty Junod, Senior Director of Product and Partner Marketing at Docker, and Jeff Carlat, Senior Director of Global Alliances at HPE. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Jeff, how do containers — and how does Docker specifically — help data center architects achieve their goals?

Carlat: When you look at where technology has gone, through the virtualization of applications, we are moving into a whole new era where we need much more agility in applications — and in IT operations.

We believe that our modern infrastructure and our partnership with Docker — specifically around containers and container orchestration — provide businesses of all sizes with much lower acquisition costs for deploying infrastructure, as well as lower ongoing operating costs. And, of course, the game from a business standpoint is all about driving profitability and shareholder value.

Second, there is huge value when it comes to Docker and containers around extending the life of legacy applications. Modernizing traditional apps and being able to extend their life and bring them forward to a new modern architecture — that drives greater efficiencies and lower risk.

Gardner: Betty, how do you see the alignment between what HPE’s long-term vision for hybrid computing and edge-to-core computing and what Docker and containerization can do? How do these align?

Align your apps

Junod: It’s actually a wonderful alignment because what we look at from a Docker perspective is specifically at the application layer and bringing choice, agility, and security at the application layer in a way that can be married with what HPE is doing on the infrastructure layer across the hybrid cloud.

Junod

Our customers are saying, “We want to go to cloud, but we know the world is hybrid. We are going to be hybrid. So how do we do that in a way that doesn’t blow up all of our compliance if we make a change? Is this all for new apps? Or what do I do with all the stuff that I have accrued over the decades that’s eating into all of my budget?”

When it comes to transformation, it is not just an infrastructure story. It’s not just an applications story. It’s how do I use those two together in a way that’s highly efficient and also very agile for managing the stuff I already have today. Can I make that cheaper, better, stronger — and how do I enable the developers to build all the new services for the future that are going to provide more services, or better engage with my customers?

Gardner: How does DevOps, in particular, align? There is a lot of the developer allegiance to a Docker value proposition. But IT operators are also very much interested in what HPE is bringing to market, such as better management, better efficiency, and automation.

How are your two companies an accelerant to DevOps?

The future is Agile 

Junod: DevOps is interesting in that it’s a word that’s been used a lot, along with Agile development. It all stems from the desire for companies to be faster, right? They want to be faster in everything — faster in delivering new services, faster in time-to-market, as well as faster in responses so they can deliver the best service-level agreements (SLAs) to the customer. It’s very much about how application teams and infrastructure teams work together.

What's great is that Docker brings the ability for developers and operations teams to have a common language, to be able to do their own thing on their timelines without messing up the other side of the house. No more of that Waterfall. Developers can keep developing, shipping, and not break something that the infrastructure teams have set up, and vice versa.

Carlat: Let's be clear, the world is moving to Agile. I mean, companies are delivering continuous releases and ongoing builds. Those companies that can adopt and embrace that are going to get a leg up on their competition and provide better service levels. So the DevOps community and what we are doing is a perfect match. What Docker and HPE are delivering is ideal for those DevOps environments.

Gardner: When you have the fungibility of moving workloads around, the operators benefit, because they finally get more choice about what keeps the trains running on time, regardless of who is inside those trains, so to speak.

Let’s look at some of the hurdles. What prevents organizations from adopting these hybrid cloud and containerization benefits? What else needs to happen?

Make hybrid happen 

Junod: One of the biggest things we hear from our customers is, "Where should I go when it comes to cloud, and how?" They want to make sure that what they do is future-proof. They want to spend their time beholden to what their application and customer needs are — and not to a specific cloud A or cloud B.

Because with the new regulations regarding data privacy and data sovereignty, if you are a multinational company, your data sets are going to have to live in a bunch of different places. People want the ability to have things hybrid. But that presents an application and an infrastructure operational challenge.

What’s great in our partnership is that we are saying we are going to provide you the safest way to do hybrid; the fastest way to get there. With the Docker layer on top of that, no matter what cloud you pick to marry with your HPE on-premises infrastructure, it’s seamless portability — and you can have the same operational governance.
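
One way to picture that portability is with the Docker SDK for Python: the same small deployment routine can target an on-premises host or a cloud-hosted Docker endpoint just by changing the connection URL. This is only a sketch; the host URLs and image name are placeholders, and a production setup would add TLS certificates, registry authentication, and an orchestrator rather than raw single-host calls.

# Sketch: identical deployment code pointed at different Docker endpoints.
# Requires the "docker" package (pip install docker); hosts are placeholders.
import docker

def deploy(endpoint_url: str) -> None:
    client = docker.DockerClient(base_url=endpoint_url)
    client.containers.run(
        "nginx:latest",          # placeholder image
        name="demo-web",
        ports={"80/tcp": 8080},  # publish container port 80 on host port 8080
        detach=True,
    )

# The on-premises host and the public-cloud host differ only in the URL.
deploy("tcp://onprem-docker.example.local:2376")
deploy("tcp://cloud-docker.example.net:2376")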

Carlat: We also see enterprises, as they move to gain efficiencies, are on a journey. And the journey around containerization and containers in our modern infrastructure can be daunting at times.

Carlat

One of the barriers, or inhibitors, to active adoption is complexity: not knowing where to start. This is where we are partnering deeply, essentially around services capabilities, to bring in our consultative capabilities with Pointnext, do assessments, help customers establish that journey, and get them through the maturity of testing and development and on into full production-level environments.

Gardner: Is Cloud Technology Partners, a recent HPE acquisition, also a big plus given that they have been of, by, and for cloud — and very heavily into containers?

Carlat: Yes. That snaps in naturally with the choice in our hybrid strategy. It’s a great bridge, if you will, between what applications you may want on-premises and also using Cloud Technology Partners for leveraging an agnostic set of public cloud providers.

Gardner: Betty, when we think about adoption, sometimes too much of a good thing too soon can provide challenges. Is there anything about people adopting containers too rapidly without doing the groundwork — the blocking and tackling, around management and orchestration, and even automation — that becomes a negative? And how does HPE factor into that?

Too much transformation, too soon 

Junod: We have learned over these last few years, across 500 different customers, what does and doesn’t work. It has a consistent pattern. The companies that say they want to do DevOps, and cloud, and microservices — and they put all the buzzwords in – and they want to do it all right now for transformation — those organizations tend to fail. That’s because it’s too much change at once, like you mentioned.

What we have worked out by collaborating tightly with our partners as well as our customers is that we say, “Pick one, and maybe not the most complicated application you have. Because you might be deploying on a new infrastructure. You are using a new container model. You are going to need to evolve some of your processes internally.”

And if you are going to do hybrid, when is it hybrid? Is it during the development and test in the cloud, and then to on-premises for production? Or is it cloud bursting for scale up? Or is it for failover replication? If you don’t have some of that sorted out before you go, well, then you are just stuck with too much stuff, too much of a good thing.

What we have partnered with HPE on — and especially HPE Pointnext from a services standpoint — is very much an advisory role, to say let’s look at your landscape of applications that you have today and let’s assess them. Let’s put them in buckets for you and we can pick one or two to start with. Then, let’s outline what’s going to happen with those. How does this inform your new platform choices?

And then once we get some of those kinks worked out and try some of the operational processes that evolve, then after that it’s almost like a factory. They can just start funneling more in.

Gardner: Jeff, a lot of what HPE has been doing is around management and monitoring, governance, and being mindful of security and compliance issues. Things like HPE Synergy and HPE OneView that have been in the market for a long time, and newer products like HPE OneSphere, how are they factoring into allowing containers to be what they should be without getting out of control?

Hand in glove

Carlat: We have seen containerization evolve. And the modern architectures such as HPE Synergy and OneView are designed and built for bare-metal deployment, containers, or virtualization. It's all designed — as you say, it's like fish and chips, or a hand in glove in my analogy – to allow customers choice, agility, and flexibility.

Our modern infrastructure is not purely designed for containers. We see a lot of virtualization, and Docker runs great in a virtualized environment as well. So it’s not one or the other. So again, it’s like a hand in glove.

Gardner: By the way, I know that the Docker whale isn’t technically a fish, but I like to use it anyway.

Let’s talk about the rapid adoption now around hyperconverged infrastructure (HCI). How is HCI helping move forward hybrid cloud and particularly for you on the Docker side? Are you seeing it as an accelerant?

Junod: What you are seeing with some of the hyperconverged — and especially if you relate that over to what's going on with the adoption of containers — is that it's all about agility. They want speed and they want to be able to spin things out fast, whether it's compute resources or application resources. I think it's a nice marriage of where the entire industry wants to go and what companies are looking for to deliver services faster to their customers.

Carlat: Specifically, hyperconverged represents one of the fastest growing segments in the market for us. And the folks that are adopting hyperconverged clearly want the choice, agility, and rapid simplicity — and rapid deployment — of their applications.

Where we are partnering with Docker is taking HPE SimpliVity, our hyperconverged infrastructure, in building out solutions for either test or development and using scripting to be able to deploy this all in a complete environment in 30 minutes or less.

Yes, we are perfectly aligned, and we see hyperconverged as a great area for dropping in infrastructure and testing and development, as well as for midsize IT environments.

Gardner: Recently DockerCon wrapped up. Betty, what was some of the big news there, and how has that had an impact on going to market with a partner like HPE?

Choice, Agility, Security 

Junod: At DockerCon we reemphasized our core pillars: choice, agility, and security, because it’s choice in what you want to build. You should as an organization be able to build the best applications with the best components that you feel are right for your application — and then be able to run that anywhere, in whatever scenario.

Agility is really around speed for delivering new applications, as well as speed for operations teams. Back to DevOps, those two sides have to exist together and in partnership. One can’t be fast and the other slow. We want to enable both to be fast together.

And lastly, security. It’s really about driving security throughout the lifecycle, from development to production. We want to make sure that we have security built into the entire stack that’s supporting the application.

We just advanced the platform along those lines. Docker Enterprise Edition 2.0 really started a couple of months ago, so 2.0 is out. But we announced as part of that some technology-preview capabilities. We introduced the integration of Kubernetes, which is a very popular container orchestration engine, into our core Enterprise Edition platform, and then we added the ability to do all of that with Windows as well.

So back to choice; it’s a Linux and Windows world. You should be able to use any orchestration you like as part of that.

No more kicking the tires 

Carlat: One thing I really noticed at DockerCon was not necessarily just about what Docker did, but the significance of major enterprises — Fortune 500, Fortune 100 enterprises – that are truly pivoting to the use of containers and Docker specifically on HPE.

No longer are they kicking the tires and evaluating. We are seeing full-scale production roll outs in major, major, major enterprises. The time is right for customers to modernize, embrace, and adopt containers and container orchestration and drop that onto a modern infrastructure or architecture. They can then gain the benefits of the efficiencies, agility, and the security that we have talked about. That is paramount.

Gardner: Along those lines, do you have examples that show how the combination of what HPE brings to the table and what Docker brings to the table combine in a way that satisfies significant requirements and needs in the market?

Junod: I can highlight two customers. One is Bosch, a major manufacturer in Europe; the other is DaVita, a healthcare company.

What’s interesting is that Bosch began with a lot of organic use of Docker by their developers, spread all over the place. But they said, “Hang on a second, because developers are working with corporate intellectual property (IP), we need to find a way to centralize that, so it better scales for them — and it’s also secure for us.”

This is one of the first accounts that Docker and HPE worked on together to bring them an integrated solution. They implemented a new development pipeline. Central IT at Bosch is doing the governance, management, and the security around the images and content. But each application development team, no matter where they are around the world, is able to spin up their own separate clusters and then be able to do the development and continuous integration on their own, and then publish the software to a centralized pipeline.

Containers at the intelligent edge 

Carlat: There are use cases across the board and in all industry verticals; healthcare, manufacturing. We are seeing strong interest in adoption outside of the data center and we call that the intelligent edge.

We see that containers, and containers-as-a-service, are joining more compute, data, and analytics at the edge. As we move forward, the same level of choice, agility, and security there is paramount. We see containers as a perfect complement, if you will, at the edge.

Gardner: Right; bringing down the necessary runtime for those edge apps — but not any more than the necessary runtime. Let’s unpack that a little bit. What is it about container and edge devices, like an HPE Edgeline server, for example, that makes so much sense?

Junod: There is a broad spectrum on the edge. You will have like things like remote offices and retail locations. You will also see things like Industrial Internet of Things (IIoT). There you have very small devices for data ingest that feed into a distributed server that then ultimately feeds into the core, or the cloud, to do large-scale data analytics. Together this provides real-time insights, and this is an area we have been partnering and working with some of our customers on right now.
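
As a conceptual sketch of that ingest-then-forward pattern (not a specific HPE Edgeline or Docker design), the Python below shows the kind of small containerized process that might run on an edge device: batch local sensor readings, then ship each batch toward an aggregation tier. The endpoint URL and the sensor read are hypothetical stand-ins.

# Hypothetical edge-ingest loop: batch locally, forward to an aggregation tier.
import json
import random
import time
import urllib.request

AGGREGATOR_URL = "http://aggregator.example.local/ingest"  # placeholder
BATCH_SIZE = 10

def read_sensor() -> dict:
    # Stand-in for a real device driver or fieldbus read.
    return {"ts": time.time(), "temperature_c": round(random.uniform(20, 30), 2)}

def forward(batch: list) -> None:
    request = urllib.request.Request(
        AGGREGATOR_URL,
        data=json.dumps(batch).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request, timeout=5)  # real code would add retries and auth

batch = []
while True:
    batch.append(read_sensor())
    if len(batch) >= BATCH_SIZE:
        forward(batch)
        batch = []
    time.sleep(1)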

Security is actually paramount because — if you start thinking about the data ingest devices — we are not talking about, “Oh, hey, I have 100 small offices.” We are talking about millions and millions of very small devices out there that need to run a workload. They have minimal compute resources and they are going to run one or two workloads to collect data. If not sufficiently secured, they can be risk areas for attack.

So, what’s really important from a Docker perspective is the security; integrated security that goes from the core — all the way to the edge. Our ability, from a software layer, to provide trusted transport and digital signatures and the locking down of the runtime along the way means that these tiny sensor devices have one container on them. And it’s been encrypted and locked with keys that can’t be attacked.

That’s very important, because now if someone did attack, they could also start getting access into the network. So security is even more paramount as you get closer to the edge.

Gardner: Any other forward-looking implications for your alliance? What should we be thinking about in terms of analyzing that data and bringing machine learning (ML) to the edge? Is there something that between your two companies will help facilitate that?

Carlat: The world of containers and agile cloud-native applications is not going away. When I think about the future, enterprises need to pivot. Yet change is hard for all enterprises, and they need help.

They are likely going to turn to trusted partners. HPE and Docker are perfectly aligned, we have been bellwethers in the industry, and we will be there to help on that journey.

Gardner: Yes, this seems like a long-term relationship.

Listen to the podcast. Find it on iTunes. Get the mobile app.  Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

Legacy IT evolves: How cloud choices like Microsoft Azure can conquer the VMware Tax

The next BriefingsDirect panel discussion explores cloud adoption strategies that can simplify IT operations, provide cloud deployment choice — and that make the most total economic sense.

Many data center operators face a crossroads now as they consider the strategic implications of new demands on their IT infrastructure and the new choices that they have when it comes to a cloud continuum of deployment options. These hybrid choices span not only cloud hosts and providers, but also platform technologies such as containers, intelligent network fabrics, serverless computing, and, yes, even good old bare metal.

For thousands of companies, the evaluation of their cloud choices also impacts how they can conquer the "VMware tax" by moving beyond a traditional server virtualization legacy.

The complexity of choice goes further because long-term decisions about technology must also include implications for long-term recurring costs — as well as business continuity. As IT architects and operators seek to best map a future from a VMware hypervisor and traditional data center architecture, they also need to consider openness and lock-in.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Our panelists review how public cloud providers and managed service providers (MSPs) are sweetening the deal to transition to predictable hybrid cloud models. The discussion is designed to help IT leaders find the right trade-offs and the best rationale for making strategic decisions about their organization's digital transformation.

The panel consists of David Grimes, Vice President of Engineering at Navisite; David Linthicum, Chief Cloud Strategy Officer at Deloitte Consulting, and Tim Crawford, CIO Strategic Advisor at AVOA. The discussion is moderated by BriefingsDirect's Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Clearly, over the past decade or two, countless virtual machines have been spun up to redefine data center operations and economics. And as server and storage virtualization were growing dominant, VMware was crowned — and continues to remain — a virtualization market leader. The virtualization path broadened over time from hypervisor adoption to platform management, network virtualization, and private cloud models. There have been a great many good reasons for people to exploit virtualization and adopt more of a software-defined data center (SDDC) architecture. And that brings us to where we are today.

Dominance in virtualization, however, has not translated into an automatic path from virtualization to a public-private cloud continuum. Now, we are at a crossroads, specifically for the economics of hybrid cloud models. Pay-as-you-go consumption models have forced a reckoning on examining your virtual machine past, present, and future.

My first question to the panel is … What are you now seeing as the top drivers for people to reevaluate their enterprise IT architecture path?

The cloud-migration challenge

Grimes: It’s a really good question. As you articulated it, VMware radically transformed the way we think about deploying and managing IT infrastructure, but cloud has again redefined all of that. And the things you point out are exactly what many businesses face today, which is supporting a set of existing applications that run the business. In most cases they run on very traditional infrastructure models, but they’re looking at what cloud now offers them in terms of being able to reinvent that application portfolio.

Grimes

But that's going to be a multiyear journey in most cases. One of the things that I think about, as the next wave of transformation takes place, is how do we enable development in these new models — such as containers and serverless — using all of the platform services of the hyperscale clouds. How do we bring those to the enterprise in a way that will keep them adjacent to the workloads? Separating the application and the data is very challenging.

Gardner: Dave, organizations would probably have it easier if they’re just going to go from running their on-premises apps to a single public cloud provider. But more and more, we’re quite aware that that’s not an easy or even a possible shift. So, when organizations are thinking about the hybrid cloud model, and moving from traditional virtualization, what are some of the drivers to consider for making the right hybrid cloud model decision, where they can do both on-premises private cloud as well as public cloud?

Know what you have, know what you need

Linthicum: It really comes down to the profiles of the workloads, the databases, and the data that you’re trying to move. And one of the things that I tell clients is that cloud is not necessarily something that’s automatic. Typically, they are going to be doing something that may be even more complex than they have currently. But let’s look at the profiles of the existing workloads and the data — including security, governance needs, what you’re running, what platforms you need to move to — and that really kind of dictates which resources we want to put them on.

Linthicum

As an architect, when I look at the resources out there, I see traditional systems, private clouds, virtualization — such as VMware — and then the public cloud providers. And many times, the choice is going to be all four. Having pragmatic hybrid clouds — which pair traditional systems with private and public clouds — means multiple clouds at the same time. And so, this really becomes an analysis in terms of how you're going to look at the existing as-is state. The to-be state is really just a function of the business requirements that you see. So, it's a little easier than most people think, but the outcome is typically going to be more expensive and more complex than they originally anticipated.
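
A hedged way to see that "profile the workloads" step is as a simple scoring pass over an application inventory. Everything below, including the attributes, weights, and thresholds, is a hypothetical illustration of the kind of analysis Linthicum describes, not a Deloitte or vendor methodology.

# Hypothetical workload-placement scoring sketch (attributes scored 1-5).
workloads = [
    {"name": "erp-core",      "data_sensitivity": 5, "data_gravity": 5, "elasticity_need": 1},
    {"name": "web-frontend",  "data_sensitivity": 2, "data_gravity": 1, "elasticity_need": 5},
    {"name": "batch-reports", "data_sensitivity": 3, "data_gravity": 2, "elasticity_need": 4},
]

def suggest_placement(workload: dict) -> str:
    # Sensitivity and data gravity pull on-premises; elasticity pulls toward public cloud.
    onprem_pull = workload["data_sensitivity"] + workload["data_gravity"]
    cloud_pull = workload["elasticity_need"] * 2
    if onprem_pull >= cloud_pull + 3:
        return "private cloud / traditional"
    if cloud_pull >= onprem_pull + 3:
        return "public cloud"
    return "hybrid (split tiers or migrate in phases)"

for workload in workloads:
    print(workload["name"], "->", suggest_placement(workload))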

Gardner: Tim Crawford, do people under-appreciate the complexity of moving from a highly virtualized on-premises, traditional data center to hybrid cloud?

Crawford: Yes, absolutely. Dave’s right. There are a lot of assumptions that we take as IT professionals and we bring them to cloud, and then find that those assumptions kind of fall flat on their face. Many of the myths and misnomers of cloud start to rear their ugly heads. And that’s not to say that cloud is bad; cloud is great. But we have to be able to use it in a meaningful way, and that’s a very different way than how we’ve operated our corporate data centers for the last 20, 30, or 40 years. It’s almost better if we forget what we’ve learned over the last 20-plus years and just start anew, so we don’t bring forward some of those assumptions.

Tim Crawford

Crawford

And I want to touch on something else that I think is really important here, which has nothing to do with technology but has to do with organization and culture, and some of the other drivers that go into why enterprises are leveraging cloud today. And that is that the world is changing around us. Our customers are changing, the speed in which we have to respond to demand and need is changing, and our traditional corporate data center stacks just aren’t designed to be able to make those kinds of shifts.

And so that’s why it’s going to be a mix of cloud and corporate data centers. We’re going to be spread across these different modes like peanut butter, in a way. But having the flexibility, as Dave said, to leverage the right solution for the right application is really, really important. Cloud presents a new model because our needs could not be fulfilled in the past.

Gardner: David Grimes, application developers helped drive initial cloud adoption. These were new apps and workloads of, by, and for the cloud. But when we go to enterprises that have a large on-premises virtualization legacy — and are paying high costs as a result — how frequently are we seeing people move existing workloads into a cloud, private or public? Is that gaining traction now?

Lift and shift the workload

Grimes: It absolutely is. That’s really been a core part of our business for a while now, certainly the ability to lift and shift out of the enterprise data center. As Dave said, the workload is the critical factor. You always need to understand the workload to know which platform to put it on. That’s a given. A lot of those existing legacy application stacks running in traditional infrastructure models very often get lifted and shifted into a like model — but in a hosting provider’s data center. That’s because many CIOs have a mandate to close down enterprise data centers and move to the cloud. But that does, of course, mean a lot of different things.

You mentioned the push by developers to get into the cloud, and really that was what I was alluding to in my earlier comments. Such a reinventing of the enterprise application portfolio has often been led by the development that takes place within the organization. Then, of course, there are all of the new capabilities offered by the hyperscale clouds — all of them, but notably some of the higher-level services offered by Azure, for example. You’re going to end up in a scenario where you’ve got workloads that best fit in the cloud because they’re based on the services that are now natively embodied and delivered as-a-service by those cloud platforms.

But you’re going to still have that legacy stack that still needs to leave the enterprise data center. So, the hybrid models are prevailing, and I believe will continue to prevail. And that’s reflected in Microsoft’s move with Azure Stack, of making much of the Azure platform available to hosting providers to deliver private Azure in a way that can engage and interact with the hyperscale Azure cloud. And with that, you can position the right workloads in the right environment.

Gardner: Now that we’re into the era of lift and shift, let’s look at some of the top reasons why. We will ask our audience what their top reasons are for moving off of legacy environments like VMware. But first let’s learn more about our panelists. David Grimes, tell us about your role at Navisite and more about Navisite itself.

Panelist profiles

Grimes: I’ve been with Navisite for 23 years, really most of my career. As VP of Engineering, I run our product engineering function. I do a lot of the evangelism for the organization. Navisite’s a part of Spectrum Enterprise, which is the enterprise division of Charter. We deliver voice, video, and data services to the enterprise client base of Navisite, and also deliver cloud services to that same base. It’s been a very interesting 20-plus years to see the continued evolution of managed infrastructure delivery models rapidly accelerating to where we are today.

Gardner: Dave Linthicum, tell us a bit about yourself, particularly what you’re doing now at Deloitte Consulting.

It’s been a very interesting 20-plus years to see the continued evolution of managed infrastructure delivery models.

Linthicum: I’ve been with Deloitte Consulting for six months. I’m the Chief Cloud Strategy Officer, the thought leadership guy, trying to figure out where the cloud computing ball is going to be kicked and what the clients are doing, what’s going to be important in the years to come. Prior to that I was with Cloud Technology Partners. We sold that to Hewlett Packard Enterprise (HPE) last year. I’ve written 13 books. And I do the cloud blog on InfoWorld, and also do a lot of radio and TV. And the podcast, Dana.

Gardner: Yes, of course. You’ve been doing that podcast for quite a while. Tim Crawford, tell us about yourself and AVOA.

Crawford: After spending 20-odd years within the rank and file of the IT organization, also as a CIO, I bring a unique perspective to the conversation, especially about transformational organizations. I work with Fortune 250 companies, many of the Fortune 50 companies, in terms of their transformation, mostly business transformation. I help them explore how technology fits into that, but I also help them along their journey in understanding the difference between the traditional and transformational. Like Dave, I do a lot of speaking, a fair amount of writing and, of course, with that comes travel and meeting a lot of great folks through my journeys.

Survey says: It’s economics

Gardner: Let’s now look at our first audience survey results. I’d like to add that this is not scientific. This is really an anecdotal look at where our particular audience is in terms of their journey. What are their top reasons for moving off of legacy environments like VMware?

The top reason, at 75 percent, is a desire to move to a pay-as-you-go model versus a cyclical CapEx model. So, the economics here are driving the move from traditional to cloud. They’re also looking to get off of dated software and hardware infrastructure. A lot of people are running old hardware that isn’t very efficient, can be costly to maintain, and in some cases is difficult or impossible to replace. There is a tie at 50 percent each between concern about the total cost of ownership, which they are probably trying to bring down, and a desire to consolidate and integrate more apps and data, seeking a transformation of their apps and data.

Coming up on the lower end of their motivations are complexity and support difficulties, and the developer preference for cloud models. So, the economics are driving this shift. That should come as no surprise, Tim, that a lot of people are under pressure to do more with less and to modernize at the same time. The proverbial changing of the wings of the airplane while keeping it flying. Is there any more you would offer in terms of the economic drivers for why people should consider going from a traditional data center to a hybrid IT environment?

Crawford: It’s not surprising, and the reason I say that is this economic upheaval actually started about 10 years ago when we really felt that economic downturn. It caused a number of organizations to say, “Look, we don’t have the money to be able to upgrade or replace equipment on our regular cycles.”

And so instead of having a four-year cycle for servers, or a five-year cycle for storage, or in some cases a 10-plus-year cycle for the network — they started kicking that can down the road. When the economic situation improved, rather than put money back into infrastructure, people started to ask, “Are there other approaches that we can take?” Now, at the same time, cloud was really beginning to mature and become a viable solution, especially for mid-size to large enterprises. And so, the combination of those two opened the door to a different possibility that didn’t have to do with replacing the hardware in corporate data centers.

Instead of having a four-year cycle for servers or five-year cycle for storage, they started kicking the can down the road.

And then you have the third piece of that trifecta, which is the overall business demand. We saw a very significant change in customer buying behavior at the same time: people were looking for things now. We saw an uptick in Amazon use and a move away from traditional retail, and that trend really kicked into gear around the same time. All of these together led to demand for a different kind of model, looking at OpEx versus CapEx.

Gardner: Dave, you and I have talked about this a lot over the past 10 years, economics being a driver. But you don’t necessarily always save money by going to cloud. To me, what I see in these results is not just seeking lower total cost — but simplification, consolidation and rationalization for what enterprises do spend on IT. Does that make sense and is that reflected in your practice?

Savings, strategy and speed

Linthicum: Yes, it is, and I think that the primary reason for moving to the cloud has morphed in the last five years from a CapEx-saving, operational-savings model into a need for strategic value. That means gaining agility and the ability to scale your systems up as you need to, to adjust to the needs of the business in the quickest way — and to keep up with the speed of change.

A lot of the Global 2,000 companies out there are having trouble maintaining change within the organization, to keep up with change in their markets. I think that’s really going to be the death of a thousand cuts if they don’t fix it. They’re seeing cloud as an enabling technology to do that.

In other words, with cloud they can have the resources they need, they can get to the storage levels they need, they can manage the data that they need — and do so at a price point that typically is going to be lower than the on-premises systems. That’s why they’re moving in that direction. But like we said earlier, in doing so they’re moving into more complex models. They’re typically going to be spending a bit more money, but the value of IT — in its ability to delight the business in terms of new capabilities — is going to be there. I think that’s the core metric we need to consider.

Gardner: David, at Navisite, when it comes to cost balanced by the business value from IT, how does that play out in a managed hosting environment? Do you see organizations typically wanting to stick to what they do best, which is create apps, run business processes, and do data science, rather than run IT systems in and out of every refresh cycle? How is this shaking out in the managed services business?

Grimes: That’s exactly what I’m seeing. Companies are really moving toward focusing on their differentiation. Running infrastructure has become almost like having power delivered to your data center. You need it, it’s part of the business, but it’s rarely differentiating. So that’s what we’re seeing.

Running infrastructure has become almost like having power delivered to your data center. You need it, but it’s rarely differentiating.

One of the things in the survey results that does surprise me is the relatively low score for operations complexity and support difficulties. With the pace of technology innovation happening, even within VMware in the enterprise context, and certainly within the cloud platforms, Azure in particular, the skillsets to use those platforms, manage them effectively, and take the biggest advantage of them are in exceedingly high demand. Many organizations are struggling to acquire and retain that talent. That’s certainly been my experience in dealing with my clients and prospects.

Gardner: Now that we know why people want to move, let’s look at what it is that’s preventing them from moving. What are the chief obstacles that are preventing those in our audience from moving off of a legacy environment like VMware?

There’s more than just a technology decision here. Dell Technologies is the major controller of VMware, even with VMware being a publicly traded company. But Dell Technologies, in order to go private, had to incur enormous debt, still in the vicinity of $48 billion. There have been reports recently of a reverse merger, in which VMware, as a public company, would take over Dell as a private company. The markets didn’t necessarily go for that, and it creates a bit of confusion and concern in the market. So Dave, is this something IT operators and architects should concern themselves with when they’re thinking about which direction to go?

Linthicum: Ultimately, we need to look at the health of the company we’re buying hardware and software from in terms of their ability to be around over the next few years. The reality is that VMware, Dell, and [earlier Dell merger target] EMC are mega forces in terms of a legacy footprint in a majority of data centers. I really don’t see any need to be concerned about the viability of that technology. And when I look at viability of companies, I look at the viability of the technology, which can be bought and sold, and the intellectual property can be traded off to other companies. I don’t think the technology is going to go away, it’s just too much of a cash cow. And the reality is, whoever owns VMware is going to be able to make a lot of money for a long period of time.

Gardner: Tim, should organizations be concerned in that they want to have independence as VMware customers and not get locked in to a hardware vendor or a storage vendor at the same time? Is there concern about VMware becoming too tightly controlled by Dell at some point?

Partnership prowess

Crawford: You always have to think about who it is that you’re partnering with. These days when you make a purchase as an IT organization, you’re really buying into a partnership, so you’re buying into the vision and direction of that given company.

And I agree with Dave about Dell, EMC, and VMware in that they’re going to be around for a long period of time. I don’t think that’s really the factor to be as concerned with. I think you have to look beyond that.

You have to look at what it is that your business needs, and how does that start to influence changes that you make organizationally in terms of where you focus your management and your staff. That means moving up the chain, if you will, and away from the underlying infrastructure and into applications and things closely tied to business advantage.

As you start to do that, you start to look at other opportunities beyond just virtualization. You start breaking down the silos, you start breaking down the components into smaller and smaller components — and you look at the different modes of system delivery. That’s really where cloud starts to play a role.

Gardner: Let’s look now to our audience for what they see as important. What are the chief obstacles preventing you from moving off of a legacy virtualization environment? Again, the economics are quite prevalent in their responses.

By a majority, they are not sure that there are sufficient return on investment (ROI) benefits. They might be wondering why they should move at all. Fear of lock-in to a primary cloud model is also a concern. So, the economics and the lock-in risk loom large — not just the risk of being stuck on a virtualization legacy, but also concern about moving forward. Maybe they’re like the deer in the headlights.

You have to look at what it is that your business needs, and how does that start to influence changes that you make organizationally, of where you focus your management and your staff.

The third concern, a close tie, involves issues around compliance, security, and regulatory restrictions on moving to the cloud. Complexity and uncertainty about whether the migration process will be successful are also concerns. They’re worried about that lift-and-shift process.

They are less concerned about a lack of support for moving from the C-suite or business leadership, that is, about not getting buy-in from the top. So … if it’s working, don’t fix it, I suppose, or at least don’t break it. And the last issue of concern, very low, is that it’s still too soon to know which cloud choices are best.

So, it’s not that they don’t understand what’s going on with cloud; they’re concerned about risk. The complexity of staying is a concern — but the complexity of moving is nearly as big a concern. David, does anything in these results jump out at you?

Feel the fear and migrate anyway

Grimes: Not being sure of the ROI benefits has been a common thread for quite some time when looking at these cloud migrations. But in my experience, clients choosing to move to a VMware cloud hosted by Navisite ultimately end up unlocking the business agility of their cloud, even if they weren’t 100 percent sure going into it that they would.

But time and time again, moving away from the enterprise data center, repurposing the spend on IT resources to become more valuable to the business — as opposed to the traditional keeping the lights on function — has played out on a fairly regular basis.

I agree with the audience and the response here around the fear of lock-in. And it’s not just lock-in from a basic deployment infrastructure perspective, it’s fear of lock-in if you choose to take advantage of a cloud’s higher-level services, such as data analytics or all the different business things that are now as-a-service. If you buy into them, you certainly increase your ability to deliver. Your own pace of innovation can go through the roof — but you’re often then somewhat locked in.

You’re buying into a particular service model, a set of APIs, et cetera. It’s a form of lock-in. It is avoidable if you want to build in layers of abstraction, but it’s not necessarily the end of the world either. As with everything, there are trade-offs. You’re getting a lot of business value in your own ability to innovate and deliver quickly, yes, but it comes at the cost of some lock-in to a particular platform.

Gardner: Dave, what I’m seeing here is people explaining why hybrid is important to them, that they want to hedge their bets. All or nothing is too risky. Does that make sense to you, that what these results are telling us is that hybrid is the best model because you can spread that risk around?

IT in the balance between past and future

Linthicum: Yes, I think it does say that. I live this on a daily basis in terms of ROI benefits, concern about not having enough of them, and also the lock-in model. And the reality is that when you get to an as-is architecture state, it’s going to be a variety of resources — as we mentioned earlier — that we’re going to leverage.

So, this is not all about taking traditional systems — and the application workloads around them — and moving them into the cloud and shutting the traditional systems down. That won’t work. This is about balance and the modernization of technology. And if you look at that, all bets are on the table — traditional, private cloud, public cloud, and hybrid-based computing. Typically, the best path to success comes from looking at all of that. But like I said, the solution is really going to depend on the requirements of the business and what we’re looking at.

Going forward, these kinds of decisions are falling into a pattern, and I think that we’re seeing that this is not necessarily going to be pure-cloud play. This is not necessarily going to be pure traditional play, or pure private cloud play. This is going to be a complex architecture that deals with a private and public cloud paired with traditional systems.

And so, people who do want to hedge their bets will do that around making the right decisions that they leverage the right resources for the appropriate task at hand. I think that’s going to be the winning end-point. It’s not necessarily moving to the platforms that we think are cool, or that we think can make us more money — it’s about localization of the workloads on the right platforms, to gain the right fit.

Gardner: From the last two survey result sets, it appears incumbent on legacy providers like VMware to try to get people to stay on their designated platform path. But at the same time, because of this inertia to shift, because of these many concerns, the hyperscalers like Google Cloud, Microsoft Azure, and Amazon Web Services also need to sweeten their deals. What are these other cloud providers doing, David, when it comes to trying to assuage the enterprise concerns of moving wholesale to the cloud?

It’s not moving to the platforms that we think are cool, or that can make us money, it’s about localization of the workloads on the right platforms, to get the right fit.

Grimes: There are certainly those hyperscale players, but there are also a number of regional public cloud players in the form of the VMware partner ecosystem. And I think when we talk about public versus private, we also need to make a distinction between public hyperscale and public cloud that still could be VMware-based.

I think one interesting thing that ties back to my earlier comments is when you look at Microsoft Azure and their Azure Stack hybrid cloud strategy. If you flip that 180 degrees and consider the VMware on AWS strategy, I think we’ll continue to see that type of thing play out going forward. Both of those approaches reflect the need to deliver the legacy enterprise workload in a way that is adjacent, in terms of equivalent technology as well as latency. Because one thing that’s often overlooked is the need to examine hybrid cloud deployment models in light of the acceptable latency between applications that are inherently integrated. That can often be a deal-breaker for a successful implementation.

What we’ll see is this continued evolution of ensuring that we can solve what I see as a decade-forward problem. And that is, as organizations continue to reinvent their applications portfolio they must also evolve the way that they actually build and deliver applications while continuing to be able to operate their business based on the legacy stack that’s driving day-to-day operations.
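As a rough illustration of Grimes’s latency point above, here is a hedged Python sketch that measures round-trip connect time between two application tiers and compares it to an assumed threshold before deciding whether they can be split across environments. The hostname, port, and 5 ms limit are placeholders, not a prescribed test.

```python
# Hypothetical sketch: measure round-trip latency between two application tiers
# before deciding whether they can live in different environments.
# The host, port, and threshold below are placeholders.

import socket
import time

def measure_rtt_ms(host: str, port: int, samples: int = 5) -> float:
    """Average TCP connect time in milliseconds over a few samples."""
    total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass
        total += (time.perf_counter() - start) * 1000
    return total / samples

if __name__ == "__main__":
    THRESHOLD_MS = 5.0  # assumed acceptable latency for a chatty integration
    rtt = measure_rtt_ms("db.example.internal", 5432)
    if rtt > THRESHOLD_MS:
        print(f"RTT {rtt:.1f} ms exceeds {THRESHOLD_MS} ms; keep tiers adjacent")
    else:
        print(f"RTT {rtt:.1f} ms is acceptable; tiers can be split")
```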

Moving solutions

Gardner: Our final survey question asks: What are your current plans for moving apps and data from a legacy environment like VMware in a traditional data center?

And two strong answers out of the offerings come out on top: public clouds such as Microsoft Azure and Google Cloud, and a hybrid or multi-cloud approach. So again, they’re looking at the public clouds as a way to get off of their traditional environments — but they’re not looking for just one cloud, or a lock-in; they’re looking at a hybrid or multi-cloud approach.

Coming in at zero, surprisingly, is VMware on AWS, which you just mentioned, David. Hosted private cloud and on-premises private cloud both come in at about 25 percent, along with no plans to move. So, staying on-premises in a private cloud has traction for some, but for those that want to move to the dominant hyperscalers, a multi-cloud approach is clearly the favorite.

Linthicum: I thought there would be a few who would pick VMware on AWS, but it looks like the audience doesn’t necessarily see that as the solution. Everything else is not surprising; it’s aligned with what we see in the marketplace right now. Public cloud movement to Azure and Google Cloud, and also the movement to complex clouds like hybrid and multi-cloud, seem to be the two trends worth watching right now in the space, and this is reflective of that.

Gardner: Let’s move our discussion on. It’s time to define the right trade-offs and rationale when we think about these taxing choices. We know that people want to improve, they don’t want to be locked in, they want good economics, and they’re probably looking for a long-term solution.

Now that we’ve mentioned it several times, what is it about Azure and Azure Stack that provides appeal? Microsoft’s cloud model seems to be differentiated in the market by offering both a public cloud component and an integrated, or adjacent, private cloud component. There’s a path for people to come onto those from a variety of different deployment histories including, of course, a Microsoft environment — but also a VMware environment. What should organizations be thinking about, what are the proper trade-offs, and what are the major concerns when it comes to picking the right hybrid and multi-cloud approach?

Strategic steps on the journey

Grimes: At the end of the day, it’s ultimately a journey and that journey requires a lot of strategy upfront. It requires a lot of planning, and it requires selecting the right partner to help you through that journey.

Because whether you’re planning an all-in on Azure, or an all-in on Google Cloud, or you want to stay on VMware but get out of the enterprise data center, as Dave has mentioned, the reality is everything is much more complex than it seems. And to maximize the value of the models and capabilities that are available today, you’re almost necessarily going to end up in a hybrid deployment model — and that means you’re going to have a mix of technologies in play, a mix of skillsets required to support them.

Whether you’re planning on an all-Azure or all-Google, or you want to stay on VMware, it’s about getting out of the enterprise datacenter, and the reality is far more complex than it seems.

And so I think one of the key things that folks should do is consider carefully how they partner. Regardless of whether they are on step one or step three of that journey, selecting the right partner to help them continue it is going to be critical.

Gardner: Dave, when you’re looking at risk versus reward, cost versus benefits, when you’re wanting to hedge bets, what is it about Microsoft Azure and Azure Stack in particular that help solve that? It seems to me that they’ve gone to great pains to anticipate the state of the market right now and to try to differentiate themselves. Is there something about the Microsoft approach that is, in fact, differentiated among the hyperscalers?

A seamless secret

Linthicum: The paired private and public cloud, with similar infrastructures and similar, even dynamic, migration paths — meaning you could move workloads between them, at least this is the way that it’s been described — is going to be unique in the market. It’s kind of the dirty little secret.

It’s going to be very difficult to port from a private cloud to a public cloud because most private clouds are typically not AWS and not Google, and those providers don’t make private clouds. Therefore, you have to port your code between the two, just as you’ve had to port systems in the past. And the usual issues around refactoring and retesting, and all the other things, really come home to roost.

But Microsoft could have a product that provides a bit more of a seamless capability of doing that. And the great thing about that is I can really localize on whatever particular platform I’m looking at. And if I, for example, “mis-localize” or I misfit, then it’s a relatively easy thing to move it from private to public or public to private. And this may be at a time where the market needs something like that, and I think that’s what is unique about it in the space.

Gardner: Tim, what do you see as some of the trade-offs, and what is it about a public, private hybrid cloud that’s architected to be just that — that seemingly Microsoft has developed? Is that differentiating, or should people be thinking about this in a different way?

Crawford: I actually think it’s significantly differentiating, especially when you consider the complexity that exists within the mass of the enterprise. You have different needs, and not all of those needs can be serviced by public cloud, not all of those needs can be serviced by private cloud.

There’s a model that I use with clients to go through this, and it’s something that I used when I led IT organizations. When you start to pick apart these pieces, you start to realize that some of your components are well-suited for software as a service (SaaS)-based alternatives, some of the components and applications and workloads are well-suited for public cloud, some are well-suited for private cloud.

A good example of that is if you have sovereignty issues, or compliance and regulatory issues. And then you’ll have some applications that just aren’t ready for cloud. You’ve mentioned lift and shift a number of times, and for those that have been down that path of lift and shift, they’ve also gotten burnt by that, too, in a number of ways.

And so, you have to be mindful of what applications go in what mode. The fact that Azure Stack and Azure are similar plays pretty well for an enterprise that’s thinking about skillsets, development cycles, and architectures, and not having to create, as Dave was mentioning, one architecture for private cloud and a completely different one for public cloud. Otherwise, if you get to a point where you want to move an application or workload, you have to completely redo it all over again. So, I think that Microsoft combination is pretty unique, and will be really interesting for the average enterprise.

Gardner: From the managed service provider (MSP) perspective, at Navisite you have a large and established hosted VMware business, and you’re helping people transition and migrate. But you’re also looking at the potential market opportunity for an Azure Stack and a hosted Azure Stack business. What is it for the managed hosting provider that might make Microsoft’s approach differentiated?

A full-spectrum solution

Grimes: It comes down to what both Dave and Tim mentioned. Having a light stack that can be deployed in a private capacity, which also — by the way — affords the ability to use bare metal adjacency, is appealing. We haven’t talked a lot about bare metal, but it is something that we see in practice quite often. There are bare metal workloads that need to be very adjacent, that is, LAN-adjacent, to the virtualization-friendly workloads.

Being able to have the combination of all three of those things is what makes Azure Stack attractive to a hosting provider such as Navisite. With it, we can solve the full spectrum of the client’s needs, covering bare metal, private cloud, and hyperscale public cloud — and really in a seamless way — which is the key point.

Gardner: It’s not often you can be that many things to that many people, given the heterogeneity of environments in the past and the difficult choices of the present.

We have been talking about these many cloud choices in the abstract. Let’s now go to a concrete example. There’s an organization called Ceridian. Tell us how they solved their requirements problems.

Azure Stack is attractive to a hosting provider like Navisite. With it we can solve the full-spectrum of the needs of the client in a seamless way.

Grimes: Ceridian is a global human capital management company, global being a key point. They are growing like gangbusters and have been with Navisite for quite some time. It’s been a very long journey.

But one thing about Ceridian is they have had a cloud-first strategy. They embraced the cloud very early. A lot of those barriers to entry that we saw, and have seen over the years, they looked at as opportunity, which I find very interesting.

Requirements around security and compliance are critical to them, but they also recognized that a SaaS provider that does a very small set of IT services — delivering managed infrastructure with security and compliance — is actually likely to be able to do that at least as effectively, if not more effectively, than doing it in-house, and at a competitive and compelling price point as well.

So some of their challenges really were around all the reasons that we see, that we talked about here today, and see as the drivers to adopting cloud. It’s about enabling business agility. With the growth that they’ve experienced, they’ve needed to be able to react quickly and deploy quickly, and to leverage all the things that virtualization and now cloud enable for the enterprises. But again, as I mentioned before, they worked closely with a partner to maximize the value of the technologies and ensure that we’re meeting their security and compliance needs and delivering everything from a managed infrastructure perspective.

Overcoming geographical barriers

One of the core challenges they had with that growth was a need to expand into geographies where we, Navisite, don’t currently operate hosting facilities. In particular, they needed to expand into Australia. And so, what we were able to do through our partnership with Microsoft was basically deliver to them the managed infrastructure in a similar way.

This is actually an interesting use case in that they’re running a VMware-based cloud in our data center, but we were able to expand them into a managed Azure-delivered cloud locally out of Australia. Of course, one thing we didn’t touch on today — but that is a driver in many of these decisions for global organizations — is that data sovereignty and locality regulations are becoming increasingly important. Certainly, Microsoft is expanding the Azure platform, and their presence in Australia has enabled us to deliver that for Ceridian.

As I think about the key takeaways and learnings from this particular example, Ceridian had a very clear, very well thought out, cloud-centric and cloud-first strategy. You mentioned it earlier, Dana: that really enables them to keep their focus on the applications, because that’s their bread and butter, that’s how they differentiate.

By partnering, they’re able to not worry about keeping the lights on and instead focus on the application. Second, of course, is that they’re a global organization, and so they have global delivery needs based on data sovereignty regulations. And third, and I’d say probably most important, is that they selected a partner that was able to bring to bear the expertise and skillsets that are difficult for enterprises to recruit and retain. As a result, they were able to take advantage of the different infrastructure models that we’re delivering for them to support their business.

Gardner: We’re now going to go to our question and answer portion. Kristen Allen of Navisite is moderating our Q and A section.

Bare metal and beyond

Kristen Allen: We have some very interesting questions. The first one ties into a conversation you were just having, “What are the ROI benefits to moving to bare metal servers for certain workloads?”

Grimes: Not all software licensing is yet virtualization-friendly, or at least virtualization-platform-agnostic, so there are really two things that play into the selection of bare metal, at least in my experience. There is a model of bare metal computing, with small cartridge-based computers, that is very specific to certain workloads. But when we talk in more general terms about a typical enterprise workload, it really revolves around either software licensing incompatibility with some of the cloud deployment models or a belief that performance requires bare metal, though in practice I think that’s more optics than reality. Those are the two things that typically drive bare metal adoption in my experience.

Linthicum: Ultimately, people want direct access to the underlying platforms. If there’s some performance reason, or some security reason, or a need for direct access to some of the input-output systems, we do see these kinds of one-offs for bare metal. I call them special-needs applications. I don’t see it as something that’s going to be widely adopted, but from time to time it’s needed, and the capabilities are there depending on where you want to run it.

Allen: Our next question is, “Should there be different thinking for data workloads versus apps ones, and how should they be best integrated in a hybrid environment?”

The compute aspect and data aspect of an application should be decoupled. If you want to you can then assemble them on different platforms, even one on public cloud and one on private cloud.

Linthicum: Ultimately, the compute aspect of an application and the data aspect of that application really should be decoupled. Then, if you want to, you can assemble them on different platforms. I would typically expect to place them either all on public or all on private, but you can certainly put one on private and the other on public, or vice versa, and link them that way.

As we migrate forward, the workloads are getting even more complex. There are some application workloads that I’ve seen, and that I’ve developed, where the database would be partitioned between the private cloud and the public cloud for disaster recovery (DR) purposes or performance purposes, and things like that. So, it’s really up to you as the architect as to where you’re going to place the data in relation to the workload. Typically, it’s a good idea to place them as close to each other as possible so they have the highest bandwidth to communicate with each other. However, it’s not necessary, depending on what the application is doing.
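One way to picture that decoupling is the small, hypothetical Python sketch below, which declares compute and data placement separately and keeps a DR replica in a second environment. The application names, platform labels, and tiers are invented for illustration.

```python
# Hypothetical sketch of declaring compute and data placement separately,
# including a database replica in a second environment for disaster recovery.
# Platform names, applications, and tiers are illustrative placeholders only.

placement = {
    "order-service": {
        "compute": "public-cloud-region-east",
        "data": {
            "primary": "private-cloud-dc1",          # primary stays near integrated on-premises systems
            "dr_replica": "public-cloud-region-east",
        },
    },
    "reporting": {
        "compute": "private-cloud-dc1",
        "data": {
            "primary": "private-cloud-dc1",
            "dr_replica": "public-cloud-region-west",
        },
    },
}

def colocated(app: str) -> bool:
    """True when the compute tier and the primary data store share a platform."""
    entry = placement[app]
    return entry["compute"] == entry["data"]["primary"]

for app in placement:
    note = "co-located" if colocated(app) else "split across platforms; check latency and bandwidth"
    print(f"{app}: {note}")
```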

Gardner: David, maybe organizations need to place their data in a certain jurisdiction but might want to run their apps out of a data center somewhere else for performance and economics?

Grimes: The data sovereignty requirement is something that we touched on and that’s becoming increasingly important and increasingly, that’s a driver too, in deciding where to place the data.

Just following on Dave’s comments, I agree 100 percent. If you have the opportunity to architect a new application, I think there’s some really interesting choices that can be made around data placement, network placement, and decoupling them is absolutely the right strategy.

I think the challenge many organizations face is having that mandate to close down the enterprise data center and move to the “cloud.” Of course, we know that “cloud” means a lot of different things, but do that in a legacy application environment and it will present some unique challenges in terms of actually being able to sufficiently decouple data and applications.

Curious, Dave, if you’ve had any successes in kind of meeting that challenge?

Linthicum: Yes. It depends on the application workload, how flexible the applications are, and how the information is communicated between the systems; also security requirements. So, it’s one of those obnoxious consulting responses, “it depends,” as to whether or not we can make that work. But it is a legitimate architectural pattern that I’ve seen before, and we’ve used it.

Allen: Okay. How do you meet and adapt for Health Insurance Portability and Accountability Act of 1996 (HIPAA) requirements and still maintain stable connectivity for the small business?

Grimes: HIPAA, like many of the governance programs, is a very large and co-owned responsibility. From our perspective at Navisite, part of Spectrum Enterprise, we have the unique capability of delivering both the network services and the cloud services in an integrated way that can address the particular question around stable connectivity. But ultimately, HIPAA is a blended responsibility model in which the infrastructure provider, the network provider, and the provider managing up to whatever layer of the application stack will each have certain obligations. The client, as the partner, would also retain some obligations as well.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Navisite.



How new tools help any business build ethical and sustainable supply chains

The next BriefingsDirect digital business innovations discussion explores new ways that companies gain improved visibility, analytics, and predictive responses to better manage supply-chain risk-and-reward sustainability factors.

We’ll examine new tools and methods that can be combined to ease the assessment and remediation of hundreds of supply-chain risks — from use of illegal and unethical labor practices to hidden environmental malpractices.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. 

Here to explore more about the exploding sophistication in the ability to gain insights into supply-chain risks and provide rapid remediation, are our panelists, Tony Harris, Global Vice President and General Manager of Supplier Management Solutions at SAP Ariba; Erin McVeigh, Head of Products and Data Services at Verisk Maplecroft, and Emily Rakowski, Chief Marketing Officer at EcoVadis. The discussion was moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tony, I heard somebody say recently there’s never been a better time to gather information and to assert governance across supply chains. Why is that the case? Why is this an opportune time to be attacking risk in supply chains?

Harris: Several factors have culminated in a very short time around the need for organizations to have better governance and insight into their supply chains.

Tony Harris

Harris

First, there is legislation such as the UK’s Modern Slavery Act in 2015 and variations of this across the world. This is forcing companies to make declarations that they are working to eradicate forced labor from their supply chains. Of course, they can state that they are not taking any action, but if you can imagine the impacts that such a statement would have on the reputation of the company, it’s not going to be very good.

Next, there has been a real step change in the way the public now considers and evaluates the companies whose goods and services they are buying. People inherently want to do good in the world, and they want to buy products and services from companies who can demonstrate, in full transparency, that they are also making a positive contribution to society — and not just generating dividends and capital growth for shareholders.

Finally, there’s also been a step change by many innovative companies that have realized the real value of fully embracing an environmental, social, and governance (ESG) agenda. There’s clear evidence that now shows that companies with a solid ESG policy are more valuable. They sell more. The company’s valuation is higher. They attract and retain more top talent — particularly Millennials and Generation Z — and they are more likely to get better investment rates as well.

Gardner: The impetus is clearly there for ethical examination of how you do business, and to let your customers know that. But what about the technologies and methods that better accomplish this? Is there not, hand in hand, an opportunity to dig deeper and see deeper than you ever could before?

Better business decisions with AI

Harris: Yes, we have seen a big increase in the number of data and content companies that now provide insights into the different risk types that organizations face.

We have companies like EcoVadis that have built score cards on various corporate social responsibility (CSR) metrics, and Verisk Maplecroft’s indices across the whole range of ESG criteria. We have financial risk ratings, we have cyber risk ratings, and we have compliance risk ratings.

These insights and these data providers are great. They really are the building blocks of risk management. However, what I think has been missing until recently was the capability to pull all of this together so that you can really get a single view of your entire supplier risk exposure across your business in one place.

What has been missing was the capability to pull all of this together so that you can really get a single view of your entire supplier risk exposure across your business.

Technologies such as artificial intelligence (AI), for example, and machine learning (ML) are supporting businesses at various stages of the procurement process in helping to make the right decisions. And that’s what we developed here at SAP Ariba. 

Gardner: It seems to me that 10 years ago when people talked about procurement and supply-chain integrity that they were really thinking about cost savings and process efficiency. Erin, what’s changed since then? And tell us also about Verisk Maplecroft and how you’re allowing a deeper set of variables to be examined when it comes to integrity across supply chains.

McVeigh: There’s been a lot of shift in the market in the last five to 10 years. I think that predominantly it really shifted with environmental regulatory compliance. Companies were being forced to look at issues that they never really had to dig underneath and understand — not just their own footprint, but to understand their supply chain’s footprint. And then 10 years ago, of course, we had the California Transparency Act, and then from that we had the UK Modern Slavery Act, and we keep seeing more governance compliance requirements.

Erin McVeigh

McVeigh

But what’s really interesting is that companies are going beyond what’s mandated by regulations. The reason that they have to do that is because they don’t really know what’s coming next. With a global footprint, it changes that dynamic. So, they really need to think ahead of the game and make sure that they’re not reacting to new compliance initiatives. And they have to react to a different marketplace, as Tony explained; it’s a rapidly changing dynamic.

We were talking earlier today about the fact that companies are embracing sustainability, and they’re doing that because that’s what consumers are driving toward.

At Verisk Maplecroft, we came into business about 12 years ago, which was really interesting because the company came out of a number of individuals who were getting their master’s degrees in supply-chain risk. They began to look at how to quantify risk issues that are so difficult and complex to understand and to make it simple, easy, and intuitive.

They began with a subset of risk indices. I think probably initially we looked at 20 risks across the board. Now we’re up to more than 200 risk issues across four thematic issue categories. We begin at the highest pillar of thinking about risks — like politics, economics, environmental, and social risks. But under each of those risk themes are specific issues that we look at. So, if we’re talking about social risk, we’re looking at diversity and labor, and then under each of those risk issues we go a step further to the indicators — it’s all that data matrix that comes together that tells the actionable story.

Some companies still just want to check a [compliance] box. Other companies want to dig deeper — but the power is there for both kinds of companies. They have a very quick way to segment their supply chain, and for those that want to go to the next level to support their consumer demands, to support regulatory needs, they can have that data at their fingertips.
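A simple, hypothetical Python sketch of that pillar-to-issue-to-indicator hierarchy is below; the specific issues, indicators, and scores are made up for illustration and are not Verisk Maplecroft’s actual index structure or data.

```python
# Hypothetical sketch of a pillar -> issue -> indicator risk hierarchy.
# Issues, indicators, and scores are invented for illustration.

risk_index = {
    "social": {
        "forced_labor": {"indicators": {"enforcement_gap": 7.2, "reported_cases": 5.1}},
        "diversity":    {"indicators": {"wage_gap": 4.0, "representation": 3.5}},
    },
    "environmental": {
        "water_stress": {"indicators": {"baseline_stress": 8.3, "drought_frequency": 6.9}},
    },
}

def issue_score(pillar: str, issue: str) -> float:
    """Average the indicator scores for one issue (0 = low risk, 10 = extreme)."""
    indicators = risk_index[pillar][issue]["indicators"]
    return sum(indicators.values()) / len(indicators)

for pillar, issues in risk_index.items():
    for issue in issues:
        print(f"{pillar}/{issue}: {issue_score(pillar, issue):.1f}")
```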

Global compliance

Gardner: Emily, in this global environment you can’t just comply in one market or area. You need to be global in nature and thinking about all of the various markets and sustainability across them. Tell us what EcoVadis does and how an organization can be compliant on a global scale.

Rakowski: EcoVadis conducts business sustainability ratings, and the way that we are used in the procurement context is primarily that very large multinational companies like Johnson and Johnson or Nestlé will come to us and say, “We would like to evaluate the sustainability factors of our key suppliers.”

Emily Rakowski

Rakowski

They might decide to evaluate only the suppliers that represent a significant risk to the business, or they might decide that they actually want to review all suppliers of a certain scale that represent a certain amount of spend in their business.

What EcoVadis provides is a 10-year-old methodology for assessing businesses based on evidence-backed criteria. We put out a questionnaire to the supplier, what we call a right-sized questionnaire, and the supplier responds to material questions based on what kind of goods or services they provide, what geography they are in, and what size of business they are.

Of course, very small suppliers are not expected to have very mature and sophisticated capabilities around sustainability systems, but larger suppliers are. So, we evaluate them based on those criteria, and then we collect all kinds of evidence from the suppliers in terms of their policies, their actions, and their results against those policies, and we give them ultimately a 0 to 100 score.

And that 0 to 100 score is a pretty good indicator to the buying companies of how well that company is doing in their sustainability systems, and that includes such criteria as environmental, labor and human rights, their business practices, and sustainable procurement practices.
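As a loose illustration of how theme-level results might roll up into a single 0 to 100 rating, here is a hypothetical Python sketch; the themes, weights, and example scores are assumptions for illustration and do not represent EcoVadis’s actual methodology.

```python
# Hypothetical sketch of rolling per-theme scores into a 0-100 supplier rating.
# Themes, weights, and example scores are assumptions, not a real methodology.

THEME_WEIGHTS = {
    "environment": 0.3,
    "labor_and_human_rights": 0.3,
    "ethics": 0.2,
    "sustainable_procurement": 0.2,
}

def overall_score(theme_scores: dict) -> float:
    """Weighted average of per-theme scores, each already on a 0-100 scale."""
    return sum(THEME_WEIGHTS[t] * theme_scores[t] for t in THEME_WEIGHTS)

supplier = {
    "environment": 62,
    "labor_and_human_rights": 55,
    "ethics": 70,
    "sustainable_procurement": 40,
}
print(f"Overall rating: {overall_score(supplier):.0f} / 100")
```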

Gardner: More data and information are being gathered on these risks on a global scale. But in order to make that information actionable, there’s an aggregation process under way. You’re aggregating on your own — and SAP Ariba is now aggregating the aggregators.

How then do we make this actionable? What are the challenges, Tony, for making the great work being done by your partners into something that companies can really use and benefit from?

Timely insights, best business decisions

Harris: Beyond some of the technological challenges of aggregating this data across different providers, there is the need to link it to the aspects of the procurement process in support of what our customers are trying to achieve. We must make sure that we can surface those insights at the right point in their process to help them make better decisions.

The other aspect to this is how we’re looking at not just trying to support risk through that source-to-settlement process — trying to surface those risk insights — but also understanding that where there’s risk, there is opportunity.

So what we are looking at here is how can we help organizations to determine what value they can derive from turning a risk into an opportunity, and how they can then measure the value they’ve delivered in pursuit of that particular goal. These are a couple of the top challenges we’re working on right now.

We’re looking at not just trying to support risk through that source-to-settlement process — trying to surface those risk insights — but also understanding that where there is risk there is opportunity.

Gardner: And what about the opportunity for compression of time? Not all challenges are something that are foreseeable. Is there something about this that allows companies to react very quickly? And how do you bring that into a procurement process?

Harris: If we look at risk aspects such as natural disasters, nothing demands a timelier reaction. So, when our data sources alert on earthquakes, for example, we are able to very quickly ascertain who the affected suppliers are and where that supplier’s distribution centers and factories are.

When you can understand what the impacts are going to be very quickly, and how to respond to that, your mitigation plan is going to prevent the supply chain from coming to a complete halt.
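To illustrate the idea of mapping a disaster alert to supplier sites, here is a hypothetical Python sketch that flags sites within an assumed impact radius of an event; the coordinates, radius, and supplier names are placeholders, not real monitoring logic.

```python
# Hypothetical sketch of matching a natural-disaster alert to supplier sites by
# proximity. Coordinates, radius, and supplier names are illustrative only.

from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

supplier_sites = [
    {"supplier": "Acme Components", "site": "factory", "lat": 34.05, "lon": -118.24},
    {"supplier": "Globex Logistics", "site": "distribution center", "lat": 47.61, "lon": -122.33},
]

def impacted(event_lat, event_lon, radius_km=250):
    """Return sites within the assumed impact radius of the event."""
    return [s for s in supplier_sites
            if haversine_km(event_lat, event_lon, s["lat"], s["lon"]) <= radius_km]

# Example: an earthquake alert near Los Angeles
for site in impacted(34.2, -118.5):
    print(f"Alert: {site['supplier']} {site['site']} may be affected")
```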

Gardner: We have to ask the obligatory question these days about AI and ML. What are the business implications for tapping into what’s now possible technically for better analyzing risks and even forecasting them?

AI risk assessment reaps rewards

Harris: If you look at AI, this is a great technology, and what we’re trying to do is really simplify that process for our customers to figure out how they can take action on the information we’re providing. So rather than them having to be experts in risk analysis and doing all this analysis themselves, AI allows us to surface those risks through the technology — through our procurement suite, for example — to impact the decisions they’re making.

For example, if I’m in the process of awarding a piece of sourcing business off of a request for proposal (RFP), the technology can surface the risk insights against the supplier I’m about to award business to right at that point in time.

A determination can be made based upon the goods or the services I’m looking to award to the supplier or based on the part of the world they operate in, or where I’m looking to distribute these goods or services. If a particular supplier has a risk issue that we feel is too high, we can act upon that. Now that might mean we postpone the award decision before we do some further investigation, or it may mean we choose not to award that business. So, AI can really help in those kinds of areas.
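A minimal, hypothetical Python sketch of that kind of award-time risk check appears below; the threshold, risk categories, and supplier data are invented, and this is not the SAP Ariba product’s actual logic or API.

```python
# Hypothetical sketch of surfacing a supplier's risk exposure at the point of
# an RFP award decision. Thresholds, categories, and data are illustrative.

RISK_THRESHOLD = 70  # assumed scale: 0 = no known risk, 100 = severe

def award_recommendation(supplier: dict) -> str:
    """Recommend an action for an RFP award based on the supplier's worst risk score."""
    worst = max(supplier["risk_scores"].values())
    if worst >= RISK_THRESHOLD:
        return "hold award pending further investigation"
    if worst >= RISK_THRESHOLD - 20:
        return "award with added monitoring and contract safeguards"
    return "proceed with award"

candidate = {
    "name": "Initech Manufacturing",
    "risk_scores": {"financial": 45, "environmental": 72, "forced_labor": 30},
}
print(f"{candidate['name']}: {award_recommendation(candidate)}")
```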

Gardner: Emily, when we think about the pressing need for insight, we think about both data and analysis capabilities. This isn’t something necessarily that the buyer or an individual company can do alone if they don’t have access to the data. Why is your approach better and how does AI assist that?

Rakowski: In our case, it’s all about allowing for scale. The way that we’re applying AI and ML at EcoVadis is we’re using it to do an evidence-based evaluation.

We collect a great amount of documentation from the suppliers we’re evaluating, and AI is helping us scan through that documentation more quickly. That way we can find the relevant information that our analysts are looking for and compress the evaluation time from what used to be about six or seven hours for each supplier down to three or four hours. So that’s essentially allowing us to double our workforce of analysts in a heartbeat.

AI is helping us scan through the documentation more quickly. That way we can find the relevant information that our analysts are looking for, allowing us to double our workforce of analysts.

The other thing it’s doing is helping scan through material news feeds. We’re collecting more than 2,500 news sources, covering all kinds of reports, from China Labor Watch to OSHA. These technologies help us scan through those reports for material information and then put it in front of our analysts. It helps them surface real-time news that we’re sure at that point is material.

And that way we’re combining AI with real human analysis and validation to make sure that what we’re serving is accurate and relevant.

Harris: And that’s a great point, Emily. On the SAP Ariba side, we also use ML in analyzing similarly vast amounts of content from across the Internet. We’re scanning more than 600,000 data sources on a daily basis for information on any number of risk types. We’re scanning that content for more than 200 different risk types.

We use ML in that context to find an issue, or an article, for example, or a piece of bad news, bad media. The software effectively reads that article electronically. It understands that this is actually the supplier we think it is, the supplier that we’ve tracked, and it understands the context of that article.

By effectively reading that text electronically, a machine has concluded, “Hey, this is about a contracts reduction, it may be the company just lost a piece of business and they had to downsize, and so that presents a potential risk to our business because maybe this supplier is on their way out of business.”

And the software using ML figures all that stuff out by itself. It defines a risk rating, a score, and brings that information to the attention of the appropriate category manager and various users. So, it is very powerful technology that can number crunch and read all this content very quickly.
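To make that concrete, here is a deliberately simplified sketch, in Python, of the shape of the pipeline Harris describes: match an article to a tracked supplier, classify the risk it signals, and roll the findings into a score. The entity matching, keyword rules, weights, and names are invented for illustration; the production system relies on trained ML models rather than anything this simple.

```python
# Illustrative sketch only: not SAP Ariba's implementation. It mimics the
# general flow described above: match an article to a tracked supplier,
# classify the risk type, and roll the findings up into a score.
from dataclasses import dataclass

# Hypothetical severity weights per risk type (0..1)
RISK_WEIGHTS = {"contract_loss": 0.7, "litigation": 0.8, "environmental": 0.6}

@dataclass
class Article:
    text: str
    mentioned_company: str  # assumed output of an upstream entity-recognition step

def matches_supplier(article: Article, supplier_aliases: set[str]) -> bool:
    # Entity resolution: is the company in the article the supplier we track?
    return article.mentioned_company.lower() in supplier_aliases

def classify_risk(article: Article) -> str | None:
    # Stand-in for a trained text classifier; keyword rules keep the sketch short.
    text = article.text.lower()
    if "lost the contract" in text or "downsize" in text:
        return "contract_loss"
    if "lawsuit" in text:
        return "litigation"
    return None

def risk_score(articles: list[Article], supplier_aliases: set[str]) -> float:
    hits = [classify_risk(a) for a in articles if matches_supplier(a, supplier_aliases)]
    weights = [RISK_WEIGHTS[h] for h in hits if h]
    return max(weights, default=0.0)  # surface the most severe finding, if any
```

A real pipeline would add deduplication, source credibility checks, and routing of the score to the right category manager, as Harris notes.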

Gardner: Erin, at Maplecroft, how are such technologies as AI and ML being brought to bear, and what are the business benefits to your clients and your ecosystem?

The AI-aggregation advantage

McVeigh: As an aggregator of data, this is basically the bread and butter of what we do. We bring all of this information together, and ML and AI allow us to do it faster and more reliably.

We look at many indices. We actually just revamped our social indices a couple of years ago.

Before that you had a human who was sitting there, maybe they were having a bad day and they just sort of checked the box. But now we have the capabilities to validate that data against true sources.

Just as Emily mentioned, we were able to significantly reduce the number of human-rights analysts it takes to create an index, and allow them to go out and begin to work on additional types of projects for our customers. That helps our customers utilize the data that's being automated and generated for them.

We also talked about what customers are expecting when they think about data these days. They’re thinking about the price of data coming down. They’re expecting it to be more dynamic, they’re expecting it to be more granular. And to be able to provide data at that level, it’s really the combination of technology with the intelligent data scientists, experts, and data engineers that bring that power together and allow companies to harness it.

Gardner: Let's get more concrete about how this goes to market. Tony, at the recent SAP Ariba Live conference, you announced the Ariba Supplier Risk improvements. Tell us about the productization of this and how people interact with it. It sounds great in theory, but how does this actually work in practice?

Partnership prowess

Harris: What we announced at Ariba Live in March is the partnership between SAP Ariba, EcoVadis and Verisk Maplecroft to bring this combined set of ESG and CSR insights into SAP Ariba’s solution.

We do not yet have the solution generally available, so we are currently working on building out the integration with our partners. We have a number of common customers working with us as what we call design partners. There's ultimately no better customer than one already using these solutions from our companies. We anticipate making this available in the Q3 2018 time frame.

And with that, customers that have an active subscription to our combined solutions are then able to benefit from the integration, whereby we pull this data from Verisk Maplecroft, and we pull the CSR score cards, for example, from EcoVadis, and then we are able to present that within SAP Ariba’s supplier risk solution directly.

What it means is that users can get that aggregated view, that high-level view, across all of these different risk types and metrics in one place. However, if ultimately they want to get to the nth degree of detail, they have the ability to click through into the solutions from our partners as well, to drill right down to that level of detail. The aim here is to give them that high-level view to help them with their overall assessments of these suppliers.

Gardner: Over time, is this something that organizations will be able to customize? They will have dials to tune in or out certain risks in order to make it more applicable to their particular situation?

Customers that have an active subscription to our combined solutions are then able to benefit from the integration and see all that data within SAP Ariba’s supplier risk solutions directly.

Harris: Yes, and that's a great question. We already address that in our solutions today. We cover more than 200 risk types, and we have categorized those into four primary risk categories. The way the risk exposure score works is that the customer gets to decide how they want to weight each of the attributes that feed into that calculation.

If I have more of a bias toward the financial risk aspects, or more of a bias toward ESG metrics, for example, then I can weight that part of the score accordingly in the algorithm.
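As a hedged illustration of that weighting idea, the arithmetic amounts to a weighted blend of category scores, with the customer choosing the weights. The categories, scores, and weights below are invented; only the mechanism reflects what Harris describes.

```python
# Hypothetical example of a customer-weighted risk exposure score; the
# categories, 0-100 scores, and weights are made up for illustration.
category_scores = {
    "financial": 72,
    "environmental_social_governance": 40,
    "operational": 55,
    "regulatory": 30,
}

customer_weights = {  # the buyer biases the blend toward what matters most to them
    "financial": 0.5,
    "environmental_social_governance": 0.2,
    "operational": 0.2,
    "regulatory": 0.1,
}

exposure = sum(category_scores[c] * customer_weights[c] for c in category_scores)
print(f"Weighted risk exposure: {exposure:.1f}")  # 36 + 8 + 11 + 3 = 58.0
```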

Gardner: Before we close out, let’s examine the paybacks or penalties when you either do this well — or not so well.

Erin, when an organization can fully avail themselves of the data, the insight, the analysis, make it actionable, make it low-latency — how can that materially impact the company? Is this a nice-to-have, or how does it affect the bottom line? How do we make business value from this?

Nice-to-have ROI

Rakowski: One of the things that we’re still working on is quantifying the return on investment (ROI) for companies that are able to mitigate risk, because the event didn’t happen.

How do you put a tangible dollar value on something that didn't occur? What we can do is take data acquired over the past few years and see that, as risk reduction plays out over time, companies begin to source from more suppliers, add diversity to their supply chains, or even streamline their supply chains, depending on how they want to move forward with their risk landscape and supply diversification programs. It gives them the power to make those decisions faster and in a more actionable way.

And so, while many companies still think about data and tools around ethical sourcing or sustainable procurement as a nice-to-have, those leaders in the industry today are saying, “It’s no longer a nice-to-have, we’re actually changing the way we have done business for generations.”

And other companies are beginning to see that it's not being pushed down on them anymore from these large retailers and large organizations; it's a choice they make to do better business. They are also realizing that there's a big ROI from putting in that upfront infrastructure and having dedicated resources that understand and utilize the data. They still need to internally create a strategy and make decisions about business process.

We can automate through technology, we can provide data, and we can help to create technology that embeds their business process into it — but ultimately it requires a company to embrace a culture, and a cultural shift to where they really believe that data is the foundation, and that technology will help them move in this direction.

Gardner: Emily, for companies that don’t have that culture, that don’t think seriously about what’s going on with their suppliers, what are some of the pitfalls? When you don’t take this seriously, are bad things going to happen?

Pay attention, be prepared

Rakowski: There are dozens and dozens of stories out there about companies that have not paid attention to critical ESG aspects and suffered the consequences of a horrible brand hit or a fine from a regulatory situation. And any of those things easily cost that company on the order of a hundred times what it would cost to actually put in place a program and some supporting services and technologies to try to avoid that.

From an ROI standpoint, there’s a lot of evidence out there in terms of these stories. For companies that are not really as sophisticated or ready to embrace sustainable procurement, it is a challenge. Hopefully there are some positive mavericks out there in the businesses that are willing to stake their reputation on trying to move in this direction, understanding that the power they have in the procurement function is great.

They can use their company's resources to bet on supply-chain actors that are doing the right thing, that are paying living wages, that are not overworking their employees, and that are not dumping toxic chemicals in our rivers. These are all things that, I think, everybody is coming to realize are really a must, regardless of regulations.

Hopefully there are some positive mavericks out there who are willing to stake their reputations on moving in this direction. The power they have in the procurement function is great.

And so, it’s really those individuals that are willing to stand up, take a stand and think about how they are going to put in place a program that will really drive this culture into the business, and educate the business. Even if you’re starting from a very little group that’s dedicated to it, you can find a way to make it grow within a culture. I think it’s critical.

Gardner: Tony, for organizations interested in taking advantage of these technologies and capabilities, what should they be doing to prepare to best use them? What should companies be thinking about as they get ready for such great tools that are coming their way?

Synergistic risk management

Harris: Organizationally, there tend to be a couple of different teams inside a business that manage risk. On the one hand, there can be the governance, risk, and compliance team. On the other hand, there can be the corporate social responsibility team.

I think first of all, bringing those two teams together in some capacity makes complete sense because there are synergies across those teams. They are both ultimately trying to achieve the same outcome for the business, right? Safeguard the business against unforeseen risks, but also ensure that the business is doing the right thing in the first place, which can help safeguard the business from unforeseen risks.

I think getting the organizational model right, and also thinking about how they can best begin to map out their supply chains are key. One of the big challenges here, which we haven’t quite solved yet, is figuring out who are the players or supply-chain actors in that supply chain? It’s pretty easy to determine now who are the tier-one suppliers, but who are the suppliers to the suppliers — and who are the suppliers to the suppliers to the suppliers?

We've yet to build a technology that can figure that out easily. We're working on it; stay posted. But I think trying to compile that information upfront is great, because once you get that mapping done, our software and our partner software from EcoVadis and Verisk Maplecroft is there to surface those kinds of risks inside and across that entire supply chain.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: SAP Ariba.

You may also be interested in:


Panel explores new ways to solve the complexity of hybrid cloud monitoring

The next BriefingsDirect panel discussion focuses on improving performance and cost monitoring of various IT workloads in a multi-cloud world.

We will now explore how multi-cloud adoption is forcing cloud monitoring and cost management to work in new ways for enterprises.

Our panel of Micro Focus experts will unpack new Dimensional Research survey findings gleaned from more than 500 enterprise cloud specifiers. You will learn about their concerns, requirements and demands for improving the monitoring, management and cost control over hybrid and multi-cloud deployments.

We will also hear about new solutions and explore examples of how automation leverages machine learning (ML) and rapidly improves cloud management at a large Barcelona bank.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. 

To share more about interesting new cloud trends, we are joined by Harald Burose, Director of Product Management at Micro Focus, based in Stuttgart; Ian Bromehead, Director of Product Marketing at Micro Focus, based in Grenoble, France; and Gary Brandt, Product Manager at Micro Focus, based in Sacramento. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Let’s begin with setting the stage for how cloud computing complexity is rapidly advancing to include multi-cloud computing — and how traditional monitoring and management approaches are falling short in this new hybrid IT environment.

Enterprise IT leaders tasked with the management of apps, data, and business processes amid this new level of complexity are primarily grounded in the IT management and monitoring models from their on-premises data centers.

They are used to being able to gain agent-based data sets and generate analysis on their own, using their own IT assets that they control, that they own, and that they can impose their will over.

Yet virtually overnight, a majority of companies share infrastructure for their workloads across public clouds and on-premises systems. The ability to manage these disparate environments is often all or nothing.

The cart is in front of the horse. IT managers do not own the performance data generated from their cloud infrastructure.

In many ways, the ability to manage in a hybrid fashion has been overtaken by the actual hybrid deployment models. The cart is in front of the horse. IT managers do not own the performance data generated from their cloud infrastructure. Their management agents can’t go there. They have insights from their own systems, but far less from their clouds, and they can’t join these. They therefore have hybrid computing — but without commensurate hybrid management and monitoring.

They can’t assure security or compliance and they cannot determine true and comparative costs — never mind gain optimization for efficiency across the cloud computing spectrum.

Old management into the cloud

But there’s more to fixing the equation of multi-cloud complexity than extending yesterday’s management means into the cloud. IT executives today recognize that IT operations’ divisions and adjustments must be handled in a much different way.

Even with the best data assets and access and analysis, manual methods will not do for making the right performance adjustments and adequately reacting to security and compliance needs.

Automation, in synergy with big data analytics, is absolutely the key to effective and ongoing multi-cloud management and optimization.

Fortunately, just as the need for automation across hybrid IT management has become critical, the means to provide ML-enabled analysis and remediation have matured — and at compelling prices.

Great strides have been made in big data analysis of such vast data sets as IT infrastructure logs from a variety of sources, including from across the hybrid IT continuum.

Many analysts, in addition to myself, are now envisioning how automated bots leveraging IT systems and cloud performance data can begin to deliver more value to IT operations, management, and optimization. Whether you call it BotOps, or AIOps, the idea is the same: The rapid concurrent use of multiple data sources, data collection methods and real-time top-line analytic technologies to make IT operations work the best at the least cost.

IT leaders are seeking the next generation of monitoring, management and optimizing solutions. We are now on the cusp of being able to take advantage of advanced ML to tackle the complexity of multi-cloud deployments and to keep business services safe, performant, and highly cost efficient.

We are on the cusp of being able to take advantage of ML to tackle the complexity of multi-cloud deployments and keep business services safe.

Similar in concept to self-driving cars, wouldn’t you rather have self-driving IT operations? So far, a majority of you surveyed say yes; and we are going to now learn more about that survey information.

Ian, please tell us more about the survey findings.

IT leaders respond to their needs

Ian Bromehead: Thanks, Dana. The first element of the survey that we wanted to share describes the extent to which cloud is so prevalent today.

Bromehead

More than 92 percent of the 500 or so executives are indicating that we are already in a world of significant multi-cloud adoption.

The lion's share, or nearly two-thirds, of the population we surveyed is using between two and five different cloud vendors. But more than 12 percent of respondents are using more than 10 vendors. So, the world is becoming increasingly complex. Of course, this strains a lot of the different aspects [of management].

What are people doing with those multiple cloud instances? As to be expected, people are using them to extend their IT landscape, interconnecting application logic and their own corporate data sources with the infrastructure and the apps in their cloud-based deployments — whether they’re Infrastructure as a Service (IaaS) or Platform as a Service (PaaS). Some 88 percent of the respondents are indeed connecting their corporate logic and data sources to those cloud instances.

What’s more interesting is that a good two-thirds of the respondents are sharing data and integrating that logic across heterogeneous cloud instances, which may or may not be a surprise to you. It’s nevertheless a facet of many people’s architectures today. It’s a result of the need for agility and cost reduction, but it’s obviously creating a pretty high degree of complexity as people share data across multiple cloud instances.

The next aspect that we saw in the survey is that 96 percent of the respondents indicate that these public cloud application issues are resolved too slowly, and they are impacting the business in many cases.

Some of the business impacts range from resources tied up collaborating with the cloud vendor to try to solve these issues, to the extra time required to resolve issues impacting service level agreements (SLAs) and contractual commitments, and prolonged downtime.

What we regularly see is that the adoption of cloud often translates into a loss of transparency about what's deployed, the health of what's deployed, and how that can impact the business. This insight strongly shapes our investments and some of the solutions we will talk about. The primary concern is visibility into what's being deployed — and what depends on internal, on-premises as well as private and public cloud instances.

People need to see what is impacting the delivery of services as a provider, and whether that's due to issues with local or remote resources, or the connectivity between them. It's compounded by the fact that people are interconnecting services, as we just saw in the survey, from multiple cloud providers. So the weak part could be anywhere; it could be any one of those links. The ability for people to know where those issues are is not happening fast enough for many people, with some 96 percent indicating that issues are being resolved too slowly.

How to gain better visibility?

What are the key changes that need to be addressed when monitoring hybrid IT environments? People have challenges with discovering, understanding, and visualizing what has actually been deployed, and how it is impacting the end-to-end business.

They have limited access to the cloud infrastructure, and they face things like inadequate security monitoring, difficulties with traditional monitoring agents, and a lack of real-time metrics needed to properly understand what's happening.

It shows some of the real challenges that people are facing. As the world shifts to being more dependent on the services that they consume, traditional methods are not going to be properly adapted to the new environment. Newer solutions are needed. New ways of gaining visibility, and of measuring availability and performance, are going to be needed.

I think what's interesting in this part of the survey is the indication that the cloud vendors themselves are not providing this visibility. They are not providing enough information for people to be able to properly understand how service delivery might be impacting their own businesses. In effect, IT is flying blind in the clouds, as it were.

The cloud vendors are not providing the visibility. They are not providing enough information for people to be able to understand service delivery impacts.

So one of my next questions was: across the different monitoring types, what's needed for the hybrid IT environment? What should people be focusing on? Security infrastructure, better visibility, end-user experience monitoring, service delivery monitoring, and cloud costs all ranked highly among what people believe they need to be able to monitor. Whether you are a provider or a consumer (and most people end up being both), monitoring is really key.

People say they really need to span infrastructure monitoring and metrics monitoring, and gain end-user, security, and compliance visibility. But even that's not enough, because to properly govern service delivery you are going to have to have an eye on the costs — the cost of what's being deployed — and on how you can optimize the resources according to those costs. You need that analysis whether you are the consumer or the provider.

The last of our survey results shows the need for comprehensive enterprise monitoring. People need things such as high availability, automation, and the ability to cover all types of data in order to find root causes, even from a predictive perspective. Clearly, people here expect scalability, and they expect to be able to use a big data platform.

Consumers of cloud services should be measuring what they are receiving and be capable of seeing what's impacting the service delivery. No one is really so naive as to say that the infrastructure is somebody else's problem. When it's part of the service, equally impacting the service that you are paying for and that you are delivering to your business users, then you had better have the means to see where the weak links are. That should be the minimum to seek, but you also still need the means to prove to your providers when they're underperforming and to renegotiate what you pay for.

Ultimately, when you are stitching such composite services together, IT needs to become more of a service broker. We should be able to govern these aspects and detect when the service is degrading.

So when that service degrades, workers' productivity is going to suffer, and the business will expect IT to have the means to reverse that quickly.

So that, Dana, is the set of the different results that we got out of this survey.

A new need for analytics

Gardner: Thank you, Ian. We’ll now go to Gary Brandt to learn about the need for analytics and how cloud monitoring solutions can be cobbled together anew to address these challenges.

Gary Brandt: Thanks, Dana. As the survey results outlined and as Ian described, there are many challenges and numerous types of monitoring for enterprise hybrid IT environments. With the variety and volume of data generated in these complex hybrid environments, humans simply can't look at dashboards or use traditional tools and make sense of the data efficiently. Nor can they take the necessary actions in a timely manner, given the volume and complexity of these environments.

Brandt

So how do we deal with all of this? It's where analytics, advanced analytics via ML, really brings value. What's needed is a set of automated capabilities such as those described in Gartner's definition of AIOps. These include traditional and streaming data management, log and wire metrics, and document ingestion from many different types of sources in these complex hybrid environments.

Dealing with all of this, when you are not quite sure where to look and you have all this information coming in, requires advanced analytics and some clever artificial intelligence (AI)-driven algorithms just to make sense of it. This is where Gartner is trying to guide the market and show where the industry is moving. The key capabilities they speak about are analytics that allow for prediction and for finding anomalies in vast amounts of data, and that then help pinpoint the root cause, or at least eliminate the noise so you can focus on those areas.
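One small, generic example of what such anomaly detection can look like at its core, well short of what commercial AIOps platforms do, is a rolling statistical baseline over a metric stream that flags points far outside recent behavior. The window size and threshold here are arbitrary assumptions.

```python
# Minimal sketch of streaming anomaly detection of the kind AIOps tooling
# applies at far larger scale; the window size and threshold are arbitrary.
from collections import deque
from statistics import mean, stdev

def detect_anomalies(samples, window=30, threshold=3.0):
    """Yield (index, value) for points far outside the recent baseline."""
    history = deque(maxlen=window)
    for i, value in enumerate(samples):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                yield i, value  # candidate anomaly for an operator (or a bot) to triage
        history.append(value)
```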

We are making this Gartner report available for a limited time. What we have also found is that people often don't have the time or the skill set to deal with these activities; they need to focus on the business user and on the different issues that come up in these hybrid environments. The AIOps capabilities that Gartner speaks about are great.

But without the automation to drive the activities or the response that needs to occur, there is a missing piece. Looking at our survey results and what our respondents said, it was clear that respondents in the high-90 percent range are telling us that automation is considered highly critical. You need to see which event or metric trend clearly impacts a business service, and whether that service pertains to a local, on-premises solution or a remote solution in a cloud somewhere.

Automation is key, and that requires a degree of service definition and dependency mapping, which really should be automated, both so it can be declared more easily and, more importantly, so it can be kept up to date, because in these complex environments things are changing so rapidly.

Sense and significance of all that data?

Micro Focus' approach uses analytics to make sense of the vast amount of data coming in from these hybrid environments and to drive automation. The automation of discovery, monitoring, and service analytics is really critical — and it must be applied across hybrid IT, against your resources, and mapped to the services that you define.

Those are the vast amounts of data we just described. They come in the form of logs, events, and metrics, generated from lots of different sources in a hybrid environment across cloud and on-premises. You have to begin to use analytics, as Gartner describes, to make sense of that, and we do that in a variety of ways, using ML to learn the behavior of your environment in this hybrid world.

And we need to be able to suggest what the most significant data is, what the significant information in your messages is, to really help find the needle in the haystack. When you are trying to solve problems, we have analytics capabilities that provide predictive learning to operators, giving them the chance to anticipate and remediate issues before they disrupt the services in a company's environment.

When you are trying to solve problems, we have capabilities through analytics to provide predictive learning to operators to remediate issues before they disrupt.
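The predictive side can be illustrated, again only as a toy, by fitting a trend to recent utilization samples and estimating when a threshold will be crossed, which is the kind of early warning described above. Real platforms use far richer models; the threshold, sampling interval, and sample values here are invented.

```python
# Toy "predictive" check: fit a straight line to recent utilization readings
# and estimate how long until a threshold is breached. Needs at least two
# samples with an upward trend; all numbers are invented for illustration.
def hours_until_breach(samples, threshold=90.0, interval_hours=1.0):
    n = len(samples)
    xs = range(n)
    x_mean, y_mean = sum(xs) / n, sum(samples) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, samples)) / \
            sum((x - x_mean) ** 2 for x in xs)
    if slope <= 0:
        return None  # flat or improving trend: no predicted breach
    return (threshold - samples[-1]) / slope * interval_hours

print(hours_until_breach([70, 72, 75, 77, 80]))  # 4.0 hours at this rate
```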

And then we take this further, because we have the analytics capability described by Gartner and others. We couple that with the ability to execute different types of automation, as a means to let the operator and the operations team spend more time on what's really impacting the business and get to the issues quicker, rather than spending time searching and sorting through that vast amount of data.

And we built this on different platforms. One of the key things that’s critical when you have this hybrid environment is to have a common way, or an efficient way, to collect information and to store information, and then use that data to provide access to different functionality in your system. And we do that in the form of microservices in this complex environment.

We like to refer to this as autonomous operations, and it's part of our OpsBridge solution, which embodies a lot of different patented capabilities around AIOps. Harald is going to speak to our OpsBridge solution in more detail.

Operations Bridge in more detail

Gardner: Thank you, Gary. Now that we know more about what users need and consider essential, let’s explore a high-level look at where the solutions are going, how to access and assemble the data, and what new analytics platforms can do.

We’ll now hear from Harald Burose, Director of Product Management at Micro Focus.

Harald Burose: When we listen carefully to the different problems that Ian was highlighting, we actually have a lot of those problems addressed in the Operations Bridge solution that we are currently bringing to market.

Burose

All core use cases for Operations Bridge tie into the underpinning Vertica big data analytics platform. We're consolidating all the different types of data that we are getting, whether business transactions, IT infrastructure, application infrastructure, or business services data — all of that is actually moved into a single data repository and then reduced in order to understand what the original root cause is.

And from there, these tools like the analytics that Gary described, not only identify the root cause, but move to remediation, to fixing the problem using automation.

This all makes it easy for the stakeholders to understand what the status is and provide the right dashboarding, reporting via the right interface to the right user across the full hybrid cloud infrastructure.

As we saw, some 88 percent of our customers are connecting their cloud infrastructure to their on-premises infrastructure. We are providing the ability to understand that connectivity through a dynamically updated model, and to show how these services are interconnecting — independent of the technology — whether deployed in the public cloud, a private cloud, or even in a classical, non-cloud infrastructure. They can then understand how things are connecting, and they can use the toolset, a modern HTML5-based interface, to navigate through it all and look at all the data in one place.

They are able to consolidate more than 250 different technologies and information into a single place: their log files, the events, metrics, topology — everything together to understand the health of their infrastructure. That is the key element that we drive with the Operations Bridge.

Now, we have extended the capabilities further, specifically for the cloud. We basically took the generic capability and made it work specifically for the different cloud stacks, whether a private cloud, your own stack implementations, a hyperconverged (HCI) stack like Nutanix, or a Docker container infrastructure that you bring up on a public cloud like Azure, Amazon, or Google Cloud.

We are now automatically discovering and placing that all into the context of your business service application by using the Automated Service Modeling part of the Operations Bridge.

Now, when we integrate those toolsets, we tightly integrate them with the native tools on Amazon or with the Docker tools, for example. You can include these tools so that you can then automate processes from within our console.

Customers vote a top choice

And, best of all, we have been getting positive feedback from the cloud monitoring community, from the customers. That feedback helped earn us a Readers' Choice Award from Cloud Computing Insider in 2017, ahead of the competition.

This success is not just about getting the data together, using ML to understand the problem, and using our capabilities to connect these things together. At the end of the day, you need to act on the activity.

Having a full-blown orchestration capability within OpsBridge provides more than 5,000 automated workflows, so you can automate different remediation tasks, or potentially kick off future provisioning tasks, to solve whatever problems you can imagine. You can use this not only to identify the root cause, but also to automatically kick off a workflow that addresses the specific problem.
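The idea of turning a diagnosed root cause into an automated response can be sketched as a simple mapping from cause to workflow. The workflow names and trigger function below are hypothetical illustrations of the pattern, not the OpsBridge or Operations Orchestration API.

```python
# Hypothetical glue code showing the cause-to-workflow pattern; the names
# and the trigger mechanism are invented, not a vendor API.
REMEDIATION_PLAYBOOK = {
    "disk_full": "cleanup-and-expand-volume",
    "service_down": "restart-service-and-verify",
    "cert_expiring": "renew-tls-certificate",
}

def remediate(root_cause: str, target_host: str) -> str:
    workflow = REMEDIATION_PLAYBOOK.get(root_cause)
    if workflow is None:
        return f"No automated workflow for '{root_cause}'; routing to an operator."
    # In a real deployment this would call the orchestration engine's API.
    return f"Triggered workflow '{workflow}' on {target_host}."

print(remediate("disk_full", "app-server-07"))
```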

If you don’t want to address a problem through the workflow, or cannot automatically address it, you still have a rich set of integrated tools to manually address a problem.

Having a full-blown orchestration capability with OpsBridge provides more than 5,000 automated workflows to automate many different remediation tasks.

Last, but not least, you need to keep your stakeholders up to date. They need to know, anywhere that they go, that the services are working. Our real-time dashboard is very open and can integrate with any type of data — not just the operational data that we collect and manage with the Operations Bridge, but also third-party data, such as business data, video feeds, and sentiment data. This gets presented on a single visual dashboard that quickly gives the stakeholders the information: Is my business service actually running? Is it okay? Can I feel good about the business services that I am offering to my internal as well as external customer-users?

And you can have this on a network operations center (NOC) wall, on your tablet, or on your phone — wherever you'd like to have that type of dashboard. You can easily create those dashboards using Microsoft Office toolsets, producing graphical, very appealing dashboards for your different stakeholders.

Gardner: Thank you, Harald. We are now going to go beyond just the telling, we are going to do some showing. We have heard a lot about what’s possible. But now let’s hear from an example in the field.

Multicloud monitoring in action

Next up is David Herrera, Cloud Service Manager at Banco Sabadell in Barcelona. Let’s find out about this use case and their use of Micro Focus’s OpsBridge solution.

David Herrera: Banco Sabadell is the fourth-largest Spanish banking group. We had a big project to migrate several systems into the cloud, and we realized that we didn't have any kind of visibility into what was happening in the cloud.

Herrera

We are working with private and public clouds, and it's quite difficult to correlate the information in events and incidents. We need to aggregate this information in just one dashboard. And for that, OpsBridge is a perfect solution for us.

We started to develop new functionalities on OpsBridge, to customize for our needs. We had to cooperate with a project development team in order to achieve this.

The main benefit is that we have a detailed view of what is happening in the cloud. In the dashboard we are able to show availability and the number of resources that we are using — almost in real time. We are also able to show the real-time cost of every resource, and we can even do a projection of the cost of those items.

The main benefit is we have a detailed view about what is happening in the cloud. We are able to show what the cost is in real time of every resource.

[And that's for] every single item that we have in the cloud now, across both the private and public cloud. The bank has invested a lot of money in this solution, and we need to show them that migrating several systems to the cloud is really a good choice in economic terms, and this tool will help us with that.
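As a rough sketch of the cost projection Herrera mentions, month-to-date spend per resource can be extrapolated to a full-month figure. The resource names, amounts, and date below are invented; real tooling would pull live billing data from each cloud provider.

```python
# Naive cost projection: extrapolate month-to-date spend to a full month.
# Resource names, spend figures, and the date are hypothetical.
import calendar
from datetime import date

def projected_month_cost(spend_to_date: float, today: date) -> float:
    days_in_month = calendar.monthrange(today.year, today.month)[1]
    return spend_to_date / today.day * days_in_month

resources = {"vm-core-banking-01": 412.50, "sql-managed-instance": 980.00}
for name, spent in resources.items():
    print(name, round(projected_month_cost(spent, date(2018, 5, 10)), 2))
# vm-core-banking-01 1278.75
# sql-managed-instance 3038.0
```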

Our response time will be reduced dramatically because we are able to filter and find what is happening, and call the right people to fix the problem quickly. The business department will understand better what we are doing because they will be able to see all the information, and also select information that we haven't gathered. They will be more aligned with our work, and we can develop and deliver better solutions because we will also understand them better.

We were able to build a new monitoring system from scratch that doesn't exist on the market. Now, we are able to aggregate a lot of detailed information from different clouds.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Micro Focus.

You may also be interested in:


How HudsonAlpha transforms hybrid cloud complexity into an IT force multiplier

The next BriefingsDirect hybrid IT management success story examines how the nonprofit research institute HudsonAlpha improves how it harnesses and leverages a spectrum of IT deployment environments.

We’ll now learn how HudsonAlpha has been testing a new Hewlett Packard Enterprise (HPE) solution, OneSphere, to gain a common and simplified management interface to rule them all.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to help explore the benefits of improved levels of multi-cloud visibility and process automation is Katreena Mullican, Senior Architect and Cloud Whisperer at HudsonAlpha Institute for Biotechnology in Huntsville, Alabama. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What’s driving the need to solve hybrid IT complexity at HudsonAlpha?