Business unusual: How the Dell-EMC merger sends shockwaves across the global storage market

The next BriefingsDirect IT market analysis discussion explores customer impacts to the global storage market now that the $67 billion Dell-EMC merger deal appears imminent. The proposed merger, which also includes EMC’s majority control of VMware, has been controversial from the start.

A massive and complex financing apparatus, largely built on private equity debt, undergirds the deal, with privately held Dell taking over the publicly traded EMC and VMware federation. This largest IT vendor deal ever is expected to close sometime between now and October 2016.

While EMC CEO and Chairman Joe Tucci has assured the storage and IT infrastructure market that the mega deal means business as usual, many observers, including analysts from Gartner, take a different view.

We’re now joined by two storage industry experts to explore how consumers of storage infrastructure can best prepare for the expected storage shockwaves from the Dell takeover of EMC and VMware.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To help us sort through the unknown unknowns of such an unprecedented business merger in IT, please welcome Jorge Maestre, Competitive Strategist, Global Storage at Hewlett Packard Enterprise (HPE), and Craig Rice, Business Architect at Integris Solutions Group. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Even before this Dell acquisition of EMC was announced back in October, the storage market has been undergoing significant transformation. What have been the trends impacting the global storage business, and why did they prompt such an unprecedented merger in the first place?

Maestre: That’s a good question. Obviously, we start with flash storage. With flash as a focal point of primary storage in the data center, the technologies have evolved. Well, in the case of flash, we’re not talking about an evolution; we’re actually talking about a revolution. It has completely trumped what you have with spinning-disk media storage.


You saw a lot of different opportunities for a lot of different vendors to jump in here and be the first with flash. EMC didn’t have a head start technically. That hurts, when you have vendors like Pure Storage, or ourselves at HPE obviously, or some of these other names like SolidFire and Kaminario.

And as companies are consolidating their primary storage into these flash footprints, which can be hyper-dense now, what we found is that other [infrastructure] technologies have emerged. These technologies and these trends have been here for a while, but now … they are very complementary to primary storage. You now have use cases in your data center where you can take advantage of things like hyper-converged or software-defined, or even just reinvest in file.

Now, you’re looking at a data center that needs a complete picture. For all of EMC’s bravado, for all of their product set, for all of their ability to sell, the complete picture from them hasn’t always looked pretty.

We saw the result, which is the constant revenue decline. I think they’re in consecutive quarters of revenue decline, and in some cases, they’ve taken a pretty bad hit. They’ve lost the midrange. The number-one product in the midrange is the HPE 3PAR. They lost that segment, and that was their staple.

They’ve seen VMAX revenue decline by 50 percent or more in the last few years, and so it has painted this picture of this huge conglomerate, monolithic company, maybe losing its way. The merger came at the right time.

Rice: I think flash is a contributing component here, but the catalyst that’s causing the greatest amount of disruption is shareholder value. Let’s take a look at what’s transpired over the past year.


We have an activist investor [Elliott Management] that’s been pressuring EMC for quite some time to divest itself of VMware. VMware is a catalyst that adds value to their storage arrays. We look at other organizations, such as NetApp, and how they had to acquire SolidFire.

We have companies such as Pure, an upstart that’s done maybe $200 million in sales and is an innovation leader.

When you look at this, the whole challenge, the true disruption in storage, in the IT market then stems from shareholder value. What uniqueness do any of these mergers and acquisitions bring to the end-user customer? How does a technology change, or an innovation of flash, drive value not to IT, but to the lines of business? That’s what we’ve been seeing here at Integris.

Business motivations

Gardner: So clearly, there are business motivations from EMC and Dell that might not necessarily be the same motivations that their customers are facing.

EMC Chairman Joe Tucci tells everyone it’s business as usual, don’t be worried, but a recent Gartner research report, Dell’s Acquisition of EMC Will Impact Storage Customers (10 March 2016), concludes that this will impact customers of both vendors, no question. Gartner suggested it could take two to four years for the storage market to settle out and for more clarity to emerge.

So, what are some of the biggest risks, as you see it, Jorge, for storage customers, with this deal in the works?

Maestre: We have to take a look at what EMC is telling people and what other people who are doing investigations on their own are finding, and we’re seeing that those contradict one another. … EMC’s business customers, their business partners, might be in a state of confusion, but I think storage is pretty solid in general.

There have been CRN articles, there have been Register articles that talk about what EMC is telling everyone, “This is going to be great. This is going to go well. The two companies combine. They’re going to be $80 billion. We’re $80 billion with a revenue, blah, blah, blah, blah.” The reality is that’s not going to be the case. Both companies are seeing revenues continue to decline.

As they merge, there’s probably not going to be much overlap. Just take a look at the storage portfolio; you’re not going to see a lot of overlap there. EMC is going to announce a new product [in May 2016] that everyone is expecting to jump into that entry-level space. So, they’re probably going to create a displacement for Dell Compellent.

And, of course, Dell is telling people, “No matter what happens, we’ll still support Compellent for five years.” That’s pretty much saying, “This product is dead.” Most people agree that that’s going to happen.

From a product set perspective you’re not going to see too much craziness. You’re going to have the same EMC salespeople selling the same stuff. They’re going to be selling servers too now, which could be a good thing or a bad thing, depending on where you come from.

But what we’re not seeing, and what we’re not going to see, is any type of growth. There is no way there’s going to be any growth. They’re talking about cutting $2 billion worth of expenses just to pay for this $67 billion deal. That’s a huge number. Cutting your expenses that much in order to show an increase in revenue, assuming you don’t lose any customers, or lose any executives, as this merger becomes complete, is just a huge risk.

Not going to happen

It’s not even a risk; it’s an uncertainty. There’s no way it’s going to happen. I think there is a CRN article that talks about this. In order for them to actually show revenue growth, they would have to see a seven percent improvement on top of the roughly $74 billion in combined revenue the two companies have today.

That’s crazy — seven percent on top of that and from two companies whose revenues continue to decline? How is this merger all of a sudden going to stop the revenue decline, turn it around, and bring it up seven percent? We could talk about financials all day, but you have to have a compelling product set for that. They don’t have it. I just don’t see it.

Rice: I’d like to emphasize one thing that Jorge said. I have some unique insight. We are a partner that used to be exclusively EMC. We’ve seen the writing on the wall. We’ve been working with HPE and transitioning over. We have a lot of good friends that have worked at EMC for 10, 12, 15 years, and in that highly competitive sales force environment of EMC, that’s a lifetime. These key leadership positions from district managers, area managers, and engineers are leaving the company in droves.

Why are they leaving if this is such a good deal and things are going forward? I have customers asking, “What happened to Bob Smith? He has been our rep or our district manager for 10, 12 years. Why did he leave, and where did he go?” I think that just lends credence to what Jorge is saying.

Gardner: We certainly have two very big and different cultures here. EMC has always been focused on the enterprise, on large companies, with an aggressive, very involved sales force. Dell, on the other hand, has focused more on the mid-tier, with a largely self-service culture, where customers are encouraged to buy at a commodity level.

So what does that mean for enterprises? Are they going to see the Dell culture come to the EMC market or will the EMC market go to the Dell tier? How do you see these cultures melding, particularly in sales, that inflection point between the customer and the vendor? Jorge?

Maestre: Here’s the thing. This gets a little frustrating because we’re dealing with the greatest sales spin marketing company of all time. EMC is the Michael Jordan of sales spin, marketing, and everything else. Maybe not so much in product delivery and all that other stuff, but the reality is that these guys know how to talk the game.

It’s like everyone went to the Don King school of selling. They can just promote, promote, promote all day. They do a good job of being Don King-like, every single one of them. For those who don’t remember, Don King was a huge boxing promoter in the ‘80s; Google him.

So, they are all that and they are good at that. For me, it’s very frustrating, because there is nothing there. We take a look at the revenues, the product sets, and there’s just nothing here. You’re looking at two completely different product sets. There’s nothing compelling about it.

Now take that a step further. Why are people so interested in this? Why is everyone in love with this merger? The reality is because people love EMC. It’s the badge, it’s the sales badge, it’s the resources, and it’s the fact that they make you feel good. They come to your house. They make you hot cocoa. They tuck you in at night. That’s what they do. That’s how you sell. They’re great at it. Nobody does it better than them. They literally set a bar of selling that no other vendor has even attempted to approach. You have to tip your hat.

Gardner: How will that change, Jorge?

It takes resources

Maestre: Well, that’s just it. It takes resources. You have to invest in that. You have to put a lot of money behind it. You have to create a huge support infrastructure. To put it in perspective, take a look at how each company invests in R&D. Dell’s public numbers are somewhere in the area of 10 percent. EMC’s are somewhere in the area of 25 percent; it might be a little more than that.

Think about their resources. EMC is a resource-heavy company. Dell is a very lean company. They’re very much an assembly-line company. Let’s push it out here, and we’ll make our revenue through volume, and don’t worry about the margins. That’s what they’ve shown. It’s almost contradictory cultures, contradictory selling styles, and now you have to put them together.

There’s an ESG report that surveyed EMC customers and asked how they feel about this. Seventy-five percent of the people who responded said, “We’re fine; nothing is going to change.” That’s crazy.

[The report] is actually telling you that 25 percent of those people are concerned. Twenty-five percent is a big number among EMC loyalists. That’s a huge number, and we have to consider that.

At the end of the day, when this is all completed, those 25 percent are right to be nervous. Dell didn’t simply raise $45 billion; it’s not like they went to the bank and asked for a $45 billion mortgage. They actually raised $45 billion in private equity.

That means they don’t even get full say over how the money gets spent. I’m sure they had to show game plans and show how things are going to work to get the money. So, of course they had a plan. And of course the private equity investors had no problem with it. They bought into the plan when they gave the money, but they still have to get a return on it.

And that means you’re not going to be resource-heavy the way EMC is today. You’re not going to invest in your business the way EMC does today. You have no choice; you have to recoup it. So if we see the data, it’s already there. Dell has told people they have to cut expenses by $2 billion a year. How can you be resource-rich, resource-heavy, the way EMC is today and cut $2 billion in expenses? You just can’t. You can’t have it both ways. It’s one or the other; there’s no way around this. There are a lot of EMC customers out there who are due for a major wake-up call.

Gardner: Craig, Jorge said the halcyon days of EMC sales are coming to an end, that they won’t be spending the money to maintain that sales force. Is that what you’re seeing, and what’s wrong with going to the Dell model of a straightforward, information-based, order-it-online approach to storage?

Assembly-line model

Rice: We’re seeing that. Like I mentioned earlier, Dana, there are a lot of people who have been long-term tenured, the soul of EMC, and they’re leaving the organization. There’s nothing wrong with going to the streamlined assembly-line model. I hope they do it and I hope they do it successfully, because what that means for a partner like us that’s focused on HPE is that they’re taking value out of the equation.

Their buyers are going to come to Dell-EMC and they’re going to buy solely on price. Going to Jorge’s point, in raising $45 billion in private equity, you have to do an awful lot of volume to pay back those types of people.

When you start to add value and you understand the customers’ business the way we and other HPE partners do, because of the portfolio HPE has, it becomes a night-and-day difference: who can give a business the ability, the technology, and the partnership to grow from 10 percent of their market share to 20 percent to 30 percent? I don’t know many businesses that just want the low price and don’t want value and don’t want a partner to help them grow their business. The Dell-EMC model is not that.

Gardner: It seems that Dell is taking a risk by not having a more sophisticated approach to sales if that’s what they need to do.

Rice: Oh, 100 percent. They’re not just taking a risk; they’re betting the whole company. They’re putting everything on black. That would be concerning to me if I were a customer looking at that. They’re going to be so debt-heavy, so focused on storage without innovation on compute. Storage doesn’t stand alone; you have all these applications, all these business processes that rely on compute.

What type of innovation are they going to do? Let’s make that even a little bit cloudier. You’re not going to do any innovation, yet you sell a lot of servers because you’re a volume-based business, and yet you have a partnership with a competitor. So there’s competition with Cisco, which also sells compute.

Now, how can the two of you offer what customers need? How can they bring out a product like Apollo or Moonshot? You need to do more than just innovate on storage; you need to innovate across the whole IT spectrum.

I don’t see them doing that because they’re going to be so debt-heavy, so laden, that they have to trim all these costs and expenses, and by the way, they have to do an awful lot of volume. If you’re doing volume, you can make the best little widget, whatever that widget is, but how do you bring out that next product line, how do you impact the market, how do you change the industry, how do you bring out something like what HPE is doing with composable infrastructure? Where is that innovation in the Dell model?

Gardner: Clearly, this is not business as usual in the new sales force. So how can organizations that might be heavily EMC-oriented, or smaller organizations that use a lot of Dell, protect themselves? They can hope for the best, hope that things don’t change for them, but what assurances can they put in place so that no matter what happens with Dell and EMC, they can, as an enterprise, continue to do business as usual?

Stay or move

Maestre: That’s a good question. For the Dell customers, the product set makes it easy either to stay or to move to something else. If you choose to stay with the new Dell-EMC, there are a million ways to graduate from Dell into EMC’s portfolio, and of course, there are a million ways to get off Dell’s portfolio altogether. So those customers are relatively safe. I think it’s relatively low risk.

The challenge … is not going to be technical, but it’s certainly going to be relationship-wise, and I don’t mean to disparage Dell. If it comes across disparaging, let me apologize up front for it, but Dell isn’t necessarily known for being a relationship company. You may have business processes in place, you may have contracts in place, things you get at a certain dollar-per-gig or at a certain price point. There is some risk to that, but that happens in business every day anyway. So, very little risk.

Let’s flip it over to the harder question, which are EMC shops. Forget that I work for HPE or anybody else. EMC products may work, but there’s no question that it takes a lot longer to get those things set up and in place than other vendors’ products.

So, you’ve now not just made a financial investment but you have a significant time investment, a significant training investment. That’s a lot trickier. If you’re not happy with this new combined Dell-EMC entity, if you’re not happy with the direction, if you’re not happy with the products that you are going to get going forward, you have a long road ahead of you. You’re going to have to talk to some vendors and you’re going to have to figure out how to migrate off. You have to figure out what your direction is. I would give those customers the same advice I give any customer.

What’s your plan? What does your world look like in three years, in two years, in one year, whatever it is? Tell me what Utopia looks like, give me that, and then we’ll figure out how to make the technology fit that. I don’t think those customers should be making concessions for the technology or the technology vendor or the technology vendor story. They should be making those vendors either deliver, or move on to a vendor who can. Those are the conversations they have to start having.

In a way, this is an opportunity. EMC customers who invested in a lot of infrastructure can now look around and say, “Maybe this is an opportunity for me to shrink my infrastructure, to take advantage of the fact that it’s a buyer’s market in storage, take advantage of flash, take advantage of all these different things, and see what I can do to restart my infrastructure and get me closer to what my dream vision of my data center would look like.”

It could be a long and winding road. You may want partner companies. Partners are critical. The one thing that everyone has in common is Dell. HPE, IBM, no matter who you’re talking to, they’re all talking about partners and how important partners are. This is the best time in the world to lean on those partners and say, “Guys, help me navigate through this.”

The challenge is finding an impartial or unbiased partner. Everybody works with one specific vendor, and in that way, they’re just an expansion of the vendor, but there are a lot of good ones out there. This is the best time to lean on those guys if for nothing else, then just to get their consultative advice.

Gardner: What’s the time frame here, Jorge? It seems to me that we’re at a point where business agility means getting new systems in place to accommodate things like user experience, big data, and Internet of Things (IoT). These are driving change very rapidly. Waiting two or three years seems to me a very long time for making strategic decisions.

Maestre: Let’s put that in perspective. The only constant in this industry is change. Things are always going to get better. Social media has created an endless stream of data that’s being written all the time. IoT exacerbates that. Change is always going to be here.

Every vendor is always going to be changing in some way, shape, or form, always going to be evolving. They have to. Otherwise, they’re going to be left behind. Craig brought up a good point earlier about NetApp and what happened at NetApp. Now, they are buying SolidFire, and that’s like their fifth or sixth different attempt at getting into the flash market.

So, you’re certainly looking at a world where you can’t just stand still. Either you stay in front of it or you’re going to get left behind. The issue for EMC customers over the next two or three years is not so much the roadmap or the combined product set. Everybody agrees that there is very little overlap. No one wants to disrespect Dell here, but the reality is that there is very little in the storage world that Dell has that isn’t going to be replaced by EMC, and EMC doesn’t sell servers.

Real questions

Sure, there are some questions around VCE and Vblock, but is that going to be their investment? Why would you continue to partner with Cisco Unified Computing System (UCS) when you have servers already? Those are real questions, and that’s probably one of the points that Craig made so well before.

But the reality is that that’s not where you’re going to feel the pain in the next two or three years. Where you’re going to feel the pain is in the thing that made EMC special: the fact that they make you feel good about your purchase, that they support you, and that they deliver what they say.

Your EMC service depends on people who are going to leave, the point Craig made earlier, and not just on the people who are going to leave, but on which processes will survive. You’d have to be a blind fool to go into this thinking that nothing is going to change; that’s ridiculous. Of course something is going to change. Even if everything worked out the way Joe Tucci said it would, there would still be a lot of change; there would still have to be some concessions. So there’s no question that things are going to change.

So the one thing that made EMC great, taking all of those steps to deliver what they promised while making it feel painless and making you feel good, is exactly where you’re going to feel the pain for the next two or three years. Craig nailed it. It’s going to take about two or three years for them to sort all that out. That’s where the problem lies and that’s where this is going to impact customers. That’s why now is a good time to start thinking about going elsewhere, or at least looking in another direction.

Gardner: How do you as a storage consumer get assurance of reducing your risk, given this complex deal?

Rice: The best thing is to make competition a key component. I’ve read reviews from a couple of well-known organizations that say to get it in writing. I worked at EMC for a while, and at HPE for a while, over the past decade. Prior to this change, a lot of salespeople would always do that get-it-in-writing thing: “Mr. Customer, I guarantee this.” When they leave, what good is that guarantee? It’s a publicly traded company; an individual can’t commit to that in writing. Will Dell and EMC do that going forward? I don’t know.

The best way to keep them honest is to find a partner, such as Integris (there are many other good partners as well), and evaluate some competing technologies. Competition keeps everyone honest. That’s the simplest, most efficient, and least disruptive way for a prospective customer to decide: Do I want to go with Dell-EMC? Do I want to go with HPE? Do I want to go with anyone else? Bring competition in with a partner who can evaluate what each vendor has to offer.

Reducing risk

Maestre: Find good partners you can work with. Integris is one, and there are others, but find partners who will take care of you and have your best interests at heart, whose interests aren’t aligned with another vendor’s.

It’s great that they resell a vendor’s product, but the best partners have expertise across multiple vendors, and that’s what you want to look for. That’s important.

The other thing is to have a plan, make a plan. One thing I know about HPE in terms of the enterprise is that we absolutely make the best product. I don’t have to give you a commercial to buy my stuff. I know that we have the best product, and you’ll wind up here eventually.

Consider your perfect data center, think it through, write it down, and then start talking to people. The people who can fit your vision are the ones you want to talk to. Don’t worry about what somebody else is saying, marketing, or highlighting. The people who ask you to make concessions to fit their product set are probably the ones you want to walk away from. That’s the best way to reduce risk: essentially, invest in yourself.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


Capgemini and HPE team up to foster needed behavioral change that bolsters cyber security across application lifecycles

The next BriefingsDirect discussion explores improving cyber security in applications across their entire lifecycles. Such trends as the Internet of Things (IoT), hybrid cloud services, mobile-first, and DevOps are increasing the demands and complexity of the overall development process.

Key factors to improving both development speed and security despite these new challenges include new levels of collaboration and communication across formerly disparate teams — from those who design, to coders, to testers, and on to continuous monitoring throughout operations. The result is security being integrated into software design, even as the pressure builds to bring more apps to market faster.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

We’re here now with two experts from a Capgemini and Hewlett Packard Enterprise (HPE) Alliance to learn how to create the culture, process, and technologies needed to make and keep today’s applications as secure as possible.

Please join me now in welcoming our guests, Gopal Padinjaruveetil, Global Cyber Security Strategist for Capgemini, and Mark Painter, Security Evangelist at Hewlett Packard Enterprise. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Let’s start with you Gopal. What do you see as some of the top trends that are driving the need for improved security for applications? It seems like we’re in the age of “continuous everything” across the entire spectrum of applications.

Padinjaruveetil: Let me talk about a few trends with some data and focus on why application security is going to become more and more important as we move forward.

There’s a report saying that there will be 50 billion connected devices by 2020. There was also a Cisco report that said 92 percent of connected devices today are vulnerable. And an HPE study that came out last year said that 80 percent of attacks now happen at the application layer.

If you put together these three diverse data points from three different sources, we can expect roughly 37 billion vulnerable devices in 2020. That’s very interesting: 37 billion vulnerable devices in 2020. We need to change the way we develop software.

Key trend

The other key trend that we’re seeing is that agility is becoming a prime driver in application development, where the business would like to have functionality as early as possible. So the whole agile development methodology driving agility is becoming key, and that’s posing some unique problems.


The other thing that we’re seeing from a trend perspective is that apps and data are moving out of the enterprise landscape. So the concepts of mobile-first, free the data, free the app, and the cloud movement are major trends that affect application security and how applications are developed and delivered.

The other trend is regulation. In many critical industries, regulations are becoming very strict in response to cyber crime and advanced actors. We’re seeing nation states and other advanced actors coming into the game, and advanced persistent threats becoming a reality. That’s adding another dimension to application security.

Last, but not least, we see a big shortage of cyber security talent in the market. Those are the trends that drive the need to look at application security differently, with a lifecycle approach.

Gardner: Mark, anything to offer in terms of trends that you are seeing from HPE, perhaps getting more involved with security earlier in the process?

Painter: Gopal gave a very good and very thorough answer and he was dead-on. As he said, 80 percent of attacks are aimed at the application layer. So it actually makes sense to try to prevent those vulnerabilities.


We propose that people implement application security during the development cycle, precisely because that’s where you get the most bang for your buck. You need to do things across the entire lifecycle, and that includes production, but if you can shift left and stop vulnerabilities as early as possible, you save so much money in the long run if you are attacked.

We do a study in conjunction with the Ponemon Institute every year, and every year since 2010 it has shown that attacks are increasing in frequency, are harder to find, and are increasingly costly to remediate. So it’s the right way to do it. You have to bake security in; you can’t simply brush it on.

Gardner: And with the heightened importance of user experience and the need for moving business agility through more rapid iterations of software, is it intuitive to conclude that more rapid development makes it more challenging for security, or is there something about doing rapid iterations and doing security that somehow can go hand in hand, a continuous approach? Gopal, any thoughts?

Rapid development

Padinjaruveetil: There’s a need for rapid application development, because we’re seeing a lot of innovation coming, and we welcome that. But the challenge is, how do you do security in a rapid world?

There is no room for error. One of the big trends here is IoT. One of the things I tell my clients is that traditional IT operates in a purely virtual world. But when you talk about operational technology (OT), we’re talking about physical things, physical objects that we use in everyday life, like a car, your temperature monitors, or your heartbeat monitors.

When the physical world and the virtual world come together with IoT, that could have a very big impact on the physical layer or the physical objects that we use. For example, the safety of individuals, of community, of regions, of even countries can now be put in danger, and I think that is the key thing. Yes, we need to develop applications rapidly, but we need to develop them in a very secure way.

Gardner: So the more physical things that are connected, the more opportunity there is to go through that connection and somehow do bad things, nefarious activities. So in a sense, the vulnerability increases with the connectivity.

Padinjaruveetil: Absolutely. And that’s the fear, unless we change ways of developing software. There has to be a mindset change in how we develop, deploy, and deliver software in the new world.

Gardner: I suppose another element to this isn’t just that bad things can happen, but that the data can be accessed. If we have more data at the edge, if we move computing resources out to the edge where the data is, if we have data centers more frequently in remote locations, this all means that data privacy and data access is also key.

How much of the data security is part of the overall application security equation, Gopal?

Padinjaruveetil: One of the things I ask is to first define an application, because we have different kinds of applications. You have web services and APIs. Even though those are headless, we would consider those applications, and applications without data have no meaning.

The application and the data are very closely tied to each other, and what’s the value? There’s no real advantage for a hacker just to have an application. They’re coming after the data. The private data, sensitive data, or critical data about a client or a customer is what they’re coming after.

You bring up a very good point that security and privacy are the key drivers when we are talking about applications. That is what people are trying to get at, whether it’s intellectual property (IP) or whether it’s sensitive data, credit card data, or your health data. The application and the data are tied at the hip, and it’s important that we look at both as a single entity, rather than just looking at the application as a siloed concept.

Solving problems

Gardner: Let’s look a little bit at how we go about helping organizations approach these problems and solve them. What is it that HPE and Capgemini have done in teaming up to solve these problems? Maybe you could provide, Gopal, a brief history of how the app security alliance with these two organizations has come about?

Padinjaruveetil: Capgemini is a services company, and HPE has great security products that they bring to the market. So, very early on, we realized that there’s a very good opportunity for us to partner, because we provide services and HPE provides great security products.

One of the key things, as we move into agility or into application development, is that many of the applications have millions of lines of code. These are huge applications, and it’s difficult to do a manual assessment. So, automation in an agile world and in an application world becomes important. That’s a key thing that HPE is enabling, automation of security through their security products and application space. We bring the services that sit on top of the products.

When I go and talk to my clients about the HPE and Capgemini partnership, I tell them that HPE is bringing a very tasty cake, and we’re bringing a beautiful icing on top of the cake. Together, we have something really compelling for the user.

Gardner: Let’s go to Mark in describing that cake, I would imagine there are many layers. Maybe you could describe it for some of our listeners and readers who might not be that familiar with what those layers are. What are the major components of the transformation area around security that HPE is focused on?

Painter: At a high-level, what we’re trying to do is expand the application security scope, and that basically includes three big buckets. Those are secure development, security testing, and then continuous monitoring and protection.

During the development phase, you need to build security in while the developers are coding, and for that specifically, we use a tool called DevInspect. It will actually show secure coding to a developer as he is typing his own code. That gets you much, much farther ahead of the game.

As far as security testing, there are two main forms. There is static, which is code analysis, not only for your own code, but open-source components and other things. In this day and age, you really are taking security into your own hands if you trust open-source components without testing them thoroughly. So, static gives you one perspective on application security.

Then there is also dynamic scanning, where you don’t have access to the code, and you actually attack the application just as the hacker would, so you get those dynamic results.

We have a platform that combines and correlates those results, so you reduce false positives and can trust the accuracy of your results to a much greater degree.
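That combine-and-correlate step can be sketched in a few lines of Python. This is only an illustration of the idea, not the actual platform: the finding format, the CWE tags, and the (category, location) matching key are all assumptions made for the example.

```python
# Sketch: correlate static (code-analysis) and dynamic (runtime-attack)
# findings. A finding confirmed by both techniques is far less likely
# to be a false positive. The (cwe, location) key is an illustrative
# assumption, not a real scanner's schema.

def correlate(static_findings, dynamic_findings):
    """Return dynamic findings that were also flagged by static analysis."""
    static_keys = {(f["cwe"], f["location"]) for f in static_findings}
    return [f for f in dynamic_findings
            if (f["cwe"], f["location"]) in static_keys]

static = [{"cwe": "CWE-79", "location": "/search"},
          {"cwe": "CWE-89", "location": "/login"}]
dynamic = [{"cwe": "CWE-79", "location": "/search"},   # confirmed by both scans
           {"cwe": "CWE-22", "location": "/files"}]    # dynamic-only result

print(correlate(static, dynamic))  # [{'cwe': 'CWE-79', 'location': '/search'}]
```

A real platform would match far fuzzier data (line numbers, parameter names, attack traces), but the principle of intersecting the two result sets is the same.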

Sustained frequency

We also provide services, but the whole thing is that you have to do this with sustained frequency. Maybe 10 years ago, there was a stage-gate approach, in which you tested at the end of the development cycle and released it. Well, that’s simply not good enough; you have to do this on a repeatable basis.

Some people would probably consider that the developmental lifecycle ends once the product is out there in the wild, but if anything, my experience in the security industry has taught me that software plus time equals vulnerability. You can’t stop your security efforts just because something has been released. You need that continuous monitoring and protection.

This is a new thing in application security, at least if you call something that’s a few years old “new.” With something called App Defender, you can actually put an agent on the application server and it will block attacks in real time, which is a really good thing, because it’s not always convenient to patch your software.

At HPE, we offer a combination of products that you can use yourself and we also offer hybrid solutions, because there’s no such thing as one-size-fits-all in any environment.

We also offer expertise. Gopal was talking earlier about the lack of qualified candidates, and Forbes has predicted that, by 2019, a full quarter of cyber security jobs are going to be unfilled. Organizations need to be able to rely on technology, but they also need to be able to find experts and expertise when they need it. We do a lot at HPE; I will leave it at that.

Gardner: Gopal, how do these products, these layers in the cake, help with the shifting-left concept, where we move more concern about vulnerability and security deeper into the design, earlier into the coding and development process? Where do the products help with shifting left?

Padinjaruveetil: That’s a great question. If you decompose application security like that cake, security vulnerabilities in applications come from three specific areas. One is what I call design flaws, where the application itself is designed in a flawed manner that opens up vulnerabilities. So a bad design, in itself, causes security vulnerabilities.

The second thing is coding flaws. Take an Apple iPhone: if you compare the design of an iPhone with the actual end product, there will be a very close match. A lot of the problems we have in the software industry come about because there is a high level of mismatch between the design and the actual product as coded.

Software is coded by developers, and if the developers aren’t writing good code, there’s a high possibility that vulnerability is introduced through poor coding.

Configuration parameters

The third thing is that the application isn’t running in a vacuum. It’s running on app servers and database servers and it’s going through multiple layers. There are a lot of configuration parameters, and if those configuration parameters are not set correctly, that leads to open vulnerabilities.
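A minimal sketch of what automated checking of those configuration parameters might look like, assuming a flat key-value configuration; the parameter names and required values here are illustrative, not any specific server’s real settings.

```python
# Sketch: audit security-relevant configuration parameters against a
# baseline. Parameter names and required values are invented for the
# example; a real audit would follow a hardening guide for the stack.

REQUIRED = {
    "debug": "off",               # verbose errors leak internals
    "directory_listing": "off",   # stops browsing of server directories
    "session_cookie_secure": "on",
    "tls_min_version": "1.2",
}

def audit(config):
    """Return the parameters that are missing or set insecurely."""
    return [key for key, wanted in REQUIRED.items()
            if config.get(key) != wanted]

deployed = {"debug": "on",
            "session_cookie_secure": "on",
            "tls_min_version": "1.2"}

print(audit(deployed))  # ['debug', 'directory_listing']
```

Running a check like this on every deployment catches the configuration drift that manual review tends to miss.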

From a product perspective, HPE has great products that detect coding flaws. Mark talked about DevInspect. There are also great tools from a dynamic perspective, attacking the application as a hacker would. So there are great tools to look at all three layers: design flaws, configuration flaws, and coding flaws.

As a security expert, I see great scope for tooling around design flaws, because right now we’re talking about threat modeling and risk determination, and detecting a design flaw requires a high level of human intelligence. I’m sure that in the future there will be products that can detect design flaws. When it comes to coding flaws, though, these tools can already detect them at 99 percent accuracy. So we’ve seen very good maturity in the application security area with the products that Mark mentioned.

Gardner: Another part of the process for development isn’t just coding, but pulling together components that have already been coded: services, SDKs, APIs, vast libraries, often in an open-source environment. Is there a way for the alliance between Capgemini and HPE to give some assurance as to what libraries or code have already been vetted, that may have already been put through the proper paces? How does the open-source environment provide a challenge, or maybe even a benefit, when done properly, to allow a reuse of code and this idea of componentized nature of development?

Padinjaruveetil: That’s a great point, because most modern applications are not standalone applications. They talk with other applications. They get data from other applications, through a web service interface, a REST API, or open source.

For example, if you want to do login, there are open-source login frameworks available. If there are things that are available, we’d like to use them, but just like custom code, open source is also vulnerable. There are vulnerabilities in open source.

Vulnerability can come from multiple things in an application. It can be caused by an API. It can be caused by an integration point, like a Web service or any other integration point. It can be caused by the device itself, when you’re talking about mobile and all those things. Understanding that is a very critical aspect when we’re talking about application security.

Gardner: Mark, anything to offer on this topic of open source and/or vetting code that’s available for developers to then use in their applications?

Painter: Well, it’s not an application, but it’s a good example. The Shellshock vulnerability was due to something wrong with the code of an open-source component, and that’s still impacting servers around the world. You can’t trust anybody else’s code.

There are so many different flavors of open-source components. Red Hat obviously is going to be a little better than your mom-and-pop development team, but it has to be an integrated part of your process for certain.

Cyber risk report

That speaks to something Gopal was saying. We do a cyber risk report every year at HPE, and one of the things we do is test thousands and thousands of applications. In last year’s results, the biggest application flaws we found were basically configuration flaws. You could get to directories you shouldn’t be able to reach.

Application security is not easy. If application security were easy, then we still wouldn’t be having cross-site scripting vulnerabilities that have been around almost as long as the web itself. There are a lot of different components in place. It’s a complex problem.
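The cross-site scripting problem mentioned above can be made concrete with a minimal Python sketch; the rendering functions are hypothetical stand-ins for a template layer, not any real framework’s API.

```python
# Sketch: the classic reflected XSS pattern and its fix. The vulnerable
# version concatenates user input straight into HTML; the fixed version
# escapes it so markup characters lose their meaning.
import html

def render_greeting_unsafe(name):
    # Vulnerable: attacker-controlled input becomes live markup.
    return "<p>Hello, " + name + "!</p>"

def render_greeting_safe(name):
    # Fixed: html.escape turns <, >, & and quotes into entities.
    return "<p>Hello, " + html.escape(name) + "!</p>"

payload = "<script>steal(document.cookie)</script>"
print(render_greeting_unsafe(payload))  # the <script> tag survives intact
print(render_greeting_safe(payload))    # rendered as inert text
```

The fix is one function call, which is exactly why the vulnerability’s persistence is so telling: the hard part is applying it consistently at every output point, not knowing the technique.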

Gardner: So it’s important to go to partners and tried and true processes to make sure you don’t fall down into some of these holes. Let’s move on to another area, which is also quite important and difficult and challenging. That is the cultural shift, behavioral changes that are forced when a shift left happens, when you’re asking people in a traditional design environment to think about security, operations, configuration management, and business-service management.

Gopal, what are some of the challenges to promulgating cultural and behavioral changes that are needed in order to make a continuous application security culture possible?

Padinjaruveetil: That’s a key aspect, because most of the application development is happening in a distributed team, and things are being assembled. So there are different teams building different things, and you’re putting together the final application product and deploying it.

Many companies have now started talking about security policies and security standards, whether it’s Java development standards or .NET development standards. So, very good industry standards are coming out, but the challenge is that having a policy or standard alone is not sufficient.

What I tell my clients is that any compliance without enforcement is ineffective. The example I give is that we have traffic laws in India, but if you’ve been to India and looked at the traffic situation there, it’s chaotic. Here, you see radar and automated speed detection. So enforcement is a key area even in software development. It’s not enough to just have standards; you need to have enforcement.

The second thing I talk about is that compliance without consequence will not bring the right behavior. For example, if you get caught by a cop and he says, “Don’t do this again; I’ll let you go,” you’re not going to change your behavior. If there’s a consequence, many times that makes people change behaviors.

We need to have some kind of discipline and compliance brought into the application development space. One of the things I did for a major client was what I call zero tolerance. If you develop an application and we find a vulnerability in it, we won’t allow you to deploy it. Using one of these great products from HPE, we have zero tolerance on putting unsecured code into production.

Once a critical or high issue is reported, we won’t let you deploy. Over a period of time, this caused a real behavioral change, because when you stop production, it has impact. It gets noticed at a very high level. People start questioning why the deployment didn’t go.
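That zero-tolerance gate can be sketched as a small pipeline check; the report format here is an illustrative assumption, not any particular scanner’s output.

```python
# Sketch: a "zero tolerance" deployment gate. The pipeline blocks the
# release if the scan report contains any critical- or high-severity
# finding. The findings schema is invented for the example.

BLOCKING_SEVERITIES = {"critical", "high"}

def gate(findings):
    """Return (allowed, blockers) for a list of {severity, title} findings."""
    blockers = [f for f in findings if f["severity"] in BLOCKING_SEVERITIES]
    return (len(blockers) == 0, blockers)

report = [{"severity": "high", "title": "SQL injection in /login"},
          {"severity": "low", "title": "Verbose server banner"}]

allowed, blockers = gate(report)
if not allowed:
    # In a real pipeline this would fail the job with a nonzero exit code.
    print("Deployment blocked:", [b["title"] for b in blockers])
```

The low-severity finding passes through for later triage; only the severities the policy names actually stop the release, which keeps the gate credible rather than noisy.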

Huge change

Slowly, over a period of time, because of this compliance and enforcement with consequences, we saw a huge change in behavior across the entire team: business analysts making sure they get the security non-functional requirements correct, project managers making sure the project teams address them, architects making sure the applications are designed correctly, and testers making sure the testing is correct. When it goes into an independent audit, the application comes out clean.

It’s not enough if you just have standards; you need to have some kind of enforcement with that.

Gardner: Mark, in order to have that sort of enforcement, you need to have visibility and measurement. It seems to me that there’s a lot more data gathering going on across this entire application lifecycle, and the big data and analytics we have in other areas are being brought into the fold.

Is there something about automation, orchestration, and data analytics that are part and parcel of the HPE products that could help on this behavioral shift by measuring, verifying, and then demonstrating where things are good or not so good?

Painter: One thing HPE emphasizes is building security in through secure coding, but we also talk about detection and response. We have an application product that integrates with ArcSight, our security and monitoring tool.

So you can actually get application information. Applications have been a typical blind spot for Security Information and Event Management (SIEM) tools, and you can actually get some of those results you are talking about from our SIEM technology, which is really cool.

Over the past 10 years, the security industry has shifted from the idea that we’re going to block every attack to one that assumes the attackers are already inside your network. This is part of that detection. Maybe you didn’t find those vulnerabilities earlier, but you can see active exploitation, and then you can track it down and stop it that way.

Fifteen years ago, you had to convince people that they needed application security. You don’t have to do that now. They know they need it, but they just might not know exactly what they need to do.

It’s all about making this an opportunity for them to get security right, instead of viewing it as a conflict between the need for speed, agile development, and rapid releases on one side and, on the other, the enterprise’s need to actually be secure and protect itself from potential data breaches and data loss, along with all the compliance issues and, now, legal challenges from individual actors all the way down the line.

Gardner: Gopal, before we close out, let’s look to the future a little bit. What comes next? Do you expect to see more use of data, measurement, and analytics, a science of development, if you will, to help with security issues, perhaps feedback loops that extend from development into production and back? How important do you think this use of more data and analytics will be to the improved automation and overall security posture of these applications?

Continuous improvement

Padinjaruveetil: You need to have data and you need to have measurements to make improvements. We want continuous improvement, but you can’t manage what you don’t measure. So we need to determine what the systemic issues in application development are, the issues we see constantly recurring.

For example, if you’re seeing cross-site scripting as a consistent vulnerability coming from multiple development teams, we need some way to spot those patterns in the data and look at how to reduce these major systemic errors or vulnerabilities in our systems.
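That kind of pattern-spotting can be sketched with simple aggregation; the findings data here is invented for illustration.

```python
# Sketch: aggregate scan findings across teams to surface systemic
# issues. A vulnerability class that recurs across many teams points
# to a training or framework gap, not a one-off mistake.
from collections import Counter

findings = [
    {"team": "payments", "category": "XSS"},
    {"team": "payments", "category": "XSS"},
    {"team": "mobile",   "category": "XSS"},
    {"team": "mobile",   "category": "SQL injection"},
    {"team": "web",      "category": "XSS"},
]

# Count occurrences per vulnerability class.
by_category = Counter(f["category"] for f in findings)

# Track how many distinct teams each class affects.
teams_affected = {cat: {f["team"] for f in findings if f["category"] == cat}
                  for cat in by_category}

# XSS appears across three teams: a systemic issue worth a shared fix,
# such as a mandated escaping library for every team.
print(by_category.most_common(1))  # [('XSS', 4)]
```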

You will see more and more data collection, data measurement, and advanced methods applied to look not just at the vulnerability aspect, but also at the behavioral aspect. That’s something we’re not doing yet, but I see a huge change coming where the behavioral aspects will actually be tracked with data in the application lifecycle model.

Gardner: Another thing to be mindful of is getting ready for IoT with many more devices, endpoints, sensors, biological sensors. All of this is going to be something coming in the next few years.

How about revisiting the skills issue before we sign off? What can organizations do about maintaining the right skill sets, attracting the right workers and professionals, and also looking at all the options within an ecosystem, like the alliance between HPE and Capgemini? How do you see the skills problem shaking out over the next several years, Gopal?

Padinjaruveetil: If you look at many of the compliance frameworks, like NIST or ISO 27001, there’s a big emphasis on control being put in place for security awareness and education. We’re seeing a big drive for security education within the whole organization.

Then we’re seeing tools like DevInspect. When a developer writes bad code, the tool gives instant feedback that the code just written is insecure, instead of waiting three or four months for a test. We’re seeing how these tools are making changes.

So tools like DevInspect are helping developers actually make themselves better code writers.

Painter: Developers are not natural security experts. They need help.

Padinjaruveetil: Yeah, absolutely.

Additional resources

Gardner: That brings me to my last question for you, Mark. Can you suggest places people can go for resources, and how they can start to prepare themselves better for a number of the issues we’ve discussed today?

Painter: It’s almost on an individual basis. There are plenty of resources on the Internet, and we provide training as well. Web application security testing is actually one of the best places for organizations to leverage Capgemini.

The job crunch is the number one security concern that enterprises have right now, and there’s a lack of qualified applicants, which says a lot when that’s a bigger concern than a data breach. We do a State of the SOC survey every year, and that was the result from the last one, which was a little surprising.


But apart from outsourcing, you need to find the developers in your organization who have an interest in security, and you need to enable them to learn and get better, because that’s who is going to be your security person in the future, and that’s a lot cheaper and a lot more cost-effective than going out and hiring an expert.

I know one thing, and it’s a good thing. I tell my boss repeatedly that if you have good security people, you’re going to have to pay them to keep them. That’s just the state of the market as it is now. So you have to leverage that and you have to rely on automation, but even with automation, you’re still going to need that expert.

We are not yet at the point where you can just click a button and get a report. You still need somebody to look at it, and if you have interesting results, then you need that person who can go and examine those. It’s the 80/20 rule. You need that person who can go to the last 20 percent. You’re going to have automation, tools, and what have you to get to that first 80 percent, but you still need that 20 percent at the end.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


How efficient cloud networks help the smallest companies do brisk business with the largest

The next BriefingsDirect technology innovation thought leadership discussion examines new ways for small businesses to make and manage the connections that matter to them most using cloud-based networks to bring intelligent buying and digital business benefits to any type of company.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn more about the business of doing more commerce in the digital economy using cloud-based networks, please join me in welcoming Bob Rosenthal, Chairman and CEO of JP Promotional Products, Inc. in Ossining, New York, and Anne Kramer, CEO at Ergo Works, Inc. in Palo Alto, California. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Bob, what is JP Promotional Products? What do you do?

Rosenthal: JP Promotional Products is a distributor of imprinted promotional products, anything you can put an imprint or logo on. I’ve run this company with my daughter for about 12 years now, and we sell to small companies, large companies, anyone who buys promotional products from us.


Gardner: Why is being digital, being on business networks, an important part of the way you find new clients?

Rosenthal: What this has given us is the ability to find the size of client that we could not ordinarily find. We’re getting into large corporations, and it is very difficult for a small company to get access to a large entity. Being on a network like SAP Ariba and leveraging services like Ariba Discovery has gotten us into some of these very large corporations.

Gardner: Anne, tell us about Ergo Works.

Kramer: Ergo Works is a small, woman-owned company based in Palo Alto, California. We’re a full-service ergonomics company. We offer workstation evaluations and consulting, a complete line of ergonomic furniture, accessories and computer peripherals, as well as installation services. So, I would call it solution selling.

Gardner: And do you also share Bob’s challenge of trying to be seen and heard in a busy world, by big companies that perhaps don’t know about small vendors?

Kramer: Absolutely. It’s a challenge to get an audience with this group. They generally have established vendors, and trying to knock down those doors is challenging at best.

Gardner: We know that many of the buyers of goods are looking for increased automation. They’re looking for intelligence in the network and in the partnerships and ecosystem they play in. So they want to find people like you who have goods and services for them. What was it that you had to do in order to be seen, heard, and recognized among them?

Rosenthal: We joined Ariba Discovery, and that gave us the ability to search for leads as well as respond to matched leads. As a matter of fact, one of the first leads I got came about a half an hour after I paid for Ariba Discovery. It was a Fortune 100 company. They were looking for a thousand pairs of imprinted socks, something I knew we could do. It was a no-brainer. We established our relationship with the procurement manager. They never bought the socks, but we have a relationship now, and without Ariba Discovery, there was no way we could have done that.

Gardner: And is geography a barrier for you or you can do business with anyone, anywhere?

Rosenthal: We can do business with anyone, anywhere. The bulk of it is in the Continental US. We can ship to England or Canada and we do bring some product in from China as well.

Gardner: And for you, Anne, tell us about what you needed to do in order to find clients.

Challenge of growing

Kramer: We’re located in Palo Alto, which is ground zero in Silicon Valley for ergonomics. So, we are well poised in that regard. Nonetheless, the challenge of growing a small business is ever present.


One way that we’ve overcome that is to participate in online marketplaces. Specifically, what we’re excited about now, and why I’m here today, is the Ariba Spot Buy program. This is going to give us direct access to large companies that have been challenging for us to get into. It’s an exciting opportunity. Unlike other marketplaces that are geared to one-off end users, Ariba is geared toward large corporations, so we’re very excited.

Gardner: Can you give us a bit more about background and understanding of Ariba Spot Buy? These are not the usual contracts that are ongoing and repeatable, but are instances where there is a need, an ad-hoc need perhaps, in a large organization. A purchasing department has been tasked with doing this or maybe people directly in the company have got the authority to find and buy things on their own.

Kramer: That’s very well put. For example, we are currently an Ariba supplier with several clients, and we offer a static catalog. We often provide or make recommendations for products that are off catalog, and Ariba Spot Buy allows companies to buy products from vendors they don’t currently have a contractual relationship with.

The niche that we’re in is a relatively small niche. So it may not warrant a company wanting to put together a catalog. This is an opportunity for them to buy these products, yet stay compliant within the Ariba ecosystem.

Gardner: Now, of course, a big way of finding things nowadays is through search on the web, and having a good website and getting good rankings on the search engines is a big part of that. But it strikes me that you’re small; you’re not going to get the kind of traffic on your website that might elevate you in those search results, and you’re also highly customizable. So you’re not just putting a big billboard up on the Internet, so to speak, and saying, here we are.

You’re offering custom types of things, with promotional products in your case, Bob, and you probably want to hear a lot about each customer and tailor your services to them. How do you overcome the challenge of not being able to put a billboard up on the Internet, but also maintain the advantage of having highly customized products, Bob?

Rosenthal: Our own website has hundreds of thousands of items on it. It’s an industry-based website. If you’re searching for almost any product, you’ll find it on our site.

In terms of how we got people to our site, we did invest some money a few years ago. We decided to go with what’s called Local Search. We put money into being on the first page in New York State, the Tri-State area, and that’s gotten us a few large accounts.

What we’re looking for in Ariba Spot Buy is to bring in more business because a lot of our products are last minute. Someone will remember at the last minute, “Oh, I’m doing a trade show next week; I need a thousand widgets to give away. I forgot to buy them. I don’t want to go through a contract.” That’s where I think Ariba Spot Buy will help us because we can deliver products in 24 hours if we have to.

Network advantage

Gardner: So there is an advantage to being in a business network versus just the worldwide wild web?

Rosenthal: Right. What that gets us is more targeted corporations, hopefully larger entities. Where a small corporation might buy 100 pieces, the big corporation is going to buy thousands of pieces. That’s why we’ve joined Ariba Discovery and are looking at Ariba Spot Buy.

Gardner: And I suppose, as someone in a selling position, you’re also getting a lot more information about who you’re selling to, given that they’re in the network and you can see and access more about what they’re looking for?

Rosenthal: That’s true, and where that helps is that we tend to add a lot of creativity to it. If we know who you are and what you do, we can make recommendations for certain kind of products. If you’re a tractor company exhibiting at a show, maybe we’ll suggest a squeeze toy in the shape of a tractor. Knowing who you are and where you are helps us with our creativity in suggesting products.

Gardner: And for you, Anne, in the same vein, trying to be seen, heard, and understood in the Worldwide Web is perhaps a bit more daunting than on a business network. How do you overcome that need to customize and tailor your goods and services?

Kramer: Certain products lend themselves more to selling on the web than others, and the same goes for online marketplaces. The visibility with Ariba Spot Buy will give us the opportunity to interact with our customers, offer them custom products, and get into project-based opportunities.

Gardner: We’re also seeing from SAP Ariba the desire to bring more collaboration embedded and automated into these applications and services. Also, with Guided Buying, they’re allowing the sellers to be part of an intelligence network, so that buyers can be led through the process and automation can be brought to bear. How do these new technological advantages affect you as a small businesses particularly, Anne?

Kramer: Technology helps us with new ways to bring our products to market and expose our offerings to a larger audience. That’s really the biggest benefit.

In addition, it helps us to expand our current relationships with our Ariba buyers. They can now buy off catalog, which is a win-win. Technology also impacts the products we sell. As technology changes, the products change in response to the latest mouse design or the material a wrist rest is covered in; maybe it’s antimicrobial, for instance. So technology has a huge impact on both the direct and indirect parts of our business.

Running the business

Gardner: Of course, it’s important for small businesses to have visibility into cash flow, when to expect payments, and how to bill accurately and appropriately. Any thoughts, Bob, on how this business network also adds to your ability to run your business properly?

Rosenthal: In terms of technology, the biggest issue for us is the logo. Anyone can say they want a Bic pen. Where the technology should help us is in getting the art files from one point to the other, and in knowing, for things like cash flow, who we’re dealing with, that it’s a large corporation. Some use POs, some don’t, for these types of buys. It gives me more comfort that we are going to get paid.

It’s difficult to ask General Motors for a deposit for a $1,000 order, but we might ask the insurance broker down the street for that. So that comfort level of knowing we should be paid on a certain date is a big advantage.

Gardner: Anne, the same thing. Business visibility is important. Is there something about a business-network approach that’s beneficial to you in being able to run your business well?

Kramer: Well, specifically what I am excited about with Ariba Spot Buy is that all the purchases are made using a credit card, which we love because it helps us control our cash flow. We don’t have to go chasing after past-due invoices, and that time can be better spent selling more products. We love the fact that it’s all credit-card based.

Gardner: Are there any specific examples of actual customers that you found through the Ariba Discovery process in this online marketplace that would illustrate some of these points? You don’t have to name them necessarily, but maybe walk us through how it’s worked and how that’s different from the other approaches you’ve used to find new customers, Bob?

Rosenthal: Well, the big account that we got, which I can’t name, has turned into a huge account for us. We’ve established a relationship with the procurement people, and I think that relationship has built this business with them over the last 18 months, because they have a confidence level in us, and we are confident in them that, a) we’re going to get paid, and paid on time, and b) it’s a continuing relationship.

We do a lot of one-offs. We get a hit on our website: “I need something tomorrow, can you get it?” We never hear from those people again, but we get an order, which is great; we do a lot of that. But we also try to establish relationships, and that’s what we get out of Discovery so far.

Gardner: As a small-business person myself, I know that you don’t want to push that rock up the hill every month. You want to have the recurring dependable revenue; it’s super important, right?

Kramer: Right. Ariba Spot Buy is an opportunity for ongoing and repeat business from companies participating in this technology.

Gardner: But this allows you to get the best of both worlds, in which you can discover and find interesting new clients, but you can also maintain a steady flow from your installed base.

Kramer: That’s right. This technology offers us an opportunity to engage new corporate customers and get paid quickly with credit-card payments.

Gardner: Thank you, Bob, and if people want to learn more about your organization, how might they do that?

Rosenthal: You can find us on our website, or feel free to call us at 1-800-920-3451.

Gardner: Anne, how could organizations learn more about your company?

Kramer: They can visit our website or call our toll-free number, 866-ASK-ERGO.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: SAP Ariba.


Intralinks uses hybrid cloud to blaze a compliance trail across the regulatory mine field of data sovereignty

The next BriefingsDirect hybrid computing case study discussion explores how regulations around data sovereignty are forcing enterprises to consider new approaches to data location, intellectual property, and cloud collaboration services.

As organizations move beyond their on-premises data centers, regulation and data sovereignty issues have become as important as the technical requirements for their cloud infrastructure and applications.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn how organizations have been able to get the best of data control and protection — along with business agility — from hybrid cloud models, we’re joined by Richard Anstey, CTO at Intralinks, who is based in London. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tell us about the trends that make data sovereignty so important as a consideration when organizations look at how and where to manage, house, and store their data.

Anstey: This is becoming a much more important topic. It has obviously been in the news very much recently in association with the Safe Harbor regulation having been effectively annulled by the European courts.


This is the regulators catching up with the Internet. The Internet has been somewhat unregulated for a long time, and quite rightly, the national and regional authorities are putting in place the right protections to ensure that citizens’ data are looked after and treated with the respect they deserve.

So it’s becoming more important for companies to understand the regulatory environment, even those organizations that did not previously feel that they were subject to such regulation.

Gardner: So the pendulum seems to have swung from the Wild West Internet toward greater security oversight.  Do we expect more laws across more jurisdictions to make placement of data more restricted? Are we seeing this pendulum swing more toward regulation?

Anstey: Yes, it’s certainly swinging that way, and the big one for the European Region of course is the General Data Protection Regulation (GDPR), which is the European Commission initiative to unify the regulations, at least across the European Union. But the pendulum is swinging toward a greater level of regulation.

Gardner: How about in Asia-Pacific (APAC) and North America, what’s happening there?

Global issue

Anstey: Post-Snowden, this has become much more of an issue globally, and certainly across APAC there have been some very specific regulations in place for some time, the Singapore Banking Authority being the famous one. But globally this is becoming much more of an important issue for companies to be aware of.

Gardner: So while the regulatory atmosphere is becoming more important for companies to keep track of, it’s also more onerous for them as businesses to comply. The Internet is still a very powerful tool, and people want to take advantage of cloud models and compliant data lifecycle models. Tell us about Intralinks, and about how organizations can have the best of both protected data and cloud models.

Anstey: Intralinks is in the fortunate position of having been offering cloud services in highly regulated environments for almost 20 years now. Back when we were founded, which by the way was really before most people would do their shopping online, Intralinks was operating things called Virtual Data Rooms to facilitate very high value, market-moving transactions through effectively a cloud service. We didn’t call it cloud at that time; we called it software as a service (SaaS).


But Intralinks has come from this environment. We’ve always been operating in highly regulated environments, and so we’re able to bring that expertise that we have built up over the last 20 years or so to bear on solving this problem for a wider range of organizations as the regulation really steps in to control a greater part of the services delivered over the Internet today.

Gardner: In a nutshell, how is it that you’re able to do, in a highly regulated environment, what people think of as putting everything in a cloud?

Anstey: Well, in a nutshell, it may be tricky, because there’s a lot to it. There’s a lot of technology that goes into this, and there are a lot of dimensions along which you need to consider this problem. It’s not just about the physical location of data; although that may be important, there are other dimensions. Physical location may be one thing to think about, but there’s another thing called logical location.

The logical location is defined as the location of the control point of the encryption, as opposed to the location of the encrypted data itself, which many people would argue is somewhat irrelevant: if it’s sufficiently encrypted, it doesn’t matter where it is. The location of the key, and who controls that key, actually matter more than where your encrypted data lives.

In fact, we all implicitly accept that principle. When you use your online bank, you don’t know the route that that information takes between your home computer and the bank. It may well be routed across the Atlantic, based on conditions of the Internet. You just don’t know, and yet we implicitly accept that because it’s encrypted in transit, it doesn’t really matter what route it takes.
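The point about logical location can be illustrated with a toy sketch. This is deliberately NOT real cryptography (a single-use XOR stands in for a proper cipher such as AES), and the message and key are hypothetical; it only shows that recovering the data depends on holding the key, not on where the ciphertext has travelled:

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy XOR "cipher" -- illustration only, never use for real security.
    return bytes(b ^ k for b, k in zip(data, key))

key = secrets.token_bytes(16)          # the logical location: whoever holds this key
message = b"wire transfer ok"          # 16 bytes, same length as the key
ciphertext = xor_cipher(message, key)  # may cross any number of jurisdictions in transit

# Only the key holder can recover the plaintext, wherever the ciphertext went.
assert xor_cipher(ciphertext, key) == message
```

The route the ciphertext takes is irrelevant to who can read it; control of the key defines that.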

So there is the physical location and the logical location, but there is still also the legal location, which might be to what jurisdiction this information pertains. Perhaps it pertains to a citizen of a certain country, and so there is a legal location angle to consider.

And there is also a political location to consider, which may be, for example, the jurisdiction under which the service provider is operating and where the headquarters of that service provider is.

Four dimensions

There are four dimensions already, but there is another one as well, which is the time dimension. While it may be suitable for you to share information with a third party in perhaps a different jurisdiction for a period of time, the moment that business agreement comes to an end, or perhaps the purpose or the project for which that information was being used has come to an end, you also need to be able to clear it up.

You need to tidy up and remove those things over time and make sure that just because that particular information-sharing activity was valid at one point, it doesn’t mean that that’s true forever, and so you need to take the responsibility to clear it up. So there are technologies that you can bring to bear to make that happen as well.
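A minimal sketch of that time dimension, assuming a hypothetical record of sharing grants with agreed end dates, might look like this:

```python
from datetime import date

# Hypothetical sharing grants: each records the partner, the project,
# and the date the business agreement ends.
grants = [
    {"partner": "firm-a", "project": "deal-1", "expires": date(2016, 1, 31)},
    {"partner": "firm-b", "project": "deal-2", "expires": date(2016, 12, 31)},
]

def expired_grants(grants, today):
    """Return grants whose sharing agreement has ended and should be revoked."""
    return [g for g in grants if g["expires"] < today]

# A periodic job would revoke access for anything expired.
for g in expired_grants(grants, date(2016, 6, 1)):
    print("revoke access:", g["partner"], g["project"])
```

The real technologies involved are richer than this, but the principle is the same: sharing that was valid at one point is tidied up automatically once its purpose ends.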

Gardner: It sounds as if there is a full spectrum, a marketplace, of different solutions and approaches to suit whatever particular issues an organization needs in order to satisfy the regulatory, audit, and other security requirements.

Tell us about how you have been working with HPE to increase this marketplace and solve data sovereignty issues as they become more prominent in more places.

Anstey: The thing that HPE really helps us with is this: while we’ve long been able to have data centers in multiple regions, as the regulation and the requirements of our customers grow, we need to be even more agile in bringing new workloads up and running in different locations.

With HPE Helion OpenStack we’re able to spin up a new environment — a new data center perhaps, or a new service — to run in a new location far more quickly and more cost effectively than we would otherwise be able to if we were starting from the ground-up.


Gardner: So it’s important to not just be able to take advantage of cloud conceptually, but to be able to move those cloud data centers, have the fungibility, if you will, of a cloud infrastructure, a standardized approach that can be accepted in many different data-center locations, many different jurisdictions.

Is that the case, and what can we expect for the depth and reach of your services? Are you truly global?

Anstey: We are certainly truly global. We’ve been operating right across the world for a number of years now. The key elements that we require from this infrastructure are things like workload portability and the ability to plug into additional service providers at any time we need to be able to create a truly distributed platform.

In order to do that, you need some kind of cloud operating system, and that’s what we feel we get from the HPE Helion OpenStack technology. It means we can move our services around much more easily whenever we need to.

Gardner: When you’re an organization and you know that there’s that data portability, that there’s a true global footprint for your data that you can comply with the regulations, what does that do for you as a business?

How does this, from a business perspective, benefit your bottom line? How does it translate into business terms?

Enormous uncertainty

Anstey: The key thing to realize is that there has been an enormous amount of uncertainty, and in a way, the closure of the Safe Harbor agreement has been a good thing, in that there was always some doubt over its applicability and its suitability. If you’ll forgive the pun, there was a cloud hanging over it. Removing it at least gives a little more certainty: that arrangement definitely doesn’t work, so we need a different structure.

Nevertheless, what happens in that environment of uncertainty is that people start to play it safe and they start to think, “This cloud thing is a bit scary. Maybe we should just do it all ourselves, or maybe we should only consider private cloud deployments.” When you do that, you cut off the huge options and agility that’s available from using the cloud to its full extent.

What would be a bad thing is if, as the pendulum swings, as you described, toward regulation, people retreat and give up and say, “This Internet thing, we don’t want to do that. We’re going to reverse the trends and the huge technological advances that we’ve been able to leverage over the last 10 years of growth of cloud.”

We believe that by building technology in the way that we are able to construct it, with all of those options associated with ways in which you can demonstrably prove that you are responsibly looking after data over time, you don’t have to sacrifice the agility of the cloud in order to adhere to the regulations as they come in.

Gardner: We’ve talked about data sovereignty from a geographic perspective, but how about vertical industries? Are there certain industries that require that global reach, but also need to be highly regulated?

Anstey: The vast majority of the global banks are our customers already. We also have a very large footprint in the life sciences, which often has a similar nature in terms of the level of regulation, especially if you’re dealing with patient data in the field of clinical trials, for example.

But the reality is that, as this pendulum swings, the net is cast wider and wider for the regulation, to the point where any company that deals with personal data and needs to use that data for legitimate business purposes will now be covered by regulation. This isn’t just guidance now.


When we get to the next level of EU regulation, there are some serious fines, including criminal penalties for executives and fines of up to two percent of global revenue, which really makes people wake up. It will wake up a far wider group of companies than those that already knew they were operating in a strict regulatory framework.

Gardner: So in other words, this probably is going to pertain to many more industries than they may have thought. This is really something that’s going to hit home for just about everybody.

Anstey: Absolutely. Every industry becomes a regulated industry at that point, when to do business you need to handle the type of data that gets covered by the regulation, especially if you are operating in the EU, but as we described, with more to follow.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


ITSM automation and intelligence gains deliver self-service help to more users

The next BriefingsDirect IT support thought leadership discussion highlights how automation, self-service and big data analytics are combining to allow IT help desks to do more for less.

We’ll learn how automation and ITSM-driven insights endow help desk personnel with more knowledge and provide a single point of support for end users, regardless of their needs while still catering to their preferred method of help.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to share the latest on how IT support is advancing in the era of bring your own device (BYOD), cloud, and tight budgets are three experts: David Blackeby, Program Solution Owner for Cloud Services at Sopra Steria, based in the UK; Diana Wosik, Group Program Manager at Sopra Steria, based in Poland; and Mark Laird, Group Technical Architect at Sopra Steria, based in the UK. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Let’s start at a high level and talk about how support has changed, and why enabling self-service is so important nowadays. Mark, why is self-service such an important issue when it comes to IT help desk?


Laird: For us, there are probably a number of issues. We have a range across our customer base, from millennials, who are used to dealing with websites, mobile, tablets, who really don’t want to call a call center, and don’t want to end up talking to somebody on the phone, through to the legacy users who are much more used to picking up the phone, asking for help, and talking through a problem.

So they’re looking for a more human approach, human interaction, versus the millennials who want to fix it themselves, want to do it quickly, and really don’t want to talk to somebody about it. That’s introducing a range of problems and challenges.

Gardner: It sounds as if you need to deliver support in a spectrum of ways, but perhaps with a common core to that support function.

Underlying answer

Laird: The underlying answer to the problem, whatever the problem is, is likely to be the same. If you have a log-on issue, it will be a password reset or an account issue. It’s how you get that information out to the person who has the challenge.


If it’s a person on the phone, it’s easy enough to talk them through it. But if somebody comes through a self-service portal, you have to provide them with that same information. So ultimately you connect a single core, a single database and knowledge environment, to a range of callers.

Gardner: David, we’re being called on here to deliver support across a spectrum of modalities, methods, and even latencies, but at the same time many of the world’s governments are asking for austerity and savings in their IT budgets. How are we able to reconcile this need for more variety in delivering help desk services while cutting costs at the same time? Is there any way to reconcile them?

Blackeby: It’s part of the core challenge in the current world with austerity, where both our public and private customers are looking at how they can do more for less money.

IT is under continuing pressure to reduce the cost and overhead of providing services. At the same time, we’re talking about new methods of self-service, different types of platforms and devices, and this multi-channel effect, all of which take time, effort, and money to invest in.


That’s the underlying driver, and it comes down to the service provider to deliver it. The only way we can do that is by industrializing service delivery and automating processes, moving activities that may previously have been done by Level 2 and Level 3 resources to cheaper, lower-cost resources, such as a service desk, or, in an ideal world, removing them entirely from the cost chain and driving the automation. That increases speed and agility while reducing the cost of delivering the service.

Gardner: Diana, another variable in the mix here is the increased use of mobile devices and the fluidity of users in terms of their geography, their location, even the time of day they might be working. And of course there is a plethora of devices in a bring-your-own-device organization. How is mobility affecting this equation for a more complex approach to help desk?

Wosik: Mobility is very important nowadays, because everybody uses mobile devices every single day. We need to ensure a single point of contact, so users can approach their help desk at any time they need, and they need 24×7 availability for that.

Gardner: So, we’ve established that we have a need for more variability, addressing more types of help for more types of users. Tell me a bit more, Mark, about automation and self-service and how they support one another. What is it about automating processes that endows the user with more access to help, and how can that same feedback loop between the user and the support infrastructure be brought to bear on future issues?

Laird: Automation is doing the same thing in a repeated, controlled fashion. Whether it’s a password reset or the delivery of a service or a server, what you’re doing is scripting: you’re putting into a workflow a process that a user can call on. Whether that user is an end user, an end customer, or in fact one of the operations team, it allows them to perform that fairly standard process in a repeatable, quality-controlled fashion.

And that can potentially lower cost, as David said, by moving tasks from an expensive, qualified Level 3 support person into an operations center, or even onto the self-service portal, where you’re not giving end users access to systems, but you are allowing them to run a script.
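As a rough sketch (the request types and scripted actions here are hypothetical), such a runbook amounts to a mapping from standard request types to scripted workflows, with a human fallback for anything not yet automated:

```python
# Hypothetical self-service runbook: standard requests map to scripted
# actions, so an end user or operator triggers the same quality-controlled
# workflow every time, without needing direct access to the systems.
def reset_password(user):
    return f"password reset issued for {user}"

def provision_server(user):
    return f"server provisioned for {user}"

RUNBOOK = {
    "password_reset": reset_password,
    "new_server": provision_server,
}

def handle_request(kind, user):
    action = RUNBOOK.get(kind)
    if action is None:
        return "escalate to service desk"  # no script yet: a human takes over
    return action(user)

print(handle_request("password_reset", "alice"))
print(handle_request("vpn_issue", "bob"))  # falls back to a person
```

Each entry added to the runbook shifts one more repeatable task away from expensive support tiers.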

Double benefit

Gardner: David, perhaps you could help me understand why self-service is a benefit to both the receiver of the help, the end user, as well as the organization. What is it about self-service that refines process and benefits the deliverer of the help, but at the same time, gives more speed or perhaps options to the receiver of the help?

Blackeby: Essentially it supports both sides of the equation. From an end-user perspective, it’s that instant gratification: I can go into a centralized portal, do my search or raise my request, and be instantly satisfied with the response. I could be presented with a knowledge article that tells me how to fix my particular issue.

If I’m requesting a new service to be delivered through orchestration in the back end, I can make my request, and the orchestration comes in and drives the automated delivery of that service to me. So it increases the agility for the user and it reduces delays.

From the other side of the equation, from a service provider’s perspective, the more work users can do themselves, the more cost it takes away from us.

Historically, a user would have called the service desk, and as part of that conversation you need to understand who the user is in order to provide the service, make sure it’s a service they are allowed to have, and help them through the process. That means we need a body to answer the phone, and the amount of time spent on a typical call from the user drives the cost from a support-center perspective.

Even in a scenario where a user goes through the portal today and still ultimately needs a human interaction to deliver the service, we already know who they are and will have asked the relevant questions upfront, which means we don’t have to ask them later on down the line when we deliver the service. That reduces the handling time for our agents and for the people delivering the service.

Gardner: Before we dig into how you do this, now that we’ve established why it’s an important new aspect of help desk, Diana, perhaps you can tell us a little bit about Sopra Steria, the organization, and to what degree it supports help desks in your markets?

Wosik: I can give you a good example of how it works in Poland and how the automation helps us out regarding the functionality of help desk.

We apply quite a few solutions, like virtual machine (VM) provisioning, which automatically provisions machines aligned to customer needs. There is also an automated monitoring tool, so not only do we monitor whatever is going on, but we’re also able to respond to needs very quickly, thanks to our automation services.

And then there’s the automatic deployment of our releases. Whenever there’s a new release of the system, we don’t need a group of people to work on it; we can deploy it to production very quickly, and that helps us bring the solution to our customer as quickly as possible.

Higher-level view

Gardner: Could you give us a higher-level view of Sopra Steria, the organization, and to what degree help desk support is part of a larger portfolio of services?

Laird: We’re a European IT company. We run IT for a wide range of European customers. We deliver services. We write software. We do business process outsourcing. Essentially, if there’s a computer involved in there somewhere, that’s what we do.

We have a presence in 27 countries across Europe, in India, and then smaller offices in Singapore, Hong Kong, and China. We have 36,500 staff, and an annual turnover of about 3.5 billion euros. So, we’re a reasonably large company, one of the top 10 European IT companies.

For us, the service desk is the single point of contact. For all of our customers, that is their point of contact with us, whether it’s through the Global Delivery Center in Poland, where we’re offering French, German, English, small amounts of Spanish and Italian, or through some of the in-country service desks, such as the ones we have in France and the UK. So that is our single point of contact and it’s of key importance to us.

Blackeby: Just to follow on from that, the key piece is that it’s an intelligent service desk as opposed to a help desk. It’s really about having the phones manned by intelligent people who are able to try to fix or resolve issues straight away, as opposed to just logging a call, creating a ticket, and passing it off to someone else.


Gardner: How is it that we’re providing those individuals on the front line with better knowledge? Are they getting more tools? Are they getting more data? Is this really just correlating a single point of access to the existing data? Is it all of the above? How do we empower those people to do this difficult help desk job better?

Blackeby: In the same way that we try to have a single point of entry for users, for a portal, it’s really the same piece for our support staff as well.

While there are many systems that underpin our service delivery, the key element we strive for is that the operators have a single place to work. It’s very much through the integration of various systems and data sources into a centralized repository, so that the person acting on a ticket, request, or other activity has everything they need in one place: they can immediately see what the issue is, see what the request is, and then deliver the service to that end user.

Gardner: It strikes me that whether it’s a help desk’s person or the end user, the more they use this, the more the data can be collected, the more knowledge can be harnessed from the interactions, and therefore brought back through a feedback loop into the next level of support.

Are the cost savings ultimately about being better able to understand demand because of the self-service, because of these portal approaches? Is that a big part of it?

Key items

Blackeby: It feeds into that. If you’re looking at industrializing or automating, you’re really looking for repeatable activities that are done time and time again, and the data helps to support that. It identifies suitable candidates: high-volume, high-throughput transactions that are really the key things to focus on when introducing automation into the environment, or into task elements of a given process. Over time, that’s pretty much what we’re doing.

As Mark mentioned, we’re a managed service provider (MSP), providing services across many customers. A lot of the economies of scale we get come from best practices: what we apply in one account, or particular scenarios or issues we see in one, we can see correlated in other customer accounts as well. So we can bring those efficiencies, and the investment we make in automation through our back-office processes, to benefit multiple customers.

Wosik: What is very well known right now is big data and smart analytics, which help us gather information from our customers: the more tickets and incidents that are logged, the more information we can gather. All of it is collected and analyzed, and that’s how we can provide more accurate and quicker answers to our customers. It has really improved our quality of service.

Gardner: Let’s look also back to the systems, when we think about gathering information, more and more big data gathered from logs and other output data from the systems themselves, from the platforms. How are you at Sopra Steria managing the knowledge gathering from your systems and then applying that into this other knowledge base about the activities on your help desk and from the self-help portal?

Laird: We’re looking at some of the new technologies around smart analytics and big data, but we’re starting with some of the simpler approaches, which as David alluded to and as Diana mentioned earlier, are just the simple high-volume transactions, the things that we do on a regular basis that are maybe quality issues or maybe they are just time consuming, but those are the key ones we’re after.

Then, over the next three to six months, as we move into some of the newer technologies around smart analytics, for example, we’ll be taking some of the incidents coming into the service desk and the service management system, looking at them, and doing problem management on them.

Have we suddenly got an influx of incidents around our Exchange platform? Does that indicate an underlying problem or an underlying system error that we need to fix?

It’s about starting to link all the various systems, whether it’s the business service monitoring system at the back end that the operations teams use, or the service management platforms at the front that our service desk people use, pulling them all together and tying them in with, for example, the configuration management platform, so that people see the same information, both from a front-end, user-impact view and from a back-end infrastructure and service view.
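The influx check Mark describes could be sketched as a simple comparison of current incident volume against a historical baseline (the categories, counts, and threshold below are all hypothetical):

```python
from collections import Counter

# Hypothetical typical weekly counts per incident category.
baseline = {"exchange": 5, "vpn": 8, "printing": 12}
# Hypothetical incidents logged this week.
this_week = ["exchange"] * 30 + ["vpn"] * 7 + ["printing"] * 11

def spikes(incidents, baseline, factor=3):
    """Flag categories whose volume far exceeds the baseline -- a
    possible underlying problem worth a problem-management record."""
    counts = Counter(incidents)
    return [cat for cat, n in counts.items()
            if n >= factor * baseline.get(cat, 1)]

print(spikes(this_week, baseline))  # ['exchange'] -> raise a problem record
```

Real smart-analytics tooling is far more sophisticated, but the underlying idea is this kind of anomaly detection over the incident stream.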

Gardner: And I should think that would also help in more agility to do root-cause analysis and making it faster to time for resolution.

Automate and fix

Laird: Exactly. That goes back to when we fix problems and close incidents: if there’s a resolution in there, we do the analysis to identify common fixes. If a particular type of incident comes in and we always do the same thing to it, we can automate that. We can either give the service desk or help desk people access to that quick fix, or just automate it right at the start, so when that issue occurs, we automatically fix it.

In some cases, that’s moving out of the customer’s view completely. We’re fixing it almost before there’s an impact.

Gardner: We’ve talked a bit about making these help desk approaches better from the end-user perspective, empowering the personnel in the help desk organization itself, and finding some new technologies and analysis benefits to propel that forward, but I would like to go back to the issue of cost.

How are we wringing more cost out of this process, perhaps through things like automation and what’s called “shift left” — handling issues better or earlier in the process? Where are we targeting to get the most results when it comes to cost reduction in all of this?

Blackeby: It really comes down to how people do transactions — what things continually occur that have a high number of touch points. Some of that comes out over time.

One of the challenges we have when we take on a new customer is that you don’t have the excellent benefit of hindsight around how the organization works and what their common problems are. So, as we take on a new customer or a new contract, we have the ability to go and talk to their existing service provider or their in-house person. A lot of that comes out over time.

There are some standard things that we can recognize, because we have similar customers in similar marketplaces or industries — things that we would expect from the outset, such as password-reset tools, which are common and applicable across all types of clients.

Then, it’s a case of looking at your volumetrics over time, your repeatable activities, incidents and requests, identifying how can we drive the agility and improve the service levels that we’re delivering, and at the same time, reduce cost.

Take a simple thing like software deployment to users’ machines. Historically, that might have been a call to the service desk. They might have dispatched a desk-side engineer or used remote control to connect to the user’s device and install the software.

These days, more and more commonly, we can use software distribution, or automated software push tools, that don’t require human interaction at all. We can automatically deploy software to the user.

Zero-touch environment

That moves into a zero-touch type of environment. Through a portal request, we can manage the workflow around any approval activities. Then, once fully approved, through orchestration at the back end, we can interface with the software deployment solution to automate the delivery of that software to the endpoint device.
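
The zero-touch flow just described — portal request, optional approval gate, then an orchestrated push with no human interaction — can be outlined in a few lines. This is a minimal sketch under invented assumptions (the `licensed` flag and function names are hypothetical, and `push_software` stands in for a call to a real software-distribution tool).

```python
def needs_approval(request):
    # Assumption: licensed/paid software requires a manager's sign-off,
    # while free packages can be deployed immediately.
    return request.get("licensed", False)

def push_software(request):
    # Stand-in for a call to a software-distribution tool's API.
    return f"deployed {request['package']} to {request['device']}"

def fulfill(request, approved=False):
    """Portal request -> approval workflow -> automated deployment."""
    if needs_approval(request) and not approved:
        return "awaiting-approval"
    return push_software(request)
```

For example, a request for an unlicensed package deploys straight away, while a licensed one waits in the approval workflow until `approved=True`.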

And we support many different types of devices now. We’ve seen more and more cases where not only are we talking about physical desktops or laptops, but also around how we manage mobile devices and tablet type devices as well, using mobility and mobile device management solutions.

Gardner: Let’s look at some of these solutions in practice. Sopra Steria has been doing this for some time and across a large marketplace. Do you have any examples that demonstrate when you can do this well that you get those benefits of self-help, common core data, more knowledgeable help desk, reduce costs, all at the same time?

Laird: One of the solutions we looked at in Poland, certainly around automation, addressed a really simple challenge the operations team had as part of our Polish operation. Every morning, checking the backups for a particular customer was taking them in the region of one hour: producing a backup report, looking at the backups that had failed, re-running backups as appropriate, and then, if backups had failed consistently for a couple of days, escalating to the support team.

We automated the whole thing. It’s all automated using HPE Operations Orchestration. The whole process now takes one of the team about five minutes in the morning, and it’s really a case of checking the output from the system.

So we’ve saved somewhere in the region of just under an hour every day for one person. It probably took two or three days to code the solution, but we’re saving a significant amount of time every day. We’re getting a much better quality report, and we’re able to pass that information out to our second-line and third-line teams earlier in the day, which gives them much more time to fix things.

One of the things we’ve looked at now is automating the re-run of backups overnight. Rather than letting failures go on for maybe two or three days, they’re fixed overnight, and we run them within the backup window. It’s improving quality for the customer and having a significant impact on savings for the operations team.
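
The triage logic behind this kind of backup automation is simple to express. The real solution was built in HPE Operations Orchestration; the Python outline below just shows the shape of the logic, with hypothetical data structures: re-run first-time failures, escalate jobs that have failed consistently.

```python
def triage_backups(results, failure_history, escalate_after=2):
    """results: {job_name: 'ok' | 'failed'};
    failure_history: {job_name: consecutive failure count} (mutated in place)."""
    rerun, escalate = [], []
    for job, status in results.items():
        if status == "ok":
            failure_history[job] = 0      # success resets the streak
            continue
        failure_history[job] = failure_history.get(job, 0) + 1
        if failure_history[job] >= escalate_after:
            escalate.append(job)          # consistently failing: hand to support team
        else:
            rerun.append(job)             # first failure: just re-run overnight
    return rerun, escalate
```

A job that failed last night and again tonight would land on the escalation list; one that failed for the first time simply gets re-run within the backup window.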

Gardner: You mentioned the use of the HPE tools. Are there any other HPE platforms or approaches that are helping you bring in this common data? We talked earlier about the analysis that also helps in this equation of doing more with less.

HPE partner

Laird: We’re an HPE partner. We have been for over 10 years now, and we have quite a range of HPE tools across the portfolio, whether that’s from things like the Application Lifecycle Manager, through to HPE Service Manager.

We also have solutions like OMi doing things like event correlation, where we have events coming in from the monitoring solutions, whether that’s from HPE SiteScope or Operations Manager or from third-party tools, like SCCM and some of the Nagios tools.

OMi correlates those events and passes through to the service desk and the operations center only the ones that actually need to be looked at. We’re filtering out 50 to 60 percent of the alerts, which reduces our cost. We’re filtering those alerts out at a much earlier point in the chain, and we’re only raising incidents for the ones that actually need to be escalated up to the teams.
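
One simple form of the filtering OMi performs is duplicate suppression: if the same source keeps raising the same alert within a short window, only the first occurrence needs to reach the service desk. The toy version below illustrates the idea (the windowing logic is a deliberate simplification of what a real correlation engine does).

```python
def correlate(events, window=300):
    """events: list of (timestamp, source, alert_type).
    Keep the first occurrence per (source, alert_type); suppress repeats
    that arrive within `window` seconds of the last one seen."""
    last_seen = {}
    actionable = []
    for ts, source, alert in sorted(events):
        key = (source, alert)
        if key not in last_seen or ts - last_seen[key] >= window:
            actionable.append((ts, source, alert))
        last_seen[key] = ts
    return actionable
```

With a five-minute window, three CPU alerts from the same server in one minute collapse into a single actionable event, while a disk alert from another server passes through untouched.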

We’re using tools and technology to keep costs down and reduce them as far as we can.

Gardner: So as we think about being able to future-proof the support services, and by that I mean being able to adapt to a millennial audience, more distribution points, more types of help desk and automation, and that single portal, we also need to be thinking about being backwards compatible. Some organizations do want more of that human touch, the interactions, and perhaps some of the government organizations are interested in that as well.

What is it about the future direction of your services at Sopra Steria, some of the tools and technologies that you are employing from HPE, that allows you to feel confident about being both future proof and backwards compatible for your support?

Blackeby: One of the challenges coming more to the forefront these days is the adoption of cloud services. It’s a disruptive influence on traditional IT and how IT is delivered.

It’s a challenge for us as service providers to adapt to these. You’re talking about environments that can be built in minutes, bringing a whole new way of working — very fluid environments with auto-scaling, where the number of resources we’re supporting and managing grows and shrinks dynamically over time. That’s had a big impact on how we deliver service.

We’ve recognized this and are looking at how we transform the service delivery. We’re becoming more reliant on the data that supports the service. So it’s very much around how we manage what’s out there, with a heavy reliance on things like configuration management systems, and discovery of IT resources.

As Mark said, there are things like event correlation, looking at patterns, trends and events so that we can increase the agility and really manage much higher volumes of applications, of servers and of users with a smaller number of people or with the same number of people.

Gardner: It’s very exciting; a lot is going on.

Tools and technologies

Blackeby: As a ratio, you might move from a scenario where a support person looks after an average of 40 servers to one where they’re managing 100-plus servers, but it’s only through the deployment of tools and technologies that we can do that.

But at the same time, we still have a large legacy estate and legacy clients that we need to support. So it’s really about how we engineer our processes so that, irrespective of whether we’re talking about legacy physical server workloads, on-premises virtualized workloads, or things spun up in Amazon Web Services or Microsoft Azure public cloud environments, we provide a consistent level of service delivery, irrespective of where the service is located or in what format it is delivered back to the customer or users.

Gardner: When I speak to developer organizations and IT operations organizations, they’re seeing a compression and a large degree of collaboration between development and operations — thus, the DevOps trend.

But when I listen to you, I’m also hearing a compression between operations and help desk that benefits the entire IT process — the more automated and software-defined it is, and the more data that’s made available, the tighter that compression seems to get. Am I right in seeing this idea of help desk, support, and operations becoming more collaborative, more tightly aligned?

Laird: The whole concept of the operations team being hidden away in a back room and the service desk being the public face is changing. They’re becoming much more tightly aligned. Things that the operations team is doing have an almost immediate impact on what the service desk is looking at, and the service desk needs to have access to really all the information the operations team has got.

When the user is on the phone and has a problem with a service, it’s good if the service desk can actually say, “Yes, we know there’s a problem and we know what the problem is. We have an estimated fix time of 15 minutes.” That gives the user the warm feeling that you’re in control and you know what you’re doing.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


Panel explores how the IT4IT Reference Architecture acts as a digital business enabler

The next BriefingsDirect expert panel discussion examines the value and direction of The Open Group IT4IT initiative, a new reference architecture for managing IT as a business.

IT4IT was a hot topic at The Open Group San Francisco 2016 conference in January, and the enterprise architect and IT leader attendees examined it from a variety of different angles. This panel, conducted live at the event, elevates the IT4IT discussion to the level of enabling digital business value.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

And so to learn more about how IT4IT aids businesses, we are joined by Chris Davis, Professor of Information Systems at the University of South Florida and also Chairman of The Open Group IT4IT Forum; Lars Rossen, a Distinguished Technologist at Hewlett Packard Enterprise (HPE) and a chief architect for the IT4IT program; Ryan Schmierer, Business and Enterprise Architect for IT at Microsoft; and David Wright, Chief Strategy Officer at ServiceNow. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: I hear IT4IT described as a standard, a framework, a methodology, and a business-enabler. Chris, is it all of those, is it more, is this a whole greater than the sum of the parts? Help us understand the IT4IT potential.

Davis: It could be seen as all of those. I have been academically in this space for 20 to 25 years, and the thing that is different, the thing that adds potential to this is the value-chain orientation.


As well as being a really potent technical standard, we’ve abstracted this to levels that can be immediately appreciated in the C Suite. People like Kathleen come along, they see it and get it, and that provides some traction. That is a very positive thing, and will enable us to pick up speed as people like Toine invite real penetration down to the CMDB level and so on.

We have this multilayer view. Lars and I articulated it as levels of abstraction, but I think the integration of Mike Porter’s stuff really adds some perspective to this technical standard that maybe isn’t present or hasn’t been present in other frameworks and tools.

Gardner: And as we explain this up the value chain into the organization, do you expect that IT4IT is something you would take to a board setting environment and have them understand this concept of a value stream and consolidating around that?

Davis: Yeah, I do. Some of the observations made yesterday about the persistence of models like value chain, value stream, and so on still make enormous sense to people at the CIO level. That enables the conversation to begin, and it also provides the ability to see where — and how much of — the standard fits: which particular value streams, and where in the organization the various parts and perspectives belong.

As well as being very potent and very prescriptive, we have that conceptual agility that the standard provides. I find it exciting and quite refreshing.

Organic development

Gardner: Lars, one thing that’s also interesting to me about IT4IT is that this was an organic development within IT organizations, for and by them. Tell us how, at HPE, you developed this, and why it was a good fit for The Open Group as a standardization process?

Rossen: A couple of things made us kick this off, together with Shell initially and then a lot of members came over the years. For us in HPE, it was around consumption of our toolsets. That’s where I came from.


I was sitting on the portfolio group and I said, well, we’re all drawing all of these diagrams around how it could fit together and we have these endless discussions with customers about whether this was right or this was wrong. I was completely disagreeing with all our friendly partners, as well as not so friendly competitors, about what was the right diagram.

Putting this into the open — and we chose Open Group for that particular reason; they have shown in the past that they can create these kinds of things — allowed us to have that common framework for defining the To-Be architecture for our customers. That simply made it much easier for us to sell our product suite. So it made a lot of business value for us.

And it also made it much easier for our consultancy service. We didn’t have to argue about the To-Be architecture; it was a given. Then, we can talk about how to actually implement it, which is much more interesting.

Gardner: And while we are speaking about HPE and your experience there, do you have any tangible metrics of success as to how this improved? You went through a large business separation of IT departments; that must have been a difficult process. Was there anything that the IT4IT approach brought to that particular activity that you can point to as a business driver or business benefit?

Rossen: I can. A very large organization is compartmentalized in many different ways, and you could say, well, how do all of these units interchange and work with each other, because it goes both ways; it’s not only the split, but it’s also all the acquisitions we’ve been doing over the years.

And then we have the framework that we can use and plot things in to, and we have a standardized toolset we can use and reuse over and over again.

Before we had IT4IT, we counted how many integrations we had between our various IT management products, and it ran to about 500. With IT4IT, we can drill down and see that there are only about 50 that are really interesting. Then, we can double down on those. We can now measure how much those are being consumed moving forward, both internally within our service practice and with our customer base.

Gardner: Ryan, at Microsoft, I’m wondering about Bimodal IT and Shadow IT. Because you perhaps have a more concentrated view on IT and you can control your organization, you may not have that problem — or maybe you do. Is there any degree of Bimodal IT or Shadow IT within your IT organization, have you addressed it, and has IT4IT been of use in that direction?

Consistency and repeatability

Schmierer: First, starting with the idea of Bimodal IT, we go back to some of the research and the thoughts coming from Gartner over the last couple of years about different parts of IT needing to work at different paces. Some need to be more agile and work faster; others need to be the foundational stalwarts of the organization, providing that consistency and that repeatability that we need.


At Microsoft, we tend to look at it a little bit differently. When you think about agile versus waterfall, it’s not a matter of one versus the other. Should we do one or the other? There’s a place for both of these. They are tools within our toolbox. Within IT, there are places where we want to move in a more agile way — where we want to move faster. There are also certain activities where waterfall is still an excellent methodology to drive the consistency and predictability that we need.

A good example of that comes with large releases. We may develop changes or features in a very agile way, but as we move towards making large changes to the business that impact large business functions, we need to roll those changes out in a very controlled, scripted way. So, we take a little bit different look at Bimodal than some companies do.

Your other question was on Shadow IT. One of the things we have challenged a lot over the last year or so is this concept of the role of the IT organization relative to the rest of the enterprise. As we think about that, we’re not thinking about IT as a service provider to the enterprise, but as a supporting function to the enterprise.

What does that mean? It means Shadow IT doesn’t exist. It just happens to be someone else within the organization providing that function. And so it becomes less of a question of controlling and preventing Shadow IT and more of embracing that outside-in approach and being able to assimilate those changes and coordinate them in a more structured way to manage things like risk and security.

Gardner: Well, we have heard that there’s a bridging of siloes benefit to IT4IT in either Bimodal or Shadow IT. Can you relay a way in which IT4IT helped you bridge silos and consolidate culturally and otherwise your IT efforts?

Schmierer: Absolutely. Very similar to some of the experiences that Lars explained at HPE, at Microsoft we’ve had a number of different product groups focusing on different products and solutions and service suites over the last few years.

As we’ve moved to more of a One Microsoft approach, we’re looking at how to bring the organization and the enterprise together in a cohesive way.

IT plays a role in enabling that as a supportive function to the company and the IT4IT standard has been a great tool for us to have a common talking point, a common framework, to bridge those discussions about not only what we do internally within IT, but how the things that we do internally relate to the products and services that we sell out into the marketplace as well. Having that common framework, that common taxonomy, is not just about talking with customers; it’s about talking internally and getting the entire enterprise aligned.

Business service management

Gardner: Dave, as organizations are working at different paces toward being digital businesses, they might look to their IT organizations for leadership. We might, as a business, want to behave more like our IT organizations.

At ServiceNow I have heard you describe IT service management (ITSM) as one step toward business service management (BSM), rather than just ITSM. How do you see the evolution from ITSM to business service management and a digital business benefit? And how do you foresee IT4IT aiding and accelerating that?

Wright: The interesting thing about IT4IT is the fact that it conceptualizes the whole four stages that people go through on the journey. I suppose you could say the gift that ITIL gave IT was to give it an operational framework to work with.


Most other parts of the business haven’t got an operational framework. If you want to request something off most parts of the business, you will send them an email. If you want something off legal, you want something off marketing, send them an email. They haven’t got a system where they can request something.

If we take some of the processes described in IT4IT and publish that in a business-service catalog, you effectively allow everyone to have a single system of engagement. They might have their own back-end systems, they might have their own human capital management system, their own enterprise resource planning (ERP) system, but how do you engage and link all those companies together?

The other thing that IT has learned over a number of different implementations is how important the experience becomes, because if you can generate an experience where people want to use it, that’s what’s going to drive adoption of it as a function.

Let’s take this room as a whole. If we all sat together and built Uber, it would be crap. It would be really good for the taxi drivers, but it would be terrible for the people who actually wanted to request the service, and that’s because we tend to build everything from the inside out.

The fact we have now got a way to elevate that position and look at it from above, and understand all those components, and be able to track all those components from start to finish, and give people visibility in where you are in that process, that’s not just a benefit to IT; that’s a benefit to anyone who provides a service.

Gardner: As we also explore ways that we can evangelize and advocate for this in our organizations, it’s helpful to have places where it works first, the crawl-walk-run approach. Chris, can you help us understand areas where applying IT4IT early and often as a beachhead works?

Need and competence

Davis: Where you have the need and the competence. Back to my earlier point about how the standard can be envisioned, and the point that David just made, what we offer in IT4IT is something that’s not only prescriptive and ready to hand, but it’s also ready to mind, so people get it very quickly.

The quick wins are the important ones, not necessarily the low-hanging fruit, but the parts of the business where opportunities like the ones that David just suggested — if we were to try to do something like Uber — that would be too much.

If somewhere in an organization like Microsoft — where Kathleen is in charge — there is a group that can gain rapid traction, that would be most effective. Then comes the telling of the early success stories; the work by Toine, showing how the architecture was useful at Rabobank from the early stages of its development, adds momentum.

Gardner: Lars, same question, where did you see this as getting traction best? Maybe it’s new efforts, greenfield application development, mobile-first type development, or maybe it’s some other area. Where might you point to as a great starting point to build this into an organization?

Rossen: It’s pretty simple, actually. We’ve done more than 50, maybe 100, engagements now using the IT4IT model with our customer base. Very often, it’s central IT. It comes out of saying, “We’re too inconsistent.” It’s the automation story that comes first, and then typically you end up in a discussion around Detect to Correct. It’s a familiar area, and people understand the various components involved in it.

But it goes back to what I mentioned before: the layered approach allows us to go in with a single slide. We can put it up in large format on the wall, and you can start to put Post-It notes on it. You don’t need to understand architecture. That means we can have decision makers coming in, and we break down a lot of siloes in the operations area, just with Detect to Correct. That’s where 99 percent of our engagements have been starting.

Then, the Request to Fulfill with the experience is where people want to go. That’s the Holy Grail, or one of the Holy Grails. There are actually two Holy Grails, and that’s just one of them. The other one is to be able to do Strategy to Portfolio, and no longer just say, “I have this application and I need to move it to the next version or whatever.” It’s understanding what are the services, not the applications, but the services I’m delivering to the business.

It isn’t until you have the value streams more in order that you can start building up that service backbone that is so crucial to IT4IT.

Gardner: Is there an element of educating the consumer of IT in an enterprise to anticipate services differently? Ryan, when you mentioned earlier the Request to Fulfill value stream, I can understand how that makes a great deal of sense from IT out to the organization. But do people have to make an adjustment in order to receive things as a value stream, to consume them, to think of asking things through the lens of your being a broker organization? What must we do to educate and help the consumer of IT understand that it might be a different ballgame?

Reducing friction

Schmierer: We need to start with the goal of reducing friction within the organization. Consumers of IT are operating in a changing landscape. I talked earlier about the network effect and how the environment is constantly evolving, constantly changing. As it does, the needs and desires of the people consuming technology and information will continue to change.

Request to Fulfill helps provide the mechanics for a corporate IT organization to become that broker of services. But if we look at that from a consumption perspective (from the users of services) it’s all about enabling them to change their mind, change their needs, change their business processes faster, and removing the friction that exists within the process of provisioning today.

If something is a new technology that they want to bring into their organization, because they see a potential to it, how do we get that in there faster? The whole Request to Fulfill value stream is about accelerating the time to value for new technology coming into the organization and reducing the friction of the request process.

Gardner: Dave, anything to offer on that same side, the consumption side, rather than the delivery perspective?

Wright: We’re getting this breakdown now, where people are saying that it’s not about the configuration items (CIs); it’s about the services those CIs support — how you can move from a CI-centric CMDB to a service-centric CMDB, and how people can map those relationships. The whole consumption side of it is flipping now, as people’s expectations come into line.

The other thing I’ve found specifically with the IT4IT concept is that people start to put together a kind of business logic very quickly around things. They’ll look at the whole process. Someone said to me a few weeks ago, “If I understand the cost elements of each of those, I truly know what that service costs. Could I move to actually managing my system based on what it’s costing the business, not the fact that a server has a problem or there’s a red light? It’s costing me x amount of dollars a minute for this to be down, and I’ve spent this much money actually building it and getting it out.” But you have to have all those elements tied in, all the way from the portfolio element right through to the run element.
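
The cost roll-up David describes follows directly once the CMDB is service-centric: if each CI supporting a service has a known cost, the cost of the service — and of a minute of downtime — can be derived rather than guessed. A minimal sketch, with entirely invented figures and a hypothetical service-to-CI mapping:

```python
# Hypothetical per-CI running costs and a service-centric CMDB mapping.
CI_COSTS_PER_HOUR = {"web-frontend": 12.0, "app-server": 20.0, "database": 35.0}
SERVICE_MAP = {"online-ordering": ["web-frontend", "app-server", "database"]}

def downtime_cost(service, minutes, revenue_per_minute=150.0):
    """Cost of an outage: infrastructure cost for the duration plus
    the business revenue lost per minute the service is down."""
    infra = sum(CI_COSTS_PER_HOUR[ci] for ci in SERVICE_MAP[service]) / 60 * minutes
    return infra + revenue_per_minute * minutes
```

The point is not the arithmetic but the prerequisite: without the portfolio-to-run linkage, neither the service map nor the per-minute business cost exists to plug in.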

Gardner: So it really seems as if it also offers a value of rationalization, prioritization, but in business terms rather than IT terms. Is that correct?

Rossen: Correct.

Gardner: As I try to figure out where this will work best, early, and often, not only would we look at specific parts of IT within an organization, but we might also look at specific companies — as a culture, as a type of company — and at vertical industries. I’ll go back to you, Dave, because ServiceNow has a fairly horizontal view across many different companies. Are there particular types of companies, or particular vertical industries, where adoption of IT4IT makes sense first?

Holistic process

Wright: The people I have seen who would be most disciplined about wanting to look at things holistically, right across the whole gamut, have been the pharmaceutical companies. Pharmaceutical companies have come along and they’re obviously very regimented, in the same way financial firms are. They’re the people who seem to be the early adopters of looking at this holistic process.

If I look at customers, the people adopting it first, at a low level, tend to be the financial institutions, but after that the conversation tends to go to pharmaceuticals. I don’t think any one business has really nailed it, but this is a challenge for every company. Every company has an IT division and runs IT, but their business isn’t to run IT; their business is to provide financial services or develop drugs.

Looking at what processes people do to drive their core business, the people who are very regimented and disciplined tend to be the people who are saying there has to be a way we can gain more visibility into what we’re doing from an IT perspective.

Gardner: Ryan, thoughts on the similar question about where this is applicable either as a type of company or a vertical industry?

Schmierer: I’d look at who is most threatened by the changes going on in the world today. Where are cost pressures to drive efficiencies most prevalent because they’re going to have the most motivation to change quickly? I’d also look at companies that were early adopters of IT who, through their early adoption, have ended up with a lot of legacy debt that they’re trying to manage and they now need to rationalize that in order to get their total IT cost profile down.

In terms of specific verticals, there are pockets within each vertical or industry where there are opportunities. I’d look at it from a scale perspective. Going back to the scale model I shared this morning about the different sizes of organizations: a lot of small organizations don’t need this, and a lot of start-ups can build it into their DNA. Companies with more legacy — more mature enterprises — have a more fundamental need for this type of structure and are going to be able to reap some benefits more quickly or with only a few pieces of it.

It’s a scale question and it’s a risk question. Who is under the most pressure to improve their cost performance?

Gardner: So if I do IT4IT correctly, how might I know a few months down the road — six months, a quarter or two — that I can attribute improvement to that particular activity?

Rossen: There are a couple of different things. Within IT4IT, we’re actually trying to make more concrete key performance indicator (KPI) assessments of what would make sense to measure. More abstractly: are you really embracing the multi-supplier options that reside in IT4IT? That’s one of the reasons we kicked it off. Shell has some good examples of what it costs to integrate a supplier — typically a tremendously high cost, because you have to design how to exchange an incident over and over again; with IT4IT, that becomes much more reusable.

That’s a place where you see that the cost of working with your partners should go down, and you can become a service broker. That’s a particular area where we see benefits very quickly. But it also comes back to the original question: the typical companies that want to pick this up are the ones really feeling the pain that IT is no longer centralized. It’s lines-of-business IT, it’s central IT, it’s suppliers, and you yourself are supplying to others. If you have that problem, then IT4IT is really good for you, and you can quickly see benefits.

Gardner: Chris, thoughts on this notion of how do I attribute benefits in my IT organization at the business level to IT4IT?

Holy Grail for academics

Davis: This has been another Holy Grail for academics. We go all the way back to the 1970s constructive cost model and things like that. Lars hit the nail on the head. The other thing is what Cathleen said this morning. It will be less easily measured, more easily sensed, there will be changes in mindsets and so on. So it’s very difficult to articulate and measure, but we’re working on ways to make it much more tractable.

Wright: I’ve been implementing ITSM systems since the mid-90s, but we still do one thing the same way, and it’s truly weird; you’re kind of hitting on it with this question. Can we define the outcomes?

Whenever anyone undertakes a project like this, they decide they’re going to completely redefine the way that IT manages itself as a business. You probably should design the outcomes and the metrics that you want before you put the system in. Almost everyone I can ever remember implements a system and goes “Cool, let’s write some reports.” And then you take the reports you can get and say, “We’d like a report that shows this,” and the consultant who put it in says, “Oh, you can’t get that.”

If only you stepped back and said, “Let’s think about what we want and build a system that delivers that data,” it would provide a lot more value to the business.

Gardner: Well, I’ve had a chance to ask lots of questions. Let’s go now to our architects, the people in the trenches. Dave Lounsbury, CTO at The Open Group, help us out with some practical approaches to implementing IT4IT.

Lounsbury: First off, I want to mention that it’s really gratifying to see that new participants like Ryan and David come in and adopt this technology, and give us their insights. So thank you very much for participating, as well as our legacy folks. IT always has a legacy, right?


Each speaker mentioned the need for better data management as part of this process, and so this is a governance issue. And who in these evolving organizations should be responsible for data governance; is it the business, is it IT, is it a third entity that should be doing that? Any thoughts on that?

Schmierer: Let me take that one. We need to start by rethinking the idea of data governance. We’re trying to govern the data because we’re creating too much of it. We’re spending far too much time adding overhead tasks to people who need to do their day jobs, people who are trying to execute on the value stream, in order to generate the data needed for decision-making. When we don’t get the data that we’re looking for to drive decisions, we apply governance and pile more overhead on top of it.

As we think about IT4IT and the fact that we have a value stream and a separate set of supporting functions, it gives us an opportunity to ask “How can we reduce the amount of data required to be generated within the value stream itself?”

Take the extra data points that someone collects as part of a request, or the status updates created as part of a project or an agile release: how do we get to the point that we can derive those from the operational systems themselves and let people just do their jobs? If we’re not asking people to manually create data, there’s no need to create governance processes for it. That’s why IT4IT has a lot of value here. We’re going to get better-quality data by making people’s jobs easier.

Service backbone

Rossen: I’d like to answer that, very much in line with what you’re saying. One of the purposes of the service backbone is that everything relates back to it. If you really follow it, everything is available: any log message, any incident, any report, or any set of data from development can all be related back to the conceptual service. Then you can have fun creating whatever reports you want, without adding any overhead for the individuals in the value chain.

Lounsbury: Can you elaborate on how best to address the people and mindset shifts you need to make as you transition to this kind of a model?

Schmierer: From a Microsoft perspective, it starts with valuing the individuals, the contributions they’ve made to the organization, and the opportunity for them to be a part of the future where the company is going. We need to make sure that we talk with individuals and reinforce that they are valuable and appreciated.

Change is always difficult. When you talk about changing skill sets, asking people to learn new skills, adopting new ways of working, it’s uncomfortable. We’re moving people out of their comfort zone and asking them to do something new. But I don’t think this one is difficult at all; it’s basic. Appreciate your people and tell them thank you.

Lounsbury: So given a complex service request demand by a business user, how will IT4IT assist me in designing a service with say, five different vendors?

Rossen: Well, the first thing is that this falls within Strategy to Portfolio (S2P), which is really where such a thing comes in: it’s a new service that needs to be introduced. We now have the framework for working on the conceptual service that will make up whatever is requested. But everybody in the room should appreciate the fact that we’re not throwing away all the good stuff around TOGAF and architecture in general for the business. If it’s a very complex thing, you need to have an enterprise architecture worked out for it.

But it feeds into the pipeline of executing it. You can split it up into projects, and you can still track them as being part of the bigger thing, but it does lead to that. A very important thing in IT4IT, and in the industry in general, is that you design small things that have dependencies on each other, so one service depends on another service and so on. It’s not just an app on top of the infrastructure or a platform. It becomes much more complex in that respect, but it’s the way the industry is going.

Lounsbury: What are the most important steps a small-to-medium sized enterprise (SME) could take to move to this service broker model that’s been advocated in IT4IT?

Wright: If it’s an SME, typically they’re going to be using multiple systems coupled together. There won’t be any real formality around it. But the first thing for them is to get a common place where they can go and request these services. So that catalog is going to be structured in a way that’s easy to use.

I have a funny story. We were looking at how we designed UI/UX for our customers to interact with software, and we hired a group of people who were 23 or 24 years old to build the UI. We were showing one of them a standard service-management process you go through, and he said it was very complex, and I said it was. He asked how people learn to use it. I said, “What typically happens is you roll the system out and then you send all your users on a training course.” He was horrified. He said, “You’re allowed to write software that’s so bad you have to train people how to use it?” I said, “Yes, I’ve made a good living for 25 years doing that.”

Service catalog

The point of a catalog, especially in a smaller business where you’ve perhaps got a younger workforce, more rapid turnover, or the potential to expand, is to develop systems where you don’t have to train people how to use them, where it’s very intuitive.

I go into it, I request something, and then something pops up: I’ve got a task I need to do. It’s not going in and sorting through records, wondering what it all means and why I’ve got 300 fields on the form and a couple of tabs to go through. It’s making work as simple as possible; that’s what’s going to drive the adoption of this.

But at a high level, what really drives the adoption is the visibility of the end result you get from this, having that clarity of information. Everyone in this room is used to seeing incidents by category, so you can see the percentage of your time you’re spending on hardware issues or on software upgrades. No other part of the business, especially in this consolidated business model, can see that.

If you go to human resources and ask for a breakdown of percentages, how much you spend on each different type of task, you’ll get some tribal knowledge ballpark figures. Same for legal, same for finance. Everyone who has been there for a while knows it, but there are no metrics. If you can provide those metrics at a top level, that just drives it further and further into the organization.

Lounsbury: One more, okay, so which one to choose? And of course people will be able to interact with these folks at the breaks and at our evening reception if I don’t get to your question. So how does IT4IT help in a situation where a company is trying to eliminate a data center and move to the public cloud? As a broker of services who owns the system integration and process services, how does that flow in the IT4IT model?

Rossen: I’ll take the first crack. Again, it’s a classical scenario around rationalizing your portfolio: do I outsource it, do I move the infrastructure to the cloud, do I still maintain the actual application, and so on. You can’t make these decisions without insight into what you’re actually running and how it’s being consumed. What business value does it bring (which goes back to Strategy to Portfolio)? What conceptual services do you have, how are they currently implemented, how are they running, what is the quality, how many consumers are on them?

If you have that data, it’s actually fairly easy to make these decisions. But typically, in most organizations, these exercises require 60 spreadsheets and half a calendar year with 60 people trying to figure it out, and in the meantime it’s not really correct, right? That’s again because you don’t have a service backbone; you don’t really have connected information. Implementing IT4IT will let you make these decisions much more easily.

Schmierer: Let me add onto that a little bit. As we talked about, if you want to move something to the cloud, how can IT4IT help? We have to remember that this is an area where the industry is evolving; we haven’t got it all figured out yet. IT4IT is a great starting point for the conversation with the folks helping you with system integration and with your cloud service provider, to step through what needs to change and what needs to be done differently: what are the things that the consuming IT organization no longer needs to do because the cloud service provider is doing them?

For now, start by using IT4IT as a checklist, use it as a starting point for brokering the conversation to ask if we’ve thought about everything. Over time, this will get repeatable — it will become a common pattern, and we’ll just know and won’t need to have that conversation. But for now, IT4IT is a great reference model to help us have that conversation.

Gardner: Would it not make sense for you as a consumer of cloud services to wonder whether your cloud provider is using IT4IT and wouldn’t that give you a common denominator by which to pursue some of these benefits?

Tool certification

Rossen: That would certainly come in the future, when we get to tool certification within The Open Group. A cloud provider could also be certified, saying: if you use my service, I can provide you with an incident interface according to the standard, so it’s easy to hand issues back and forth, to take one example.

Gardner: Any more to offer from anyone?

Schmierer: One thing I can offer is this: since the IT4IT standard launched in Edinburgh three months ago, I can’t tell you how many emails I’ve received from our account teams and from customers asking exactly this question.

Customers are asking the question about IT4IT, how it plays into the service provider landscape and how they can use it to drive the conversation. So the word is getting out, and the best thing you can do as a consumer of this stuff, as you go work with different service providers is to ask the questions, and ask their opinion and their thoughts on it.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: The Open Group.



How Etsy uses big data for ecommerce to put buyers and sellers in the best light

The next BriefingsDirect big data case study discussion explores how Etsy, a global e-commerce site focused on handmade and vintage items, uses data science to improve buyers and sellers’ discovery and shopping experiences.

We’ll learn how mining big data at speed and volume helps Etsy define and distribute top trends, and allows those with specific interests to find items that will best appeal to them.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn more about leveraging big data in the e-commerce space, please welcome Chris Bohn, aka “CB,” a Senior Data Engineer at Etsy, based in Brooklyn, New York. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tell us about Etsy for those who aren’t familiar with it. I’ve heard it described as being like going through your grandmother’s basement. Is that fair?

CB: Well, I hope it’s not as musty and dusty as my grandmother’s basement. The best way to describe it is that Etsy is a marketplace. We create a marketplace for sellers of handcrafted goods and the people who want to buy those goods.

We’ve been around for 10 years. We’re the leader in this space, and we went public in 2015. Just some quick metrics: the total value of the merchandise sold on Etsy in 2014 was about $1.93 billion. We have about 1.5 million sellers and about 22 million buyers.

Gardner: That’s an awful lot of stuff that’s being moved around. What does the big data and analytics role bring to the table?

CB: It’s all about understanding more about our customers, both buyers and sellers. We want to know more about them and make the buying experience easier for them. We want them to be able to find products more easily. Too much choice sometimes is no choice. You want to get them to the product they want to buy as quickly as possible.

We also want to know how shopping habits differ across the world. People in some countries transact differently than we do here in the States, and big data lets us get some insight into that.

Gardner: Is this insight derived primarily from what they do via their clickstreams, what they’re doing online? Or are there other ways that you can determine insights that then you can share among yourself and also back to your users?

Data architecture

CB: I’ll describe our data architecture a little bit. When Etsy started out, we had a monolithic Postgres database and we threw everything in there. We had listings, users, sellers, buyers, conversations, and forums. It was all in there, but we outgrew that really quickly, and so the solution to that was to shard horizontally.


Now we have many hundreds of horizontally sharded MySQL servers. Then we decided that we needed to do some analytics on this stuff. So we scratched our heads. This was about five years ago. We said, “Let’s just set up a Postgres server and copy all the data from these shards into it.” We call that the BI server. And we got that done.
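As an illustration of what horizontal sharding looks like in practice, here is a minimal sketch (the host names and hashing scheme are invented, not Etsy’s actual implementation): a deterministic function of the shard key decides which MySQL server holds a given user’s rows.

```python
import hashlib

# Invented shard fleet; a real deployment would load this from config.
SHARD_HOSTS = [f"mysql-shard-{i:03d}" for i in range(200)]

def shard_for(user_id: int) -> str:
    """Map a user id to the shard that holds that user's rows.

    Hashing (rather than ranges) spreads hot users evenly; the same
    id always lands on the same host, so lookups need no directory.
    """
    digest = hashlib.md5(str(user_id).encode()).hexdigest()
    return SHARD_HOSTS[int(digest, 16) % len(SHARD_HOSTS)]

print(shard_for(42))                      # always the same host for user 42
print(shard_for(42) == shard_for(42))     # deterministic routing
```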

Then, we kind of scratched our heads and said, “Wait a minute. We just came full circle. We started with a monolithic database, then we went sharded, and now all the data is back monolithic.”

It didn’t perform well, because it’s hard to get the volume of big data into that database. A relational database like Postgres just isn’t designed to do analytic-type queries. Those are big aggregations, and Postgres, even though it is a great relational database, is really tailored for single-record lookup.
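To illustrate what “analytic-type queries” means here, the sketch below (invented schema, using SQLite purely as a stand-in SQL engine) runs the kind of whole-table aggregation that a row store answers by reading entire rows, while a columnar store like Vertica reads only the two columns involved.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (buyer_id INT, country TEXT, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, "US", 20.0), (2, "US", 35.0), (3, "DE", 15.0)])

# A typical analytic query: scan every row and aggregate two columns.
rows = conn.execute(
    "SELECT country, COUNT(*), SUM(total) FROM orders "
    "GROUP BY country ORDER BY country"
).fetchall()
print(rows)  # [('DE', 1, 15.0), ('US', 2, 55.0)]
```

The single-record lookups Postgres is tailored for (`WHERE buyer_id = 2`) touch one row via an index; the aggregation above touches all of them, which is where column storage pays off.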

So we decided to get something else going on. About three-and-a-half years ago, we set about searching for the replacement to our monolithic business-intelligence (BI) database and looked at what the landscape was. There were a number of very worthy products out there, but we eventually settled on HPE Vertica for a number of reasons.

One of those is that Vertica derives, in large part, from Postgres. Postgres has a Berkeley (BSD) license, so companies can take it private: they can take that code and don’t have to republish it to the community, unlike other open-source licenses.

So we found that the parser was right out of Postgres, and all the date handling and typecasting that is usually different from database to database was spot-on the same between Vertica and Postgres. Also, data ingestion via the COPY command, which is the best way to bulk-load data, is exactly the same in both, down to the format.

We said, “This looks good, because we can get the data in quickly, and queries will probably not have to be edited much.” So that’s where we went. We experimented with it and we found exactly that. Queries would run unchanged, except they ran a lot faster and we were able to get the data in easily.

We built some data-replication tools to get data from the shards, and also from some legacy Postgres databases that we had lying around for billing, and got all that data into HPE Vertica.

Then, we built some tools that allowed our analysts to bring over custom tables they had created on that old BI machine. We were able to get up to speed really quickly with Vertica, and boom, we had an analytics database we could hit the ground running with.

Gardner: And is the challenge for you about the variety of that data? Is it about the velocity that you need to move it in and out? Is it about simply volume that you just have so much of it, or a little of some of those?

All of the above

CB: It’s really all of those problems. Velocity-wise, we want our replication system to be eventually consistent, and we want it to be as near real-time as possible. There is a challenge in that, because you really start to get into micro-batching data in.

This is where we ended up having to pay off some technical debt, because years ago, disk storage was fairly pricey, and databases were designed to minimize storage. Practices grew up around that fact. So data would get deleted and updated. That’s the policy that the early originators of Etsy followed when they designed the first database for it.


What we have now is lossy data. If someone changes the description or the tags associated with a listing, the old ones go away; they are lost forever. And that’s too bad, because if we had kept those, we could do analytics on a product that wasn’t selling for a long time and then suddenly started selling. What changed? We would love to do analytics on that, but we can’t, because the data is gone. That’s one thing we learned in this whole process.

But getting back to your question here about velocity and then also the volume of data, we have a lot of data from our production databases. We need to get it all into Vertica. We also have a lot of clickstream data. Etsy is a top 50 website, I believe, for traffic, and that generates a lot of clicks and that all gets put into Vertica.

We run big batch jobs every night to load that. It’s important that we have that, because one of the biggest things that our analytics like to do is correlate clickstream data with our production data. Clickstream data doesn’t have a lot of information about the user who is doing those clicks. It’s just information about their path through the site at that time.

To really get a value-add on that, you want to be able to join on your user-details tables, so that you can know where this person lives, how old they are, and their past buying history. You need to be able to join those, too, and we do that in HPE Vertica.
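A minimal sketch of that join (invented schema, with SQLite standing in for Vertica): the anonymous clickstream rows gain meaning only once they are joined to the user-details table.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE clickstream (session_id TEXT, user_id INT, page TEXT);
CREATE TABLE user_details (user_id INT, country TEXT, age INT);
INSERT INTO clickstream VALUES ('s1', 1, '/listing/123'), ('s2', 2, '/cart');
INSERT INTO user_details VALUES (1, 'US', 34), (2, 'FR', 27);
""")

# Clickstream alone only shows a path through the site; the join adds
# who the visitor is (country, age, and in practice buying history).
rows = conn.execute("""
    SELECT c.page, u.country, u.age
    FROM clickstream c
    JOIN user_details u ON c.user_id = u.user_id
    ORDER BY c.session_id
""").fetchall()
print(rows)  # [('/listing/123', 'US', 34), ('/cart', 'FR', 27)]
```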

Gardner: CB, give us a sense about the paybacks, when you do this well, when you’ve architected, and when you’ve paid your technical debts, as you put it. How are your analysts able to leverage this in order to make your business better and make the experience of your users better?

CB: When we first installed Vertica, it was just a small group of analysts that were using it. Our analytics program was fairly new, but it just exploded. Everybody started to jump in on it, because all of a sudden, there was a database with which you could write good SQL, with a rich SQL engine, and get fantastic results quickly.

The results weren’t that different from what we were getting in the past, but they were just coming to us so fast, the cycle of getting information was greatly shortened. Getting result sets was so much better that it was like a whole different world. It’s like the Pony Express versus email. That’s the kind of difference it was. So everybody started jumping in on it.

More dashboards

Engineers who were adding new facets of the product wanted dashboards, more or less in real time, so they could monitor what the thing was doing. For example, we added postage to Etsy, so that our sellers can have preprinted labels. We’d like to monitor that in real time to see how it’s going. Is it going well or not?

That was something that took a long time to analyze before we got into big-data analytics. All of a sudden, we had Vertica and we could do that for them, and that pattern has repeated with other groups in the company.

We’re doing different aspects of the site. All of a sudden, you have your marketing people, your finance people, saying, “Wow, I can run these financial reports that used to take days in literally seconds.” There was a lot of demand. Etsy has about 750 employees and we have way more than 200 Vertica accounts. That shows you how popular it is.


One anecdotal story. I’ve been wanting to update Vertica for the past couple of months. The woman who runs our analytics team said, “Don’t you dare. I have to run Q2 numbers. Everybody is working on this stuff. You have to wait until this certain week to be able to do that.” It’s not just HPE Vertica, but big data is now relied on for so many things in the company.

Gardner: So the technology led to the culture. Many times we think it’s the other way around, but having that ability to do those easy SQL queries and get information opened up people’s imagination, but it sounds like it has gone beyond that. You have a data-driven company now.

CB: That’s an astute observation. You’re right. This is technology that has driven the culture. It’s really changed the way people do their job at Etsy. And I hear that elsewhere also, just talking to other companies and stuff. It really has been impactful.

Gardner: Just for the sake of those of our readers who are on the operations side, how do you support your data infrastructure? Are you thinking about cloud? Are you on-prem? Are you split between different data centers? How does that work?

CB: I have some interesting data points there for you. Five-plus years ago, we started doing Hadoop stuff, and we started out spinning up Hadoop in Amazon Web Services (AWS).

We would run nightly jobs. We collected all of the search terms that were used and buying patterns and we fed these into MapReduce jobs. The output from that then went into MATLAB, and we would get a set of rules out of that, that then would drive our search engine, basically improving search.
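The shape of such a nightly job can be sketched in a few lines (toy data, not Etsy’s actual pipeline): map each search query to (term, 1) pairs, then reduce by summing the counts per term.

```python
from collections import Counter
from itertools import chain

# Invented sample of collected search queries.
queries = ["vintage lamp", "handmade lamp", "vintage ring"]

def map_phase(query):
    """Map step: emit a (term, 1) pair for every term in the query."""
    return [(term, 1) for term in query.split()]

# Reduce step: group by term and sum the counts.
mapped = chain.from_iterable(map_phase(q) for q in queries)
reduced = Counter()
for term, count in mapped:
    reduced[term] += count

print(reduced.most_common(2))  # [('vintage', 2), ('lamp', 2)]
```

In the real pipeline, the map and reduce phases run across a cluster over far more data; the per-term aggregates are what downstream tools (MATLAB, in the account above) turn into search-ranking rules.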

Commodity hardware

We did that for a while and then realized we were spending a lot of money in AWS. It was many thousands of dollars a month. We said, “Wait a minute. This is crazy. We could actually buy our own servers. This is commodity hardware that this can run on, and we can run this in our own data center. We will get the data in faster, because there are bigger pipes.” So that’s what we did.

We created what we call Etsydoop, which has got 200+ nodes and we actually save a lot of money doing it that way. That’s how we got into it.

We really have a bifurcated big-data analytics system. On the one hand, we have Vertica for ad hoc queries, because the analysts and the people out there understand SQL and demand it. But for batch jobs, Hadoop rocks; it’s really, really good for that.

But the tradeoff is that those are hard jobs to write. Even a good engineer is not going to get it right every time, and for most analysts, it’s probably a little bit beyond their reach to get down, roll up their sleeves, and get into actual coding and that kind of stuff.

But they’re great at SQL, and we want to encourage exploration and discovering new things. We’ve discovered things about our business just by some of these analysts wildcatting in the database, finding interesting stuff, and then exploring it, and we want to encourage that. That’s really important.

Gardner: CB, in getting to understand Etsy a little bit more, I saw that you have something called Top Trends and Etsy Finds, ways that you can help people with affinity for a product or a craft or some interest to pursue that. Did that come about as a result of these technologies that you have put in place, or did they have a set of requirements that they wanted to be able to do this and then went after you to try to accommodate it? How do you pull off that Etsy Finds capability?

CB: A lot of that is cross-architecture. Some of our production data is used to find that. Then, a lot of the hard crunching is done in Vertica to find that. Some of it is MapReduce. There’s a whole mix of things that go into that.

I couldn’t claim for Etsy Finds, for example, that it’s all big data. There are other things that go in there, but definitely HPE Vertica plays a role in that stuff.

I’ll give you another example: fraud. We digitally fingerprint a lot of our users, because we have problems with resellers. These are people selling mass-produced goods on Etsy. It’s not huge, but it’s an annoyance. Those products compete against the really high-quality handmade products that our regular sellers sell in their shops.

Sometimes it’s like a game of Whack-a-Mole. You knock one of these guys down — sometimes they’re from the Far East or other parts of the world — and as soon as you knock one down, another one pops up. Being able to capture them quickly is really important, and we use Vertica for that. We have a team that works just on that problem.

What’s next?

Gardner: Thinking about the future, with this great architecture, with your ability to do things like fraud detection and affinity correlations, what’s next? What can you do that will help make Etsy more impactful in its market and make your users more engaged?

CB: The whole idea behind databases and computing in general is just making things faster. When the first punch-card machines came out in the 1930s or whatever, the phone companies could do faster billing, because billing was just getting out of control. That’s where the roots of IBM lie.

As time went by, punch cards were slow and they wanted to go faster. So they developed magnetic tape, and then spinning rust disks. Now, we’re into SSDs, the flash drives. And it’s the same way with databases and getting answers. You always want to get answers faster.

We do a lot of A/B testing. We have the ability to set the site so that maybe a small percentage of users get an A path through the site and the others a B path, with controls around that. We analyze those results. This is how we test whether one kind of button works better than another. Is the placement right? If we just skip this page, is it easier for someone to buy something?

So we do A/B testing. In the past, we’ve done it where we had to run the test, gather the data, and then comb through it manually. But now with Vertica, the turnaround time to iterate over each cycle of an A/B test has shrunk dramatically. We get our data from the clickstreams, which go into Vertica, and then the next day, we can run the A/B test results on that.
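As a rough illustration of the next-morning readout such a cycle enables (all numbers invented), one common way to compare the two arms is a two-proportion z-test over conversions and sessions:

```python
from math import sqrt

def z_score(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: how many standard errors apart are the
    A and B conversion rates? Positive z favors the B variant."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)            # pooled rate
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))   # standard error
    return (p_b - p_a) / se

# Hypothetical overnight totals: conversions out of sessions per arm.
z = z_score(480, 10_000, 560, 10_000)
print(round(z, 2))  # > 1.96 suggests significance at the 95% level
```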

The next step is shrinking that even more. One of the themes out there at the various big data conferences is streaming analytics. That’s a really big thing. There is a new database out there called PipelineDB, a fork of Postgres, that allows you to create an event stream into Postgres.

You can then create a view and a window on top of that stream. Then you can pump in your event data, like your clickstream data, and join the data in that window to your regular Postgres tables, which is really great, because we could get A/B information in real time. You get a one-minute turnaround as opposed to one day. I think that’s where a lot of things are going.
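A minimal sketch of that windowed-stream idea in plain Python (timestamps and variant labels invented; PipelineDB expresses this declaratively as a continuous view in SQL): keep only the last minute of events and aggregate over what remains.

```python
from collections import deque

class MinuteWindow:
    """Toy sliding window over (timestamp, variant) click events."""

    def __init__(self, seconds=60):
        self.seconds = seconds
        self.events = deque()  # ordered oldest -> newest

    def add(self, ts, variant):
        self.events.append((ts, variant))
        # Evict events that have fallen out of the window.
        while self.events and self.events[0][0] <= ts - self.seconds:
            self.events.popleft()

    def counts(self):
        """Aggregate the current window, e.g. clicks per A/B variant."""
        out = {}
        for _, variant in self.events:
            out[variant] = out.get(variant, 0) + 1
        return out

w = MinuteWindow()
w.add(0, "A"); w.add(30, "B"); w.add(70, "A")  # the t=0 event expires
print(w.counts())
```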

If you just look at the history of big data, MapReduce started about 10 years ago at Google, and that was batch jobs, overnight runs. Then, we started getting into the columnar stores to make databases like Vertica possible, and it’s really great for aggregation. That kicked it up to the next level.

Another thing is real-time analytics. It’s not going to replace any of these things, just like Vertica didn’t replace Hadoop. They’re complementary. Real-time streaming analytics will be complementary. So we’re continuing to add these tools to our big data toolbox.

Gardner: It has compressed those feedback loops. If we provide that capability to an innovative, creative organization, the technology might drive the culture, and who knows what sort of benefits they will derive from that.

All plugged in

CB: That’s very true. You touched earlier on how we do our infrastructure. I’m in data engineering, and we’re responsible for making sure that our big databases are healthy and running right. But we also have our operations department; they work on the actual pipes and hardware and make sure it’s all plugged in. It’s tough to get all this stuff working right, but if you have the right people, it can happen.

I mentioned AWS earlier. The reason we were able to move off of it and save money is that we have the people who can do it. When you use AWS extensively, what you’re really paying for is a very good, but high-priced, IT staff at Amazon. If you have a good IT staff of your own, you’re probably going to be able to realize some efficiencies there, and that’s really why we moved over. We do it all ourselves.

Gardner: Having it as a core competency might be an important thing moving forward.

CB: Absolutely. You have to stay on top of all this stuff. A lot is made of the word disruption, and you don’t go knocking on disruption’s door; it usually knocks on yours. And you had better be agile enough to respond to it.

I’ll give you an example that ties back into big data. One of the most disruptive things that has happened to Etsy is the rise of the smartphone. When Etsy started back in 2005, the iPhone wasn’t around yet; it was still two years out. Then, it came on the scene, and people realized that this was a suitable device for commerce.


It’s very easy to just be complacent and oblivious to new technologies sneaking up on you. But we started seeing that there was more and more commerce being done on smartphones. We actually fell a little bit behind, as a lot of companies did five years ago. But our management made decisions to invest in mobile, and now 60 percent of our traffic is on mobile. That’s turned around in the past two years and it has been pretty amazing.

Big data helps us with that, because we do a lot of crunching of what these mobile devices are doing. Mobile is not the best device maybe for buying stuff because of the form factor, but it is a really good device for managing your store, paying your Etsy bill, and doing that kind of stuff. So we analyzed all that and crunched it in big data.

Gardner: And big data allowed you to know when to make that strategic move and then take advantage of it?

CB: Exactly. There are all sorts of crossover points that happen with technology, and you have to monitor it. You have to understand your business really well to see when certain vectors are happening. If you can pick up on those, you’re going to be okay.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

