HP ART documentation and readiness tools bring better user experiences to Nordic IT solutions provider EVRY

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Nordic IT solutions provider EVRY has taken automation and agility to new heights in its training and documentation of IT products and services, and found that even small steps can make a valuable return on user adoption patterns.

By using HP’s Adoption Readiness Tool (ART) to help its employees work better with IT management solutions and processes, EVRY, based in Oslo, has gained new advantages in the adoption and understanding of both new and existing technology.

BriefingsDirect had an opportunity to learn first-hand how EVRY mastered production of documentation and readiness tools when we interviewed Sigve Sandvik, Solution Adviser at EVRY, at the recent HP Discover 2013 Conference in Barcelona. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions. [Learn more about HP ART.]

Here are some excerpts:

Gardner: Tell us first a little about EVRY, what your organization is, how big it is, and what you do.

Sandvik: EVRY is Norway’s biggest IT solution provider. We’re the result of a merger between two of the former biggest companies in Norway. We’re approximately 10,000 employees, based in 50 locations, mostly in the Nordic region and the Baltic, and we also have some colleagues in India.

Gardner: So you are both a big user of IT, as well as helping your customers improve their businesses through better IT practices?

Sandvik: That’s true. My team, called the ITSM Tools Department, delivers tools to our employees globally, and also directly to our customers. But most of my customers are also my colleagues.

Gardner: What are some of the problems that you have had as you've tried to get the most out of HP Service Manager?

Global tool

Sandvik: HP Service Manager is used widely in EVRY. It’s a global tool, and all employees can access the system. Since it is a global tool, there are lots of people out there who need to know how it works.

For example, if they are entitled to call the internal help desk, they can do that. But some may not be allowed to call the help desk. They need to register a ticket themselves, and they need a place to find information. That's the main issue when it comes to HP Service Manager, and what we need in terms of documentation and user guides.

Gardner: Tell us about your journey. What did you do and how did you discover HP’s Adoption Readiness Tool, or ART? How did it work for you?

Sandvik: Actually, it was a coincidence that we discovered ART. My former manager was attending a conference, I'm not sure which one, but it was an HP conference. He discovered that there was a product out there that could actually help us make our documentation and user guides better.

Before, when we signed up with an external vendor, they helped us record a process, for example making a new interaction. They helped us with that, and they also made the voice-over and produced printouts of the text from the voice. So it was basically a video you could play back.

The problem with that, of course, is that when we got the video, it was already out of date because we had already moved on with the next release of our system. So the product wasn’t optimal anymore. Besides, we had to pay the vendor a certain amount of money, and then if we wanted to change it, they billed us extra for it.

Gardner: Explain how you were able to compress the time-to-value. How were you able to create these documents, this training, these assets, in a way that they weren't obsolete by the time you were able to use them?

Sandvik: With the external vendor we had used before, the product was already made. We weren't able to change it, but with HP ART we were in a position where we could, within an hour, make a small simulation and present it in a portable format — for example as a PDF or Word document. We could also present it on-screen, with voice, and in multiple languages as well. But the most important thing is that we were able to maintain the user guides and the documentation as we go. So we could just add new parts and edit parts of the documentation we already had.

Gardner: And have you been able to expand the products and services that you have been developing these assets for? How widely are you using ART?

Available resources

Sandvik: Today we are not using ART as much as I would like. In a perfect situation, I think EVRY would really benefit if we made even more user guides with HP ART.

We have made a lot of user documentation, which we send to our customers, vendors and external subcontractors. The responses we get from these are really good. Also, the response we get internally in our organization — when they see that we have these products and these user guides — is that they want more. We would really benefit if we could only find time to make more documentation.

Gardner: For others who may not have been using this real-time and adaptive training capability yet, now that you have been doing it for a while, do you have any tips or suggestions for them? Do you have any words of wisdom for others who are considering this?

Sandvik: There are three things needed to make good simulations. You should concentrate on making small bits. Do not make the recordings too large. You should also think about how you want to present your documentation. Concentrate on one bit at a time, and then put that all together. For example, in an online course, with HP ART, you’re able to assemble several simulations together.

In our online course, where you have the embedded menu, you can run the whole course, or the users can click on a specific item of interest. They don't have to run through the entire course. They can just click on the specific item they want to learn a little bit more about. So I would start by making small recordings first.

I also recommend spending time on fixing the template. When you buy the product, you will have the HP fonts and logo. We’ve spent some time adapting the tool so it has the EVRY logo, colors, and fonts in the template. It looks nice and is familiar to our employees.

Gardner: Have you been able to measure how users of the products gain from the use of ART? Is there a soft or hard metric?

Sandvik: No, we haven’t. Perhaps we should measure how we’ve improved our learning or our own internal use of the user guides. That is perhaps something we will have a look at.

With HP ART, for example, you can also build assessments: after your users have viewed simulations, on the next page they are tested. So we could easily track which employees have taken a given course. We haven't yet asked our employees whether they really use the documentation more.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

Workforce of the future – and why preparing means rethinking human resources now

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Ariba, an SAP company.

The next BriefingsDirect thought-leader interview focuses on the fascinating subject of preparing for the workforce of the future. It’s now clear that we are entering into a very diverse and even unprecedented work environment — something the world perhaps has never known.

But how do enterprises prepare, and how do they create the means to analyze and manage the transition to very different work environments? BriefingsDirect had an opportunity to learn first-hand at the recent 2014 Ariba LIVE Conference in Las Vegas.

To learn more about hiring and acquiring talent and managing a diverse and socially engaged — and even more knowledge-driven workforce — we sat down with Shawn Price, President for Global Cloud and Line of Business at SAP, and the former President of SuccessFactors, now part of SAP. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Now, this is a really fascinating subject for me, this new diversity and this talent-oriented workforce. But companies must be thinking, how do I reduce risk? How do I think about making this an opportunity rather than a challenge?

Price: Dana, if you think about it, what you have just described is the company that has already started to take steps to make sure that they don’t get caught unprepared for the future.

A lot of companies are having to respond to dramatic changes in their operating models, where they’re driving revenues against the backdrop of a global economy. And companies are required to deploy flexible workforces that can be engaged and can change with this fast business environment that we’re in.

We have a scenario here in the US market, in particular, where every day, 10,000 people turn 65 and that will continue for the next 19 years. So you have an experienced staff that’s leaving the workforce. Then, on the front-end, you have a talent shortage. There just are not enough millennials to replace that exodus that’s occurring.

The astute companies of the future have mapped this out, have laid a plan, have started to build warm pools of talent, and understand this in everything that they do. They’ve tied their strategy to their people strategy and acquisition, and they’re really managing it for the cultural nuances that the regions that they operate in require, and they’re pretty flexible and nimble.

More diversity

Gardner: One of the characteristics, as I understand it, is that there is more diversity in the ways in which employees or talent are engaged with a company, many more varieties than the full-time, 40-hours-a-week, 9-to-5 employee type. This seems to have some upside, but it's different. You don't manage folks in the same way when they're working through these different models.

Price: There’s a real movement and emphasis on personal brand. To a degree, you’re starting to see free agents in the market. When you look at it on the retention side, planning strategy to the right people, to the right place, and at the right time, with a lot of millennials that are entering the workforce there is a clear six month delineation. That is the greatest risk for your company to lose that talent.

They’ve come in, they’ve looked around, and they’ve decided whether they’re actually advancing their personal brand, their knowledge base. It’s less about hierarchy and the old models of the past. They’re making a call on whether they’re actually advancing and learning or it’s more tenure based. The way we measure is different as well.

One measurement that’s completely different than anything that we’ve done in the past is your engagement index: How much do you contribute? How much is your content consumed? How engaged are you in the day-to-day?

Gardner: I suppose there’s another complicating factor. I’ve read that by 2020, there will be five different generations working together, and each with a very different set of skills, experiences, expectations, and behaviors. It means that companies can’t have a one-size-fits-all approach to this. In fact, they might have to be able to have multiple ways of engaging.

How does that factor into what you're talking about, the workplace of the future? I guess we should talk about the enterprise of the future, which needs to have a diversified approach, not just a single way to engage?

Price: You’re absolutely right. If you look at talent acquisition today, it has shifted. The focus used to be in the past on how do I put as many people as possible through my applicant tracking system, but today it’s far more strategic. It’s where do I need them and how much do they cost. We need start to create a relationship at a much earlier point in order to create these warm pools of talent.

Second, it’s really asking who are my best performers, where are they from, and how do I get more of them? If you have a particularly productive intern program, for example, which college are you drawing talent from that’s performing the best in the specific function? So the workforce of the future looks vastly different.

The power of how talent is acquired has shifted subtly as well. It used to be solidly in the hands of the employers, but today it’s a two-sided equation. If I’m hiring somebody, I often do interviews over SMS or text, because it’s a stream of consciousness. It’s not a prefabricated dialogue.

But I also look at their LinkedIn profile, which is how they want the world to see them. I look at their social media profile. And then the most valuable thing to me is the peer references that exist in my network that can validate that that individual is who they’re representing themselves to be.

The flip side

But the flip side of that is we have websites that allow the applicant to look into the leadership style. They can connect to their network of people that may have worked for that leader in the past. They can see if the culture that they’re espousing is working.

So you have the fundamental shift in how you’re attracting, retaining, and working with talent that is completely different from things in the past and the way that we have done it.

What the future will hold is that we’ll go to a point where you will carry your composite profile of who you are and that will be made up of both social media external to the company, the LinkedIn profiles, and in some cases, even your performance reviews, where it’s appropriate to externalize it. All of that will go in your employee record that will follow you throughout your career. Systems will automatically update that, and so it will be much more consumable.

Smart companies that are innovating are redefining the processes by which they engage. In retail, for example, you have a seasonal workforce, and that seasonal workforce typically has to go through the entire recruiting process again if they come back the following year.

Maybe I had a good experience year one. If I reapply, now I'm going through the website, now I'm doing verifications. Why can't we re-imagine the on-boarding to be, "Dana, you worked with us last year. You did a terrific job. We'd like to have you back. Is everything the same?" In fact, you can put an application on your smartphone with which you can make sure that the information is accurate, and then turn you on as an employee in the system automatically.

Instead, we’re encumbered in many ways to systems and thinking of the past around some of these talent acquisition processes that are so core to delivering on the strategy.

Gardner: Shawn, thinking about the past, I suppose we used to measure things pretty directly — productivity measurements, top line, bottom line. Is there a new way to measure whether we’re doing this correctly, whether we’re getting the best workforce and best talent, ramping up to give ourselves the resources we need as organizations to meet our own goals? Should we not think about this in productivity terms? What’s the right set of metrics?

Price: It’s funny. We will always be top-line and bottom-line driven to some degree, but the measurement isn’t necessarily productivity. Maybe it’s rethinking processes that can have a material impact. One is this learning management notion, where I was describing engagement. Imagine you are on-boarding to a new company. The most important thing for me as that hiring manager is to get you up to speed and, in your words, productive as quickly as possible.

How did we do that in the past? We put you in a training course or maybe state-of-the-art, an online web-based training course that would run days, if not weeks, on end to try and have you assimilate everything that we needed you to learn.

Moving ahead

The new world, which is not based on that, is trying to move that on-boarding and productivity ahead. The way that we’re doing it is we are saying, “We already own all of the subject matter expertise required to on-board somebody. Wouldn’t it be cool if, before you even join the company, you could connect by a social network, not one of these isolated ghost towns that stand on their own, but a social network connected to HR or connected to Ariba?”

We could have you engage with somebody doing your same job, so you could ask that person anything you want before you got there — what should I read, what should I learn, who should I talk to, what’s my first week like? That engagement was already occurring.

Then, when you arrive in the company, it would be your compliance learning, and the normal HR functional learning, but you would also take advantage of subject matter expertise.

Today, the way that we learn is not in large chunks of data. Think about YouTube. It’s web downloadable, consumable in five minutes on my mobile device. That content that I need in order to be enabled and to thus be productive is available from my coworkers in the form of a five-minute video. Or there’s this advent of massive online communities that are producing content. Or I may choose to bring in an expert from outside the company to create content.

The visualization that I have is Khan Academy, where the most complex topics are searchable and digestible via mobile in 15 minutes. That’s where we’re seeing the shift from just pure top line and bottom line to rethinking what on-boarding and engagement look like, and what does that ultimately do to the acceleration of someone’s comprehension? There are many, many examples.

Gardner: Are you saying that companies need to start to become more open and social and create content and the media and mechanism, so that they can be in a sense part of this community? And how far along are companies in actually doing that?

Price: Many times social networks are established as standalone entities and they become ghost towns after a while. You kind of lose interest because they lack content and context.

When you attach it to an actual application, you can publish dynamically to that community, and you can search and see, for example, what was the number one search content today, this week, this month.

Increasingly, I’m starting to see things on people’s resumes like their engagement index, which says, “I was the number one producer of content for my company that was consumed by the social network.” You’re seeing stack rankings of that nature and form.

Cloud strategy

Social will become, and has become, an enormous component of our cloud strategy. In fact, today, we sit with more than 12 million subscribers on our social platform.

We have a large hotel chain that is actually using it to manage contract labor and part-time labor, because they want the engagement. They want the connection, but they want to be able to connect differently to them than the employee who is a full-time employee. And this hotel chain has over 170,000 contractors in their communities, and they’re grabbing information and all the expectations.

The other part of social, of course, is the mobile side of it. Our networks and our access to vast amounts of skills that would have in the past been hidden are now available to peruse, almost like a skills catalog within your own organization. You’ll find things that you didn’t even realize you had in pockets of the globe. People’s skills that you wouldn’t necessarily have on file even are now apparent through that dialogue.

Most companies are going through this transformation in HR because of the macro trends we’ve been describing. What they’re ultimately trying to figure out is how do I create a strategy? How do I build a set of applications that allows me to execute against that strategy and measure whether I’m performing? And how do I drive cost out of it?

Many companies visualize this at the top line and the strategic level, but they also visualize it as a process, and they think of that process as recruit to retire. We believe that you can start anywhere, but you're going to end up with this process that's interconnected.

Maybe your starting point is recruiting, because you have a lack of talent or you’re opening or you’re expanding. Maybe you have a learning management on compliance, or maybe you have performance and goals where you’re actually measuring the progress. You can start with any application and interconnect it over time.

We have actually completed the entire portfolio of applications end to end in the market. Think about Ariba’s connection with HR, which seems like a funny thing to say. On the Ariba Network today, we have 1.5 million connected companies. We’re adding one every three seconds. Imagine in that supply chain, in that labor pool, what would be available to you if you were to publish a job requisition, for example.

So we look at it as recruit to retire. Where the world is going, though, is our cloud applications. Today, we manage in excess of 35 million subscribers. The byproduct of people working with us at that scale is two-fold. One, they tell us very quickly what they like and don't like, which allows us to innovate very quickly. The other side, which you raised, is the predictive side.

Predictive analysis

HR is really wide open right now for predictive analytics, and the only way you get there is by having scale of people using your system. Today, for example, we have built 2,000 key performance indicators (KPIs) and benchmarks that can tell us things like: what is my management bench strength today, in 30 days, in six months, and a year from now? Who is ready and who is going to be ready, because of course that's a dependency of your strategy?

Or what’s the voluntary turnover rate? We talked about five generations in the workforce. For experienced workers, what’s my voluntary turnover rate versus the millennial workforce?

Where we’re getting to is really being able to correlate multiple indexes to give us a predictive view of what’s going to happen. And that’s pretty exciting. That’s a pretty big breakthrough that we’ve seen.

Gardner: If I understand you correctly, Shawn, we're talking about being able to analyze what's going on inside your company across many aspects of the business to better know what your requirements are going to be vis-à-vis talent and human resources. But you're also analyzing externally, in something like the Ariba Network and social environments, so that if you can't hire, you can procure. Or perhaps the boundaries between the two are shifting as we get into more services procurement and automation.

But the key here, I think, is the analytics. We need to analyze better what we’re doing and how we are doing it, but we also need to analyze what’s going on externally. Sometimes, that’s difficult without a third party, a partner, or a platform. How do you advise companies to be able to do this sort of comprehensive analytics capability?

Price: It’s a great point. We have analytics on a particular application. So if you want to instrument learning, that exists. There is analytics that cross the recruit-to-retire spectrum.

But then you hit on a really good point. How am I, in category, in hiring for retention for this class of worker, or how is my recruitment-and-retention ratio relative to a hundred others? You're absolutely right. You can do it within an app, across apps, and, using the power of the 35 million subscribers, look at patterns that exist within an industry or a best practice.

The community component of this is really fascinating. Companies contribute best practices, new ways to look at things, and new indexes that they build and publish to the cloud, so our communities can consume those new ways of looking at a particular process. It's an exciting time. The byproduct is 2,000 KPIs that you can subscribe to, which tell you what is best in category in your industry.

Gardner: Are there some examples of being able to create campaigns that start to pull this together? It seems to have an impact across many parts of the business. We need to think about change. We need to put in the technology. We need to think more social, engage people in different ways, and think about sourcing of talent in different ways.

Is there any precedent that you can point to of a campaign of some sort that has begun to make the shift? Perhaps there’s a methodology that we can look at.

Price: We’re at a state in the market and the technology today where it’s really a matter of imagination more than anything else.

If you take retail, they have always had a historic problem of getting the right amount of talent, in the right place, at the right time, as seasonal as they are. They may have two weeks of hyper growth and they may have a great season or a bad season, but if they’re slow, they can’t hire enough talent.

So retail has re-imagined hiring. Of course it doesn't fit all, but in some large global multinational chains, they found that the people who shop in their locations are the same demographic as the people who work in their locations.

So they said if we can build a smartphone app that would allow you to apply while you are in the store, and the manager in the store at that time can see your resume or your LinkedIn profile, we can put you together and collapse this formal hiring process of weeks into potentially hours. This is just a complete re-imagination of recruiting. They collapsed all of their hiring from weeks to days.

We’re seeing this across all areas of the business, the ability to transform and visualize data. Where did I get that last recruit from that was so exceptional, and what is the profile of that individual? Talent doesn’t necessarily look like we think it looks from the past. Talent comes in every gender, every diversity, and from every corner of the globe. So what patterns do we have in our workforce that we want to replicate? The impact isn’t just productivity, as we described. It’s the engagement and contribution.

Creating a connection

Then, if you think about some of the other areas, you just follow this example. If I'm joining a company as a new sales rep, that application should be smart enough to look within my company for people who have worked with me before, create a connection over social, and say, "Congratulations, do you want to go for coffee?"

Maybe it goes out and sources over the Ariba Network for all of my laptop, my mobile, everything that I need. And if it’s really smart, it takes all of my contacts and pushes it into my customer cloud, because I will have been selling to the same people over and over. That’s an example of a process that will run across four legs of the application stack. We’ve never been at a more exciting time — ever.

Gardner: When you were speaking, you reminded me of the mantra several years ago in customer relationship management (CRM) of know your customer well, know them end-to-end. It now sounds as if we need to apply that to the employee.

Price: Absolutely. If you don’t, and you don’t really have the engagement level, you’ll probably have a talent shortage, because you’re not measured hierarchically any more. You’re not measured on the old traditional way. It’s about what you get in your personal brand. The informed companies of the future will know their workforces better than anyone and know how to replicate and scale them up or down at will and on-board them instantaneously.

Gardner: Perhaps the corporation of the future isn't a single brand, but an amalgamation of many thousands of brands for all the people contributing to their common goals?

Price: Absolutely.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Ariba, an SAP company.

NASCAR attains intimacy and affinity with fans worldwide using big data analytics

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

Auto racing powerhouse NASCAR has engineered a way to learn more about its many fans — and their likes and dislikes — using big-data analysis. The result is that they can rapidly adjust services and responses to stay best connected to those fans across all media and social networks.

It’s a story of getting at all the information and data that’s generated constantly from social media, news media, broadcast media — and then making the analysis instantly available as easy-to consume and relate visualizations.

BriefingsDirect had an opportunity to learn first-hand how NASCAR newly engages with its audiences using big data and the latest analysis platforms when we interviewed Steve Worling, Senior Director of IT at NASCAR, based in Daytona Beach, Fla., at the recent HP Discover 2013 Conference in Barcelona. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tell us about the context of what you’re trying to do with your fan base, and then how technology comes to bear on that.

Worling: NASCAR has been around for 65 years, and we have probably one of the most loyal fan bases out there. NASCAR really wants to understand what our fan base is saying about our sport. How do we engage with them, how are we really bringing our sport to their entertainment, and what’s the value of that?

So NASCAR partnered with HP to build a first-of-its-kind Fan and Media Engagement Center. That's a new platform for us that allows us to listen to the social media outlets: Twitter, Facebook, Instagram, all of those social media outlets, to understand what the fans are talking about.

Something unique about this platform is that it also allows us to bring in the traditional media news sites. What is the media saying about our sport, and then how do you tie those conversations together to get a really nice single pane of glass on the overall conversation? What are our fans saying, what is the news media saying, and how does that help and benefit our industry as a whole?

Gardner: It sounds like you don't want to get just some of the data, but all of the data.

Want to know everything

Worling: We absolutely want to know everything that’s being said across all of those platforms. We saw about 18 million impressions in our first year of the platform. That’s impressions across the social side and the news-media side. It was big, and this was our first year at it.

On the news media side, we're only collecting from a small sample right now. Next year, we're going to really enhance that and grow from a few news sites to hundreds of sites, as well as start to bring more awareness to our fans around social interaction.

So we’re expecting to see that number grow significantly. This year, as I said, a solid 18 million tweets overall translates to about 110,000 tweets during a race day, even up to about 15,000 tweets per minute.

NASCAR is a predominantly US-based sport, but we are growing internationally. Today, we have a series in Mexico. We have a series in Canada as well, and we just expanded into Europe with our Whelen Euro Series.

This platform will also help us engage and understand how the sport is performing in those markets. What’s the sentiment of the fans? It’s really a great platform to allow us to right anything that we might be doing wrong. So if we need to enhance the marketing or enhance the engagement of those tracks, we’re able to do that through this platform.

Our sport is unique, because there is a vast community that makes up our sport. You have a NASCAR governing body and that’s what I represent. Then, there is a large race track ownership. We call those promoters, and those are the folks who are selling tickets and getting you out to the race track.

Then, we have our teams and our drivers, and those are independent contractors. So you have those that are involved in the sport, and then our sponsors and our partners that help bring all of that together and make this ecosystem. That is NASCAR.

We’re able to collect data on all of those different constituents, and then share that value. I’ll give you a great example. This year, HP became a great partner with us around our Fan and Media Engagement Center.

Share the value

Our goal over the next couple of years, as we work with HP, is to be able to sit down with them and share the value that their sponsorship and partnership brings to NASCAR. We want to develop and grow the relationship over a longer term. We give them real data on their activation and involvement in the NASCAR industry.

Gardner: We now know that customer information is being shared in whole new ways. How do you then take the technology and get a handle on it so that you can accomplish what you want?

Worling: We partner with HP, as I said, to build this platform. We’re leveraging products like their IDOL engine. The Explore capability from their Autonomy platform allows us to ingest all of this different data, put it together, and then really start building that single pane of glass to understand what these conversations are — whether there is a breaking story around activation within our sport, or something else.

As it’s collecting this data, the platform starts to stitch it together so that we can understand what the conversation is. So it’s taking that news outlet information, taking the social sentiment, and putting it together to make sense of it. It’s taking all of that unstructured data, structuring it, and then giving us the analytics that allow us to understand the conversation — and react appropriately.
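As a rough illustration of that stitching step, here is a minimal Python sketch. It is not the Autonomy IDOL API; the field names and the toy keyword-based sentiment score are invented for the example. It simply normalizes social posts and news headlines into one common record so both kinds of conversation can be viewed on a single pane of glass:

```python
# Hypothetical sketch of normalizing heterogeneous social and news
# items into one common record with a toy sentiment score.
# Field names and word lists are invented; this is not IDOL.

POSITIVE = {"great", "love", "win", "exciting"}
NEGATIVE = {"bad", "boring", "crash", "angry"}

def score(text):
    """Naive keyword sentiment: positive hits minus negative hits."""
    words = set(text.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

def normalize(item, source):
    """Map a raw item from any source to a common record."""
    text = item.get("text") or item.get("headline", "")
    return {"source": source, "text": text, "sentiment": score(text)}

def single_pane(social_items, news_items):
    """Merge both feeds into one uniform stream of records."""
    records = [normalize(i, "social") for i in social_items]
    records += [normalize(i, "news") for i in news_items]
    return records

records = single_pane(
    [{"text": "Great race, love this sport"}],
    [{"headline": "Boring finish angers fans"}],
)
```

Once everything is in one structure, the aggregate analytics described above (overall sentiment, breaking-story detection) become ordinary queries over that stream.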

It could be a story that makes sense and is telling the right story, or it could be a story that needs a little bit of direction from NASCAR to make sure that we’re getting the right story out there.

So HP building that with Autonomy has been very valuable. We’re getting ready to deploy HP Vertica on top of that now, to take the large amount of data we’re getting and put it into the Vertica data infrastructure. Then we can start making even more connection points and more rationalization, and layer other tools on top of it — things like Tableau Software — to help us with visualization.

One of the new things that I’m excited about is telling our story through our great command center. It’s a showcase piece where you can come and see what we’re actually reporting on the analytics. We’re going to build a map of the U.S. that shows us the hotspots of information.

So as people are tweeting, maybe good or bad, in California, you might get a big red spot. We can drill down into that, understand what that data is, and then engage through our dot-com platforms and other media outlets to make sure that we’re telling the right story or addressing the concerns that are out there.
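The hotspot idea can be sketched very simply: bin negative mentions by state and flag any state whose volume crosses a threshold. The data, threshold, and field names below are invented for illustration; the real platform would do this continuously over live social streams.

```python
from collections import Counter

# Toy sketch of the geographic "hotspot" map: count negative
# mentions per U.S. state and flag states over a threshold.
# All data and the threshold are illustrative, not NASCAR's.

def hotspots(tweets, threshold=2):
    negative = Counter(t["state"] for t in tweets if t["sentiment"] < 0)
    return {state for state, n in negative.items() if n >= threshold}

sample = [
    {"state": "CA", "sentiment": -1},
    {"state": "CA", "sentiment": -2},
    {"state": "NC", "sentiment": 1},
]
flagged = hotspots(sample)  # {'CA'}
```

A visualization layer such as Tableau would then render the flagged states as the red spots described above.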

Gardner: I saw on the stage at HP Discover in Barcelona that Facebook put up a very impressive map that was built using Vertica. It shows their actual installed base and the connections between them. Of course, it looks very much like a map of the world, but it’s a map of Facebook.

Amazing visualization

Worling: That was an amazing visualization, and I can’t wait to be able to do the same thing. I thought that was really neat, and I’d love to be able to get the resolution of the world like they have, but I will be happy to get a great, rich US look. That was totally a cool thing, and I hope that we can do the same thing as well.

Gardner: So one of the great things about what you have been doing is getting all the data. One of the bad things you’ve been doing is getting all the data. How do you move beyond this being a fire hose and make it actionable?

Worling: As I mentioned, we’re storing everything in IDOL today. We’ll be migrating to Vertica shortly to help us with the consumption. For us, this year, it’s been a case of not knowing what we didn’t know. We weren’t really sure what kind of data we were going to see or how we were going to react to it. Our sport is a great sport, but like any sport or any business, there’s always a little controversy with it, and we experienced some of that this year. So it was a great platform to help us do crisis management.

As we dealt with the situations that came up, we were able to get data from this and react to it appropriately. But we’ve also started to learn some proactive things to think about.

As we launch a new car this year, our Gen-6 car, what is the engagement or sentiment from our fans? We’ve been able to do some deep analytic research on that and get valuable information to hand to GM, which launched this car with us, and say, “These are the results,” instantly — a lot of big data.

As I said, we had 18 million impressions this year, which was phenomenal, and I don’t think we had a bar to set. Now, we’ve set the bar for next year, and I think with Vertica and IDOL [part of HP HAVEn], we have the right platform to allow us to grow extensively as we look to the future.

Gardner: Once you start getting big-data capabilities and driving more data into it, you get hungry for more data. You’ll start thinking about places to acquire it, doing joins, and then finding even better analysis. Any thoughts as to where you might go next, now that you’ve tapped the social-media environment?

Worling: There are two ways to answer that. One, we’re going to continue to grow the social media side. I mentioned the things that we’re doing today with Facebook and Twitter. Instagram really is the next big piece of integration for us.

For NASCAR, it’s important for us to engage younger people in the Gen Y and Millennial generations. Instagram is a key component to do that. So that’s going to be a big focus for us: getting that integrated, and then keeping an eye out for the new social solutions or offerings that are coming out and how we keep them integrated.

Traditional media

Then, we’re going to start working on the traditional news media as well. As I mentioned, it’s going to be key for us to understand the press impacts. That’s very relevant for our CEO and Chairman. I didn’t mention, but we’ll also be bringing in video from our broadcast partners. We broadcast nationally in the US, as well as in 198 countries worldwide. That story is very important to us.

We’ll be growing a lot of that next year. The second side of that is that our business is becoming more aware of this tool. We’ve been getting inundated with requests, some from the sales guys as they try to develop new sales, about how we should value what it means to be part of our sport. There are renewals in the sales process as well, the value to partners that already exist, and then taking it to our drivers.

A great story I love to tell is about a young, up-and-coming driver who started in our Camping World Truck Series this year and has to build his brand. He has a brand that he needs to develop and get out there.

We brought him into the Fan and Media Engagement Center and spent about three or four hours taking him through different analytics and different use cases of information around his brand, and helped him understand where he was doing well. We showed him the things he needs to develop, and the things that he wasn’t so good at, so he could take those away and work on them. We’re definitely seeing a lot of requests from the industry: How does this platform benefit them, and how do they get rich data out of it?

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.


Ariba announces business network analysis enhancements, first transactions conducted under AribaPay

LAS VEGAS — Ariba wrapped up its annual Ariba LIVE user conference today with a series of announcements, chief among them news that its AribaPay automated B2B payments settlements service is now operational, and a demo of pending mobile applications for its flagship procurement, supplier management and cash flow optimization business applications.

Most of the announcements this week by Ariba, an SAP company, involved enhancements to the Ariba Network. The goal is to drive new levels of connectivity, collaboration, and insight — all required to optimize buying, selling, and managing cash in today’s networked economy. That activity then sets the stage for gathering unique data sets that span global regions and hundreds of business verticals. And that community- and business-process-generated data allows for unprecedented analytics and business intelligence capabilities that Ariba then delivers back to its network participants.

“Real-time response is no longer enough, predictive is the power of analyzed business networks,” says Rachel Spasser, Ariba CMO. “It’s time to embrace networks and their intelligence to transform commerce.”

Ariba also declared a clear path for bringing more of its apps and services onto the SAP HANA platform, as well as to the HANA cloud. Standardizing on the HANA cloud provides broad synergies with Ariba across other SAP assets (also headed to HANA cloud) such as ERP and business apps, SuccessFactors, CRM functions, and many business analytics capabilities.

“Business today is different than it was 10 – even 5 – years ago. It’s more social and mobile. It’s faster and smarter. And it demands a whole new way of operating,” said Sanish Mondkar, Executive Vice President and Chief Product Officer at Ariba, speaking at The Cosmopolitan hotel here. “With the latest enhancements to the Ariba Network, companies can effectively change the game, leveraging the wisdom and insights of entire communities to enable new processes that drive unprecedented innovation and outcomes.”

Through enhanced spend, process, and network-derived insights, delivered as part of the latest Ariba releases, companies can execute a more intelligent and automated sourcing process that drives better business. Those services are also extending to new countries, as Ariba is — with significant new SAP financial investment — building out data centers in China and Russia. “Our globalization innovations will cover 75 countries and 70% of the world’s commerce,” said Mondkar.

For example, with Ariba Spend Visibility now powered by SAP HANA (the first Ariba service to land on HANA), companies can analyze more spend data more quickly. Data loads 20 times faster and is immediately accessible once a sourcing project, contract, or invoice is initiated. Ariba said that Spend Visibility on SAP HANA takes report generation from minutes/hours to seconds.

Companies can perform more complex analyses based on an expanded set of variables, including cost centers, purchase price variances, and micro regions, and receive results in real time. [Disclosure: Ariba is a sponsor of BriefingsDirect podcasts.]

With Ariba Sourcing and Ariba Discovery now integrated in Ariba Spot Quote, companies can create and execute one-time purchases for goods or services that are not handled by their supplier base in a fully automated manner. Requests for quote (RFQ) are automatically generated from a backend enterprise resource planning (ERP) system and sourcing events created within Ariba. Suppliers are then invited either as part of the RFQ or from matching through Ariba Discovery – all without any human intervention.
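As a hedged sketch of that no-human-intervention flow, the toy Python function below generates an RFQ from a backend request and invites suppliers by capability match. The supplier directory, categories, and field names are all invented for illustration; the actual Ariba Spot Quote integration runs through the ERP system and Ariba Discovery, not code like this.

```python
# Illustrative sketch of a fully automated spot-quote flow:
# an RFQ is generated from a backend purchase request and
# matching suppliers are invited, with no human step.
# Supplier names and categories are invented.

SUPPLIERS = {
    "Acme Fasteners": {"fasteners", "tooling"},
    "Beta Logistics": {"freight"},
}

def create_rfq(erp_request):
    """Build an RFQ and invite suppliers whose capabilities match."""
    category = erp_request["category"]
    invited = [name for name, caps in SUPPLIERS.items() if category in caps]
    return {"item": erp_request["item"], "category": category,
            "invited_suppliers": invited}

rfq = create_rfq({"item": "M8 bolts", "category": "fasteners"})
```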

Spot buys are challenging for most companies because they require quick turnaround, and buyers generally lack efficient or effective methods to source them. Leveraging solutions like Ariba Spot Quote, organizations can reduce the time it takes to find the right suppliers from weeks to days or even hours and drive cost reductions of between two and five percent on average.

Selling organizations can more quickly and easily identify new business opportunities and collaborate with trading partners to enhance and expand relationships.

Ariba also shed new light on its next generation of apps and services, code-named Alexandria, after the famed library of the ancient world in Egypt.

The new platform will provide a single source of supplier information, easy syndication (including across SAP) to manage supplier information, and stewardship of network information and broad integrations, said Mondkar. Again, the goal is to both create integrated business functions and a data-resource capability that generates new and highly valued business analysis — even to help predict outcomes, so businesses can react quickly to their markets and supply chains.

And those same business apps, process flows, and predictive analyses will soon be delivered to the enterprise mobile tier. Mondkar showed on stage here today an impressive demo of mobile iOS apps and alerts for Ariba procurement and supplier processes. It allows professionals to access their processes, data, and alerts anytime and anywhere. I especially like the emphasis on making the app UI and mobile UI context-aware and dynamic to the user’s situation. The mobile apps will become generally available in the fall of this year, said Ariba.

Custom products

New custom products and services categories within Ariba Discovery were announced to enable better matching of buyer postings to seller opportunities and the automatic delivery of more qualified leads.

With daily delivery of leads automatically categorized as “new,” “best match,” or “closing soon,” selling organizations can efficiently monitor and manage opportunities and ensure that nothing is missed. And with new email capabilities, buyers can view and reply to sellers directly from their email client without logging into Ariba Discovery, speeding responses and enhancing collaboration.
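The lead labels mentioned above can be illustrated with a toy classifier. The cutoffs below (closing within two days, a 0.9 match score, posted within the last day) are invented for the sketch and are not Ariba Discovery’s actual rules.

```python
from datetime import date, timedelta

# Hypothetical sketch of labeling incoming leads as "new",
# "best match", or "closing soon". Thresholds are invented.

def label_lead(lead, today):
    if (lead["closes"] - today) <= timedelta(days=2):
        return "closing soon"
    if lead["match_score"] >= 0.9:
        return "best match"
    if (today - lead["posted"]) <= timedelta(days=1):
        return "new"
    return "open"

today = date(2014, 3, 20)
lead = {"posted": date(2014, 3, 20), "closes": date(2014, 3, 30),
        "match_score": 0.5}
label = label_lead(lead, today)  # "new"
```

A daily digest would then group the day’s leads by label so that, as the article notes, nothing is missed.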

Through enhancements to Ariba Invoice Management, companies can expand touchless invoice processing to new spend categories on a global basis, while also presenting new opportunities to enforce compliance and improve collaboration among trading partners.

Project-based invoices, common for construction, engineering, repairs and facilities management, are among the most critical expenses to manage, and most difficult to control. With Ariba Services Invoicing, companies can extend the smart invoicing process – where e-invoices undergo an automated validation process upon supplier submission so that only accurate and approved invoices reach accounts payable – to these complex spend categories.

Buyers can review and approve service entry sheets — which detail the services performed — before, not after, invoice submission. Suppliers can then “flip” approved service entry sheets into invoices to create the perfect payable.
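The “flip” is essentially a constrained transformation: only an approved service entry sheet can become an invoice, so every invoice line has already been validated before it reaches accounts payable. A minimal sketch, with invented field names:

```python
# Toy sketch of flipping an approved service entry sheet into an
# invoice. Because the flip refuses unapproved sheets, the
# resulting invoice contains only pre-validated lines.
# Field names are illustrative, not Ariba's schema.

def flip_to_invoice(entry_sheet):
    if entry_sheet["status"] != "approved":
        raise ValueError("only approved service entry sheets can be flipped")
    return {
        "supplier": entry_sheet["supplier"],
        "lines": list(entry_sheet["lines"]),
        "total": sum(line["amount"] for line in entry_sheet["lines"]),
    }

sheet = {"status": "approved", "supplier": "BuildCo",
         "lines": [{"desc": "site survey", "amount": 1200.0}]}
invoice = flip_to_invoice(sheet)
```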

Brazil and Mexico are among the countries with the most complex tax requirements for electronic invoices. By extending e-invoicing support to these countries, the Ariba Network provides the infrastructure needed to receive and validate e-invoices that meet these stringent regulations, along with an e-invoice archive and audit trail for additional business controls, so that global organizations can expand their e-invoice initiatives to these fast-growing Latin American markets.

“The business of tomorrow will be different from today,” Mondkar said. “But companies that leverage the Ariba Network – and the enhancements unveiled today – can future-proof their operations and make better decisions that create significant advantage.”

To learn more about the Ariba Network and the value it can deliver for your organization, visit: www.ariba.com.

First transactions on AribaPay

Ariba has also announced the first transactions completed on AribaPay, its cloud-based B2B payments solution. Arlington Computer Products, a provider of IT productivity solutions, was the first to dip its toe in the water, participating in an early-access program, and has completed the first payments using the new Ariba service.

A fully integrated, end-to-end solution, AribaPay completes the procure-to-pay cycle embedded in the Ariba Network by integrating it to Discover’s trusted global payments infrastructure to effect payments at maturity.

With AribaPay on the Ariba Network, ordering, billing, and settlement processes between buyers and sellers are fully integrated. Companies can reduce costs, reduce risk, and complete the procure-to-pay process faster, easier, and with greater transparency and security.

AribaPay creates value on both sides of the transaction. Buyers who create purchase orders and receive invoices through the Ariba Network will now be able to close the procure-to-pay loop by sending payments to their suppliers using AribaPay. That results in less paper, less risk, and less effort in managing bank account information and related data. Buyers will uncover and resolve disputes faster, monitor ongoing payments better, and lower their processing costs and fraud risk.

Sellers will be able to receive rich remittance information and gain a clear and reliable view into future payments. They’ll be able to track and trace payments, uncover payment problems earlier, and ultimately reconcile faster while lowering their payment processing costs.

Other benefits include:

  • Lower processing costs
  • Richer remittance advice
  • Elimination of paper checks and invoices
  • Fewer payments lost to escheatment
  • Ability to track and trace transactions
  • Faster reconciliation and dispute resolution

AribaPay is in use and will be made generally available in the United States during the second quarter of 2014. Support for additional countries will be announced in the second half of 2014.


Top 25 SharePoint influencers: Social influence nicely collides with collaboration clouds

Educating technology markets and communities is now, more than ever, a function of influence, social media, digital word of mouth — and of cultivating an ecosystem of trusted collaboration. The role of social media and the power of personal connections have really become dominant forces in how people learn how to find and use technology.

This is such a major departure from the past — marketing has changed more in the last 5 years than the previous 50 — that any instruction on what actually works best these days is highly prized.

Therefore, I’ve been keenly interested in how the hottest new wave of marketing, digital social influence, intersects with one of the hottest areas of cloud computing: software-as-a-service (SaaS) collaboration and documents/objects sharing.

Do these new waves double-up, cancel out, create an interference pattern, or harmonize in new and valuable ways? I really want to know.

So I helped guide the creation of a list of the top 25 U.S. SharePoint influencers, unveiled at the recent SharePoint 2014 Conference in Las Vegas. The effort was sponsored by harmon.ie, the provider of a collaboration app for a single-screen, seamless user experience anytime, anywhere, and on any device. Here’s the full list.

I’ll be doing a BriefingsDirect podcast soon with a panel of these newly identified influencers on SharePoint challenges and opportunities to help find out what matters most in the SharePoint ecosystem.

I’ve been an avid observer of Microsoft for more than 20 years. At one time, there was no better company in all of IT at marketing and evangelizing into multiple user bases: developers, channel, independent software vendors (ISVs), end users, and enterprises.

But what about now, given that the evangelizing game has changed so much? Can Microsoft, which itself is undergoing wrenching transitions, be as good at the new marketing ways as in the past? Again, I really want to know.

Tip of the arrow

Industry influencers are the tip of the arrow of how social media can be a positive force for sharing knowledge — whether it’s on Twitter, on blogs, or at events. With influencers, I benefit from them and they benefit from me. This is the same for anyone engaged in social media. So it’s really a powerful way to learn, and then to add something to that learning process, making it almost a hive-mentality or ecosystem-mentality affair.

It’s no surprise that of the top 25 SharePoint influencers that we identified, 48 percent are SharePoint specialists or consultants. The other two major categories are senior executives and SharePoint architects/engineers/developers. And gender equality had a good year, as the growth in women SharePoint influencers over the previous year — up to 4 spots and 16 percent of the list — marks a significant rise over only 1 female influencer on the 2013 list.

It’s the people in the know, in the technology trenches, who see both the forest and the trees, who can then effectively influence the way markets learn.

So I am very bullish on the role of influencers as a spark or catalyst for larger social interactions and for new wave knowledge transfer. Those social interactions in a discrete community like the SharePoint ecosystem are a big part of what drives innovation, and can help users as well as supplier companies understand the best course for products and services.

Of course, the role of SharePoint, Yammer, and Office 365 is rapidly shifting into more of a cloud-collaboration, hive offering itself. So how apropos for it to use social influence to evangelize tools that consequently promote social influence and knowledge sharing?

You have to remember that SharePoint, once likened to a corporate intranet, is extending into a cloud platform with SharePoint 2013. It’s far more integratable. And, there are rich features for security and versioning of file data. Exchange is also becoming predominately cloud-oriented in its new function drive.

Boundaries blurring

The boundaries then between all these Microsoft communications “products” are blurring because they are increasingly bundles of cloud data services.

Marketing products is so 1990s. Letting the social hive support a fast-evolving hive of cloud-based collaboration services is so now. That’s why Microsoft must become adept at social and must properly influence the influencers. There’s no better way to take its cloud services down-market — SharePoint and Office 365 chief among them. Single office/home office (SOHO) and small- and medium-sized businesses (SMBs) are where these services should grow like crazy, especially for those used to working with Exchange, Outlook, and Office. They will be guided by their chosen social milieu and peers, not by Microsoft’s spec sheets and sales force.

So check out the list of top SharePoint influencers, and also consider how the role of influencers is not so much outsized as right-sized. As companies and workers seek better collaboration and coordination, they will look among themselves for the best means. SharePoint needs to be that chosen, common, controlled hive for employees, especially through the mobile tier.

And look for our BriefingsDirect podcast soon with a panel of these latest influencers. Many are returning because this is the third consecutive annual list of SharePoint influencers.

It seems to me that the voice of the community, whether it’s around a product, a service, an industry, or a vertical market, should be really important to vendors like Microsoft. They should be, I think, pretty sensitive to learning from the community, and I hope the community takes the opportunity to voice its opinion and make its requirements known as these companies produce new products and adjust their strategies.

But this is a time where change is ripe, and Microsoft is in a position to react in a way that only benefits the users. The influencers can certainly show them how.


HP HAVEn CTO Mundada on new ways for businesses to gain transformation from big data and new wave analysis

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

Big data capabilities and advanced business analytics have now become essential to nearly any business development activity.

The benefits that enterprises can get if they can get their hands around big data analytics and apply it to business challenges are quickly being documented — and they come as big new profits and major market advantages. Industries around the world are rapidly seeking transformational projects using big data to gain competitive advantage.

As part of the next edition of the HP Big Data Podcast Series, BriefingsDirect sat down with two HP executives to learn how these advanced-analytics seekers can best accomplish their goals. The insights gleaned include how companies worldwide are best capturing myriad forms of knowledge, gaining ever-deeper analysis, and rapidly and securely making those insights available to more people on their own terms.

So join this executive-level discussion highlighting how the latest version of HP HAVEn produces new business analytics value and strategic return with Girish Mundada, Chief Technology Officer for HP HAVEn, and Dan Wood, Worldwide Solution Marketing Lead for Big Data at HP Software. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: We’re in a fascinating time because analytics and big data are now top of mind. What was once relegated to a fairly small group of data scientists and analysts as reporting tools — and I am thinking about business intelligence (BI) — has really now become a comprehensive capability that’s proving essential to nearly any business strategy.

What’s behind this eagerness to gain big-data capabilities and exploit analytics so broadly?

Wood: We’re starting to see some very clear quantification of the value and the benefits of big data. It’s fair to say that big data is probably the hottest topic in the industry.

Wood

There’s a lot of talk across all forms of media about big data right now, but what’s happened is that credible publications like the “Harvard Business Review,” for example, have started to put solid numbers around the benefits that enterprises can get if they can get their hands around big-data analytics and apply it to business challenges.

For example, Harvard Business Review is saying that, on average, data-driven organizations will be five percent more productive and six percent more profitable than their competitors.

Worth chasing after

Think about that. A six-percent profitability increase would double the stock price for a lot of organizations. So there really is a prize worth chasing after.

What we’re seeing, Dana, is much more widespread interest across the organization and not just within IT. We’re seeing line-of-business leaders understanding and, in many organizations, actually starting to benefit from big-data analytics.

They’re able to analyze the call logs in a call center, better understand the clickstreams on a website, and better understand how customers are using products. All of these are ways of analyzing large amounts of data and directly tying it to specific line-of-business problems.

That’s where we are right now. Industries around the world are going through transformational projects using big data to gain competitive advantage.

Gardner: It’s interesting too, Dan, that they’re not just taking these as individual data sets and handling them individually, but increasingly businesses are combining them, and finding new relationships, and doing things that they really couldn’t have done before.

Wood: Absolutely. It’s the idea of a 360-degree view of their internal operations, or of their external customer trends and needs — and it comes from combining data sets.

For example, they’re combining social media analytics on customers with the call logs into the call center, with internal systems of record around the customer relationship management (CRM) and ongoing customer transactions. It’s by combining all those insights that the real big-data opportunity reveals itself.
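A minimal sketch of that combining step, assuming a shared customer ID across systems (all names and data are invented): join CRM records, call-center logs, and social sentiment into one view per customer.

```python
# Toy sketch of building a 360-degree customer view by joining
# CRM records, call logs, and social sentiment on a customer ID.
# All field names and data are invented for illustration.

def view_360(crm, calls, social):
    combined = {}
    for cid, record in crm.items():
        combined[cid] = {
            **record,
            "calls": calls.get(cid, []),
            "sentiment": social.get(cid),
        }
    return combined

crm = {"c1": {"name": "Pat", "segment": "gold"}}
calls = {"c1": ["billing question"]}
social = {"c1": "positive"}
view = view_360(crm, calls, social)
```

In practice the join keys are the hard part; matching a social handle to a CRM record is itself an analytics problem, which is one reason the combined view is so valuable once built.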

Gardner: And the sources for those insights and data, of course, span almost any type of information asset. It’s not just structured data or data that your standard applications are built around — it’s getting all the data, all of the time.

Wood: That’s right. In some ways, this industry label of big data is perhaps not the most helpful, because it’s not just the volume of data that is the challenge and the opportunity for the business. It’s the variety of sources, as you’ve alluded to, and also the velocity at which that data is moving.

The business needs to get hold of these multiple sources of data and immediately be able to apply the analytics, get the insights, and make the business decisions. This is why the vast majority of the data that’s available to an enterprise still remains dark.

Unused and unexploited

It’s unused and unexploited. Organizations, with their traditional analytics systems, are struggling to get the meaning and insights from all these data types that we mentioned. These include unstructured information, such as social media sentiment, voice recordings, potentially even video recordings, and the structured and semi-structured things like log files and data center data. For many organizations, getting the information quickly enough out of their CRM and enterprise resource planning (ERP) systems is a challenge as well.

Gardner: So we see that there’s a great desire to do this, and there are great returns on being able to do this well. We talked about some of the general challenges. What specifically is holding people up?

Is this an issue of cost, complexity, or skills? Why aren’t companies able to move beyond this small fraction of the available information to which they could be applying such important insight and analytics?

Wood: It’s a complexity and a skills challenge, as you mentioned. The systems they have today, Dana, typically aren’t set up to analyze these vast amounts of unstructured information, or to analyze the structured data at the speed needed by the organization.

Think about the need to analyze immediately a clickstream from an online shopping application or a pay-to-use application that an organization has. That is, a rapid-scale analysis of a large amount of structured data. Typically, the analytic systems that organizations have had aren’t able to cope with that or with the unstructured human information.

This is why HP has created the HAVEn Big Data Platform, and Girish will talk in more detail about this, and how it brings together the analytics engine needed to address these issues.

Just as importantly, there’s the ecosystem around HAVEn, which includes HP experts and services and services from partners, to bring together the skills needed to turn this data collection into useful information.

And there are skills around data scientists, as well — skills around understanding the right questions the line of business needs to be asking, and understanding actually how to visualize and represent the data.

Gardner: What were the guiding principles that you were thinking of when HAVEn was being put together?

Talking to customers

Mundada: HAVEn came together not by creating it in a dark room somewhere in the back office. It came together by talking to customers. On a regular basis, I meet with some of HP’s largest customers worldwide, getting input from them. And they’re telling us what their current problems are.

Mundada

Let me see if I can describe the landscape in a typical organization, and we can go from there. You’ll see why we created HAVEn.

Let’s visualize four different waves of data. Back in the early ’60s, ’70s, and even part of the ’80s, mainframes were the primary way to process data, and we used them to operationalize certain parts of data processing, where data was extremely high-value. If you look at the cost of those systems, it was phenomenal.

Then came the next wave in the ’80s, when we went into what I call client-server computing, and we already know several companies that were created in this space.

I’ve lived in Silicon Valley for almost 30 years now, and a whole bunch of new companies were born in this space. I worked for a company, Postgres, which became Illustra, was then acquired by Informix, which was in turn acquired by IBM. If you look at that entire wave of OLTP technologies, we created data-processing technologies designed to solve basic business problems.

Application software was created: CRM, supplier relationship management (SRM), you name it. Many companies that did consulting around that were created, too. That was that second wave after the mainframe.

Then came the third wave, where we took this data from all these transactional systems, brought them together to find out some basic analysis, which we now call business analytics, to find out “who is my most profitable customer, what are they buying, why are they buying,” and things of that nature.

We created companies for that wave, too, and many technologies. Exadata, Teradata, Netezza, and a whole bunch of companies and applications were born in that space. That wave lasted for quite a while.

What we’re seeing now is that from 2003 onward, something very fundamental has happened. At least, that’s the way I’ve been seeing this. If you look at the three Vs that Dan has described — volume, velocity, and variety — we’re talking about volumes that are growing exponentially. In the past, they were growing linearly. That creates a very different kind of requirement.

More importantly, if you look at the variety that Dan mentioned, that’s really the key driver in my mind. People are now routinely bringing in machine data, human data, and your traditional structured warehouses — all of them together.

If you visualize a bar graph, you would see that 10 percent of the data that we now can monetize is coming from traditional sources, whereas 90 percent of the data that we need to monetize is now sitting in machine data and human data.

High velocity analytics

What we’re trying to do with HAVEn is create a combined platform, where you can combine these three different data types and do very high-velocity analytics.

As a simple example, if you look at Apache Web Server logs, that data is used historically by the security people to see if anybody is breaking in. That data was being used by operational people to see if machines aren’t overloaded.

More importantly, the digital-marketing people now want to look at that data to see who’s coming to their website, what they’re buying, what they’re not buying, why they’re buying, and which geographies they’re coming from. Then they want to combine all these data sets with their existing structured data to make sense of it.
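The three audiences Mundada describes are all reading the same access-log records. A minimal sketch in Python of pulling the shared fields out of an Apache combined-format line (the regex and sample line are illustrative, not part of any HP product):

```python
import re

# Apache combined log format: host ident user [time] "request" status bytes ...
LOG_PATTERN = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) [^"]*" '
    r'(?P<status>\d{3}) (?P<bytes>\d+|-)'
)

def parse_line(line):
    """Parse one access-log line into a dict of fields, or None if malformed."""
    m = LOG_PATTERN.match(line)
    return m.groupdict() if m else None

line = '203.0.113.9 - - [10/Dec/2013:13:55:36 +0000] "GET /cart/checkout HTTP/1.1" 200 2326'
rec = parse_line(line)

# Security watches for repeated failures from one host; operations watches
# status codes and load; marketing groups hits by path and visitor origin.
print(rec["host"], rec["path"], rec["status"])  # 203.0.113.9 /cart/checkout 200
```

Even at this toy scale, the transcript’s point holds: one record feeds three very different analyses, which is why combining the data beats maintaining three point solutions.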

Today, it’s a mess in the market. When we talk to our partners and customers, they’re saying that they have point solutions for each of these things, and if you want to combine that data, it’s really hard. That’s why we had to create HAVEn.

HAVEn is the fourth wave. HAVEn is specifically about big data, the fourth wave. If you look at HP’s portfolio, we sell products and services across each of these waves, and the fastest growing wave right now is the big-data wave. It’s growing at about 35 percent a year, according to Gartner, and that’s why we’re excited about it.

Gardner: Now we know why you created it and what it’s supposed to do. Tell us a little bit more about what’s included in HAVEn and why it is that you’ve been able to combine product and platform to solve this very difficult task.

Mundada: If you look at what’s required now to process big data in its entirety, one product no longer can do it all. There is a very famous paper by some university professors, titled “One size does not fit all,” which argues that different data structures solve different kinds of data problems far more efficiently.

One way to think about big data is to think of it as a pile of dirt. It’s a big pile. In that pile, there’s gold, silver, platinum, iron, and other metals you don’t even know. If the cost of mining that data is high, obviously you’re going to go after only the platinum and some known objects that you care about, because that’s all you can afford.

HAVEn is about bringing that cost of processing down to a very, very low level so you can go after more metals. That means you have to bring together a set of technologies to be able to solve this. If you look at the last three years, HP has made very significant amounts of investments in the big-data space.

Best of breed

We bought companies that were best of breed to solve specific problems: Autonomy, Vertica, ArcSight, Fortify, TippingPoint, 3PAR, and Knightsbridge.

Now, we have a set of technologies to be able to combine them into a unique experience. Think of it almost like Microsoft Office. Before you had Microsoft Office, you would buy a word processor from one company, a spreadsheet from another company, and presentation software from a third company.

Let’s say you wanted to create a simple table. If you had created it in a word processor or even a spreadsheet, you couldn’t mix and match that. It was impossible to mix and match very different types.

Then, Microsoft came to the table and said, “Look, here’s a simplified solution.” If you want to create a table, go ahead and create it in PowerPoint. Or if you want to create something more complicated, put it in Excel. Then take that Excel table and put it in PowerPoint. Or you can put the whole thing into a Word document. That was the beauty of what Microsoft did.

We’re trying to do something similar for big data, make it very easy for people to combine all these different engines and the different data types and write simple applications on it.

Gardner: What beyond the products and binding them together makes HAVEn unique?

Mundada: HAVEn is really two different concepts. There’s the HAVEn data platform, which we’ll talk about now, and there’s a HAVEn ecosystem, which I’ll mention in a minute.

HAVEn means Hadoop, Autonomy, Vertica, Enterprise Security, and “n” applications. That’s the acronym. So let’s look at one of these pieces, and why we need an architecture like this.

As I said, today you need to combine different sets of data techniques to solve different problems, and they have to work seamlessly. That’s what we did with HAVEn. I’ve been with HAVEn from day zero, before the project concept started, and I can tell you why and how we added these pieces and how we’re trying to integrate them better.

If you look at Hadoop as an ecosystem part of that HAVEn, our story with Hadoop at HP is that Hadoop is an integral part of HAVEn. We see a lot of our customers and partners betting on Hadoop and we think it’s a good thing to keep Hadoop open and non-proprietary.

Leading vendors

We also work today with all the leading Hadoop vendors, so we have shipping appliances as well as reference architectures for both Cloudera and Hortonworks, and we’re now working with MapR to create similar infrastructure. That’s our Hadoop story.

We’ve also found that our customers are saying they want some flexibility in Hadoop. Today, they may want one vendor, and tomorrow, they may decide to go to another vendor for whatever business reasons they choose. They want to know if we can provide a simple management tool that works across multiple Hadoop distributions.

As an example, we had to extend our Business Service Management (BSM) portfolio, so we can manage Hadoop, Vertica, hardware, storage, and networking all from within one environment. This is simply operationalizing it. Having a standardized set of hardware that matches multiple Hadoop distributions was another thing we had to do. There are many such enterprise-class innovations that you’ll see coming from HP.

But more than that, we also found that Hadoop is really good for certain kinds of applications today, and obviously, the community will extend that. You will see more and more innovations coming from that community and ecosystem.

Today, there are several areas where there are holes in Hadoop, or where it’s not as strong as commercial products. One such area is SQL. The SQL interface to Hadoop is going to be one of the key differentiators across the different Hadoop distributions.

In that area, we have a technology called Vertica, which is the V part of HAVEn, and you’ll see companies like Facebook using a combination of both Hadoop and Vertica.

The classic use case we see is that people will bring in all kinds of raw data, put it into Hadoop, and do some batch processing there. Hadoop is great as a file system and a batch-processing environment. But then they’ll take pieces of that data and want to do deep analytics on it, like regression analytics, and they will put it into Vertica.

Vertica is an analytic database platform, and I will break up those three words. It’s a database: it looks and feels like a database. It has SQL, open database connectivity (ODBC), and Java database connectivity (JDBC). You can run all the tools you’re used to on it, like Tableau, Pentaho, and Informatica. So from that perspective, it’s a regular database.

What’s different is that it’s custom built for the fourth wave. It’s an analytic database, and by that, I mean the underlying algorithms are completely designed from the ground up. Michael Stonebraker, who created the key products in the first and second waves (Ingres and Postgres), also created this at MIT from the ground up.

Data today

The intuition was that the processing of data today has gone from 10 to 20 columns per row to possibly thousands of columns. A social media company, for example, might have 10,000 pieces of information on me, and the processing is becoming more regression-oriented. You might ask, “Girish, age x, lives here, and likes y. What’s the likelihood somebody else may like it?”

It’s meant for that kind of deep analytical processing, with a column-oriented structure. In those kinds of applications, this database technology tends to be tens of times faster. That’s one example of Hadoop and Vertica, and we can talk more about the other pieces, Autonomy and Enterprise Security, with you.
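The column-store intuition behind that speedup can be sketched in plain Python. This is toy data and toy storage, not Vertica’s actual engine; it only illustrates why an aggregate over one of many columns favors a columnar layout:

```python
# Row store: each record carries every attribute, so an aggregate over one
# column still walks whole rows.
rows = [
    ("alice", 34, "SF", 120.0),
    ("bob",   29, "NY",  80.5),
    ("carol", 41, "LA", 200.0),
]
total_row_store = sum(r[3] for r in rows)

# Column store: each attribute is its own contiguous array, so the same
# aggregate touches only the one column it needs.
columns = {
    "name":  ["alice", "bob", "carol"],
    "age":   [34, 29, 41],
    "city":  ["SF", "NY", "LA"],
    "spend": [120.0, 80.5, 200.0],
}
total_col_store = sum(columns["spend"])

assert total_row_store == total_col_store == 400.5
```

With thousands of columns per row, as in the social-media example, the fraction of data a columnar engine can skip is what produces the tens-of-times speedups.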

Gardner: So we see that there’s a platform that you put together. There’s an ecosystem that’s supporting that. There are these binding standards that make the ecosystem and the platform more synergistic. But other people are doing the same thing. What’s making HAVEn different? What is it about HAVEn that you think is going to be a winner in the marketplace?

Mundada: There are two different answers to that. Let me talk about how we’ve taken not just the SQL piece of Hadoop, but how we extend it with other parts of HP that are unique to HAVEn. It’s the breadth of it. Let’s see how we extend this simple combination of Hadoop and Vertica.

I said it’s an analytic database platform. On the platform side, with Vertica, we’re able to drop in user-defined, user-written code. For example, you can drop R, Java, C++, or C language routines directly into the database. Now we’re able to combine that richness across our portfolio.
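The idea of dropping user-written routines into the database can be demonstrated with SQLite from the Python standard library, which also lets user code run inside the SQL engine. This is an analogy only; Vertica’s actual UDx mechanism and syntax for R, Java, C++, and C routines differ:

```python
import sqlite3

def zscore(x, mean, std):
    """User-written routine: standardize a value."""
    return (x - mean) / std if std else None

con = sqlite3.connect(":memory:")
# Register the user-defined function so SQL queries can call it directly.
con.create_function("zscore", 3, zscore)
con.execute("CREATE TABLE readings (v REAL)")
con.executemany("INSERT INTO readings VALUES (?)", [(8.0,), (10.0,), (12.0,)])

row = con.execute(
    "SELECT zscore(v, 10.0, 2.0) FROM readings WHERE v = 12.0"
).fetchone()
print(row[0])  # 1.0
```

The design point is the same in both systems: the computation moves to where the data lives, instead of the data being exported to the computation.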

Autonomy, which is the A part of HAVEn, is a unique technology, one of a kind. Some of the largest governments and some of the largest organizations in the world, such as banks and financial institutions, have it in production doing what it’s meant for: human information processing, which is audio, video, and text.

As an example, you could take a video stream and ask simple questions. Tell me if an object is moving from point A to point B, or tell me what’s in the object. Is it a human? Is it a car? Can you read car number plates automatically?

And you could do some really sophisticated applications. Taking a car, we have cases where police cars have video cameras mounted on the side, and as they’re driving by in a parking lot, they can take photos of the number plates and compare it to stolen cars.

Crime detection

Imagine taking that technology and combining it automatically, through simple SQL-like or REST API-like commands, with your existing data to create very sophisticated applications for understanding your customers, or for crime detection and things like that.
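At its simplest, the license-plate scenario reduces to normalizing recognized text and probing a hot list, which is exactly the kind of join with existing data the speakers describe. A hypothetical sketch; all plate values are invented:

```python
# Hot list of stolen-vehicle plates, e.g. loaded from an existing database.
stolen = {"ABC1234", "XYZ9876"}

def normalize(plate):
    """Strip spacing/punctuation and case so OCR output matches stored plates."""
    return "".join(ch for ch in plate.upper() if ch.isalnum())

# Plates as a video-analytics engine might emit them from a patrol-car camera.
recognized = ["abc 1234", "DEF-5555", "xyz9876"]

hits = [p for p in recognized if normalize(p) in stolen]
print(hits)  # ['abc 1234', 'xyz9876']
```

The hard part, recognizing plates from video, is what the Autonomy engine would supply; the join against structured data afterward is deliberately this simple.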

Now let’s bring in the third part of the puzzle, the E part, which is Enterprise Security. That’s also unique. We have an entire portfolio, both for security and for operations management.

If you look at enterprise security and at the Gartner Magic Quadrant, HP’s product set has been in the leaders’ quadrant for several years in a row. HP is the number-one vendor in that area.

Now, think about our portfolio of ArcSight, Fortify, TippingPoint, and the other Enterprise Security products. Imagine taking the data-collection capabilities of those, bringing the data into this common HAVEn platform, and combining it with other structured and unstructured data with just simple commands. That’s something we can do uniquely.

Operations management is another area where we have hundreds of these machine logs. We can collect them, break them open into modular pieces, and create new applications. You can go look at our website, Operations Analytics, where with a simple slider, you can go back and forth in time to millions of log files as if they were structured data.

We can do that uniquely, because we have that entire collection. Our BSM portfolio has been on the market for 30 years and is one of the leaders. This is the HP OpenView heritage, and bringing all these things together is one of the things we can do uniquely at HP.

That’s the breadth of our portfolio, but it simply doesn’t stop at this platform level. Remember, I said that there are two concepts. There is a platform, and then there is the ecosystem. Let’s look at the platform level first.

We have the whole of HAVEn, and we ship 700 connectors out of the box. With simple commands, you can bring in social-media data in every written language. You can bring in machine logs and structured logs. That’s the platform.

Let’s extend it further into the ecosystem part. The next thing that people were saying was, “We want to use something very open. We have our own visualization tools. We have our own extract, transform, load (ETL) tools that we’re used to. Can you just make them work?” And we said, “Sure.”

That’s one of the things we’re able to do now. With simple SQL, we can write queries across structured and unstructured data. Using Tableau Software, or any other tool you like, you can access this data through our connectors. More importantly, it lets you hook your existing ETL tools into this, completely transparently.

Breadth and openness

So that’s the breadth and the openness of the platform. And breadth is not just about the software platform; it’s about HP’s strength in bringing together hardware, software, and services.

Beyond the platform itself (the HAVEn components in the middle and the connectors), our customers are asking, “Can you give us matching hardware for Hadoop, so we don’t have to spend time setting it up?” That’s one of the things HP can uniquely do, and more importantly, we have standardized appliances for Vertica, for example.

If you look at the other side, our customers are also saying, “We understand that HP wants to provide us all this, but we like openness and we like other partners.” So we said, “Fine, we’ll leave this entire ecosystem open.” Our software will work with HP hardware and we can optimize, but we also commit to working on everybody else’s hardware.

Our cloud story is that we’ll work on Amazon as well as OpenStack. For example, if you want to build a hybrid cloud, where part of your data resides in your private environment on HP using OpenStack, that’s fine. If you want to put it in Amazon or Rackspace, no problem. We’ll help you bridge all of these. These are the kinds of enterprise-cloud innovations that HP is able to do, and we’re open to this.

So to answer your question succinctly, if there were three things I would pick where HP is different, the first is the breadth of our portfolio; we’ve brought a very large set of technologies together.

The second is the openness of the platform. HP is known as a very open company. Our Hadoop story is an example: we didn’t create a proprietary Hadoop; we kept it open. Likewise with virtualization: we didn’t force a virtualization technology on you; we kept it open.

More importantly, if there is one key thing you want to take home from what we’ve done with HAVEn, it’s that it’s not about feeds and speeds. It’s about business value.

The reason we created HAVEn was to create an iPhone-like or Android-like environment, where the vision is that you should be able to go to a website (say you have standardized on the HAVEn platform) and then point, click, and download an application.

The “n” part of HAVEn, the applications, is really the business value of it, and that’s how we see HAVEn as unique. There is nobody else, as far as we know, with that end-to-end vision, where you can build the applications yourself using standard tools (SQL, ODBC, REST APIs, JDBC), or you can buy ready-made software that HP Software has created.

We have packages across service management, operations, and digital marketing. Or you can go with a partner, such as HP Enterprise Services, Accenture, Capgemini, or any of those big partners. That’s something unique about the HP big-data ecosystem that doesn’t exist anywhere else today.

Applications

Gardner: Applications take advantage of the platform, its capabilities, and the breadth and depth of the data and information.

I wonder if you could explain a little bit more about the application side of HAVEn, perhaps through examples of what people are already doing with these applications, and how they’re using them in their business setting?

Mundada: That’s actually one of the most exciting parts of my job. As I said, I meet literally 100 customers a month. I’m traveling across the continents, and the use-cases of big data that I see are truly phenomenal. It really keeps you very motivated to keep doing more.

Let’s look at a very broad level at why these things matter. Big data is not just about monetary profits. It’s really about what I call extended profits; it doesn’t have to be monetary. As a simple example, we have medical companies using our technologies to dramatically speed up drug discovery, hundreds of times faster than they were able to with Hadoop alone.

That translates into saving lives. At our recent Discover show in Barcelona, we saw a very innovative organization using our technology to look at biodiversity and save wildlife in the Amazon.

That’s unique, but those are edge cases. If you look at a regular enterprise, what they want to do at a very high level falls into three categories, and you see applications that HP itself is building, applications that partners are building, and applications that customers themselves are building.

I’ll mention three applications. In terms of increasing revenue, we ship a product called Digital Marketing Hub, which combines the power of Autonomy and Vertica to analyze all of your customer interactions.

You’re able to take your call-center logs, your social-media feeds, your emails, and your phone interactions, find out what the customer is really saying, what they want and don’t want, and then optimize that interaction with the customer to create more revenue.

More precise answers

For example, when a customer calls knowing what they want, obviously you can tell them more precise things. That’s one example.

Let’s look at another example, where you want to improve your bottom line by decreasing your costs. Operations Analytics is another software product we ship. We’re able to drive down the cost of debugging network troubles by 80 percent by combining all these machine logs on a very frequent basis.

We can look at this and say, “At this second, every machine was okay. A second later, machines have gone down.” I can look at exactly the incremental logs that showed up, using a simple slider as a pointer, going through the logs as if they were structured data. That’s unique.
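The step-through-time idea can be sketched as a difference between per-second buckets of log events, which is what makes the “everything was fine, then this appeared” comparison cheap. Hypothetical log data, not the Operations Analytics implementation:

```python
from collections import Counter

# Log events bucketed by timestamp (seconds).
events = [
    (1001, "web01 OK"),
    (1001, "db01 OK"),
    (1002, "web01 OK"),
    (1002, "db01 ERROR: connection pool exhausted"),
]

def window(events, t):
    """Multiset of log messages observed at second t."""
    return Counter(msg for ts, msg in events if ts == t)

# What appeared at second 1002 that was not present at 1001?
new_at_1002 = window(events, 1002) - window(events, 1001)
print(list(new_at_1002))  # ['db01 ERROR: connection pool exhausted']
```

Scaled to millions of log files, the same subtraction is what a time slider exposes: only the delta between two moments, rather than the whole log corpus.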

Those are the kinds of applications we’re able to create, and it’s not just these two. The other thing people want is to improve products and services. We have something called Service Anywhere where, as you’re calling or typing in a request for information, the system is able to understand the meaning of what you’re saying.

Notice that this is not keyword search. This is meaning, where it’s able to go through existing case reports from customers, look at existing resolutions, and then say, “Okay, this might solve your problem automatically.”

Imagine the impact. Your customers are happy, because the answers come quicker. We call this ticketless IT. But more importantly, look at some other interesting ways this affects a company.

For example, I was recently in Europe, talking to a very large telco there, and they said, “We have something like 20,000 call-center operators taking calls from customers. Each call might take six minutes, and some of them are repeat calls. That’s really our problem.”

We worked out something that could save them roughly two minutes per call. That translates to about $100 million in net savings per year, which is really phenomenal. That’s one kind of application that HP built.
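The savings figure works out under simple assumptions. Only the 20,000 operators and the two minutes per call come from the conversation; the calls per day, loaded cost per hour, and working days below are assumptions chosen to make the arithmetic concrete:

```python
operators = 20_000           # from the conversation
minutes_saved_per_call = 2   # from the conversation
calls_per_day = 30           # assumed
cost_per_hour = 20.0         # assumed fully loaded cost, USD
working_days = 250           # assumed

# Hours of operator time saved per year, then priced at the hourly cost.
hours_saved = operators * calls_per_day * minutes_saved_per_call / 60 * working_days
annual_saving = hours_saved * cost_per_hour
print(f"${annual_saving:,.0f}")  # $100,000,000
```

Different assumed values move the total, but the shape of the calculation, small per-call savings multiplied across a large operator pool, is what makes the number plausible.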

Now imagine a customer wanting to build the same application themselves. That’s the beauty of the HAVEn platform. On the same platform, you can buy HP built applications or you can build your own.

Let’s look at NASCAR as an example. They did something very similar for customer analytics. While the race is happening, they’re able to understand audio, television, radio broadcasts, and social media, and bring it all together as if it were one unified piece of data.

Then, they’re able to use that data in really innovative ways to further their sport and to create more promotional dollars, not just for themselves but for the participants, too. That’s unique: being able to analyze human data at mass scale.

Looking to the future

Gardner: Well, we’ve learned a lot about the market, the demand, and why big data makes so much sense; about the very large undertaking by HP around HAVEn; and about what it’s delivering in terms of openness, platforms, breadth, and these great examples of applications. But we also need to look to the future.

What’s coming next in terms of HAVEn 2.0 or HAVEn 1.5? Dan, could you update us on how things are progressing, what you have in mind for the next versions of these products, and, therefore, how the whole increases as the sum of the parts increases?

Wood: Dana, we’ve just announced HAVEn 2.0. Girish explained HAVEn in terms of the platform and the ecosystem, and continuous innovation now happens around both of those pieces. It’s really important to us to be driving the ecosystem as well as the platform. So I’ll speak to HAVEn 2.0 and the features that are the focus in driving it forward.

In terms of the platform, there are the analytics engines. Girish mentioned they were best in class at the time HP acquired them, and we continue to invest in R and D across Autonomy IDOL, Vertica, and the ArcSight Logger product. We recently announced new versions of all three, improving the analytics capability and the usability and, just as importantly, increasing the interoperability.

For example, we now have integration of ArcSight Logger with the Autonomy IDOL engine for analyzing unstructured human information. A really great use case: Logger previously enabled IT to understand data movements and potential threats and risks in the organization.

For example, if I were sending 50 percent of my email to a competitor, you could combine that capability with the unstructured-information analysis in Autonomy and understand, at the information layer, exactly what’s in that email, 50 percent of which is going to a competitor.

Putting that together gives a powerful view of what an individual is doing and whether that’s a risky individual in the organization. We’re integrating those HAVEn engines, and we’re putting more effort into integrating with the Hadoop environment as well.
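A first-pass version of that risk view is a simple ratio over outbound mail metadata: flag senders whose share of traffic to competitor domains crosses a threshold, before any content analysis is applied. Everything here, names and domains included, is hypothetical:

```python
from collections import defaultdict

# (sender, destination domain) pairs, e.g. extracted from mail logs.
outbound = [
    ("eve@corp.example",  "rival.example"),
    ("eve@corp.example",  "rival.example"),
    ("eve@corp.example",  "corp.example"),
    ("adam@corp.example", "corp.example"),
]

def risky_senders(mail, competitor_domains, threshold=0.5):
    """Senders whose fraction of outbound mail to competitors >= threshold."""
    sent, to_rival = defaultdict(int), defaultdict(int)
    for sender, dest in mail:
        sent[sender] += 1
        if dest in competitor_domains:
            to_rival[sender] += 1
    return [s for s in sent if to_rival[s] / sent[s] >= threshold]

print(risky_senders(outbound, {"rival.example"}))  # ['eve@corp.example']
```

In the scenario Wood describes, a flagged sender’s messages would then be handed to the unstructured-content engine; the metadata filter just narrows what needs deep analysis.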

For example, we have just announced Hadoop integration connectors for Autonomy. A lot of people are building a data lake with Hadoop, and they want the capability of applying analytics to the unstructured information that exists in that Hadoop data lake. Clearly, we’ve also got integration of Vertica with the Hadoop environment as well.

The other key thing on the engine side is IDOL OnDemand. In an early-access program at the moment, we’re making the IDOL engine available to developers as a cloud-based offering. This is to encourage the independent developer community to take components of IDOL, such as social-media analytics and video or audio recognition, and start building them into their own applications.

We believe the power of HAVEn will come from the combination of HP-provided applications and also third-party applications on top.

Early-access program

We’re facilitating that with this initial early-access program for IDOL OnDemand, and we’re also investing in developer programs to make the whole HAVEn development platform far easier for partners and independent developers to work with.

We’ve set up a HAVEn developer website, and stay tuned for some really fun events online and physical events, where we’ll be getting the developer community together.

In terms of the applications that make the whole HAVEn ecosystem come to life, Girish has mentioned some that we’ve announced over the last few weeks, so I’ll give you a quick recap.

We have the Operations Analytics and Service Anywhere apps, both aimed at the CIO, and we have the Digital Marketing Hub from HP, aimed at marketing leaders in organizations. These are three applications that HP has packaged on the HAVEn platform.

And along with the HAVEn 2.0 announcement, we’re really pleased that six of the leading SI partners, among them Accenture, Capgemini, Deloitte, PwC, and Wipro, have themselves put marketing applications on top of HAVEn. Those partners have built fascinating mixtures of very industry-specific analytics applications and more horizontal apps, based on the priorities they’re chasing.

So we’re really excited about that and expect to see many more announcements of partner applications over the next few months.

The final piece of HAVEn 2.0, supporting this whole ecosystem, is a marketplace we’ve launched, where we’re populating our solutions and partner solutions to facilitate the commerce side of those applications taking off in the market.

One-stop resource

The first place to go is hp.com/haven. That’s your one-stop resource for information on the platform and all of the engines Girish alluded to. You can get inspiration from some amazing customer case studies we have there, and insights from experts like Girish and others who talk in depth about the individual engines.

And as you rightly say, Dana, it’s about finding the right on-ramp for yourself. You can look at the case studies and the use cases for big data in particular industries, and map them to the specific pain point you have today. The hp.com/haven website gives you all of that information.

You can also drill down from there, if you’re a developer, and find the tools and resources that we’ve spoken about to enable you to start building apps on top of HAVEn. That’s one part.

The whole power of HP behind the HAVEn platform is in enabling organizations, from an infrastructure and services point of view, to start building these big-data analytics. A couple of key things here.

We’ve started to build fully configured appliances around Hadoop and Vertica. The ConvergedSystem team in HP has launched the ConvergedSystem 300, which gives you Vertica and Hadoop on a pre-configured appliance. That’s a great starting point for someone early in the big-data analytics life cycle.

To expand on that, the Technology Services team can do full consulting on how to optimize the overall infrastructure for processing, sharing, and storing the vast amount of information that all organizations are coping with today. That then starts to bring in things like 3PAR storage systems and other innovations across the HP hardware business.

Another place where I often see customers needing help to get started is in understanding exactly what questions they need to be asking of the analytics, and exactly what algorithms and analytics to put in place to get going. This is where the Big Data Discovery Experience Services from HP come in.

These are provided by the Enterprise Services Group (ESG). They have data scientists and industry experts who can help customers go through the design phase for a big-data platform and then offer the HAVEn infrastructure supported by the ESG services team.

Finally, Dana, come and see us on the road. We’ll be at HP Discover in Las Vegas June 10-12. We’re putting together several road shows and events across the main regions in Europe, the Americas, and in Asia Pacific, where we will be taking HAVEn on the road, too. Take a look at that hp.com/haven website, and details of the events will be found on there.

Key messages

Mundada: There are two key messages. First, big data is really important, and it’s disrupting business. Your competitors are going to do it. You have a choice to lead and do it yourself, or you will be forced to follow. It’s one of those things that is disrupting industries worldwide.

Second, when you think of big data, don’t think in piece parts. It’s not that you need a separate solution for human information, another for machine logs, and another for structured data. You have to think of it holistically, because in many of the newer applications I’m seeing regularly, you have to bring all these data types together and create joint applications.

Whichever technologies you choose and settle on, think of that Microsoft Office-like experience. You want an integrated solution across the entire stack, and there aren’t many available in the market today. So whoever you work with, make sure they can handle that entire piece as one giant puzzle.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.


Fast-changing demands on data centers drive need for uber data center infrastructure management

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

Once the province of IT facilities planners, the management and automation of data centers has rapidly grown in scope and importance.

As software-driven data centers have matured and advanced to support unpredictable workloads like hybrid cloud, big data, and mobile applications, the ability to manage and operate that infrastructure efficiently has grown increasingly difficult.

At the same time, as enterprises seek to rationalize their applications and data, centralization and consolidation of data centers has made their management even more critical — at ever larger scale and density.

So how do enterprise IT operators and planners keep their data centers from spinning out of control despite these new requirements? How can they leverage the best of converged systems and gain increased automation, as well as rapid analysis for improving efficiency?

BriefingsDirect recently posed such questions to two experts from HP Technology Services to explore how new integrated management capabilities are providing the means for better and automated data center infrastructure management (DCIM).

To learn more on how disparate data center resources can be integrated into broader enterprise management capabilities and processes, now join Aaron Carman, HP Worldwide Critical Facilities Strategy Leader, and Steve Wibrew, HP Worldwide IT Management Consulting Strategy and Portfolio Lead. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions. [Learn more about DCIM.]

Here are some excerpts:

Gardner: What’s forcing these changes in data center management and planning and operations? What are these big new requirements? Why is it becoming so difficult?

Carman: In the past, folks were dealing with traditional types of services that were on a traditional type of IT infrastructure. Standard, monolithic-type data centers were designed one-off. In the past few years, with the emergence of cloud and hybrid service delivery, as well as some of the different solutions around convergence like converged infrastructures, the environment has become much more dynamic and complex.

Hybrid services

So, many organizations are trying to grapple not only with the traditional silos that exist between facilities, IT, and the business, but also with how they’re going to host and manage hybrid service delivery and what impact that’s going to have on their environment.


It’s not only about what the impact is going to be on rolling out new infrastructure solutions like converged infrastructures from multiple vendors, but how to increasingly provide more flexibility and services to their end users as digital services.

It’s become much more complex and a bit harder to manage, because there are many separate tools used to manage these environments, and their number has continued to increase.

Gardner: Steve, I suppose too that with ITIL v3 and more focus on a service-delivery model, even the very goal of IT has changed.

Wibrew: That’s very true. We’re seeing a trend in the change and role of IT to the business. Previously IT was a cost center, an overhead to the business, to deliver the required services. Nowadays, IT is very much the business of an organization, and without IT, most organizations simply cease to function. So IT, its availability and performance, is a critical aspect of the success of the business.

Gardner: What about this additional factor of big data and analysis as applied to IT and IT infrastructure? We’re getting reams and reams of data that needs to be used and managed. Is that part of what you’re dealing with as well?


Wibrew: That’s certainly a very important part of the converged-management solution. There’s been a tremendous explosion in the amount of data, the amount of management information, that’s available. If you narrow that down to the management information associated with operating, managing, and supporting data centers, from the facility to the applications and platforms, right up to the services to the business, that’s clearly a huge amount of information collected and maintained on a 24×7 basis.

Making good and intelligent decisions on that is quite a challenge for many organizations. Quite often, people still work in isolated, siloed teams without good interaction between them. It’s a challenge to draw that information together so businesses can make intelligent choices based on analytics of that end-to-end information.

Gardner: Aaron, I’ve heard that word “silo” now a few times, siloed teams, siloed infrastructure, and also siloed management of infrastructure. Are we now talking about perhaps a management of management capabilities? Is that part of your story here now?

Added burden

Carman: It is. For the most part, most organizations, when faced with trying to manage these different areas of facilities, IT, and service delivery, have come up with their own sets of run books, processes, tools, and methodologies for operating their data centers.

When you put that onto an organization, it’s just an added burden for them to try to get vendors to work with one another and integrate software tools and solutions. What the folks that provide these solutions have started to realize is that there needs to be an interoperability between these tools. There has never really been a single tool that could do that, except for what has just emerged in the past few years, which is DCIM.

HP really believes that DCIM is a foundational, operational tool that will, when properly integrated into an environment, become the backbone for operational data to traverse from many of the different tools that are used to operate the data center, from IT service management (ITSM), to IT infrastructure management, and the critical facilities management tools.

Gardner: I suppose yet another trend that we’re all grappling with these days is the notion of things moving to as-a-service, on-demand, or even as a cloud technology. Is that the case, too, with DCIM, that people are looking to do this as a service? Are we starting to do this across the hybrid model as well?

Carman: Yes. These solution providers are looking toward how they can penetrate the market and provide services to all different sizes of organizations. Many of them are looking to a software-as-a-service (SaaS) model to provide DCIM. There has to be a very careful analysis of what type of a licensing model you’re going to actually use within your environment to ensure that the type of functionality you’re trying to achieve is interoperable with existing management tools. [Learn more about DCIM.]

Wibrew: Today, clients have a huge amount of choice in terms of how they provision and obtain their IT. Obviously, there are the traditional legacy environments and the converged systems and clients operate in their own cloud solutions.

Or maybe they’re even going out to external cloud providers, which introduces some interesting dynamics that really do increase the complexity of where they get services from. This needs to be baked into the converged solution, around the interoperability and interfacing between multiple systems, so that IT truly is a business supporting the organization and providing end-to-end services.

Organizations struggling

Carman: Most organizations are really struggling to introduce DCIM into their environment, since at this point it’s viewed more as a facilities-type tool. The approach from different DCIM providers varies greatly in the functions and features their tools provide. Many organizations are struggling just to understand which DCIM product is best for them and how to incorporate it into a long-term strategy for operations management.

So the services that we brought to market address that specifically, not only from which DCIM tool will be best for their environment, but how it fits strategically into the direction they want to take from hosting their digital services in the future.

Gardner: Steve, I think we should also be careful not to limit the purview of DCIM. This is not just IT; it includes facilities, hybrid service-delivery models, and management capabilities. Maybe you could help us put the proper box around DCIM. How far should it go, and why? Or should we narrow it so that it doesn’t become diluted or confused?

Wibrew: Yeah, that’s a very good question, an important one to address. What we’ve seen is what the analysts have predicted. Now is the time, and we’re going to see huge growth in DCIM solutions over the next few years.

DCIM has really been the domain of the facilities team, and there’s traditionally been quite a lack of understanding of what DCIM is all about within the IT infrastructure management team. If you talk to a lot of IT specialists, awareness of DCIM is still quite limited at the moment. So they certainly need to find out more about it and understand the value that DCIM can bring to IT infrastructure management.

I understand that features and functions do vary, and the extent of what DCIM delivers will vary from one product to another. It’s very good certainly around the facilities space in terms of power, cooling, and knowing what’s out on the data center floor. It’s very good at knowing what’s in the rack and how much power and space has been used within the rack.

It’s very good at cable management, the networks, and for storage and the power cabling. The trend is that DCIM will evolve and grow more into the IT management space as well. So it’s becoming very aware of things like server infrastructure and even down to the virtual infrastructure, as well, getting into those domains.

DCIM will typically have workflow capabilities for change and activity management. But DCIM alone is not the end-to-end solution, and we realized the importance of integrating it with full ITSM and platform-management solutions. A major focus over the past few months has been to make sure that DCIM solutions integrate very well with the wider IT service-management solutions, to provide that integrated, end-to-end, holistic management solution across the entire data-center ecosystem.

Great variation

Carman: With DCIM being a newer solution within the industry, I want to be careful about calling folks DCIM specialists. We feel that we have very good knowledge of the solutions out there, and they vary greatly.

It takes a collaborative team of folks within HP, as well as the client, to truly understand what they’re trying to achieve. You can even drill down to the types of use cases the organization is trying to achieve, and which tool works best in interoperability and coordination with the other tools and processes they have.

We have a methodology framework called the Converged Management Framework that focuses on four distinct areas for an optimized solution and strategy: starting with business goals and understanding what the true key performance indicators are and what dashboards are required.

It looks at what the metrics are going to be for measuring success and couples that with understanding, organizationally, who is responsible for the types of services ultimately provided to the end user. Most of the time, we’re focusing on the facilities and IT organizations. [Learn more about DCIM.]

Also, those need to be aligned to the processes and workflows for provisioning services to end users, supported directly by a systems reference architecture, which is primarily made up of operational management tools and software. All of those need to support one another and be purposefully designed, so that you can meet and achieve the goals of the business.

When you don’t do that, the time it takes to deliver services to your end users lengthens, and that costs money. When you have separate tools that aren’t referencing a single source of data, you spend a lot of time rationalizing and verifying whether the data in front of you is accurate. All of this boils down not only to cost but to having resilient operations: knowing that when you look at a particular device or set of devices, you truly understand what it provides, end to end, to your users.

Wibrew: If you think about it, the scope of managing the facilities and the IT infrastructure, right up to the services of the business, end to end, is very large and very complex. We have to break it down into smaller, more manageable chunks and focus on the key priorities.

Most-important priorities

So we look across the organization, working with clients to identify their most important priorities in terms of their converged-management solution and their journey.

It’s heavily structured around ITSM and ITIL processes, and we’ve identified some great candidates within ITIL for integration between facilities and IT. It’s really a case of working out the prioritized journey for that particular client. Probably the most important integration would be a single view of the truth for operational data: unified asset information.

A CMDB within a configuration management system might be the first and most important integration between the two, because that’s the foundation for other follow-on services. Until you know what you’ve got, it’s very difficult to plan what you need in the future in terms of infrastructure.
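To make the "single view of the truth" idea concrete, here is a minimal sketch that reconciles two hypothetical inventory feeds; the asset tags, field names, and record structures are illustrative assumptions, not tied to any particular DCIM or CMDB product:

```python
# Minimal sketch of merging a facilities inventory and an IT CMDB into one
# asset view. All asset tags, fields, and values below are hypothetical.

facilities_view = {
    "rack12-u07": {"power_feed": "PDU-A3", "rack": "12", "u_position": 7},
    "rack12-u09": {"power_feed": "PDU-A3", "rack": "12", "u_position": 9},
}

it_view = {
    "rack12-u07": {"hostname": "app-db-01", "owner": "Finance IT"},
    "rack14-u02": {"hostname": "web-04", "owner": "E-commerce"},
}

def unified_inventory(facilities, it):
    """Merge both views; unmatched entries hint at redundant or untracked gear."""
    merged = {}
    for asset in facilities.keys() | it.keys():
        record = {**facilities.get(asset, {}), **it.get(asset, {})}
        record["in_facilities"] = asset in facilities
        record["in_it_cmdb"] = asset in it
        merged[asset] = record
    return merged

inventory = unified_inventory(facilities_view, it_view)

# Assets known to facilities but absent from the CMDB are candidates for the
# "powered on but not utilized by anybody" equipment discussed later on.
orphans = [a for a, r in inventory.items() if not r["in_it_cmdb"]]
```

Assets present in only one view are exactly the discrepancies described here: gear the facilities team powers and cools but the CMDB has lost track of, or vice versa.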

Another important integration that is now possible with these converged solutions is the integration of power management in terms of energy consumption between the facilities and the IT infrastructure.

If you think about managing the power consumption and efficiency of the data center with PUE (power usage effectiveness), generally speaking, in the past that would be the domain of the facilities team. The IT infrastructure would simply be hosted in the facility.

The IT teams didn’t really care about how much power was used. But these integrated solutions can be far more granular and dynamic around energy consumption, with much more information being collected, not just at the facility level but within the racks, in the power-distribution units (PDUs), in the blade chassis, right down to individual servers.

We can now know what the energy consumption is. We can now incentivize the IT teams to take responsibility for energy management and energy consumption. This is a great way of reducing a client’s carbon footprint and energy consumption within the data center through these integrated solutions.
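The granular energy picture described here can be sketched in a few lines. The per-server readings, rack topology, and facility figure below are made-up examples, not real telemetry:

```python
# Sketch of rolling per-server power readings up to rack and facility level,
# then computing PUE. All numbers and names are hypothetical examples.

server_watts = {
    ("rack12", "app-db-01"): 410.0,
    ("rack12", "web-04"): 265.0,
    ("rack14", "hv-node-2"): 380.0,
}

def rack_totals(readings):
    """Aggregate per-server watts into per-rack totals."""
    totals = {}
    for (rack, _server), watts in readings.items():
        totals[rack] = totals.get(rack, 0.0) + watts
    return totals

def pue(total_facility_watts, it_watts):
    """PUE = total facility power / IT equipment power (1.0 is the ideal)."""
    return total_facility_watts / it_watts

racks = rack_totals(server_watts)
it_load = sum(racks.values())   # 1055.0 W of IT load in this example
facility_load = 1688.0          # also covers cooling, lighting, and losses
print(round(pue(facility_load, it_load), 2))  # → 1.6
```

Rolling the IT load up from per-server readings is what makes a facility-level metric like PUE, and per-team energy chargeback, possible at all.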

Gardner: Aaron, I suppose another important point to be clear on is that, like many services within HP Technology Services, this is not just designed for HP products. This is an ecumenical approach to whatever is installed, in terms of both products and facility-management capabilities. I wonder if you could explain a bit more of HP’s philosophy when it comes to supporting the entire portfolio. [Learn more about DCIM.]

Carman: The professional services HP is offering in this space are really agnostic to the final solution. We understand that a customer has been running their environment for years and has invested in a lot of different operational tools over that time.

That’s part of our analysis and methodology: come in and understand the environment and what the client is trying to achieve. Then we put together a strategy, a roadmap of different, interoperable products that will help them achieve their goals.

Next level

We continue to transform them to the next level of capabilities they’re looking to achieve, especially around how they provision services, and help them ultimately become a cloud-service provider to their end users, with heavy levels of automation built in, so that they can deliver digital services to their end users in a much shorter period of time.

Gardner: I realize this is fairly new. It was just on Jan. 23 that HP announced new services that include converged-management consulting, and the management framework was updated with new technical requirements. You have four new services organized around a management workshop, roadmap, design, implementation, and so forth. [Learn more about DCIM.]

So this is fairly new, but Steve, is there any instance where you’ve worked with an organization where some of the really powerful benefits of doing this properly have shown through? Do you have any anecdotes of an organization that’s done this, and maybe some interesting ways it benefited them, perhaps even unintended consequences?

Data-center transformation

Wibrew: The starting point is to understand what’s there in the first place. I’ve been engaged with many clients where if you ask them about inventory, what’s in the data center, you get totally different answers from different groups of people within the organization. The IT team wants to put more stuff into the data center. The facilities team says, “No more space. We’re full. We can’t do that.”

I found that when you pull this data together from multiple sources and get a consistent view of the truth, you can start to plan far more accurately and efficiently. Perhaps the lack of space in the data center is because there’s infrastructure sitting there, powered on, and not being utilized by anybody.

That infrastructure is effectively redundant. I’ve had many situations where, in pulling together a consistent inventory, we could get rid of a lot of redundant equipment, freeing space for major initiatives and expansion projects. So those are some examples of the benefits of consolidated inventory and information.

Gardner: As we look a few years out at big-data requirements, hybrid-cloud requirements, infrastructure KPIs for service delivery, and energy and carbon pressures, what’s the outlook? Should we expect ongoing demand, along with an ongoing and improving return on the investments made in these consulting services and DCIM?

Carman: Based upon a lot of the challenges that we outlined earlier in the program, we feel that, in order to operate efficiently, this type of future-state operational-tools architecture is going to have to be in place, and DCIM is the only tool poised to become that backbone between the facilities and IT infrastructures.

So more and more, with the compute footprint shrinking and requirements differing from those of the past, we’re now dealing with a storage and data explosion, where the data center fills up with storage.

As these new demands from the business come down and force organizations onto new types of technology infrastructure platforms they haven’t dealt with in the past, it requires them to be much more flexible when they have, in most cases, very inflexible facilities. That’s the strength of DCIM and what it can provide in just that one instance.

But more and more, the business expects digital services almost instantly. They want to capitalize on the market at that moment. They don’t want to wait weeks or months for enterprise IT to provide them with a service to take advantage of a new offering. So it’s forcing folks to operate differently, and that’s where converged management is poised to help these customers.

Looking to the future

Gardner: Steve, when you look into your crystal ball and think about how things will be in three to five years, what is it about DCIM and some of these services that you think will be most impactful?

Wibrew: I think the trend we’re going to see is far greater adoption of DCIM. It’s deployed in only a small number of data centers at the moment. That’s going to increase quite dramatically, and there will be much tighter alignment between how the facilities are run and how the IT infrastructure is operated and supported. It will be far more integrated than it is today.

The roles of IT are going to change, and a lot of the work now is still around design, planning, scripting, and orchestrating. In the future, we’re going to see people, almost like a conductor in an orchestra, overseeing the operations within the data center, leading highly automated and optimized processes that are actually delivered by automated solutions.

Gardner: I benefited greatly from learning more about DCIM on the HP website. There are videos, white papers, and blog posts, so there’s quite a bit of information for those interested in learning more about DCIM. The HP Technology Services website was a great resource for me. [Learn more about DCIM.]

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.
