OCSL sets its sights on the Nirvana of hybrid IT—attaining the right mix of hybrid cloud for its clients

The next BriefingsDirect digital transformation case study explores how UK IT consultancy OCSL has set its sights on the holy grail of hybrid IT — helping its clients to find and attain the right mix of hybrid cloud.

We’ll now explore how each enterprise — and perhaps even units within each enterprise — determines the path to a proper mix of public and private cloud. Closer to home, they’re looking at the proper fit of converged infrastructure, hyper-converged infrastructure (HCI), and software-defined data center (SDDC) platforms.

Implementing such a services-attuned architecture may be the most viable means to dynamically apportion applications and data support among and between cloud and on-premises deployments.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

To describe how to rationalize the right mix of hybrid cloud and hybrid IT services along with infrastructure choices on-premises, we are joined by Mark Skelton, Head of Consultancy at OCSL in London. The discussion is moderated by BriefingsDirect’s Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: People increasingly want to have some IT on premises, and they want public cloud — with some available continuum between them. But deciding the right mix is difficult and probably something that’s going to change over time. What drivers are you seeing now as organizations make this determination?

Skelton: It’s a blend of a lot of things. We’ve been working with enterprises for a long time on their hybrid and cloud messaging. Our clients have been struggling just to understand what hybrid really means, but also how to make hybrid a reality and how to get started, because it really is a minefield. You look at what Microsoft is doing, what AWS is doing, and what HPE is doing with their technologies. There’s so much out there. How do they get started?

We’ve struggled over the last 18 months to get customers onto that journey and get started. But now, because the technology is advancing, we’re seeing customers start to embrace it and begin to evolve and transform. And we’ve matured our models and frameworks as well, to help customer adoption.

Gardner: Do you see the rationale for hybrid IT shaking down to an economic equation? Is it to try to take advantage of technologies that are available? Is it about compliance and security? You’re probably tempted to say all of the above, but I’m looking for what’s driving the top-of-mind decision-making now.

Start with the economics

Skelton: The initial decision-making process begins with the economics. I think everyone has bought into the marketing messages from the public cloud providers saying, “We can reduce your costs, we can reduce your overhead — and not just from a culture perspective, but from a management perspective, a personnel perspective, and a technology solutions perspective.”

CIOs, and even financial officers, are seeing economics as the tipping point they need to go into a hybrid cloud, or even all-in on public cloud. But when we look at business cases with clients, it’s the long-term investment we look at, and over time it’s not always cheap to put everything into public cloud. That’s where hybrid started to come back to the front of people’s minds.

We can use public cloud for the right workloads, where they want to be flexible, burst, and be a bit more agile, or even gain global reach for large global businesses, but then keep the crown jewels back inside secured data centers, where they’re known and trusted and closer to some of the key, critical systems.

So, it starts with the finance side of things, but it quickly evolves beyond that, and financial decisions aren’t the only reasons why people are going to public or hybrid cloud.

Gardner: In a more perfect world, we’d be able to move things back and forth with ease and simplicity, where we could take an A/B-testing type of approach to a public and private cloud decision. We’re not quite there yet, but do you see a day when that choice about public and private will be dynamic, perhaps among multiple clouds in a multi-cloud hybrid environment?

Skelton: Absolutely. I think multi-cloud is the Nirvana for every organization, just because there isn’t a one-size-fits-all answer for every type of workload. We’ve been talking about it for quite a long time. The technology hasn’t really been there to underpin multi-cloud and truly make it easy to move from on-premises to public, or vice versa. But I think now we’re getting there with the technology.

Are we there yet? No. There are still a few big releases coming, things we’re waiting to see released to market, which will help simplify multi-cloud and the ability to migrate back and forth, but we’re just not there yet, in my opinion.

Gardner: We might be tempted to break this out between applications and data. Application workloads might be a bit more flexible across a continuum of hybrid cloud, but other considerations are brought to the data. That can be security, regulation, control, compliance, data sovereignty, GDPR, and so forth. Are you seeing your customers looking at this divide between applications and data, and how they are able to rationalize one versus the other?

Skelton: Applications, as you just mentioned, are the simpler things to move into a cloud model, but the data is really the crown jewels of the business, and people are nervous about putting that into public cloud. So what we’re seeing a lot of is putting applications into the public cloud for the agility, elasticity, and global reach, and trying to keep data on-premises, because they’re nervous about breaches in the service providers’ data centers.

That’s what we’re seeing, but we’re also seeing an uprising of things like object storage. We’re working with Scality, for example, and they have a unique solution for blending public and on-premises solutions, so we can pin things to certain platforms in a secure data center and then, where the data is not quite as critical, move it into a public cloud environment.

Gardner: It sounds like you’ve been quite busy. Please tell us about OCSL, an overview of your company and where you’re focusing most of your efforts in terms of hybrid computing.

Rebrand and refresh

Skelton: OCSL has been around for 26 years as a business. Recently, we’ve been through a re-brand and a refresh of what we are focusing on, and we’re moving more to a services organization, leading with our people and our consultants.

We’re focused on transforming customers and clients into the cloud environment, whether that’s applications, the data center, cloud, or hybrid cloud. We’re trying to get customers on that journey of transformation, engaging with business-level people and business requirements, and working out how we make cloud a reality, rather than just saying there’s a product and you go and do whatever you want with it. We’re finding out what those businesses want, what the key requirements are, and then finding the right cloud models to fit.

Gardner: So many organizations are facing not just a retrofit or a rethinking around IT, but truly a digital transformation for the entire organization. There are many cases of sloughing off business lines, and other cases of acquiring. It’s an interesting time in terms of a mass reconfiguration of businesses and how they identify themselves.

Skelton: What’s changed for me is, when I go and speak to a customer, I’m no longer just speaking to the IT guys; I’m actually engaging with the finance officers, the marketing officers, the digital officers — that’s the common one that is creeping up now. And it’s a very different conversation.

We’re looking at business outcomes now, rather than focusing on, “I need this disk, this product.” It’s more: “I need to deliver this service back to the business.” That’s how we’re changing as a business. It’s doing that business consultancy, engaging with that, and then finding the right solutions to fit requirements and truly transform the business.

Gardner: Of course, HPE has been going through transformations itself for the past several years, and that doesn’t seem to be slowing up much. Tell us about the alliance between OCSL and HPE. How do you come together as a whole greater than the sum of the parts?

Skelton: HPE is transforming and becoming a more agile organization, with some of the spinoffs that we’ve had recently aiding that agility. OCSL has worked in partnership with HPE for many years, and it’s all about going to market together and working together to engage with the customers at right level and find the right solutions. We’ve had great success with that over many years.

Gardner: Now, let’s go to the “show rather than tell” part of our discussion. Are there some examples that you can look to, clients that you work with, that have progressed through a transition to hybrid computing, hybrid cloud, and enjoyed certain benefits or found unintended consequences that we can learn from?

Skelton: We’ve had a lot of successes in the last 12 months in taking clients on the journey to hybrid cloud. One of the key ones that resonates with me is a legal firm that we’ve been working with. They were in a bit of a state. They had an infrastructure that was aging and unstable, and it wasn’t delivering quality service back to the lawyers who were trying to embrace technology — mobile devices, dictation software, those kinds of things.

We came in with a proposal for how we would actually address some of those problems. We challenged them and said that we needed to go through a stabilization phase first; public cloud was not going to be the immediate answer. They were being courted by the big vendors about public cloud, as everyone is, and they were saying it was the Nirvana for them.

We challenged that and we got them to a stable platform first, built on HPE hardware. We got instant stability for them, so the business saw immediate returns and delivery of service. It’s all about getting that immediate impact back to the business, first and foremost.

Building cloud model

Now, we’re working through each of their service lines, looking at how we can break them up and transform them into a cloud model. That involves deconstructing those apps and thinking about how we can use pockets of public cloud alongside the on-premises data-center infrastructure in a hybrid model.

They’ve now started to see real innovative solutions taking that business forward, but they got instant stability.

Gardner: Were there any situations where organizations were very high-minded and fanciful about what they were going to get from cloud, and that led to some disappointment — so unintended consequences that others might benefit from in hindsight? What do you look out for, now that you’ve been doing this for a while, in terms of hybrid cloud adoption?

Skelton: One of the things I’ve seen a lot of with cloud is that people have bought into the messaging from the big public cloud vendors about how they can just turn on services and keep consuming, consuming, consuming. A lot of people have gotten themselves into a state where bills have been rising and rising, and the economics are looking ridiculous. The finance officers are now coming back and saying they need to rein that back in. How do they put some control around that?

That’s where hybrid is helping, because you can start to move some of those workloads back into your own data center. But the key for me is that it comes down to putting some thought into what you’re putting into cloud. Think through how you can transform and use the services properly. Don’t just turn everything on because it’s there and a click of a button away; actually put some design and planning into adopting cloud.

Gardner: It also sounds like the IT people might need to go out and have a pint with the procurement people and learn a few basics about good contract writing, terms and conditions, and putting in clauses that allow you to back out, if needed. Is that something that we should be mindful of — IT being in the procurement mode as well as specifying technology mode?

Skelton: Procurement definitely needs to be involved in the initial set-up with the cloud provider, whenever they’re committing to a consumption number, but once that’s done, it becomes IT’s responsibility in terms of how they consume it. Procurement needs to be involved all the way through, keeping constant track of what’s going on, and that’s not happening.

The IT guys don’t really care about the cost; they care about the widgets, turning things on, and playing around with them. I don’t think they really realize how much this is going to cost the business. So yes, there is a bit of a disconnect in lots of organizations: procurement is involved in the upfront piece, then it goes away, and then IT comes in and spends all of the money.

Gardner: In the complex service delivery environment, that procurement function probably should be constant and vigilant.

Big change in procurement

Skelton: Procurement departments are going to change. We’re starting to see that in some of the bigger organizations. They’re closer to the IT departments. They need to understand that technology and what’s being used, but that’s quite rare at the moment. I think that probably over the next 12 months, that’s going to be a big change in the larger organizations.

Gardner: Before we close, let’s take a look to the future. A year or two from now, if we sit down again, I imagine that more microservices will be involved and containerization will have an effect, where the complexity of services and what we even think of as an application could be quite different, more of an API-driven environment perhaps.

So the complexity about managing your cloud and hybrid cloud to find the right mix, and pricing that, and being vigilant about whether you’re getting your money’s worth or not, seems to be something where we should start thinking about applying artificial intelligence (AI), machine learning, what I like to call BotOps, something that is going to be there for you automatically without human intervention.

Does that sound on track to you, and do you think that we need to start looking to advanced automation and even AI-driven automation to manage this complex divide between organizations and cloud providers?

Skelton: You hit a lot of key points there in terms of where the future is going. I think we’re still in the phase of trying to build the right platforms to be ready for the future. We see the recent releases of HPE Synergy, for example, being able to support these modern platforms, and that’s really allowing us to then embrace things like microservices. Docker and Mesosphere are two types of platforms that will disrupt organizations and the way we do things, but you need to find the right platform first.

Hopefully, in 12 months, we’ll have those platforms in place and we can then start to embrace some of this great new technology and really rethink our applications. And it’s a challenge to the ISVs. They’ve got to work out how they can take advantage of some of these technologies.

We’re seeing a lot of talk about serverless computing, where nothing is running until you spin up resources as and when you need them. The classic use case for that is Uber; they have built a whole business on that serverless type of model. I think that in 12 months’ time, we’re going to see a lot more of that in enterprise-type organizations.

I don’t think we have it quite clear in our minds how we’re going to embrace that, but it’s the ISV community that really needs to start driving it. Beyond that, it’s absolutely AI and bots. We’re all going to be talking to computers, and they’re going to be responding with very human sorts of reactions. That’s the next wave.

We’re bringing that into enterprise organizations to solve some business challenges. Service desk management is one of the use cases where we’re seeing, in some of our clients, whether they can get immediate responses from bots to common queries, so they don’t need as many support staff. It’s already starting to happen.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.

Fast acquisition of diverse unstructured data sources makes IDOL API tools a star at LogitBot

The next BriefingsDirect Voice of the Customer digital transformation case study highlights how high-performing big-data analysis powers an innovative artificial intelligence (AI)-based investment opportunity and evaluation tool. We’ll learn how LogitBot in New York identifies, manages, and contextually categorizes truly massive and diverse data sources.

By leveraging entity recognition APIs, LogitBot not only provides investment evaluations from across these data sets, it delivers the analysis as natural-language information directly into spreadsheets as the delivery endpoint. This is a prime example of how complex cloud-to-core-to-edge processes and benefits can be managed and exploited using the most responsive big-data APIs and services.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. 

To describe how a virtual assistant for targeting investment opportunities is being supported by cloud-based big-data services, we’re joined by Mutisya Ndunda, Founder and CEO of LogitBot, and Michael Bishop, CTO of LogitBot, in New York. The discussion is moderated by BriefingsDirect’s Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Let’s look at some of the trends driving your need to do what you’re doing with AI and bots, bringing together data, and then delivering it in the format that people want most. What’s the driver in the market for doing this?

Ndunda: LogitBot is all about trying to eliminate friction between people who have very high-value jobs and some of the more mundane things that could be automated by AI.

Today, in finance, the industry, in general, searches for investment opportunities using techniques that have been around for over 30 years. What tends to happen is that the people who are doing this should be spending more time on strategic thinking, ideation, and managing risk. But without AI tools, they tend to get bogged down in the data and in the day-to-day. So, we’ve decided to help them tackle that problem.

Gardner: Let the machines do what the machines do best. But how do we decide where the demarcation is between what the machines do well and what the people do well, Michael?

Bishop: We believe in empowering the user and not replacing the user. So, the machine is able to go in-depth and do what a high-performing analyst or researcher would do at scale, and it does that every day, instead of once a quarter, for instance, when research analysts would revisit an equity or a sector. We can do that constantly, react to events as they happen, and replicate what a high-performing analyst is able to do.

Gardner: It’s interesting to me that you’re not only taking a vast amount of data and putting it into a useful, qualitative format, but you’re also delivering it in a way that’s demanded in the market, that people want and use. Tell me about this core value and then the edge value, and how you came to decide on doing it the way you do.

Evolutionary process

Ndunda: It’s an evolutionary process that we’ve embarked on or are going through. The industry is very used to doing things in a very specific way, and AI isn’t something that a lot of people are necessarily familiar with in financial services. We decided to wrap it around things that are extremely intuitive to an end user who doesn’t have the time to learn technology.

So, we said that we’ll try to leverage as many things as possible in the back via APIs and all kinds of other things, but the delivery mechanism in the front needs to be as simple or as friction-less as possible to the end-user. That’s our core principle.

Bishop: Finance professionals generally don’t like black boxes and mystery, and obviously, when you’re dealing with money, you don’t want to get an answer out of a machine you can’t understand. Even though we’re crunching a lot of information and making a lot of inferences, at the end of the day, they could unwind it themselves if they wanted to verify the inferences that we have made.

We’re wrapping up an incredibly complicated amount of information, but it still makes sense at the end of the day. It’s still intuitive to someone. There’s not a sense that this is voodoo under the covers.

Gardner: Well, let’s pause there. We’ll go back to the data issues and the user-experience issues, but tell us about LogitBot. You’re a startup, you’re in New York, and you’re focused on Wall Street. Tell us how you came to be and what you do, in a more general sense.

Ndunda: Our professional background has always been in financial services. Personally, I’ve spent over 15 years in financial services, and my career led me to what I’m doing today.

In the 2006-2007 timeframe, I left Merrill Lynch to join a large proprietary market-making business called Susquehanna International Group. They’re one of the largest providers of liquidity around the world. Chances are whenever you buy or sell a stock, you’re buying from or selling to Susquehanna or one of its competitors.

What had happened in that industry was that people were embracing technology, but it was algorithmic trading, what has become known today as high-frequency trading. At Susquehanna, we resisted that notion, because we said machines don’t necessarily make decisions well, and this was before AI had been born.

Internally, we went through this period where we had a lot of discussions around, are we losing out to the competition, should we really go pure bot, more or less? Then, 2008 hit and our intuition of allowing our traders to focus on the risky things and then setting up machines to trade riskless or small orders paid off a lot for the firm; it was the best year the firm ever had, when everyone else was falling apart.

That was the first piece that got me to understand or to start thinking about how you can empower people and financial professionals to do what they really do well and then not get bogged down in the details.

Then, I joined Bloomberg and I spent five years there as the head of strategy and business development. The company has an amazing business, but it’s built around the notion of static data. What had happened in that business was that, over a period of time, we began to see the marketplace valuing analytics more and more.

Make a distinction

Part of the role that I was brought in to do was to help them unwind that and decouple the two things — to make a distinction within the company about static information versus analytical or valuable information. The trend that we saw was that hedge funds, especially the ones employing systematic investment strategies, were beginning to do two things: embrace AI and technology to empower their traders, and look deeper into analytics versus static data.

That was what brought me to LogitBot. I thought we could do it really well, because the players themselves don’t have the time to do it and some of the vendors are very stuck in their traditional business models.

Bishop: We’re seeing a kind of renaissance here, or we’re at a pivotal moment, where we’re moving away from analytics in the sense of business reporting tools or understanding yesterday. We’re now able to mine data, get insightful, actionable information out of it, and then move into predictive analytics. And it’s not just statistical correlations. I don’t want to offend any quants, but a lot of technology [to further analyze information] has come online recently, and more is coming online every day.

For us, Google had released TensorFlow, and that made a substantial difference in our ability to reason about natural language. Had it not been for that, this would have been very difficult even a year ago.

At the moment, technology is really taking off in a lot of areas at once. That enabled us to move from static analysis of what’s happened in the past and move to insightful and actionable information.

Ndunda: What Michael touched on there is really important. The traditional way of looking at financial investment opportunities is to say that historically this has happened, so history should repeat itself. But we’re in markets where nothing that’s happening today has really happened in the past. Relying on a backward-looking mechanism to interpret the future is really dangerous, versus having a more grounded approach that can actually incorporate things that are nontraditional in many different ways.

So, unstructured data, what investors are thinking, what central bankers are saying: all of those are really important inputs that weren’t part of any model 10 or 20 years ago. Without machine learning and some of the things that we’re doing today, it’s very difficult to incorporate any of that and make sense of it in a structured way.

Gardner: So, if the goal is to make outlier events your friend and not your enemy, what data do you go to to close the gap between what’s happened and what the reaction should be, and how do you best get that data and make it manageable for your AI and machine-learning capabilities to exploit?

Ndunda: Michael can probably add to this as well. We do not discriminate as far as data goes. What we like to do is have no opinion on data ahead of time. We want to get as much information as possible and then let a scientific process lead us to decide what data is actually useful for the task that we want to deploy it on.

As an example, we’re very opportunistic about acquiring information about who the most important people at companies are and how they’re connected to each other. Does this person serve on a board with that one? How do they know each other? It may not have any application at that very moment, but over the course of time, you end up building models that are actually really interesting.

We scan over 70,000 financial news sources. We capture news information across the world. We don’t necessarily use all of that information on a day-to-day basis, but at least we have it and we can decide how to use it in the future.

We also monitor anything that companies file and what management teams talk about at investor conferences or on phone conversations with investors.

Bishop: Conference calls, videos, interviews.

Audio to text

Ndunda: HPE has a really interesting technology that they have recently put out. You can transcribe audio to text, and then we can apply our text processing on top of that to understand what management is saying in a structured, machine-based way. Instead of 50 people listening to 50 conference calls, you could just have a machine do it for you.

Gardner: Something you can do there that you couldn’t have done before is apply something like sentiment analysis, which you couldn’t have done when it was only audio, and that can be very valuable.

Bishop: Yes, even tonal analysis. There are a few theories on that, that may or may not pan out, but there are studies around tone and cadence. We’re looking at it and we will see if it actually pans out.
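
To make the pipeline Ndunda describes a bit more concrete, here is a minimal Python sketch of transcribing a call recording and then scoring the transcript's sentiment through REST calls in the style of the HPE Haven OnDemand APIs. The endpoint names, request parameters, and response fields shown are assumptions for illustration only, not the documented contract, and the API key and file name are placeholders.

```python
import requests

API_KEY = "YOUR_HAVEN_ONDEMAND_KEY"                  # placeholder credential
BASE = "https://api.havenondemand.com/1/api/sync"    # assumed base URL

def transcribe_call(audio_path):
    """Send a call recording for speech-to-text (endpoint name assumed)."""
    with open(audio_path, "rb") as audio:
        resp = requests.post(
            f"{BASE}/recognizespeech/v1",
            files={"file": audio},
            data={"apikey": API_KEY},
        )
    resp.raise_for_status()
    # Assumed response shape: {"document": [{"content": "..."}]}
    return " ".join(d.get("content", "") for d in resp.json().get("document", []))

def score_sentiment(text):
    """Score the transcript's aggregate sentiment (endpoint name assumed)."""
    resp = requests.post(
        f"{BASE}/analyzesentiment/v1",
        data={"apikey": API_KEY, "text": text},
    )
    resp.raise_for_status()
    # Assumed response shape: {"aggregate": {"sentiment": "...", "score": ...}}
    return resp.json().get("aggregate", {})

if __name__ == "__main__":
    transcript = transcribe_call("q3_earnings_call.wav")   # hypothetical recording
    print(score_sentiment(transcript))
```

Looping such a script over 50 recordings is the “50 conference calls” case Ndunda mentions: one machine doing the listening instead of 50 people.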

Gardner: And so do you put this all into your own on-premises data-center warehouse or do you take advantage of cloud in a variety of different means by which to corral and then analyze this data? How do you take this fire hose and make it manageable?

Bishop: We do take advantage of the cloud quite aggressively. We’re split between SoftLayer and Google. At SoftLayer we have bare-metal hardware machines and some power machines with high-power GPUs.

On the Google side, we take advantage of Bigtable and BigQuery and some of their infrastructure tools. And we have good, old PostgreSQL in there, as well as DataStax, Cassandra, and their Graph as the graph engine. We make liberal use of HPE Haven APIs as well and TensorFlow, as I mentioned before. So, it’s a smorgasbord of things you need to corral in order to get the job done. We found it very hard to find all of that wrapped in a bow with one provider.

We’re big proponents of Kubernetes and Docker as well, and we leverage that to avoid lock-in where we can. Our workload can migrate between Google and the SoftLayer Kubernetes cluster. So, we can migrate between hardware or virtual machines (VMs), depending on the horsepower that’s needed at the moment. That’s how we handle it.
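
As a rough illustration of that portability, here is a sketch using the official Kubernetes Python client to apply the same containerized workload to either cluster by switching kubeconfig contexts. The context names, image, and namespace are hypothetical; this is not LogitBot's actual deployment code.

```python
from kubernetes import client, config

# Hypothetical kubeconfig context names for the two clusters.
CONTEXTS = {"google": "gke-prod", "softlayer": "softlayer-prod"}

def deploy(target, image="example/analytics-worker:latest", replicas=3):
    """Apply the same containerized workload to whichever cluster has the horsepower."""
    config.load_kube_config(context=CONTEXTS[target])   # pick the cluster by context
    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="analytics-worker"),
        spec=client.V1DeploymentSpec(
            replicas=replicas,
            selector=client.V1LabelSelector(match_labels={"app": "analytics-worker"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "analytics-worker"}),
                spec=client.V1PodSpec(
                    containers=[client.V1Container(name="worker", image=image)]
                ),
            ),
        ),
    )
    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)

# deploy("google")     # burst onto Google VMs
# deploy("softlayer")  # or run on SoftLayer bare metal
```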

Gardner: So, maybe 10 years ago you would have been in a systems-integration capacity, but now you’re in a services-integration capacity. You’re doing some very powerful things at a clip and probably at a cost that would have been impossible before.

Bishop: I certainly remember placing an order for a server, waiting six months, and then setting up the RAID drives. It’s amazing that you can just flick a switch and you get a very high-powered machine that would have taken six months to order previously. In Google, you spin up a VM in seconds. Again, that’s of a horsepower that would have taken six months to get.

Gardner: So, unprecedented innovation is now at our fingertips when it comes to the IT side of things, unprecedented machine intelligence, now that the algorithms and APIs are driving the opportunity to take advantage of that data.

Let’s go back to thinking about what you’re outputting and who uses that. Is the investment result that you’re generating something that goes to a retail type of investor? Is this something you’re selling to investment houses or a still undetermined market? How do you bring this to market?

Natural language interface

Ndunda: Roboto, which is the natural-language interface into our analytical tools, can be custom tailored to respond, based on the user’s level of financial sophistication.

At present, we’re trying them out on a semiprofessional investment platform, where people are professional traders, but not part of a major brokerage house. They obviously want to get trade ideas, they want to do analytics, and they’re a little bit more sophisticated than people who are looking at investments for their retirement account.  Rob can be tailored for that specific use case.

He can also respond to somebody who is managing a portfolio at a hedge fund. The level of depth that he needs to consider is the only differential between those two things.

In the back end, he may do an extra five steps if the person asking the question works at a hedge fund, versus if the person is just asking why Apple is up today. If you’re a retail investor, you don’t want to do a lot of in-depth analysis.

Bishop: You couldn’t take the app and do anything with it or understand it.

Ndunda: Rob is an interface, but the analytics are available via multiple venues. So, you can access the same analytics via an API, a chat interface, the web, or a feed that streams into you. It just depends on how your systems are set up within your organization. But, the data always will be available to you.

Gardner: Going out to that edge equation, that user experience, we’ve talked about how you deliver this to the endpoints, customary spreadsheets, cells, pivots, whatever. But it also sounds like you are going toward more natural language, so that you could query, rather than a deep SQL environment, like what we get with a Siri or the Amazon Echo. Is that where we’re heading?

Bishop: When we started this, trying to parameterize everything that you could ask into enough checkboxes and forms pollutes the screen. The system has access to an enormous amount of data that you can’t create a parameterized screen for. We found it was a bit of a breakthrough when we were able to start using natural language.

TensorFlow made a huge difference here in natural language understanding, understanding the intent of the questioner, and being able to parameterize a query from that. If our initial findings here pan out or continue to pan out, it’s going to be a very powerful interface.

I can’t imagine having to go back to a SQL query if you’re able to do it in natural language, and it really pans out this time, because we’ve had a few turns of the handle with alleged natural-language querying before.
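
As a toy illustration of the intent-plus-parameters idea Bishop describes, the sketch below trains a tiny TensorFlow text classifier and pairs it with naive slot extraction to turn a question into a structured query. The miniature training set, intent labels, and ticker dictionary are all invented placeholders, not LogitBot's model.

```python
import re
import tensorflow as tf

# Toy training set: question -> intent. A real system would train on far more data.
QUESTIONS = [
    "what is the likely return of apple over the next six months",
    "show me the forward return for tesla",
    "why is apple up today",
    "explain today's move in microsoft",
]
INTENT_NAMES = ["forecast_return", "explain_move"]
LABELS = [0, 0, 1, 1]  # index into INTENT_NAMES for each question above

vectorize = tf.keras.layers.TextVectorization(output_mode="tf_idf")
vectorize.adapt(QUESTIONS)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,), dtype=tf.string),
    vectorize,
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(len(INTENT_NAMES), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(tf.constant([[q] for q in QUESTIONS]), tf.constant(LABELS),
          epochs=200, verbose=0)

# Hypothetical ticker dictionary; a production system would use real entity recognition.
TICKERS = {"apple": "AAPL", "tesla": "TSLA", "microsoft": "MSFT"}

def parameterize(question):
    """Turn a free-form question into a structured query an analytics engine could run."""
    probs = model(tf.constant([[question.lower()]]))[0]
    intent = INTENT_NAMES[int(tf.argmax(probs))]
    ticker = next((sym for name, sym in TICKERS.items() if name in question.lower()), None)
    horizon = "6M" if re.search(r"six months", question.lower()) else "1D"
    return {"intent": intent, "ticker": ticker, "horizon": horizon}

print(parameterize("What is the likely return of Apple over the next six months?"))
```

Run on the sample question, this should print something like {'intent': 'forecast_return', 'ticker': 'AAPL', 'horizon': '6M'}, which downstream analytics could then execute.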

Gardner: And always a moving target. Tell us specifically about SentryWatch and Precog. How do these shake out in terms of your go-to-market strategy?

How everything relates

Ndunda: One of the things we have to do to answer a lot of the questions our customers may have is monitor financial markets and what’s impacting them on a continuous basis. SentryWatch is literally a byproduct of that process. Because we’re monitoring over 70,000 financial news sources, we’re analyzing the sentiment, doing deep text analysis, and identifying entities and how they’re related to each other across all of these news events, and we’re sticking that into a knowledge graph of how everything relates to everything else.

It ends up being a really valuable tool, not only for us but for other people, because while we’re building models, there are also a lot of hedge funds with proprietary models or proprietary processes that could benefit from that very same organized, relational data store of news. That’s what SentryWatch is and that’s how it’s evolved. It started off as something we were doing as an input, and it’s actually now a valuable output, or a standalone product.
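
Here is a compressed sketch of how one news item might flow into such a store, with the entity-recognition step stubbed out and networkx standing in for whatever graph engine actually backs SentryWatch; the article, entities, and sentiment score are invented.

```python
import networkx as nx

def extract_entities(article_text):
    """Stand-in for the real entity-recognition step (e.g., an IDOL/Haven entity API)."""
    # A production system would return recognized companies, people, and a sentiment score.
    return {"companies": ["Apple", "European Commission"], "sentiment": -0.6}

graph = nx.MultiDiGraph()

def ingest(article_id, article_text, source, published):
    entities = extract_entities(article_text)
    graph.add_node(article_id, kind="news", source=source, published=published)
    for company in entities["companies"]:
        graph.add_node(company, kind="entity")
        # Edge captures "this article mentions this entity", with sentiment attached.
        graph.add_edge(article_id, company, relation="mentions",
                       sentiment=entities["sentiment"])

ingest("news-2016-08-30-001",
       "EU orders Apple to pay billions in back taxes ...",
       source="ExampleWire", published="2016-08-30")

# Later, a model (or an analyst) can ask: what negative news touched Apple recently?
negative = [(a, d) for a, _, d in graph.in_edges("Apple", data=True)
            if d.get("sentiment", 0) < 0]
print(negative)
```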

Precog is a way for us to showcase the ability of a machine to be predictive and not be backward looking. Again, when people are making investment decisions or allocation of capital across different investment opportunities, you really care about your forward return on your investments. If I invested a dollar today, am I likely to make 20 cents in profit tomorrow or 30 cents in profit tomorrow?

We’re using pretty sophisticated machine-learning models that can take into account unstructured data sources as part of the modeling process. That will give you these forward expectations about stock returns in a very easy-to-use format, where you don’t need to have a PhD in physics or mathematics.

You just ask, “What is the likely return of Apple over the next six months?” taking into account what’s going on in the economy. Apple was fined $14 billion; that can be quickly added into a model and reflect a new view in a matter of seconds, versus sitting down with a spreadsheet and trying to figure out how it all works out.

Gardner: Even for Apple, that’s a chunk of change.

Bishop: It’s a lot of money, and you can imagine that there were quite a few analysts on Wall Street in Excel, updating their models around this so that they could have an answer by the end of the day, where we already had an answer.

Gardner: How do the HPE Haven OnDemand APIs help the Precog when it comes to deciding those sources, getting them in the right format, so that you can exploit?

Ndunda: The beauty of the platform is that it simplifies a lot of development processes that an organization of our size would have to take on themselves.

The nice thing about it is that the drag-and-drop interface is really intuitive; you don’t need to be specialized in Java, Python, or whatever it is. You can set up your intent in a graphical way, and then test it out, build it, and expand it as you go along. The Lego-block structure is really useful, because if you want to try things out, it’s drag and drop, connect the dots, and then see what you get on the other end.

For us, that’s an innovation that we haven’t seen with anybody else in the marketplace and it cuts development time for us significantly.

Gardner: Michael, anything more to add on how this makes your life a little easier?

Lowering cost

Bishop: For us, lowering the cost and time to run an experiment is very important when you’re running a lot of experiments, and the Combinations product enables us to run a lot of varied experiments using a variety of the HPE Haven APIs in different combinations very quickly. You’re able to get your development time down from the week or two it would otherwise take to wire up an API.

In the same amount of time, you’re able to wire the initial connection, and then you have access to pretty much everything in Haven. You turn it over to either a business user, a data scientist, or a machine-learning person, and they can drag and drop the connectors themselves. It makes my life easier, and it makes the developers’ lives easier, because it gives us back time.

Gardner: So, not only have we been able to democratize the querying, moving from SQL to natural language, for example, but we’re also democratizing the choice on sources and combinations of sources in real time, more or less for different types of analyses, not just the query, but the actual source of the data.

Bishop: Correct.

Ndunda: Again, the power of a lot of this stuff is in the unstructured world, because valuable information typically tends to be hidden in documents. In the past, you’d have to have a team of people to scour through text, extract what they thought was valuable, and summarize it for you. You could miss out on 90 percent of the other valuable stuff that’s in the document.

Now, the ability to drag and drop and then run a document through five different iterations just by tweaking a parameter is really useful.

Gardner: So those will be IDOL-backed APIs that you are referring to.

Ndunda: Exactly.

Bishop: It’s something that would have been hard for an investment bank to process, even a few years ago. Everyone is on the same playing field here, or starting from the same base, but dealing with unstructured data has traditionally been a very difficult problem. You have a lot of technologies coming online as APIs; at the same time, they’re also coming out as traditional on-premises [software and appliance] solutions.

We’re all starting from the same gate here. Some folks are a little ahead, but I’d say that Facebook is further ahead than an investment bank in its ability to reason over unstructured data. In our world, I feel like we’re starting basically at the same place that Goldman or Morgan would be.

Gardner: It’s a very interesting reset that we’re going through. It’s also interesting that we talked earlier about the divide between where the machine and the individual knowledge worker begin and end, and that’s going to be a moving target. Do you have any sense of how the characterization of the right combination of machine intelligence and the best of human intelligence will change?

Empowering humans

Ndunda: I don’t foresee machines replacing humans, per se. I see them empowering humans, and to the extent that your role is not completely based on a task, if it’s based on something where you actually manage a process that goes from one end to another, those particular positions will be there, and the machines will free our people to focus on that.

But, in the case where you have somebody who is really responsible for something that can be automated, then obviously that will go away. Machines don’t eat, they don’t need to take vacation, and if it’s a task where you don’t need to reason about it, obviously you can have a computer do it.

What we’re seeing now is that if you have a machine sitting side by side with a human, and the machine can pick up on how the human reasons with some of the new technologies, then the machine can do a lot of the grunt work, and I think that’s the future of all of this stuff.

Bishop: What we’re delivering is that we distill a lot of information, so that a knowledge worker or decision-maker can make an informed decision, instead of watching CNBC and being a single-source reader. We can go out and scour the best of all the information, distill it down, and present it, and they can choose to act on it.

Our goal here is not to make the next jump and make the decision. Our job is to present the information to a decision-maker.

Gardner: It certainly seems to me that the organization, big or small, retail or commercial, can make the best use of this technology. Machine learning, in the end, will win.

Ndunda: Absolutely. It is a transformational technology, because for the first time in a really long time, the reasoning piece of it is within grasp of machines. These machines can operate in the gray area, which is where the world lives.

Gardner: And that gray area can almost have unlimited variables applied to it.

Ndunda: Exactly. Correct.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

How lastminute.com uses machine learning to improve travel bookings user experience

The next BriefingsDirect Voice of the Customer digital transformation case study highlights how online travel and events pioneer lastminute.com leverages big-data analytics with speed at scale to provide business advantages to online travel services.

We’ll explore how lastminute.com manages massive volumes of data to support cutting-edge machine-learning algorithms to allow for speed and automation in the rapidly evolving global online travel research and bookings business.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn how a culture of IT innovation helps make highly dynamic customer interactions for online travel a major differentiator, we’re joined by Filippo Onorato, Chief Information Officer at lastminute.com group in Chiasso, Switzerland. The discussion is moderated by BriefingsDirect’s Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Most people these days are trying to do more things more quickly amid higher complexity. What is it that you’re trying to accomplish in terms of moving beyond disruption and being competitive in a highly complex area?

Onorato: The travel market — and in particular the online travel market — is a very fast-moving market, and the habits and behaviors of the customers are changing so rapidly that we have to move fast.

Disruption is coming every day from different actors … [requiring] a different way of constructing the customer experience. In order to do that, you have to rely on very large amounts of data — just to study the evolution of the customer and their behaviors.

Gardner: And customers are more savvy; they really know how to use data and look for deals. They’re expecting real-time advantages. How is the sophistication of the end user impacting how you work at the core, in your data center, and in your data analysis, to improve your competitive position?

Onorato: Once again, customers are normally looking for information, and providing the right information at the right time is key to our success. The brands we came from were Bravofly and, in Italy, Volagratis, which means “free flight.” The competitive advantage we have is to provide a comparison among all the different airline tickets, in a market that is changing rapidly from the standard airlines to the low-cost ones. Customers are eager to find the best deal, the best price for their travel requirements.

So, the ability to construct the customer experience so they can find the right information at the right time, comparing hundreds of different airlines, was the competitive advantage we made our fortune on.

Gardner: Let’s edify our listeners and readers a bit about lastminute.com. You’re global. Tell us about the company and perhaps your size, employees, and the number of customers you deal with each day.

Most famous brand

Onorato: We are 1,200 employees worldwide. Lastminute.com, the most famous brand worldwide, was acquired by the Bravofly Rumbo Group two years ago from Sabre. We own Bravofly; that was the original brand. We own Rumbo; that is very popular in Spanish-speaking markets. We own Volagratis in Italy; that was the original brand. And we own Jetcost; that is very popular in France. That is actually a metasearch, a combination of search and competitive comparison between all the online travel agencies (OTAs) in the market.

We span across 40 countries, we support 17 languages, and we help almost 10 million people fly every year.

Gardner: Let’s dig into the data issues here, because this is a really compelling use-case. There’s so much data changing so quickly, and sifting through it is an immense task, but you want to bring the best information to the right end user at the right time. Tell us a little about your big-data architecture, and then we’ll talk a little bit about bots, algorithms, and artificial intelligence.

Onorato: The architecture of our system is pretty complex. On one side, we have to react almost instantly to the searches that customers are doing. We have a real-time platform that’s grabbing information from all the providers: airlines, other OTAs, hotel providers, bed banks, or whatever.

We concentrate all this information in a huge real-time database, using a lot of caching mechanisms, because the speed of the search, the speed of giving results back to the customer, is a competitive advantage. That’s the real-time part of our development, and it constitutes the core business of our industry.

Gardner: And this core of yours, these are your own data centers? How have you constructed them and how do you manage them in terms of on-premises, cloud, or hybrid?

Onorato: It’s all on-premises, and this is our core infrastructure. On the other hand, all that data that is gathered from the interaction with the customer is partially captured. This is the big challenge for the future — having all that data stored in a data warehouse. That data is captured in order to build our internal knowledge. That would be the sales funnel.

So, we capture the behavior of the customer and the percentage of conversion at each and every step the customer takes, from the search to the actual booking. That data is gathered together in a data warehouse based on HPE Vertica and then analyzed in order to find the best places to optimize conversion. That’s the main usage of the data warehouse.
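
As a rough illustration of that funnel analysis, here is a sketch using the vertica_python client to pull step-by-step conversion counts; the connection details, table, and column names are hypothetical, not lastminute.com's actual schema.

```python
import vertica_python

CONN_INFO = {
    "host": "vertica.example.internal",   # placeholder connection details
    "port": 5433,
    "user": "analyst",
    "password": "...",
    "database": "dwh",
}

# Hypothetical funnel table: one row per session step
# (step values: 'search', 'select', 'checkout', 'booking').
FUNNEL_SQL = """
    SELECT step,
           COUNT(DISTINCT session_id) AS sessions
    FROM   web_funnel_events
    WHERE  event_date >= CURRENT_DATE - 7
    GROUP  BY step
    ORDER  BY sessions DESC;
"""

conn = vertica_python.connect(**CONN_INFO)
try:
    cur = conn.cursor()
    cur.execute(FUNNEL_SQL)
    rows = cur.fetchall()
finally:
    conn.close()

# Express each step as a share of the top of the funnel (the search step).
searches = max(sessions for _, sessions in rows) or 1
for step, sessions in rows:
    print(f"{step:10s} {sessions:>10d} {sessions / searches:6.1%}")
```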

On the other hand, what we’re layering on top of all this is session-related data. You can imagine how much data a single interaction with a customer can generate. Right now, we’re storing a short history of that data, but the goal is to have two years’ worth of session data. That would be an enormous amount of data.

Gardner: And when we talk about data, often we’re concerned about velocity and volume. You’ve just addressed volume, but velocity must be a real issue, because any change in a weather issue in Europe, for example, or a glitch in a computer system at one airline in North America changes all of these travel data points instantly.

Unpredictable events

Onorato: That’s also pretty typical of the tourism industry. It’s a very delicate business, because we have to react to unpredictable events happening all over the world. In order to better optimize margins and search results, we’re also applying machine-learning algorithms, because a human can’t react fast enough to the ever-changing market or situation.

In those cases, we use optimization algorithms to fine-tune our search results, to better deal with a customer request, and to propose the best deal at the right time. In very simple terms, that’s our core business right now.

Gardner: And Filippo, only your organization can do this, because the people with the data on the back side can’t apply the algorithm; they have only their own data. It’s not something the end user can do on the edge, because they need to receive the results of the analysis and the machine learning. So you’re in a unique, important position. You’re the only one who can really apply the intelligence, the AI, and the bots to make this happen. Tell us a little bit about how you approached that problem and solved it.

Onorato: I perfectly agree. We are the collector of an enormous amount of product-related information on one side. On the other side, what we’re collecting are the customer behaviors. Matching the two is unique for our industry. It’s definitely a competitive advantage to have that data.

Then, what you do with all that data is something that pushes us to do continuous innovation and continuous analysis. By the way, I don’t think something like this can be implemented without a lot of training and a lot of understanding of the data.

Just to give you an example, the machine-learning algorithm we’re implementing, called a multi-armed bandit, is a kind of parallel testing of different configurations of parameters that are presented to the final user. The algorithm reacts to a specific set of conditions and proposes the best combination of ordering, visibility, pricing, and so on to the customer in order to satisfy their search.

What we really do in that case is grab information, build our experience into the algorithm, and then optimize the algorithm every day by changing parameters and by changing the type of data that we input into the algorithm itself.
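
For readers unfamiliar with the technique, below is a minimal epsilon-greedy sketch of a multi-armed bandit choosing among alternative result-page configurations and learning from bookings. It illustrates only the general approach described above; the configuration names, reward definition, exploration rate, and simulated conversion rates are placeholder assumptions, not lastminute.com's implementation.

```python
import random

# Hypothetical page configurations (ordering, pricing display, visibility rules).
ARMS = ["rank_by_price", "rank_by_margin", "rank_by_relevance"]
EPSILON = 0.1                          # fraction of traffic reserved for exploration

counts = {arm: 0 for arm in ARMS}      # times each configuration was shown
rewards = {arm: 0.0 for arm in ARMS}   # bookings attributed to each configuration

def choose_arm():
    """Mostly exploit the best-performing configuration, sometimes explore."""
    if random.random() < EPSILON:
        return random.choice(ARMS)
    # Unseen configurations get priority so every arm is tried at least once.
    return max(ARMS, key=lambda a: rewards[a] / counts[a] if counts[a] else float("inf"))

def record(arm, booked):
    counts[arm] += 1
    rewards[arm] += 1.0 if booked else 0.0

# Simulated traffic: each search session gets a configuration, and we observe
# whether it converted to a booking (the true conversion rates here are made up).
true_rate = {"rank_by_price": 0.031, "rank_by_margin": 0.027, "rank_by_relevance": 0.035}
for _ in range(100_000):
    arm = choose_arm()
    record(arm, random.random() < true_rate[arm])

for arm in ARMS:
    rate = rewards[arm] / counts[arm] if counts[arm] else 0.0
    print(f"{arm:20s} shown {counts[arm]:>6d} times, conversion {rate:.3%}")
```

Over time the best-converting configuration receives most of the traffic, while the small exploration share keeps testing the alternatives as conditions change, which is the daily re-optimization Onorato describes.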

So, it’s an ongoing experience; it’s an ongoing study. It’s endless, because market conditions are changing and the actors in the market are changing as well: from the tour operators in the past, to the airlines, and now the OTAs. We’re also a metasearch, aggregating products from different OTAs. So, there are new players coming in, and they’re always coming closer and closer to the customer in order to grab information on customer behavior.

Gardner: It sounds like you have a really intense culture of innovation, and that’s super important these days, of course. As we were hearing at the HPE Big Data Conference 2016, the feedback loop element of big data is now really taking precedence. We have the ability to manage the data, to find the data, to put the data in a useful form, but we’re finding new ways. It seems to me that the more people use our websites, the better that algorithm gets, the better the insight to the end user, therefore the better the result and user experience. And it never ends; it always improves.

How does this extend? Do you take it to now beyond hotels, to events or transportation? It seems to me that this would be highly extensible and the data and insights would be very valuable.

Core business

Onorato: Correct. The core business was initially the flight business; we were born selling flight tickets. Hotels and pre-packaged holidays were the second step. Then, we provided information about lifestyle. For example, in London we have an extensive offer of theater, events, shows, whatever, that are aggregated.

Also, we have a smaller brand regarding restaurants, and we’re offering car rental. We’re also giving value-added services to the customer, because the journey of the customer doesn’t end with the booking. It continues throughout the trip, and we’re providing information regarding the check-in; web check-in is a service that we provide. There are a lot of ancillary businesses that are making the overall travel experience better, and that’s the goal for the future.

Gardner: I can even envision where you play a real-time concierge, where you’re able to follow the person through the trip and be available to them as a bot or a chat. This edge-to-core capability is so important, and that big data feedback, analysis, and algorithms are all coming together very powerfully.

Tell us a bit about metrics of success. How can you measure this? Obviously a lot of it is going to be qualitative. If I’m a traveler and I get what I want, when I want it, at the right price, that’s a success story, but you’re also filling every seat on the aircraft or you’re filling more rooms in the hotels. How do we measure the success of this across your ecosystem?

Onorato: In that sense, we’re probably a little bit farther away from the real product, because we’re an aggregator. We don’t have the risk of running a physical hotel, and that’s where we’re actually very flexible. We can jump from one location to another very easily, and that’s one of the competitive advantages of being an OTA.

But the success overall right now is giving the best information at the right time to the final customer. What we’re measuring right now is definitely the voice of the customer, the voice of the final customer, who is asking for more and more information, more and more flexibility, and the ability to live an experience in the best way possible.

So, we’re also providing a brand that is associated with wonderful holidays, having fun, etc.

Gardner: The last question, for those who are still working on building out their big data infrastructure, trying to attain this cutting-edge capability and start to take advantage of machine learning, artificial intelligence, and so forth. If you could do it all over again, what would you tell them? What would be your advice to somebody who is still in the early stages of their big-data journey?

Onorato: It is definitely based on two factors — having the best technology and not always trying to build your own technology, because there are a lot of products in the market that can speed up your development.

And also, it’s having the best people. The best people are one of the competitive advantages of any company running this kind of business. You have to rely on fast learners, because market conditions are changing, technology is changing, and people need to train themselves very fast. So, you have to invest in people and invest in the best technology available.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

WWT took an enterprise Tower of Babel and delivered comprehensive intelligent search

The next BriefingsDirect Voice of the Customer digital transformation case study highlights how World Wide Technology, known as WWT, in St. Louis, found itself with a very serious yet somehow very common problem — users simply couldn’t find relevant company content.

We’ll explore how WWT reached deep into its applications, data, and content to rapidly and efficiently create a powerful Google-like, pan-enterprise search capability. Not only does it search better and empower users, the powerful internal index sets the stage for expanded capabilities using advanced analytics to engender a more productive and proactive digital business culture.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to describe how WWT took an enterprise Tower of Babel and delivered cross-applications intelligent search are James Nippert, Enterprise Search Project Manager, and Susan Crincoli, Manager of Enterprise Content, both at World Wide Technology in St. Louis. The discussion is moderated by BriefingsDirect’s Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: It seems pretty evident that the better search you have in an organization, the better people are going to find what they need as they need it. What holds companies back from delivering results like people are used to getting on the web?

Nippert

Nippert:  It’s the way things have always been. You just had to drill down from the top level. You go to your Exchange, your email, and start there. Did you save a file here? “No, I think I saved it on my SharePoint site,” and so you try to find it there, or maybe it was in a file directory.

Those are the steps that people have been used to, because it's how they've been doing it their entire lives, and it's the nature of the beast as we bring more and more enterprise applications into the fold. You have enterprises with 100 or 200 applications, and each of those has its own unique data silos. So, users have to try to juggle all of these different content sources where stuff could be saved. They're just used to having to dig through each one of those to try to find whatever they're looking for.

Gardner: And we’ve all become accustomed to instant gratification. If we want something, we want it right away. So, if you have to tag something, or you have to jump through some hoops, it doesn’t seem to be part of what people want. Susan, are there any other behavioral parts of this?

Find the world

Crincoli: We, as consumers, are getting used to the Google-like searching. We want to go to one place and find the world. In the information age, we want to go to one place and be able to find whatever it is we’re looking for. That easily transfers into business problems. As we store data in myriad different places, the business user also wants the same kind of an interface.

Crincoli

Gardner: Certain tools that can only look at a certain format or can only deal with certain tags or taxonomy are strong, but we want to be comprehensive. We don’t want to leave any potentially powerful crumbs out there not brought to bear on a problem. What’s been the challenge when it comes to getting at all the data, structured, unstructured, in various formats?

Nippert: Traditional search tools are built off of document metadata. It’s those tags that go along with records, whether it’s the user who uploaded it, the title, or the date it was uploaded. Companies have tried for a long time to get users to tag with additional metadata that will make documents easier to search for. Maybe it’s by department, so you can look for everything in the HR Department.

At the same time, users don’t want to spend half an hour tagging a document; they just want to load it and move on with their day. Take pictures, for example. Most enterprises have hundreds of thousands of pictures that are stored, but they’re all named whatever number the camera gave, and they will name it DC0001. If you have 1,000 pictures named that you can’t have a successful search, because no search engine will be able to tell just by that title — and nothing else — what they want to find.

Gardner: So, we have a situation where the need is large and the paybacks could be large, but the task and the challenge are daunting. Tell us about your journey. What did you do in order to find a solution?

Nippert: We originally recognized a problem with our on-premises Microsoft SharePoint environment. We were using an older version of SharePoint that relied mostly on metadata, and our users weren't uploading any metadata along with their intranet content.

We originally set out to solve that issue, but then, as we began interviewing business users, we understood very quickly that this is an enterprise-scale problem. Scaling out even further, we found out it’s been reported that as much as 10 percent of staffing costs can be lost directly to employees not being able to find what they’re looking for. Your average employee can spend over an entire work week per year searching for information or documentation that they need to get their job done.

So it’s a very real problem. WWT noticed it over the last couple of years, but as there is the velocity in volume of data increase, it’s only going to become more apparent. With that in mind, we set out to start an RFI process for all the enterprise search leaders. We used the Gartner Magic Quadrants and started talks with all of the Magic Quadrant leaders. Then, through a down-selection process, we eventually landed on HPE.

We have a wonderful strategic partnership with them. We went with the HPE IDOL tool, which has been one of the leaders in enterprise search, as well as big data analytics, for well over a decade now, because it has a very extensible platform, something that you can really scale out, customize, and build on top of. It doesn't just do one thing.

Humanizes Machine Learning

For Big Data Success

Gardner: And it’s one solution to let people find what they’re looking for, but when you’re comprehensive and you can get all kinds of data in all sorts of apps, silos and nooks and crannies, you can deliver results that the searching party didn’t even know was there. The results can be perhaps more powerful than they were originally expecting.

Susan, any thoughts about a culture, a digital transformation benefit, when you can provide that democratization of search capability, but maybe extended into almost analytics or some larger big-data type of benefit?

Multiple departments

Crincoli: We’re working across multiple departments and we have a lot of different internal customers that we need to serve. We have a sales team, business development practices, and professional services. We have all these different departments that are searching for different things to help them satisfy our customers’ needs.

With HPE being a partner, where their customers are our customers, we have this great relationship with them. It helps us to see the value across all the different things that we can bring to bear to get at all this data and then, as we move forward, to help people build more relevant results.

If something is searched for one time, versus 100 times, then that’s going to bubble up to the top. That means that we’re getting the best information to the right people in the right amount of time. I’m looking forward to extending this platform and to looking at analytics and into other platforms.

Gardner: That’s why they call it “intelligent search.” It learns as you go.

Nippert: The concept behind intelligent search is really two-fold. It first focuses on business empowerment, which is letting your users find whatever it is specifically that they’re looking for, but then, when you talk about business enablement, it’s also giving users the intelligent conceptual search experience to find information that they didn’t even know they should be looking for.

If I’m a sales representative and I’m searching for company “X,” I need to find any of the Salesforce data on that, but maybe I also need to find the account manager, maybe I need to find professional services’ engineers who have worked on that, or maybe I’m looking for documentation on a past project. As Susan said, that Google-like experience is bringing that all under one roof for someone, so they don’t have to go around to all these different places; it’s presented right to them.

Gardner: Tell us about World Wide Technology, so we understand why having this capability is going to be beneficial to your large, complex organization?

Crincoli: We’re a $7-billion organization and we have strategic partnerships with Cisco, HPE, EMC, and NetApp, etc. We have a lot of solutions that we bring to market. We’re a solution integrator and we’re also a reseller. So, when you’re an account manager and you’re looking across all of the various solutions that we can provide to solve the customer’s problems, you need to be able to find all of the relevant information.

You probably need to find people as well. Not only do I need to find how we can solve this customer’s problem, but also who has helped us to solve this customer’s problem before. So, let me find the right person, the right pre-sales engineer or the right post-sales engineer. Or maybe there’s somebody in professional services. Maybe I want the person who implemented it the last time. All these different people, as well as solutions that we can bring in help give that sales team the information they need right at their fingertips.

It’s very powerful for us to think about the struggles that a sales manager might have, because we have so many different ways that we can help our customer solve those problems. We’re giving them that data at their fingertips, whether that’s from Salesforce, all the way through to SharePoint or something in an email that they can’t find from last year. They know they have talked to somebody about this before, or they want to know who helped me. Pulling all of that information together is so powerful.

We don’t want them to waste their time when they’re sitting in front of a customer trying to remember what it was that they wanted to talk about.

Gardner: It really amounts to customer service benefits in a big way, but I’m also thinking this is a great example of how, when you architect and deploy and integrate properly on the core, on the back end, that you can get great benefits delivered to the edge. What is the interface that people tend to use? Is there anything we can discuss about ease of use in terms of that front-end query?

Simple and intelligent

Nippert: As far as ease of use goes, it’s simplicity. If you’re a sales rep or an engineer in the field, you need to be able to pull something up quickly. You don’t want to have to go through layers and layers of filtering and drilling down to find what you’re looking for. It needs to be intelligent enough that, even if you can’t remember the name of a document or the title of a document, you ought to be able to search for a string of text inside the document and it still comes back to the top. That’s part of the intelligent search; that’s one of the features of HPE IDOL.

Whenever you’re talking about front-end, it should be something light and something fast. Again, it’s synonymous with what users are used to on the consumer edge, which is Google. There are very few search platforms out there that can do it better. Look at the  Google home page. It’s a search bar and two buttons; that’s all it is. When users are used to that at home and they come to work, they don’t want a cluttered, clumsy, heavy interface. They just need to be able to find what they’re looking for as quickly and simply as possible.

Gardner: Do you have any examples where you can qualify or quantify the benefit of this technology and this approach that will illustrate why it’s important?

Nippert: We actually did a couple of surveys, pre- and post-implementation. As I mentioned earlier, it was very well known that our search demands weren't being met. The feedback that we heard over and over again was "search sucks." People would say that all the time. So, we tried to get a little more quantification around that with surveys before and after the implementation of IDOL search for the enterprise. We got a couple of really great numbers out of it. We saw overall satisfaction with search go up by about 30 percent. Before, it was right in the middle: half of the users were happy, half weren't.

Now, we’re well over 80 percent that have overall satisfaction with search. It’s gotten better at finding everything from documents to records to web pages across the board; it’s improving on all of those. As far as the specifics go, the thing we really cared about going into this was, “Can I find it on the first page?” How often do you ever go to the second page of search results.

With our pre-surveys, we found that under five percent of people were finding it on the first page. They had to go to the second or third page, or pages four through 10. Most users just gave up if it wasn't on the first page. Now, over 50 percent of users are able to find what they're looking for on the very first page, and if not, then definitely on the second or third page.

We’ve gone from a completely unsuccessful search experience to a valid successful search experience that we can continue to enhance on.

Crincoli: I agree with James. When I came to the company, I felt that way, too — search sucks. I couldn’t find what I was looking for. What’s really cool with what we’ve been able to do is also review what people are searching for. Then, as we go back and look at those analytics, we can make those the best bets.

If we see that hundreds of people are searching for the same thing, or for it in different contexts, then we can make those the best bets. They sit at the top, separated from the rest of the results. These are things like the handbook or PTO request forms that people are always searching for.
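A rough sketch of how a "best bets" rule like that could work, assuming a simple query log and a hand-curated link table (both invented here): anything searched often enough gets pinned above the organic results.

```python
# Illustrative sketch of "best bets": frequently searched queries get a
# curated, pinned result ahead of the organic hits. All names are invented.

from collections import Counter

query_log = ["pto request form", "employee handbook", "pto request form",
             "expense policy", "pto request form", "employee handbook"]

BEST_BET_THRESHOLD = 2  # promote anything searched at least this many times

curated = {
    "pto request form": "https://intranet.example/hr/pto-form",
    "employee handbook": "https://intranet.example/hr/handbook",
}

def build_best_bets(log, curated_links):
    """Map popular queries to their curated landing link."""
    counts = Counter(log)
    return {q: curated_links[q] for q, n in counts.items()
            if n >= BEST_BET_THRESHOLD and q in curated_links}

best_bets = build_best_bets(query_log, curated)

def search(query, organic_results):
    """Pin the best bet (if any) ahead of the organic results."""
    pinned = [best_bets[query]] if query in best_bets else []
    return pinned + organic_results

print(search("pto request form", ["result A", "result B"]))
```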

Gardner: I’m going to just imagine that if I were in the healthcare, pharma, or financial sectors, I’d want to give my employees this capability, but I’d also be concerned about proprietary information and protection of data assets. Maybe you’re not doing this, but wonder what you know about allowing for the best of search, but also with protection, warnings, and some sort of governance and oversight.

Governance suite

Nippert: There is a full governance suite built in, and it comes through a couple of different features. One of the main ones is Eduction: as IDOL scans through every single line of a document, a PowerPoint slide, or a spreadsheet, whatever it is, it can recognize credit card numbers, Social Security numbers, anything that's personally identifiable information (PII), and either pull that out, delete it, or send alerts.

You have that full governance suite built into anything that you've indexed. It also has a mapped security engine built in, called Omni Group, so it can map the security of any content source. For example, in SharePoint, if you have access to a file and I don't, and we each run a search, you would see it come back in the results and I wouldn't. So, it can honor any content source's security.
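As an illustration of those two ideas, the sketch below pairs simple regex-based PII flagging with access-controlled result filtering. The patterns, documents, and group names are invented, and IDOL's Eduction grammars and mapped-security engine are far more capable than this; it is only meant to show the shape of the mechanism.

```python
# Illustrative sketch only: regex-based PII flagging plus ACL-aware result
# filtering. The patterns, documents, and groups are invented for the example.

import re

PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def flag_pii(text):
    """Return which PII categories appear in a piece of indexed text."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

DOCUMENTS = [
    {"id": 1, "text": "Payroll extract, SSN 123-45-6789", "allowed_groups": {"hr"}},
    {"id": 2, "text": "Public product brochure",          "allowed_groups": {"everyone"}},
]

def search(query, user_groups):
    """Only return documents the querying user is entitled to see."""
    hits = []
    for doc in DOCUMENTS:
        entitled = bool(doc["allowed_groups"] & user_groups)
        if entitled and query.lower() in doc["text"].lower():
            hits.append({"id": doc["id"], "pii": flag_pii(doc["text"])})
    return hits

print(search("payroll", {"hr", "everyone"}))  # HR user sees doc 1, flagged for SSN
print(search("payroll", {"everyone"}))        # non-HR user sees nothing
```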

Gardner: Your policies and your rules are what’s implemented, and that’s how it goes?

Nippert: Exactly. It's up to us as the search team, working with the compliance or governance teams, to make sure that happens.

Gardner: As we think about the future and the availability of other datasets to be brought in, search is a great tool for access to more than just corporate data, enterprise data, and content, but maybe also the front end for some advanced querying, analytics, and business intelligence (BI). Has there been any talk about how to take what you're doing in enterprise search and munge that, for lack of a better word, with analytics, BI, and some of the other big data capabilities?

Nippert: Absolutely. HPE has just recently released BI for Human Intelligence (BIFHI), which is their new front end for IDOL, and it has a ton of analytics capabilities built into it. We're really excited to start looking at rich text and rich media analytics that can pull the words right off a raw MP4 video and transcribe it at the same time. But more than that, it's going to be something that we can continue to build on top of and come up with our own unique analytic solutions.

Gardner: So talk about empowering your employees. Everybody can become a data scientist eventually, right, Susan?

Crincoli: That’s right. If you think about all of the various contexts, we started out with just a few sources, but we also have some excitement because we built custom applications, both for our customers and for our internal work. We’re taking that to the next level with building an API and pulling that data into the enterprise search that just makes it even more extensible to our enterprise.

Gardner: I suppose the next step might be the natural language audio request where you would talk to your PC, your handheld device, and say, “World Wide Technology feed me this,” and it will come back, right?

Nippert: Absolutely. You won’t even have to lift a finger anymore.

Cool things

Crincoli: It would be interesting to loop in what they are doing with Cortana at Microsoft and some of the machine learning and some of the different analytics behind Cortana. I’d love to see how we could loop that together. But those are all really cool things that we would love to explore.

Gardner: But you can’t get there until you solve the initial blocking and tackling around content and unstructured data synthesized into a usable format and capability.

Nippert: Absolutely. The flip side of controlling your data sources, as we’re learning, is that there are a lot of important data sources out there that aren’t good candidates for enterprise search whatsoever. When you look at a couple of terabytes or petabytes of MongoDB data that’s completely unstructured and it’s just binaries, that’s enterprise data, but it’s not something that anyone is looking for.

So even though our original knee-jerk reaction is to index everything and get everything into search, because you want to be able to search across everything, you also have to take it with a grain of salt. A new content source could add hundreds or thousands of results that could potentially clutter the accuracy of the results. Sometimes, it's actually knowing when not to search something.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

HPE takes aim at customer needs for speed and agility in age of IoT, hybrid everything

A leaner, more streamlined Hewlett Packard Enterprise (HPE) advanced across several fronts at HPE Discover 2016 in London, making inroads into hybrid IT, Internet of Things (IoT), and on to the latest advances in memory-based computer architecture. All the innovations are designed to help customers address the age of digital disruption with speed, agility, and efficiency.

Addressing a Discover audience for the first time since HPE announced spinning off many software lines to Micro Focus, Meg Whitman, HPE President and CEO, said that the company is not only committed to those assets, becoming a major owner of Micro Focus in the deal, but is building its software investments.

“HPE is not getting out of software but doubling-down on the software that powers the apps and data workloads of hybrid IT,” she said Tuesday at London’s ExCel exhibit center.

“Massive compute resources need to be brought to the edge, powering the Internet of Things (IoT). … We are in a world now where everything computes, and that changes everything,” said Whitman, who has now been at the helm of HPE and HP for five years.

HPE’s new vision: To be the leading provider of hybrid IT, to run today’s data centers, and then bridge the move to multi-cloud and empower the intelligent edge, said Whitman. “Our goal is to make hybrid IT simple and to harness the intelligent edge for real-time decisions” to allow enterprises of all kinds to win in the marketplace, she said.

Hyper-converged systems

To that aim, the company this week announced an extension of HPE Synergy’s fully programmable infrastructure to HPE’s multi-cloud platform and hyper-converged systems, enabling IT operators to deliver software-defined infrastructure as quickly as customers’ businesses demand. The new solutions include:

  • HPE Synergy with HPE Helion CloudSystem 10 — This brings full composability across compute, storage and fabric to HPE’s OpenStack technology-based hybrid cloud platform to enable customers to run bare metal, virtualized, containerized and cloud-native applications on a single infrastructure and dynamically compose and recompose resources for unmatched agility and efficiency.
  • HPE Hyper Converged Operating Environment — The software update leverages composable technologies to deliver new capabilities to the HPE Hyper Converged 380, including new workspace controls that allow IT managers to compose and recompose virtualized resources for different lines of business, making it easier and more efficient for IT to act as an internal service provider to their organization.

This move delivers a full-purpose composable infrastructure platform, treating infrastructure as code, enabling developers to accelerate application delivery, says HPE. HPE Synergy has nearly 100 early access customers across a variety of industries, and is now broadly available. [Disclosure: HPE is a sponsor of BriefingsDirect podcasts.]

This year’s HPE Discover was strong on showcasing the ecosystem approach to creating and maintaining hybrid IT. Heavy hitters from Microsoft Azure, Arista, and Docker joined Whitman on stage to show their allegiance to HPE’s offerings — along with their own — as essential ingredients to Platform 3.0 efficiency.

See more on my HPE Discover analysis on The Cube.

HPE also announced plans to expand Cloud28+, an open community of commercial and public sector organizations with the common goal of removing barriers to cloud adoption. Supported by HPE’s channel program, Cloud28+ unites service providers, solution providers, ISVs, system integrators, and government entities to share knowledge, resources and services aimed at helping customers build and consume the right mix of cloud solutions for their needs.

Internet of Things

Discover 2016 also saw new innovations designed to help organizations rapidly, securely, and cost-effectively deploy IoT devices in wide area, enterprise, and industrial settings.

“Cost-prohibitive economics and the lack of a holistic solution are key barriers for mass adoption of IoT,” said Keerti Melkote, Senior Vice President and General Manager, HPE. “By approaching IoT with innovations to expand our comprehensive framework built on edge infrastructure solutions, software platforms, and technology ecosystem partners, HPE is addressing the cost, complexity and security concerns of organizations looking to enable a new class of services that will transform workplace and operational experiences.”

As organizations integrate IoT into mainstream operations, the onboarding and management of IoT devices remain costly and inefficient, particularly at large scale. Concurrently, the diverse variations of IoT connectivity, protocols, and security prevent organizations from easily aggregating data across a heterogeneous fabric of connected things.

To improve the economies of scale for massive IoT deployments over wide area networks, HPE announced the new HPE Mobile Virtual Network Enabler (MVNE) and enhancements to the HPE Universal IoT (UIoT) Platform.

As the amount of data generated from smart “things” grows and the frequency at which it is collected increases, so will the need for systems that can acquire and analyze the data in real-time. Real-time analysis is enabled through edge computing and the close convergence of data capture and control systems in the same box.

HPE Edgeline Converged Edge Systems converge real-time analog data acquisition with data center-level computing and manageability, all within the same rugged open standards chassis. Benefits include higher performance, lower energy, reduced space, and faster deployment times.

“The intelligent edge is the new frontier of the hybrid computing world,” said Whitman. “The edge of the network is becoming a very crowded place, but these devices need to be made more useful.”

This means that the equivalent of a big data crunching data center needs to be brought to the edge affordably.

Biggest of big data

“IoT is the biggest of big data,” said Tom Bradicich, HPE Vice President and General Manager, Servers and IoT Systems. “HPE EdgeLine and [partner company] PTC help bridge the digital and physical worlds for IoT and augmented reality (AR) for fully automated assembly lines.”

IoT and data analysis at the edge help companies finally predict the future and head off failures and maintenance needs in advance. And the ROI on edge computing will be easy to prove when factory downtime can be greatly reduced using IoT, data analysis, and AR at the edge.

Along these lines, Citrix, together with HPE, has developed a new architecture around HPE Edgeline EL4000 with XenApp, XenDesktop and XenServer to allow graphically rich, high-performance applications to be deployed right at the edge.  They’re now working together on next-generation IoT solutions that bring together the HPE Edge IT and Citrix Workspace IoT strategies.

In related news, SUSE has entered into an agreement with HPE to acquire technology and talent that will expand SUSE’s OpenStack infrastructure-as-a-service (IaaS) solution and accelerate SUSE’s entry into the growing Cloud Foundry platform-as-a-service (PaaS) market.

The acquired OpenStack assets will be integrated into SUSE OpenStack Cloud, and the acquired Cloud Foundry and PaaS assets will enable SUSE to bring to market a certified, enterprise-ready SUSE Cloud Foundry PaaS solution for all customers and partners in the SUSE ecosystem.

As part of the transaction, HPE has named SUSE as its preferred open source partner for Linux, OpenStack IaaS, and Cloud Foundry PaaS.

HPE also put force behind its drive to make high-performance computing (HPC) a growing part of enterprise data centers and private clouds. Hot on the heels of buying SGI, HPE has recognized that public clouds leave little room for those workloads that do not perform best in virtual machines.

Indeed, if all companies buy their IT from public clouds, they have little performance advantage over one another. But many companies want to gain the best systems with the best performance for the workloads that give them advantage, and which run the most complex — and perhaps value-creating — applications. I predict that HPC will be a big driver for HPE, both in private cloud implementations and in supporting technical differentiation for HPE customers and partners.

Memory-driven computing

Computer architecture took a giant leap forward with the announcement that HPE has successfully demonstrated memory-driven computing, a concept that puts memory, not processing, at the center of the computing platform to realize performance and efficiency gains not possible today.

Developed as part of The Machine research program, HPE’s proof-of-concept prototype represents a major milestone in the company’s efforts to transform the fundamental architecture on which all computers have been built for the past 60 years.

Gartner predicts that by 2020, the number of connected devices will reach 20.8 billion and generate an unprecedented volume of data, which is growing at a faster rate than the ability to process, store, manage, and secure it with existing computing architectures.

“We have achieved a major milestone with The Machine research project — one of the largest and most complex research projects in our company’s history,” said Antonio Neri, Executive Vice President and General Manager of the Enterprise Group at HPE. “With this prototype, we have demonstrated the potential of memory-driven computing and also opened the door to immediate innovation. Our customers and the industry as a whole can expect to benefit from these advancements as we continue our pursuit of game-changing technologies.”

The proof-of-concept prototype, which was brought online in October, shows the fundamental building blocks of the new architecture working together, just as they had been designed by researchers at HPE and its research arm, Hewlett Packard Labs. HPE has demonstrated:

  • Compute nodes accessing a shared pool of fabric-attached memory
  • An optimized Linux-based operating system (OS) running on a customized system on a chip (SOC)
  • Photonics/Optical communication links, including the new X1 photonics module, are online and operational
  • New software programming tools designed to take advantage of abundant persistent memory.

During the design phase of the prototype, simulations predicted the speed of this architecture would improve current computing by multiple orders of magnitude. The company has run new software programming tools on existing products, illustrating improved execution speeds of up to 8,000 times on a variety of workloads. HPE expects to achieve similar results as it expands the capacity of the prototype with more nodes and memory.

In addition to bringing added capacity online, The Machine research project will increase focus on exascale computing. Exascale is a developing area of HPC that aims to create computers several orders of magnitude more powerful than any system online today. HPE’s memory-driven computing architecture is incredibly scalable, from tiny IoT devices to the exascale, making it an ideal foundation for a wide range of emerging high-performance compute and data intensive workloads, including big data analytics.

Commercialization

HPE says it is committed to rapidly commercializing the technologies developed under The Machine research project into new and existing products. These technologies currently fall into four categories: Non-volatile memory, fabric (including photonics), ecosystem enablement and security.

Martin Banks, writing in Diginomica, questions whether these new technologies and new architectures represent a new beginning or a last hurrah for HPE. He poses the question to David Chalmers, HPE’s Chief Technologist in EMEA, and Chalmers explains HPE’s roadmap.

The conclusion? Banks feels that the in-memory architecture has the potential to be the next big step that IT takes. If all the pieces fall into place, Banks says, “There could soon be available a wide range of machines at price points that make fast, high-throughput systems the next obvious choice. . . . this could be the foundation for a whole range of new software innovations.”

Storage initiative

HPE lastly announced a new initiative to address demand for flexible storage consumption models, accelerate all-flash data center adoption, assure the right level of resiliency, and help customers transform to a hybrid IT infrastructure.

Over the past several years, the industry has seen flash storage rapidly evolve from niche application performance accelerator to the default media for critical workloads. During this time, HPE’s 3PAR StoreServ Storage platform has emerged as a leader in all-flash array market share growth, performance, and economics. The new HPE 3PAR Flash Now initiative gives customers a way to acquire this leading all-flash technology on-premises starting at $0.03 per usable Gigabyte per month, a fraction of the cost of public cloud solutions.

“Capitalizing on digital disruption requires that customers be able to flexibly consume new technologies,” said Bill Philbin, vice president and general manager, Storage, Hewlett Packard Enterprise. “Helping customers benefit from both technology and consumption flexibility is at the heart of HPE’s innovation agenda.”

Whitman’s HPE, given all of the news at HPE Discover, has assembled the right business path to place HPE and its ecoystems of partners and alliances squarely the very center of the major IT trends of the next five years.

Indeed, I’ve been at HPE Discover conferences for more than 10 years now, and this keynote address and the news makes more sense as pertains to current and future IT market than I’ve ever seen.

Meet George Jetson – your new AI-empowered chief procurement officer

The next BriefingsDirect technology innovation thought leadership discussion explores how rapid advances in artificial intelligence (AI) and machine learning are poised to reshape procurement — like fast-forwarding to a once-fanciful vision of the future.

Whereas George Jetson of the 1960s cartoon portrayed a world of household robots, flying cars, and push-button corporate jobs, the 2017 procurement landscape has its own impressive retinue of decision bots, automated processes, and data-driven insights.

We won’t need to wait long for this vision of futuristic business to arrive. As we enter 2017, applied intelligence derived from entirely new data analysis benefits has redefined productivity and provided business leaders with unprecedented tools for managing procurement, supply chains, and continuity risks.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn more about the future of predictive — and even proactive — procurement technologies, please welcome Chris Haydon, Chief Strategy Officer at SAP Ariba. The discussion is moderated by BriefingsDirect’s Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: It seems like only yesterday that we were content to gain a common view of the customer or get an end-to-end bead on a single business process. These were our goals in refining business in general, but today we've leapfrogged to a future where we're using words like "predictive" and "proactive" to define what a business function should do and be about. Chris, what's altered our reality to account for this rapid advancement from visibility into predictive — and on to proactive?

Haydon: There are a couple of things. The acceleration of the smarts, the intelligence, or the artificial intelligence, whatever the terminology that you identify with, has really exploded. It's a lot more real, and you see these use-cases on television all the time. The business world is just looking to go in and adopt that.

And then there’s this notion of the Lego block of being able to string multiple processes together via an API is really exciting — that coupled with the ability to have insight. The last piece, the ability to make sense of big data, either from a visualization perspective or from a machine-learning perspective, has accelerated things.

These trends are starting to come together in the business-to-business (B2B) world, and today, we’re seeing them manifest themselves in procurement.

Gardner: What is it about procurement as a function that’s especially ripe for taking advantage of these technologies?

Transaction intense

Haydon: Procurement is obviously very transaction-intense. Historically, transaction intensity has meant people, processing, and exceptions. The trends we're talking about now, the ability to componentize services and to look at big data or machine learning, layer contextualized intelligence on top of all that. It's cognitive and predictive by its very nature, works over a bigger data set, and improves historically inefficient, human-based processes. That's why procurement is starting to be at the forefront.

Haydon

Gardner: Procurement itself has changed from the days of when we were highly vertically integrated as corporations. We had long lead times on product cycles and fulfillment. Nowadays, it’s all about agility and compressing the time across the board. So, procurement has elevated its position. Anything more to add?

Haydon: Everyone needs to be closer to the customer, and you need live business. So, procurement is live now. This change in dynamic — speed and responsiveness — is closer to your point. It's also these other dimensions of the consumer experience that now have to become the business-to-business experience. All that means same-day shipping, real-time visibility, and changing dynamically. That's what we have to deliver.

Gardner: If we go back to our George Jetson reference, what is it about this coming year, 2017? Do you think it’s an important inception point when it comes to factoring things like the rising role of procurement, the rising role of analytics, and the fact that the Internet of Things (IoT) is going to bring more relevant data to bear? Why now?

Haydon: There are a couple of things. The procurement function is becoming more mature. Procurement leaders have extracted those first and second levels of savings from sourcing and the like. And they have control of their processes.

With cloud-based technologies and more control of their processes, they're now looking at how they're going to serve their internal customers by being value-generators and risk-reducers.

How do you forward the business, how do you de-risk, how do you get supply continuity, how do you protect your brand? You do that by having better insight, real-time insight into your supply base, and that’s what’s driving this investment.

Gardner: We’ve been talking about Ariba being a 20-year-old company. Congratulations on your anniversary of 20 years.

Haydon: Thank you.

AI and bots

Gardner: You’re also, of course, part of SAP. Not only have you been focused on procurement for 20 years, but you’ve got a large global player with lots of other technologies and platform of benefits to avail yourselves of. So, that brings me to the point of AI and bots.

It seems to me that right at the time when procurement needs help, when procurement is more important than ever, that we’re also in a position technically to start doing some innovative things that get us into those words “predictive” and more “intelligent.”

Set the stage for how these things come together.

Haydon: You allude to being part of SAP, and that’s really a great strength and advantage for a domain-focused procurement expertise company.

The machine-learning capabilities that are part of a native SAP HANA platform, which we naturally adopt and get access to, put us on the forefront of not having to invest in that part of the platform, but to focus on how we take that platform and put it into the context of procurement.

There are a couple of pretty obvious areas. There's no doubt that when you've got the largest B2B network, billions in spend, and hundreds of millions of transactions on invoicing, you can apply some machine learning to that. We can start doing a lot smarter matching and exception management on that, pretty straightforward. That's at one end of the chain.
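A toy sketch of that matching-and-exception step is below. The purchase orders, field names, and the 2 percent tolerance are invented for illustration; in practice the matching features and thresholds would be learned from the network's transaction history rather than hard-coded.

```python
# Toy sketch of invoice-to-purchase-order matching with exception handling.
# Records, fields, and tolerances are invented; a real system would learn
# matching features from historical network transactions.

PURCHASE_ORDERS = {
    "PO-1001": {"supplier": "Acme Linen", "amount": 10500.00},
    "PO-1002": {"supplier": "Global Safety Gear", "amount": 2300.00},
}

AMOUNT_TOLERANCE = 0.02  # auto-match if within 2% of the PO amount

def match_invoice(invoice):
    """Return ('matched', po_id) or ('exception', reason) for one invoice."""
    po = PURCHASE_ORDERS.get(invoice["po_ref"])
    if po is None:
        return "exception", "no purchase order found"
    if po["supplier"] != invoice["supplier"]:
        return "exception", "supplier mismatch"
    drift = abs(invoice["amount"] - po["amount"]) / po["amount"]
    if drift > AMOUNT_TOLERANCE:
        return "exception", f"amount differs by {drift:.1%}"
    return "matched", invoice["po_ref"]

print(match_invoice({"po_ref": "PO-1001", "supplier": "Acme Linen", "amount": 10600.00}))
print(match_invoice({"po_ref": "PO-1002", "supplier": "Acme Linen", "amount": 2300.00}))
```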

On the other end of the chain, we have bots. Some people get a little bit wired about the word “bot,” “robotics,” or whatever, maybe it’s a digital assistant or it’s a smart app. But, it’s this notion of helping with decisions, helping with real-time decisions, whether it’s identifying a new source of supply because there’s a problem, and the problem is identified because you’ve got a live network. It’s saying that you have a risk or you have a continuity problem, and not just that it’s happening, but here’s an alternative, here are other sources of a qualified supply.

Gardner: So, it strikes me that 2017 is such a pivotal year in business. This is the year where we’re going to start to really define what machines do well, and what people do well, and not to confuse them. What is it about an end-to-end process in procurement that the machine can do better that we can then elevate the value in the decision-making process of the people?

Haydon: Machines can do better in identifying patterns — clusters, if you want to use a more technical word. That transforms category management and enables procurement to be at the front of their internal customer set by looking not just at traditional total cost of ownership (TCO), but at total value and use. That's a part of that real dynamic change.

What we used to call end-to-end, or what SAP Ariba defined in a very loose way, was that upstream was about sourcing and contracting, and downstream was about procurement, purchasing, and invoicing. That's gone, Dana. It's not about upstream and downstream; it's about the end-to-end process, and re-imagining and reinventing that.

The role of people

Gardner: When we give more power to procurement professionals by giving them highly elevated and intelligent tools, their role within the organization advances, and so does the amount of financial improvement they can make. But I wonder whether there's risk if we automate too much, and whether companies might still want people in charge of these decisions. Where do we begin experimenting with how much automation to bring, now that we know how capable these machines have become? Or is this going to be a period of exploration for the next few years?

Haydon: It will be a period of exploration, just because businesses have different risk tolerances and are at different parts of their life cycles. If you're in hyper-growth mode and you're pretty profitable, that's a little bit different than if you're under very big margin pressure.

For example, maybe if you’re in high tech in the Silicon Valley, and some big names that we could all talk about are, you’re prepared to be able to go at it, and let it all come.

If you’re in a natural-resource environment, every dollar is even more precious than it was a year ago.

That’s also the beauty, though, with technology. If you want to do it for this category, this supplier, this business unit, or this division you can do that a lot easier than ever before and so you go on a journey.

Gardner: That’s an important point that people might not appreciate, that there’s a tolerance for your appetite for automation, intelligence, using machine learning, and AI. They might even change, given the context of the certain procurement activity you’re doing within the same company. Maybe you could help people who are a little bit leery of this, thinking that they’re losing control. It sounds to me like they’re actually gaining more control.

Haydon: They gain more control, because they can do more and see more. To me, it’s layered. Does the first bot automatically requisition something — yes or no? So, you put tolerances on it. I’m okay to do it if it is less than $50,000, $5,000, or whatever the limit is, and it’s very simple. If the event is less than $5,000 and it’s within one percent of the last time I did it, go and do it. But tell me that you are going to do it or let’s have a cooling-off period.

If you don’t tell me or if you don’t stop me, I’m going to do it, and that’s the little bit of this predictive as well. So you still control the gate, you just don’t have to be involved in all the sub-processes and all that stuff to get to the gate. That’s interesting.

Gardner: What’s interesting to me as well, Chris, is because the data is such a core element of how successful this is, it means that companies in a procurement intelligence drive will want more data, so they can make better decisions. Suppliers who want to be competitive in that environment will naturally be incentivized to provide more data, more quickly, with more openness. Tell us some of the implications for intelligence brought to procurement on the supplier? What we should expect suppliers to do differently as a result?

Notion of content

Haydon: There’s no doubt that, at a couple of levels, suppliers will need to let the buyers know even more about themselves than they have ever known before.

That goes to the notion of content. There is unique content to be discovered: who am I, what do I do well, and can I demonstrate that I do it well. That's being discovered. Then, there is the notion of being able to transact. What do I need to be able to do to transact with you efficiently, whether that's a payment, a bank account, or just the way in which I can consume this?

Then, there is also this last notion of the content. What content do I need to be able to provide to my customer, aka the end user, for them to be able to initiate the business with them?

There are three dimensions: being discovered, being dynamically transacted with, and actually providing the content of what you do, whether material or service, to the end user via the channel. You have to have all of these dimensions right.

That’s why we fundamentally believe that a network-based approach, when it’s end to end, meaning a supplier can do it once to all of the customers across the [Ariba] Discovery channel, across the transactional channel, across the content channel is really value adding. In a digital economy, that’s the only way to do it.

Gardner: So this idea of the business network, which is a virtual repository for all of this information isn’t just quantity, but it’s really about the quality of the relationship. We hear about different business networks vying for attention. It seems to me that understanding that quality aspect is something you shouldn’t lose track of.

Haydon: It’s the quality. It’s also the context of the business process. If you don’t have the context of the business process between a buyer and a seller and what they are trying to affect through the network, how does it add value? The leading-practice networks, and we’re a leading-practice network, are thinking about Discovery. We’re thinking about content; we’re thinking about transactions.

Gardner: Again, going back to the George Jetson view of the future, for those organizations that want to see a return on their energy and devotion to these concepts around AI, bots, and intelligence, what sort of low-hanging fruit do we look for to assure them that they are on the right path? I'm going to answer my own question, but I want you to illustrate it a bit better: risk and compliance, and being able to adjust to unforeseen circumstances, seem to me an immediate payoff for doing this.

Severance of pleadings

Haydon: The United Kingdom is enacting a law before the end of the year for severance of pleadings. It’s the law, and you have to comply. The real question is how you comply.

You eye your brand, you eye your supply chain, and having the supply-chain profile information at hand right now is top of mind. If you’re a Chief Procurement Officer (CPO) and you walk into the CEO’s office, the CEO could ask, “Can you tell me that I don’t have any forced labor, I don’t have any denied parties, and I’m Office of Foreign Assets Control (OFAC) compliant? Can you tell me that now?”

You might be able to do it for your top 50 suppliers or top 100 suppliers, and that’s great, but unfortunately, a small, $2,000 supplier who uses some forced labor in any part of the world is potentially a problem in this extended supply chain. We’ve seen brands boycotted very quickly. These things roll.

So yes, I think that’s just right at the forefront. Then, it’s applying intelligence to that to give that risk threshold and to think about where those challenges are. It’s being smart and saying, “Here is a high risk category. Look at this category first and all the suppliers in the category. We’re not saying that the suppliers are bad, but you better have a double or triple look at that, because you’re at high risk just because of the nature of the category.”

Gardner: Technically, what should organizations be thinking about in terms of what they have in place in order for their systems and processes to take advantage of these business network intelligence values? If I’m intrigued by this concept, if I see the benefits in reducing risk and additional efficiency, what might I be thinking about in terms of my own architecture, my own technologies in order to be in the best position to take advantage of this?

Haydon: You have to question how much of that you think you can build yourself. If you think you’re asking different questions than most of your competitors, you’re probably not. I’m sure there are specific categories and specific areas on tight supplier relationships and co-innovation development, but when it comes to the core risk questions, more often, they’re about an industry, a geography, or the intersection of both.

Our recommendation to corporations is never try and build it yourself. You might need to have some degree of privacy, but look to have it as more industry-based. Think larger than yourself in trying to solve that problem differently. Those cloud deployment models really help you.

Gardner: So it really is less a matter of technical preparation than of becoming a digital organization: availing yourself of cloud models, being ready to act intelligently, and finding the right demarcation between what the machines do best and what the people do best.

More visible

Haydon: By making things digital they are actually more visible. You have to be able to balance the pure nature of visibility to get at the product; that’s the first step. That’s why people are on a digital journey.

Gardner: Machines can’t help you with a paper-based process, right?

Haydon: Not as much. You have to scan it and throw it in. Then you're digitizing it.

Gardner: We heard about Guided Buying last year from SAP Ariba. It sounds like we’re going to be getting a sort of “Guided Buying-Plus” next year and we should keep an eye on that.

Haydon: We’re very excited. We announced that earlier this year. We’re trying to solve two problems quickly through Guided Buying.

One is the nature of the ad-hoc user. We're all ad-hoc users in the business today. I need to buy things, but I don't want to read the policy, and I don't want to open the PDF on some corporate portal about some threshold limit that, quite honestly, I really only need to know about once or twice a year.

So our Guided Buying has a beautiful consumer-based look and feel, but with embedded compliance. We hide the complexity. We just show the user what they need to know at the time, and the flow is very powerful.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: SAP Ariba.

Strategic view across more data delivers digital business boost for AmeriPride

The next BriefingsDirect Voice of the Customer digital transformation case study explores how linen services industry leader AmeriPride Services uses big data to gain a competitive and comprehensive overview of its operations, finances and culture.

We’ll explore how improved data analytics allows for disparate company divisions and organizations to come under a single umbrella — to become more aligned — and to act as a whole greater than the sum of the parts. This is truly the path to a digital business.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to describe how digital transformation has been supported by innovations at the big data core, we're joined by Steven John, CIO, and Tony Ordner, Information Team Manager, both at AmeriPride Services in Minnetonka, Minnesota. The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Let’s discuss your path to being a more digitally transformed organization. What were the requirements that led you to become more data-driven, more comprehensive, and more inclusive in managing your large, complex organization?

John

John: One of the key business drivers for us was that we’re a company in transition — from a very diverse organization to a very centralized organization. Before, it wasn’t necessarily important for us to speak the same data language, but now it’s critical. We’re developing the lexicon, the Rosetta Stone, that we can all rely on and use to make sure that we’re aligned and heading in the same direction.

Gardner: And Tony, when we say “data,” are we talking about just databases and data within applications? Or are we being even more comprehensive — across as many information types as we can?

Ordner: It’s across all of the different information types. When we embarked on this journey, we discovered that data itself is great to have, but you also have to have processes that are defined in a similar fashion. You really have to drive business change in order to be able to effectively utilize that data, analyze where you’re going, and then use that to drive the business. We’re trying to institute into this organization an iterative process of learning.

Gardner: For those who are not familiar with AmeriPride Services, tell us about the company. It’s been around for quite a while. What do you do, and how big of an umbrella organization are we talking about?

Long-term investments

John: The company is over 125 years old. It’s family-owned, which is nice, because we’re not driven by the quarter. We can make longer-term investments through the family. We can have more of a future view and have ambition to drive change in different ways than a quarter-by-quarter corporation does.

We’re in the laundry business. We’re in the textiles and linen business. What that means is that for food and beverage, we handle tablecloths, napkins, chef coats, aprons, and those types of things. In oil and gas, we provide the safety garments that are required. We also provide the mats you cross as you walk in the door of various restaurants or retail stores. We’re in healthcare facilities and meet the various needs of providing and cleansing the garments and linens coming out of those institutions. We’re very diverse. We’re the largest company of our kind in Canada, probably about fourth in the US, and growing.

Become a Member of myVertica
Gain Access to the Free
HPE Vertica Community Edition

Gardner: And this is a function that many companies don’t view as core and they’re very happy to outsource it. However, you need to remain competitive in a dynamic world. There’s a lot of innovation going on. We’ve seen disruption in the taxicab industry and the hospitality industry. Many companies are saying, “We don’t want to be a deer in the headlights; we need to get out in front of this.”

Tony, how do you continue to get in front of this, not just at the data level, but also at the cultural level?

Ordner: Part of what we’re doing is defining those standards across the company. And we’re coming up with new programs and new ways to get in front and to partner with the customers.

Ordner

As part of our initiative, we’re installing a lot of different technology pieces that we can use to be right there with the customers, to make changes with them as partners, and maybe better understand their business and the products that they aren’t buying from us today that we can provide. We’re really trying to build that partnership with customers, provide them more ways to access our products, and devise other ways they might not have thought of for using our products and services.

With all of those data points, it allows us to do a much better job.

Gardner: And we have heard from Hewlett Packard Enterprise (HPE) the concept that it’s the “analytics that are at the core of the organization,” that then drive innovation and drive better operations. Is that something you subscribe to, and is that part of your thinking?

John: For me, you have to extend it a little bit further. In the past, our company was driven by the experience and judgment of the leadership. But what we discovered is that we really wanted to be more data-driven in our decision-making.

Data creates a context for conversation. In the context of their judgment and experience, our leaders can leverage that data to make better decisions. The data, in and of itself, doesn’t drive the decisions — it’s that experience and judgment of the leadership that’s that final filter.

We often forget the human element at the end of that and think that everything is being driven by analytics, when analytics is a tool and will remain a tool that helps leaders lead great companies.

Gardner: Steven, tell us about your background. You were at a startup, a very successful one, on the leading edge of how to do things differently when it comes to apps, data, and cloud delivery.

New ways to innovate

John: Yes, you’re referring to Workday. I was actually Workday’s 33rd customer, the first to go global with their product. Then, I joined Workday in two roles: as their Strategic CIO, working very closely with the sales force, helping CIOs understand the cloud and how to manage software as a service (SaaS); and also as their VP of Mid-Market Services, where we were developing new ways to innovate, to implement in different ways and much more rapidly.

And it was a great experience. I’ve done two things in my life, startups and turnarounds, and I thought that I was kind of stepping back and taking a relaxing job with AmeriPride. But in many ways, it’s both; AmeriPride’s both a turnaround and a startup, and I’m really enjoying the experience.

Gardner: Let’s hear about how you translate technology advancement into business advancement. And the reason I ask it in that fashion is that it seems as a bit of a chicken and the egg, that they need to be done in parallel — strategy, ops, culture, as well as technology. How are you balancing that difficult equation?

John: Let me give you an example. Again, it goes back to that idea that if you just have the human element, people may not know what to ask; but when you add the analytics, you suddenly create a set of questions that drive to a truth.

Become a Member of myVertica
Gain Access to the Free
HPE Vertica Community Edition

We’re a route-based business. We have over a 1,000 trucks out there delivering our products every day. When we started looking at margin we discovered that our greatest margin was from those customers that were within a mile of another customer.

So factoring that in changes how we sell, how we don't sell, or how we might actually let some customers go — and it helps drive up our margin. With that piece of data, we as leaders suddenly knew some different questions to ask and different ways to orchestrate programs to drive higher margin.
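
For readers who want to see what that kind of analysis can look like, here is a minimal Python sketch — hypothetical column names and data file, not AmeriPride's actual pipeline — that relates each customer's margin to the distance to its nearest neighboring customer:

```python
# Minimal sketch (hypothetical data): relate each customer's margin to the
# distance from that customer to its nearest neighboring customer.
import numpy as np
import pandas as pd

# Assumed columns: customer_id, lat, lon, margin
customers = pd.read_csv("customers.csv")

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 3959 * 2 * np.arcsin(np.sqrt(a))

# Distance from each customer to the nearest *other* customer.
# O(n^2) pairwise distances are fine for a sketch; a spatial index or an
# in-database query would be the choice at real scale.
lats, lons = customers["lat"].to_numpy(), customers["lon"].to_numpy()
dist = haversine_miles(lats[:, None], lons[:, None], lats[None, :], lons[None, :])
np.fill_diagonal(dist, np.inf)
customers["nearest_mi"] = dist.min(axis=1)

# Compare average margin for customers within a mile of another customer.
customers["within_a_mile"] = customers["nearest_mi"] <= 1.0
print(customers.groupby("within_a_mile")["margin"].mean())
```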

Gardner: Another trend we’ve seen is that putting data and analytics, very powerful tools, in the hands of more people can have unintended, often very positive, consequences. A knowledge worker isn’t just in a cube and in front of a computer screen. They’re often in the trenches doing the real physical work, and so can have real process insights. Has that kicked in yet at AmeriPride, and are you democratizing analytics?

Ordner: That’s a really great question. We’ve been trying to build a power-user base and bring some of these capabilities into the business segments to allow them to explore the data.

You always have to keep an eye on knowledge workers, because sometimes they can come to the wrong conclusions as well as the right ones. So we try to make sure that we maintain that business layer, that final check: the data is telling me this, but is that really the case?

I liken it to having a flashlight in a dark room. That's what we're really doing by visualizing this data and allowing them to eliminate certain things. That's how they raise the questions: What's in this room? Well, let me look over here; let me look over there. That's how I see it.

Too much information

John: One of the things I worry about is that if you give people too much information or unstructured information, then they really get caught up in the academics of the information — and it doesn’t necessarily drive a business process or drive a business result. It can cause people to get lost in the weeds of all that data.

You still have to orchestrate it, you still have to manage it, and you have to guide it. But you have to let people go off and play and innovate using the data. We actually have a competition among our power-users where they go out and create something, and there are judges and prizes. So we do try to encourage the innovation, but we also want to hold the reins in just a little bit.

Gardner: And that gets to the point of having a tight association between what goes on in the core and what goes on at the edge. Is that something that you’re dabbling in as well?

John: It gets back to that idea of a common lexicon. If you think about evolution, you don’t want a Madagascar or a Tasmania, where groups get cut off and then they develop their own truth, or a different truth, or they interpret data in a different way — where they create their own definition of revenue, or they create their own definition of customer.

If you think about it as orbits, you have to have a balance. Maybe you only need to touch certain people in the outer orbit once a month, but you have to touch them once a month to make sure they’re connected. The thing about orbits and keeping people in the proper orbits is that if you don’t, then one of two things happens, based on gravity. They either spin out of orbit or they come crashing in. The idea is to figure out what’s the right balance for the right groups to keep them aligned with where we are going, what the data means, and how we’re using it, and how often.

Gardner: Let’s get back to the ability to pull together the data from disparate environments. I imagine, like many organizations, that you have SaaS apps. Maybe it’s for human capital management or maybe it’s for sales management. How does that data then get brought to bear with internal apps, some of them may even be on a mainframe still, or virtualized apps from older code basis and so forth? What’s the hurdle and what words of wisdom might you impart to others who are earlier in this journey of how to make all that data common and usable?

Ordner: That tends to be a hurdle. On the data-acquisition piece, as you set these things up in the cloud, a lot of the time the business units themselves are making the agreements, and they don't put in place the data access that we've always needed. That's been our biggest hurdle. They'll sign the contracts without getting us involved, until they say, "Oh my gosh, now we need the data." We look at it and we say, "Well, it's not in our contracts, and now it's going to cost more to access the data." That's been our biggest hurdle for the cloud services that we've done.

Once you get past that, web services have been a great thing. Once you get the licensing and the contract in place, it becomes a very simple process, and it becomes a lot more seamless.
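
For organizations earlier in that journey, the pattern Ordner describes usually comes down to paging through the SaaS provider's web-service API once the contract grants data access. A minimal sketch, with a hypothetical endpoint, authentication scheme, and field layout:

```python
# Minimal sketch (hypothetical SaaS endpoint): page through a web-service
# API and collect records for loading into the analytics warehouse.
import requests

BASE_URL = "https://api.example-saas.com/v1/orders"   # hypothetical endpoint
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}  # hypothetical auth scheme

records, page = [], 1
while True:
    resp = requests.get(
        BASE_URL,
        headers=HEADERS,
        params={"page": page, "per_page": 500},
        timeout=30,
    )
    resp.raise_for_status()
    batch = resp.json()
    if not batch:          # an empty page means everything has been read
        break
    records.extend(batch)
    page += 1

print(f"Pulled {len(records)} records ready to load into the warehouse")
```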

Gardner: So, maybe something to keep in mind is always think about the data before, during, and after your involvement with any acquisition, any contract, and any vendor?

Ordner: Absolutely.

You own three things

John: With SaaS, at the end of the day, you own three things: the process design, the data, and the integration points. When we construct a contract, one of the things I always insist upon is what I refer to as the “prenuptial agreement.”

What that simply means is that, before the relationship begins, you understand how it can end. The key thing in how it ends is that you can take your data with you, that there is a migration path, and that they haven't created a stickiness that traps you there without the ability to migrate your data to somebody else, whether that's somebody else in the cloud or on-premises.

Gardner: All right, let’s talk about lessons learned in infrastructure. Clearly, you’ve had an opportunity to look at a variety of different platforms, different requirements that you have had, that you have tested and required for your vendors. What is it about HPE Vertica, for example, that is appealing to you, and how does that factor into some of these digital transformation issues?

Ordner: There are two things that come to mind right away for me. One is performance. In our old environment, we were struggling with certain processes that ran 36 hours. We did a proof of concept with HPE and Vertica, and the same work ran in something like 17 minutes. So, right there, we were sold on the performance.

As we got into it and negotiated with them, the other big advantage we discovered is the licensing model, which is based on the amount of data rather than the per-CPU-core model that everyone else uses. We're able to scale this out and provide that service at high speed, so we can maintain that performance without taking licensing penalties. Those are a couple of things I see. Anything from your end, Steven?

John: No, I think that was just brilliant.

Gardner: How about the acquisition and integration of data? Is there an issue there that you have been able to solve?

Ordner: With acquisition and integration, we’re still early in that process. We’re still learning about how to put data into HPE Vertica in the most effective manner. So, we’re really at our first source of data and we’re looking forward to those additional pieces. We have a number of different telematics pieces that we want to include; wash aisle telematics as well as in-vehicle telematics. We’re looking forward to that.

There’s also scan data that I think will soon be on the horizon. All of our garments and our mats have chips in them. We scan them in and out, so we can see the activity and where they flow through the system. Those are some of our next targets to bring that data in and take a look at that and analyze it, but we’re still a little bit early in that process as far as multiple sources. We’re looking forward to some of the different ways that Vertica will allow us to connect to those data sources.

Gardner: I suppose another important consideration when you are picking and choosing systems and platforms is that extensibility. RFID tags are important now; we’re expecting even more sensors, more data coming from the edge, the information from the Internet of Things (IoT). You need to feel that the systems you’re putting in place now will scale out and up. Any thoughts about the IoT impact on what you’re up to?

Overcoming past sins

John: We have had several conversations just this week with HPE and their teams, and they are coming out to visit with us on that exact topic. Being about a year into our journey, we’ve been doing two things. We’ve been forming the foundation with HPE Vertica and we’ve been getting our own house in order. So, there’s a fair amount of cleanup and overcoming the sins of the past as we go through that process.

But Vertica is a platform; it’s a platform where we have only tapped a small percentage of its capability. And in my personal opinion, even HPE is only aware of a portion of its capability. There are a whole set of things that it can do, and I don’t believe that we have discovered all of them.

Become a Member of myVertica
Gain Access to the Free
HPE Vertica Community Edition

With that said, we’re going to do what you and Tony just described; we’re going to use the telematics coming out of our trucks. We’re going to track safety and seat belts. We’re going to track green initiatives, routes, and the analytics around our routes and fuel consumption. We’re going to make the place safer, we’re going to make it more efficient, and we’re going to get proactive about being able to tell when a machine is going to fail and when to bring in our vendor partners to get it fixed before it disrupts production.

Gardner: It really sounds like there is virtually no part of your business in the laundry services industry that won’t be in some way beneficially impacted by more data, better analytics delivered to more people. Is that fair?

Ordner: I think that’s a very fair statement. As I prepared for this conference, one of the things I learned, and I have been with the company for 17 years, is that we’ve done a lot technology changes, and technology has taken an added significance within our company. When you think of laundry, you certainly don’t think of technology, but we’ve been at the leading edge of implementing technology to get closer to our customers, closer to understanding our products.

[Data technology] has become really ingrained within the industry, at least at our company.

John: It is one of those few projects where everyone is united, everybody believes that success is possible, and everybody is willing to pay the price to make it happen.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.
