How Imagine Communications leverages edge computing and HPC for live multiscreen IP video

The next BriefingsDirect Voice of the Customer HPC and edge computing strategies interview explores how a video delivery and customization capability has moved to the network edge — and closer to consumers — to support live, multi-screen Internet Protocol (IP) entertainment delivery.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

We’ll learn how hybrid technology and new workflows for IP-delivered digital video are being re-architected — with significant benefits to the end-user experience, as well as with new monetization values to the content providers.

Our guest is Glodina Connan-Lostanlen, Chief Marketing Officer at Imagine Communications in Frisco, Texas. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Your organization has many major media clients. What are the pressures they are facing as they look to the new world of multi-screen video and media?

Connan-Lostanlen: The number-one concern of the media and entertainment industry is the fragmentation of their audience. We live with a model supported by advertising and subscriptions that rely primarily on linear programming, with people watching TV at home.


And guess what? Now they are watching it on the go — on their telephones, on their iPads, on their laptops, anywhere. So they have to find a way to capture that audience, justify the value of that audience to their advertisers, and deliver video content that is relevant to them. And that means meeting consumer demand for several types of content, delivered at the very time that people want to consume it. So it brings a whole range of technology and business challenges that our media and entertainment customers have to overcome. But addressing these challenges with new technology that increases agility and velocity to market also creates opportunities.

For example, they can now try new content. That means they can try new programs, new channels, and they don’t have to keep them forever if they don’t work. The new models create opportunities to be more creative, to focus on what they are good at, which is creating valuable content. At the same time, they have to make sure that they cater to all these different audiences that are either static or on the go.

Gardner: The media industry has faced so much change over the past 20 years, but this is a major, perhaps once-in-a-generation, level of change — when you go to fully digital, IP-delivered content.

As you say, the audience is pulling the providers to multi-screen support, but there is also the capability now — with the new technology on the back-end — to have much more of a relationship with the customer, a one-to-one relationship and even customization, rather than one-to-many. Tell us about the drivers on the personalization level.

Connan-Lostanlen: That’s another big upside of the fragmentation, and the advent of IP technology — all the way from content creation to making a program and distributing it. It gives the content creators access to the unique viewers, and the ability to really engage with them — knowing what they like — and then to potentially target advertising to them. The technology is there. The challenge remains how to justify the business model and how to value the targeted advertising; there are different opinions on this, and there is also the open question of whether several generations of viewers are willing to accept targeted advertising.

That is a great topic right now, and very relevant when we talk about linear advertising and dynamic ad insertion (DAI). Now we are able to — at the very edge of the signal distribution, the video signal distribution — insert an ad that is relevant to each viewer, because you know their preferences, you know who they are, and you know what they are watching, and so you can determine that an ad is going to be relevant to them.
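The per-viewer decision that DAI describes can be sketched in a few lines of Python. This is a simplified illustration under assumed data shapes; the viewer-profile fields and the ad inventory here are hypothetical, not Imagine’s actual system:

```python
from dataclasses import dataclass

@dataclass
class Ad:
    ad_id: str
    target_interests: set  # interests the ad is sold against

@dataclass
class Viewer:
    viewer_id: str
    interests: set  # inferred preferences for this unique viewer

def select_ad(viewer: Viewer, inventory: list) -> Ad:
    # Pick the ad whose targeting best overlaps this viewer's
    # interests -- the "insert an ad that is relevant to each
    # viewer" decision, made at the edge at each ad break.
    return max(inventory, key=lambda ad: len(ad.target_interests & viewer.interests))

inventory = [
    Ad("generic-cola", {"beverages"}),
    Ad("sports-gear", {"sports", "fitness"}),
]
viewer = Viewer("v123", {"sports", "news"})
print(select_ad(viewer, inventory).ad_id)  # -> sports-gear
```

A production system would also weigh campaign budgets, frequency caps, and regulatory constraints, but the core step is the same per-viewer match at the edge of the distribution chain.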

But that means media and entertainment customers have to revisit the whole infrastructure. That doesn’t necessarily mean rebuilding; they can put in add-ons. They don’t have to throw away what they had; they can maintain the legacy infrastructure and add the IP-enabled infrastructure on top of it to take advantage of these capabilities.

Gardner: This change has happened from the web now all the way to multi-screen. With the web there was a model where you would use a content delivery network (CDN) to take the object, the media object, and place it as close to the edge as you could. What’s changed and why doesn’t that model work as well?

Connan-Lostanlen: I don’t know yet if I want to say that model doesn’t work anymore. Let’s let the CDN providers enhance their technology. But for sure, the volume of videos that we are consuming everyday is exponentially growing. That definitely creates pressure in the pipe. Our role at the front-end and the back-end is to make sure that videos are being created in different formats, with different ads, and everything else, in the most effective way so that it doesn’t put an undue strain on the pipe that is distributing the videos.

We are being pushed to innovate further on the type of workflows that we are implementing at our customers’ sites today: to make them efficient, to keep storage at the edge rather than centrally, and to do transcoding just-in-time. These are the things that are being worked on. It’s a balance between available capacity and the number of programs that you want to send across to your viewers — and how big your target market is.
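The just-in-time transcoding idea mentioned here boils down to a cache-on-first-request pattern at the edge. The sketch below is illustrative only; `transcode()` stands in for the real CPU-heavy job, and all names are made up:

```python
# Just-in-time transcoding sketch: a rendition is produced only when a
# device first requests it at the edge, then cached for reuse.

rendition_cache = {}  # (asset_id, profile) -> transcoded output
transcode_log = []    # records each time a real transcode actually runs

def transcode(asset_id, profile):
    # Stand-in for the actual compute-intensive transcode job.
    transcode_log.append((asset_id, profile))
    return f"{asset_id}@{profile}"

def get_rendition(asset_id, profile):
    key = (asset_id, profile)
    if key not in rendition_cache:            # transcode just-in-time...
        rendition_cache[key] = transcode(asset_id, profile)
    return rendition_cache[key]               # ...and serve from cache after

get_rendition("match-final", "720p-hls")  # first request: triggers a transcode
get_rendition("match-final", "720p-hls")  # second request: edge cache hit
```

The payoff is that a rendition nobody in a region ever requests is never produced or shipped there, which is exactly the pressure relief on the distribution pipe described above.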

The task for us on the back-end is to rethink the workflows in a much more efficient way. So, for example, this is what we call the digital-first approach, or unified distribution. Instead of planning a linear channel that goes the traditional way and then adding another infrastructure for multi-screen, on all those different platforms and then cable, and satellite, and IPTV, etc. — why not design the whole workflow digital-first? This frees the content distributor or provider to hold off on committing to specific platforms until the video has reached the edge. And it’s there that the end-user requirements determine how they get the signal.

This is where we are going — to see the efficiencies happen and so remove the pressure on the CDNs and other distribution mechanisms, like over-the-air.

Explore

High-Performance Computing

Solutions from HPE

Gardner: It means an intelligent edge capability, whereas we had an intelligent core up until now. We’ll also seek a hybrid capability between them, growing more sophisticated over time.

We have a whole new generation of technology for video delivery. Tell us about Imagine Communications. How do you go to market? How do you help your customers?

Education for future generations

Connan-Lostanlen: Two months ago we were in Las Vegas for our biggest tradeshow of the year, the NAB Show. At the event, our customers first wanted to understand what it takes to move to IP — so the “how.” They understand the need to move to IP, to take advantage of the benefits that it brings. But how do they do this, while they are still navigating the traditional world?

It’s not only the “how,” it’s needing examples of best practices. So we instructed them in a panel discussion, for example, on over-the-top (OTT) technology, which is another way of saying IP-delivered, and on what it takes to create a successful multi-screen service. Part of the panel explained what OTT is, so there’s a lot of education.

There is also another level of education that we have to provide, which is moving from the traditional world of serial digital interfaces (SDIs) in the broadcast industry to IP. It’s basically moving video signals off dedicated broadcast links and onto IP networks: not only is there a digitally sharp signal, it’s an IP stream. The whole knowledge of how to handle IP is new to our own industry, to our own engineers, to our own customers. We also have to educate on what it takes to do this properly.

One of the key things in the media and entertainment industry is that there’s a little bit of fear about IP, because no one really believed that IP could handle live signals. And you know how important live television is in this industry — real-time sports and news — this is where the money comes from. That’s why the most expensive ads are run during the Super Bowl.

It’s essential to be able to do live with IP — it’s critical. That’s why we are sharing with our customers the real-life implementations that we are doing today.

We are also pushing multiple standards forward. We work with our competitors on these standards. We have set up a trade association to accelerate the standards work. We did all of that. And as we do this, it forces us to innovate in partnership with customers and bring them on board. They are part of that trade association, they are part of the proof-of-concept trials, and they are gladly sharing their experiences with others so that the transition can be accelerated.

Gardner: Imagine Communications is then a technology and solutions provider to the media content companies, and you provide the means to do this. You are also doing a lot with ad insertion and billing, understanding more about the end-user, and enabling that data flow from the edge back to the core, and then back to the edge.

At the heart of it all

Connan-Lostanlen: We do everything that happens behind the camera — from content creation all the way to making a program and distributing it. And also, to your point, monetizing all of that with a management system. We have a long history of powering the key customers around the world with their advertising systems. It’s basically an automated system that allows advertising spots to be sold and then billed — and this is the engine of how our customers make money. So we are at the heart of this.

We are in the prime position to help them take advantage of the new advertising solutions that exist today, including dynamic ad insertion. In other words, how you target ads to the single viewer. And the challenge for them is now that they have a campaign, how do they design it to cater both to the linear traditional advertising system as well as the multi-screen or web mobile application? That’s what we are working on. We have a whole set of next-generation platforms that allow them to take advantage of both in a more effective manner.

Gardner: The technology is there, you are a solutions provider. You need to find the best ways of storing and crunching data, close to the edge, and optimizing networks. Tell us why you choose certain partners and what are some of the major concerns you have when you go to the technology marketplace?

Connan-Lostanlen: One fundamental driver here, as we drive the transition to IP in this industry, is being able to rely on commercial off-the-shelf (COTS) platforms. But even so, not all COTS platforms are born equal, right?

For compute, for storage, for networking, you need to rely on top-scale hardware platforms, and that’s why about two years ago we started to work very closely with Hewlett Packard Enterprise (HPE) for both our compute and storage technology.


We develop the software appliances that run on those platforms, and we sell this as a package with HPE. It’s been a key value proposition of ours as we began this journey to move to IP. We can say, by the way, our solutions run on HPE hardware. That’s very important because having high-performance compute (HPC) that scales is critical to the broadcast and media industry. Having storage that is highly reliable is fundamental because going off the air is not acceptable. So it’s 99.9999 percent reliable, and that’s what we want, right?
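For a sense of scale, 99.9999 percent ("six nines") availability allows only about half a minute of downtime per year. A quick back-of-the-envelope calculation (simple arithmetic, not a figure from the interview):

```python
def annual_downtime_seconds(availability: float) -> float:
    # Allowed downtime per year at a given availability target.
    seconds_per_year = 365.25 * 24 * 3600
    return (1.0 - availability) * seconds_per_year

print(round(annual_downtime_seconds(0.99999), 1))   # five nines: 315.6 s/year
print(round(annual_downtime_seconds(0.999999), 1))  # six nines:  31.6 s/year
```

Each extra nine cuts the permitted outage budget tenfold, which is why "going off the air is not acceptable" translates into such demanding hardware requirements.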

It’s a fundamental part of our message to our customers to say, “In your network, put Imagine solutions, which are powered by one of the top compute and storage technologies.”

Gardner: Another part of the change in the marketplace is this move to the edge. It’s auspicious that just as you need to have more storage and compute efficiency at the edge of the network, close to the consumer, the infrastructure providers are also designing new hardware and solutions to do just that. That’s also for the Internet of Things (IoT) requirements, and there are other drivers. Nonetheless, it’s an industry standard approach.

What is it about HPE Edgeline, for example, and the architecture that HPE is using, that makes that edge more powerful for your requirements? How do you view this architectural shift from core data center to the edge?

Optimize the global edge

Connan-Lostanlen: It’s a big deal because we are going to be in a hybrid world. When most of our customers hear about cloud, we have to explain what it means for them. We explain that they can have their private cloud, where they run virtualized applications on-premises, or they can take advantage of public clouds.

Being able to have a hybrid model of deployment for their applications is critical, especially for large customers who have operations in several places around the globe. For example, such big names as Disney and Turner — they have operations everywhere. For them, being able to optimize at the edge means that you have to create an architecture that is geographically distributed — but is highly efficient where they have those operations. This type of technology helps us deliver more value to the key customers.

Gardner: The other part of that intelligent edge technology is that it has the ability to be adaptive and customized. Each region has its own networks, its own regulation, and its own compliance, security, and privacy issues. When you can be programmatic as to how you design your edge infrastructure, then a custom-applications-orientation becomes possible.

Is there something about the edge architecture that you would like to see more of? Where do you see this going in terms of the capabilities of customization added-on to your services?

Connan-Lostanlen: One of the typical use-cases that we see for those big customers who have distributed operations is that they like to try and run their disaster recovery (DR) site in a more cost-effective manner. So the flexibility that an edge architecture provides to them is that they don’t have to rely on central operations running DR for everybody. They can do it on their own, and they can do it cost-effectively. They don’t have to recreate the entire infrastructure, and so they do DR at the edge as well.

We especially see this a lot in the process of putting the pieces of the program together, what we call “play out,” before it’s distributed. When you create a TV channel, if you will, it’s important to have end-to-end redundancy — and DR is a key driver for this type of application.

Gardner: Are there some examples of your cutting-edge clients that have adopted these solutions? What are the outcomes? What are they able to do with it?

Pop-up power

Connan-Lostanlen: Well, it’s always sensitive to name those big brand names. They are very protective of their brands. However, one of the top ones in the world of media and entertainment has decided to move all of their operations — from content creation, planning, and distribution — to their own cloud, to their own data center.

They are at the forefront of playing live and recorded material on TV — all from their cloud. They needed strong data center partners, so obviously we work with them closely. The reason why they do this is simply to really take advantage of the flexibility. They don’t want to be tied to a restricted channel count; they want to try new things. They want to try pop-up channels. Take the Oscars, for example — it’s one night. Are you going to recreate the whole infrastructure when you can just switch it on and off, if you will, out of their data center capacity? So that’s the key application: pop-up channels and the ability to easily try new programs.

Gardner: It sounds like they are thinking of themselves as an IT company, rather than a media and entertainment company that consumes IT. Is that shift happening?

Connan-Lostanlen: Oh yes, that’s an interesting topic, because I think you cannot really do this successfully if you don’t start to think IT a little bit. What we are seeing, interestingly, is that our customers typically used to have the IT department on one side, the broadcast engineers on the other side — these were two groups that didn’t speak the same language. Now they get together, and they have to, because they have to design together the solution that will make them more successful. We are seeing this happening.

I wouldn’t say yet that they are IT companies. The core strength is content, that is their brand, that’s what they are good at — creating amazing content and making it available to as many people as possible.

They have to understand IT, but they can’t lose concentration on their core business. I think the IT providers still have a very strong play there. It’s always happening that way.

In addition to disaster recovery being a key application, multi-screen delivery is taking advantage of that technology, for sure.


Gardner: These companies are making this cultural shift to being much more technically oriented. They think about standard processes across all of what they do, and they have their own core data center that’s dynamic, flexible, agile and cost-efficient. What does that get for them? Is it too soon, or do we have some metrics of success for companies that make this move toward a full digitally transformed organization?

Connan-Lostanlen: They are very protective about the math. It is fair to say that the up-front investments may be higher, but when you do the math over time — the total cost of ownership for the next 5 to 10 years, because that’s typically the life cycle of those infrastructures — then they definitely save money. On the operational expenditure (OPEX) side, private cloud economics are much more efficient, and they also have upside in additional revenue. So net-net, the return on investment (ROI) is much better. It’s hard to put precise numbers on it yet because we are still in the early days, but it’s bound to be a much greater ROI.

Another specific DR example is in the Middle East. We have a customer there who decided to operate the DR and IP in the cloud, instead of having a replicated system with satellite links in between. They were able to save $2 million worth of satellite links, and that data center investment, trust me, was not that high. So it shows that the ROI is there.

My satellite customers might say, “Well, what are you trying to do?” The good news is that they are looking at us to help them transform their businesses, too. So big satellite providers are thinking broadly about how this world of IP is changing their game. They are examining what they need to do differently. I think it’s going to create even more opportunities to reduce costs for all of our customers.

IT enters a hybrid world

Gardner: That’s one of the intrinsic values of a hybrid IT approach — you can use many different ways to do something, and then optimize which of those methods works best, and also alternate between them for best economics. That’s a very powerful concept.

Connan-Lostanlen: The world will be a hybrid IT world, and we will take advantage of that. But, of course, that will come with some challenges. What comes next is reflected in the number-one question I now get asked.

Three years ago customers would tell us, “Hey, IP is not going to work for live TV.” We convinced them otherwise, and now they know it’s working; it’s happening for real.

Secondly, they are thinking, “Okay, now I get it, so how do I do this?” We showed them, this is how you do it, the education piece.

Now, this year, the number-one question is security. “Okay, this is my content, the most valuable asset I have in my company. I am not putting this in the cloud,” they say. And this is where another piece of education has to start, which is: Actually, as you put stuff on your cloud, it’s more secure.

And we are working with our technology providers on this. As I said earlier, not all COTS providers are equal. We take it seriously. Cyber attacks on content and media are a critical threat, and they are bound to happen more often.

Initially there was a lack of understanding that you need to separate your corporate network — email, VPNs, and so on — from your broadcast operations network. Okay, that’s easy to explain and that can be implemented, and that’s where most of the attacks over the last five years have happened. This is solved.

However, the cyber attackers are becoming more clever, and they will overcome these initial defenses. They are going to get right into the servers, into the storage, and try to mess with things there. So I think it’s super important to be able to say, “Not only at the software level, but at the hardware and firmware level, we are adding protection against your number-one issue, security, which everybody can see is so important.”

Gardner: Sure, the next domino to fall after you have the data center concept, the implementation, the execution, even the optimization, is then to remove risk, whether it’s disaster recovery, security, right down to the silicon and so forth. So that’s the next thing we will look for, and I hope I can get a chance to talk to you about how you are all lowering risk for your clients the next time we speak.


Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


How The Open Group Healthcare Forum and Health Enterprise Reference Architecture cure process and IT ills

The next BriefingsDirect healthcare thought leadership panel discussion examines how a global standards body, The Open Group, is working to improve how the healthcare industry functions.

We’ll now learn how The Open Group Healthcare Forum (HCF) is advancing best practices and methods for better leveraging IT in healthcare ecosystems. And we’ll examine the forum’s Health Enterprise Reference Architecture (HERA) initiative and its role in standardizing IT architectures. The goal is to foster better boundaryless interoperability within and between healthcare public and private sector organizations.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn more about improving the processes and IT that better supports healthcare, please welcome our panel of experts: Oliver Kipf, The Open Group Healthcare Forum Chairman and Business Process and Solution Architect at Philips, based in Germany; Dr. Jason Lee, Director of the Healthcare Forum at The Open Group, in Boston, and Gail Kalbfleisch, Director of the Federal Health Architecture at the US Department of Health and Human Services in Washington, D.C. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: For those who might not be that familiar with the Healthcare Forum and The Open Group in general, tell us about why the Healthcare Forum exists, what its mission is, and what you hope to achieve through your work.

Lee: The Healthcare Forum exists because there is a huge need to architect the healthcare enterprise, which is approaching 20 percent of the gross domestic product (GDP) of the economy in the US, and approaching that level in developed countries in Europe.


There is a general feeling that enterprise architecture is somewhat behind in this industry, relative to other industries. There are important gaps to fill, and filling them will help stakeholders in healthcare — whether they are in hospitals, healthcare delivery systems, or innovation hubs in organizations of different sorts, such as consulting firms — better leverage IT to achieve business goals, through the use of best practices, lessons learned, and the accumulated wisdom of the various Forum members over many years of work. We want them to understand the value of our work so they can use it to address their needs.

Our mission, simply, is to help make healthcare information available when and where it’s needed and to accomplish that goal through architecting the healthcare enterprise. That’s what we hope to achieve.

Gardner: As the chairman of the HCF, could you explain what a forum is, Oliver? What does it consist of, how many organizations are involved?

Kipf: The HCF is made up of its members, and I am really proud of this team. We are very passionate about healthcare. We are in the technology business, but we are more than just governing bodies; we also have participation from the provider community. That makes the Forum true to the nature of The Open Group, in that we are global in nature, we are vendor-neutral, and we are business-oriented. We go from strategy to execution, and we want to bridge from business to technology. We take the foundation of The Open Group, and then we apply this to the HCF.


As we have many health standards out there, we really want to leverage [experience] from our 30 members to make standards work by providing the right type of tools, frameworks, and approaches. We partner a lot in the industry.

The healthcare industry is really a crowded place and there are many standard development organizations. There are many players. It’s quite vital as a forum that we reach out, collaborate, and engage with others to reach where we want to be.

Gardner: Gail, why is the role of the enterprise architecture function an important ingredient to help bring this together? What’s important about EA when we think about the healthcare industry?

Kalbfleisch: From an EA perspective, I don’t really think it matters whether you are talking about the healthcare industry, the finance industry, the personnel industry, or the gas and electric industry. In any of those, the organizations or companies that tend to be highly functioning have not just architecture — everyone has architecture for what they do — but architecture that is documented and available for use by decision-makers, and by developers across the system, so that each part can work well together.


We know that the healthcare industry is exceedingly complicated, and it’s a mixture of a lot of different things. It’s not just your body and your doctor; it’s also your insurance, your payers, research, academia — and putting all of those together.

If we don’t have EA, people new to the system — or people who were deeply embedded into their parts of the system — can’t see how that system all works together usefully. For example, there are a lot of different standards organizations. If we don’t see how all of that works together — where everybody else is working, and how to make it fit together – then we’re going to have a hard time getting to interoperability quickly and efficiently.


Kipf: If you think of the healthcare industry, we’ve been very good at developing individual solutions to specific problems. There’s a lot of innovation and a lot of technology that we use. But there is an inherent risk of producing silos among the many stakeholders who, ultimately, work for the good of the patient. It’s important that we get from individual solution building blocks to a more integrated approach, based on architecture building blocks and on common frameworks, tools, and approaches.

Gardner: Healthcare is a very complex environment and IT is very fast-paced. Can you give us an update on what the Healthcare Forum has been doing, given the difficulty of managing such complexity?

Bird’s-eye view mapping

Lee: The Healthcare Forum began with a series of white papers, initially focusing on an information model that has a long history in the federal government. We used enterprise architecture to evaluate the Federal Health Information Model (FHIM).  People began listening and we started to talk to people outside of The Open Group, and outside of the normal channels of The Open Group. We talked to different types of architects, such as information architects, solution architects, engineers, and initially settled on the problem that is essential to The Open Group — and that is the problem of boundaryless information flow.


We need to get beyond the silos that Oliver mentioned and that Gail alluded to. As I mentioned in my opening comments, this is a huge industry, and Gail illustrated it by naming some of the stakeholders within the health, healthcare and wellness enterprises. If you think of your hospital, it can be difficult to achieve boundaryless information flow to enable your information to travel digitally, securely, quickly, and in a way that’s valid, reliable and understandable by those who send it and by those who receive it.  But if that is possible, it’s all to the betterment of the patient.

Initially, in our focus on what healthcare folks call interoperability — what we refer to as boundaryless information flow — we came to realize through discussions with stakeholders in the public sector, as well as the private sector and globally, that understanding how the different pieces are linked together is critical. Anybody who works in an organization or belongs to a church, school or family understands that sometimes getting the right message communicated from point A to point B can be difficult.

To address that issue, the HCF members have decided to create a Health Enterprise Reference Architecture (HERA) that is essentially a framework and a map at the highest level. It helps people see that what they do relates to what others do, regardless of their position in their company. You want to deliver value to those people, to help them understand how their work is interconnected, and how IT can help them achieve their goals.

Gardner: Oliver, who should be aware of and explore engaging with the HCF?

Kipf: The members of The Open Group themselves, many of them are players in the field of healthcare, and so they are the natural candidates to really engage with. In that healthcare ecosystem we have providers, payers, governing bodies, pharmaceuticals, and IT companies.

Those who deeply need planning, management and architecting — to make big thinking a reality out there — those decision-makers are the prime candidates for engagement in the Healthcare Forum. They can benefit from the kinds of products we produce, the reference architecture, and the white papers that we offer. In a nutshell, it’s the members, and it’s the healthcare industry, and the healthcare ecosystem that we are targeting.

Gardner: Gail, perhaps you could address the reference architecture initiative? Why do you see that as important? Who do you think should be aware of it and contribute to it?

Shared reference points

Kalbfleisch: A reference architecture is one of those building-block pieces that should be used. You can call it a template. It gives you words that other people can relate to more easily than architecture-speak.

If you take that template, you can make it available to other people so that we can all be designing our processes and systems with a common understanding of our information exchange — so that it crosses boundaries easily and securely. If we are all running on the same template, that’s going to enable us to identify how to start, what has to be included, and what standards we are going to use.

A reference architecture is one of those very important pieces that not only forms a list of how we want to do things, and what we agreed to, but it also makes it so that every organization doesn’t have to start from scratch. It can be reused and improved upon as we go through the work. If someone improves the architecture, that can come back into the reference architecture.

Who should know about it? Decision-makers, developers, medical device innovators — people who are looking to improve the way information flows within any health sector. Whether it’s Oliver in Europe, or someone in California or Australia, it really doesn’t matter. Anyone who wants to make interoperability better should know about it.

My focus is on decision-makers, policymakers, process developers, and other people who look at it from a device-design perspective. One of the things that has been discussed within the HCF’s reference architecture work is the need to make sure that it’s all at a high-enough level, where we can agree on what it looks like. Yet it also must go down deeply enough so that people can apply it to what they are doing — whether it’s designing a piece of software or designing a medical device.

Gardner: Jason, The Open Group has been involved with standards and reference architectures for decades, with such recent initiatives as the IT4IT Reference Architecture, as well as the longstanding TOGAF framework. How does the HERA relate to some of these other architectural initiatives?

Building on a strong foundation

Lee: The HERA starts by using the essential components and insights that are built into the TOGAF Architecture Development Method (ADM) and builds from there. It also uses the ArchiMate language, but we have never felt restricted to using only those existing Open Group models that have been around for some time and are currently being developed further.

We are a big organization in terms of our approach, our forum, and so we want to draw from the best there is in order to fill in the gaps. Over the last few decades, an incredible amount of talent has joined The Open Group to develop architectural models and standards that apply across multiple industries, including healthcare. We reuse and build from this important work.

In addition, as we have dug deeper into the healthcare industry, we have found other issues — gaps — that need filling, and related topics that would benefit from attention. To address them, we have been working hard to establish relationships with other organizations in the healthcare space, to bring them in, and to collaborate. We have done this with Health Level Seven International (HL7), which is one of the best-known standards organizations in the world.

We are also doing this now with an organization called Healthcare Services Platform Consortium (HSPC), which involves academic, government and hospital organizations, as well as people who are focused on developing standards around terminology.

IT’s getting better all the time

Kipf: If you think about reference architecture in a specific domain, such as in the healthcare industry, you look at your customers and the enterprises — those really concerned with the delivery of health services. You need to ask yourself the question: What are their needs?

And the need in this industry is a focus on the person and on the service. It’s also highly regulatory, so being compliant is a big thing. Quality is a big thing. The idea of lifetime evolution — that you become better and better all the time — that is very important, very intrinsic to the healthcare industry.

When we look at the customers for whom we believe the HERA could be of value, it’s small, mid-sized, and large enterprises alike that you have to think of — and it’s really across the globe. That’s why we believe that the HERA is something that is tuned to the needs of our industry.

And as Jason mentioned, we build on open standards and we leverage them where we can. ArchiMate is one of the big ones — not only the business language, but also a lot of the concepts are based on ArchiMate. But we need to include other standards as well, obviously those from the healthcare industry, and we need to deviate from specific standards where this is of value to our industry.

Gardner: Oliver, in order to get this standard to be something that’s used, that’s very practical, people look to results. So if you were to take advantage of such reference architectures as HERA, what should you expect to get back? If you do it right, what are the payoffs?

Capacity for change and collaboration

Kipf: It should enable you to do a better job, to become more efficient, and to make better use of technology. Those are the kinds of benefits that you see realized. It’s not only that you have a place where you can model all the elements of your enterprise, where you can put and manage your processes and your services, but it’s also in the way you are architecting your enterprise.

The HERA gives you the tools to get where you want to be, to define where you want to be — and also how to get there.

It gives you the ability to change. From a transformation management perspective, we know that many healthcare systems have great challenges and there is this need to change. The HERA gives you the tools to get where you want to be, to define where you want to be — and also how to get there. This is where we believe it provides a lot of benefits.

Gardner: Gail, similar question, for those organizations, both public and private sector, that do this well, that embrace HERA, what should they hope to get in return?

Kalbfleisch: I completely agree with what Oliver said. To add, one of the benefits that you get from using enterprise architecture (EA) is the chance to gain a perspective from outside your own narrow silos. The HERA should be able to help a person see other areas that they have to take into consideration, that maybe they wouldn’t have before.

Another value is engaging with other people who are doing similar work, who may have either learned lessons or are doing similar things at the same time. So that’s one of the ways I see it helping us do our jobs better, quicker, and faster.

Also, it can help us identify where we have gaps and where we need to focus our efforts. We can focus our limited resources in much better ways on specific issues — where we can accomplish what we are aiming for — and achieve that boundaryless information flow.

Reaching your goals

We show them how they can follow a roadmap to accomplish their self-defined goals more effectively.

Lee: Essentially, the HERA will provide a framework that enables companies to leverage IT to achieve their goals. The wonderful thing about it is that we are not telling organizations what their goals should be. We show them how they can follow a roadmap to accomplish their self-defined goals more effectively. Often this involves communicating the big picture, as Gail said, to those who are in siloed positions within their organizations.

There is an old saying: “What you see depends on where you sit.” The HERA helps stakeholders gain this perspective by helping key players understand the relationships, for example, between business processes and engineering. So whether a stakeholder’s interest is increasing patient satisfaction, reducing errors, improving quality, achieving better patient outcomes, or gaining more reimbursement where reimbursement is tied to outcomes — the product and the architecture that we are developing help advance all of these goals.

Gardner: Jason, for those who are intrigued by what you are doing with the HERA, tell us about its trajectory, its evolution, and how that journey unfolds. Where can they learn more or get involved?

Lee: We have only been working on the HERA per se for the last year, although its underpinnings go back 20 years or more. Its trajectory is not toward a single point, but an evolutionary process. We will be producing white papers, as well as products that others can use in a modular fashion to leverage what they already have in their legacy systems.

We encourage anyone out there, particularly in the health system delivery space, to join us. That can be done by contacting me at j.lee@opengroup.org and at www.opengroup.org/healthcare.

It’s an incredible time, a very opportune time, for key players to be involved because we are making very important decisions that lay the foundation for the HERA. We collaborate with key players, and we lay down the tracks from which we will build increasing levels of complexity.

But we start at the top, using non-architectural language to be able to talk to decision-makers, whether they are in the public sector or private sector. So we invite any of these organizations to join us.

Learn from others’ mistakes

Kalbfleisch: My first foray into working with The Open Group was long before I was in the health IT sector. I was with the US Air Force and we were doing very non-health architectural work in conjunction with The Open Group.

The interesting part to me is ensuring boundaryless information flow in a manner that gets information where it needs to go while controlling who has access to it. How does it get from place to place across distinct mission areas, or distinct business areas, where the information is not used or stored in the same way? Such dissonance between business areas is not a problem isolated to healthcare; it cuts across all business areas.

We don’t have to make the same mistakes. We can take what people have learned and extend it much further.

That was exciting. I was able to take awareness of The Open Group from a previous life, so to speak, and engage with them to get involved in the Healthcare Forum from my current position.

A lot of the technical problems that we have in exchanging information, regardless of industry, have been addressed by other people and have already been worked on. By building on what organizations have already worked out over the past 20 years, we can leverage that work within the healthcare industry. We don’t have to make the same mistakes that were made before. We can take what people have learned and extend it much further. We can do that best by working together in areas like The Open Group HCF.

Kipf: On that evolutionary approach, I also see this as a long-term journey. Yes, there will be releases when we have a specification, and there will be guidelines. But it’s important that this is an engagement, and that we have ongoing collaboration with customers in the future, even after it is released. The coming together of a team is what really makes a great reference architecture — a team that keeps the architecture at a high level.

We can also develop distinct flavors of the specification, with much more detail. Those implementation architectures then become spin-offs of reference architectures such as the HERA.

Lee: I can give some concrete examples, to bookend the kinds of problems that can be addressed using the HERA. At the micro end, a hospital can use the HERA structure to implement self-service check-in for patients who would like to bypass the usual process and check themselves in. This has a number of positive value outcomes for the hospital in terms of staffing, patient satisfaction, and cost savings.

At the other extreme, a large hospital system in Philadelphia or Stuttgart or Oslo or in India finds itself with patients appearing in the emergency room or in ambulatory settings unaffiliated with that particular hospital. Rather than have that patient arrive as a blank sheet of paper, and redo all the tests that had been done before, the HERA will help these healthcare organizations figure out how to exchange data in a meaningful way. So the information can flow digitally and securely, and it means the same thing to those who send it as it does to those who receive it — and everything is patient-focused, patient-centric.

Gardner: Oliver, we have seen with other Open Group standards and reference architectures, a certification process often comes to bear that helps people be recognized for being adept and properly trained. Do you expect to have a certification process with HERA at some point?

Certifiable enterprise expertise

Kipf: Yes, the more we mature with the HERA, along with the defined guidelines, the specifications, and the HERA model, the more there will be a need and demand in the marketplace for health enterprise architecture-focused professionals, and for consulting services that can apply the HERA.

And that’s a perfect place for certification. It helps make sure that the quality of the workforce is strong, whether it’s internal staff or a professional services role, and that they can demonstrate compliance with the HERA.

Gardner: Clearly, this has applicability to healthcare payer organizations, provider organizations, government agencies, and the vendors who supply pharmaceuticals or medical instruments. There is a great deal of process benefit when done properly, so that enterprise architects could eventually become certified.

My question then is how do we take the HERA, with such a potential for being beneficial across the board, and make it well-known? Jason, how do we get the word out? How can people who are listening to this or reading this, help with that?

Spread the word, around the world

Lee: It’s a question that has to be considered every time we meet. I think the answer is straightforward. First, we build a product [the HERA] that has clear value for stakeholders in the healthcare system. That’s the internal part.

Second—and often, simultaneously—we develop a very important marketing/collaboration/socialization capability. That’s the external part. I’ve worked in healthcare for more than 30 years, and whether it’s public or private sector decision-making, there are many stakeholders, and everybody’s focused on the same few things: improving value, enhancing quality, expanding access, and providing security.

All companies must plan, build, operate and improve.

We will continue developing relationships with key players to ensure them that what they’re doing is key to the HERA. At the broadest level, all companies must plan, build, operate and improve.

There are immense opportunities for business development. There are innumerable ways to use the HERA to help health enterprise systems operate efficiently and effectively. There are opportunities to demonstrate to key movers and shakers in the healthcare system how what we’re doing integrates with what they’re doing. This will maximize the uptake of the HERA and minimize the chances it sits on a shelf after it’s been developed.

Gardner: Oliver, there are also a variety of regional conferences and events around the world. Some of them are from The Open Group. How important is it for people to be aware of these events, maybe by taking part virtually online or in person? Tell us about the face-time opportunities, if you will, of these events, and how that can foster awareness and improvement of HERA uptake.

Kipf: We began with the last Open Group event — I was in Berlin, presenting the HERA. As we see more development, more maturity, we can then show more. The uptake will be there, and we also need to include things like cybersecurity and risk and compliance. So we can bring in a lot of what we have been doing in various other initiatives within The Open Group. We can show how it can be a fusion, and make this something that is really of value.

I am confident that through face-to-face events, such as The Open Group events, we can further spread the message.

Lee: And a real shout-out to Gail and Oliver who have been critical in making introductions and helping to share The Open Group Healthcare Forum’s work broadly. The most recent example is the 2016 HIMSS conference, a meeting that brings together more than 40,000 people every year. There is a federal interoperability showcase there, and we have been able to introduce and discuss our HERA work there.

We’ve collaborated with the Office of the National Coordinator, where the Federal Health Architecture sits, with the US Veterans Administration, with the US Department of Defense, and with the Centers for Medicare & Medicaid Services (CMS). This is all US-centered, but there are lots of opportunities globally — not just to spread the word in public forums and venues, but also to go to those key players who are moving the industry forward, and in some cases convince them that enterprise architecture does provide that structure, that template, that can help them achieve their goals.

Future forecast

Gardner: I’m afraid we are almost out of time. Gail, perhaps a look into the crystal ball. What do you expect and hope to see in the next couple of years from the improvements that initiatives like the HERA at The Open Group Healthcare Forum can provide?

Kalbfleisch: What I would like to see happen in the next couple of years, as it relates to the HERA, is the ability to have a place where we can go from anywhere and get a glimpse of the landscape. Right now, it’s hard for someone in the US to see the great work that Oliver is doing, or what the people in Norway or Australia are doing.

Reference architecture is great to have, but it has no power until it’s used.

It’s really important that we have opportunities to communicate as large groups, but also the one-on-one. Yet when we are not able to communicate personally, I would like to see a resource or a tool where people can go and get the information they need on the HERA on their own time, or as they have a question. Reference architecture is great to have, but it has no power until it’s used.

My hope for the future is for the HERA to be used by decision-makers, developers, and even patients. So when an organization such as a hospital wants to develop a new electronic health record (EHR) system, it has a place to go and get started, without having to contact Jason or wait for a vendor to come along and tell it how to solve the problem. That would be my hope for the future.

Lee: You can think of the HERA as a soup with three key ingredients. First is the involvement and commitment of very bright people and top-notch organizations. Second, we leverage the deep experience and products of other forums of The Open Group. Third, we build on external relationships. Together, these three things will help make the HERA successful as a certifiable product that people can use to get their work done and do better.

Gardner: Jason, perhaps you could also tee-up the next Open Group event in Amsterdam. Can you tell us more about that and how to get involved?

Lee: We are very excited about our next event in Amsterdam in October. You can go to www.opengroup.org and look under Events, read about the agendas, and sign up there. We will have involvement from experts from the US, UK, Germany, Australia, Norway, and this is just in the Healthcare Forum!

The Open Group membership will be giving papers, having discussions, moving the ball forward. It will be a very productive and fun time and we are looking forward to it. Again, anyone who has a question or is interested in joining the Healthcare Forum can please send me, Jason Lee, an email at j.lee@opengroup.org.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: The Open Group.


Hybrid cloud ecosystem readies for impact from arrival of Microsoft Azure Stack

The next BriefingsDirect cloud deployment strategies interview explores how hybrid cloud ecosystem players such as PwC and Hewlett Packard Enterprise (HPE) are gearing up to support the Microsoft Azure Stack private-public cloud continuum.

We’ll now learn what enterprises can do to make the most of hybrid cloud models and be ready specifically for Microsoft’s solutions for balancing the boundaries between public and private cloud deployments.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to explore the latest approaches for successful hybrid IT, we’re joined by Rohit “Ro” Antao, a Partner at PwC, and Ken Won, Director of Cloud Solutions Marketing at HPE. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Ro, what are the trends driving adoption of hybrid cloud models, specifically Microsoft Azure Stack? Why are people interested in doing this?

Antao: What we have observed in the last 18 months is that a lot of our clients are now aggressively pushing toward the public cloud. In that journey there are a couple of things that are becoming really loud and clear to them.

Journey to the cloud

Number one is that there will always be some sort of a private data center footprint. There are certain workloads that are not appropriate for the public cloud; there are certain workloads that perform better in the private data center. And so the first acknowledgment is that there is going to be that private, as well as public, side of how they deliver IT services.

Now, that being said, they have to begin building the capabilities and the mechanisms to be able to manage these different environments seamlessly. As they go down this path, that’s where we are seeing a lot of traction and focus.

The other trend in conjunction with that is in the public cloud space where we see a lot of traction around Azure. They have come on strong. They have been aggressively going after the public cloud market. Being able to have that seamless environment between private and public with Azure Stack is what’s driving a lot of the demand.

Won: We at HPE are seeing that very similarly, as well. We call that “hybrid IT,” and we talk about how customers need to find the right mix of private and public — and managed services — to fit their businesses. They may put some services in a public cloud, some services in a private cloud, and some in a managed cloud. Depending on their company strategy, they need to figure out which workloads go where.


We have these conversations with many of our customers about how do you determine the right placement for these different workloads — taking into account things like security, performance, compliance, and cost — and helping them evaluate this hybrid IT environment that they now need to manage.

Gardner: Ro, a lot of what people have used public cloud for is greenfield apps — beginning in the cloud, developing in the cloud, deploying in the cloud — but there’s also an interest in many enterprises about legacy applications and datasets. Is Azure Stack and hybrid cloud an opportunity for them to rethink where their older apps and data should reside?

Antao: Absolutely. When you look at the broader market, a lot of these businesses are competing today in very dynamic markets. When companies today think about strategy, it’s no longer the 5- and 10-year strategy. They are thinking about how to be relevant in the market this year, today, this quarter. That requires a lot of flexibility in their business model; that requires a lot of variability in their cost structure.

When you look at it from that viewpoint, a lot of our clients look at the public cloud as more than a question of whether an app is suitable for it. They are also seeking cost advantages — variability in their cost structure that they can take advantage of. And that’s where we see them looking at the public cloud beyond just which applications are suitable for it.

Public and/or private power

Won: We help a lot of companies think about where the best place is for their traditional apps. Often they don’t want to restructure them, they don’t want to rewrite them, because they are already an investment; they don’t want to spend a lot of time refactoring them.

If you look at these traditional applications, a lot of times when they are dealing with data — especially sensitive data — those are better placed in a private cloud.

Antao: One of the great things about Microsoft Azure Stack is it gives the data center that public cloud experience — where developers have the similar experience as they would in a public cloud. The only difference is that you are now controlling the costs as well. So that’s another big advantage we see.


Won: Yeah, absolutely. It’s giving developers the experience of a public cloud, while from the IT standpoint also providing the compliance, the control, and the security of a private cloud. Allowing applications to be deployed in either a public or private cloud — depending on their requirements — is incredibly powerful. There’s no other environment out there that provides API compatibility between private and public cloud deployments the way Azure Stack does.

Gardner: Clearly Microsoft is interested in recognizing that skill sets, platform affinity, and processes are all really important. If they are able to provide a private cloud and public cloud experience that’s common to the IT operators that are used to using Microsoft platforms and frameworks — that’s a boon. It’s also important for enterprises to be able to continue with the skills they have.

Ro, is such a commonality of skills and processes not top of mind for many organizations? 

Antao: Absolutely! I think there is always the risk when you have different environments having that “swivel chair” approach. You have a certain set of skills and processes for your private data center. Then you now have a certain set of skills and processes to manage your public cloud footprint.

One of the big problems and challenges that this solves is being able to drive more of that commonality across consistent sets of processes. You can have a similar talent pool, and you have similar kinds of training and awareness that you are trying to drive within the organization — because you now can have similar stacks on both ends.

The biggest challenge to adopting new concepts is not the technology; it’s really the people and process issues.

Won: That’s a great point. We know that the biggest challenge to adopting new concepts is not the technology; it’s really the people and process issues. So if you can address that, which is what Azure Stack does, it makes it so much easier for enterprises to bring on new capabilities, because they are leveraging the experience that they already have using Azure public cloud.

Gardner: Many IT organizations are familiar with Microsoft Azure Stack. It’s been in technical preview for quite some time. As it hits the market in September 2017, in seeking that total-solution, people-and-process approach, what is PwC bringing to the table to help organizations get the best value and advantage out of Azure Stack?

Hybrid: a tectonic IT shift

Antao: Ken made the point earlier in this discussion about hybrid IT. When you look at IT pivoting to more of the hybrid delivery mode, it’s a tectonic shift in IT’s operating model, in their architecture, their culture, in their roles and responsibilities – in the fundamental value proposition of IT to the enterprise.

When we partner with HPE in helping organizations drive through this transformation, we work with HPE in rethinking the operating model, in understanding the new kinds of roles and skills, of being able to apply these changes in the context of the business drivers that are leading it. That’s one of the typical ways that we work with HPE in this space.

Won: It’s a great complement: HPE understands the technology and the infrastructure, and that combines with the business-process knowledge and the higher-level strategic thinking that PwC brings. It’s a great partnership.

Gardner: Attaining hybrid IT efficiency and doing it with security and control is not something you buy off the shelf. It’s not a license. It seems to me that an ecosystem is essential. But how do IT organizations manage that ecosystem? Are there ways that you all are working together, HPE in this case with PwC, and with Microsoft to make that consumption of an ecosystem solution much more attainable?

Won: One of the things that we are doing is working with Microsoft on their partnerships so that we can look at all these companies that have their offerings running on Azure public cloud and ensuring that those are all available and supported in Azure Stack, as well as running in the data center.

We are spending a lot of time with Microsoft on their ecosystem to make sure those services, those companies, or those products are available on Azure Stack — as well fully supported on Azure Stack that’s running on HPE gear.

Gardner: They might not be concerned about the hardware, but they are concerned about the total value — and the total solution. If the hardware players aren’t collaborating well with the service providers and with the cloud providers — then that’s not going to work.

Quick collaboration is key

Won: Exactly! I think of it like a washing machine. No one wants to own a washing machine, but everyone wants clean clothes. So it’s the necessary evil, it’s super important, but you just as soon not have to do it.

Gardner: I just don’t know what to take to the dry cleaner or not, right?

Won: Yeah, there you go!


Antao: From a consulting standpoint, clients no longer have the appetite for these five- to six-year transformations. Their businesses are changing at a much faster pace. One of the ways that we are working at the ecosystem level — again, much like the deep and longstanding relationship we have had with HPE — is that we have also been working with Microsoft in the same context.

And in a three-way fashion, we have focused on defining accelerators for deploying these solutions — codifying a lot of our experiences, the lessons learned, and a deep understanding of both the public and the private stack to accelerate value for our customers, because that’s what they expect today.

Won: One of the things, Ro, that you brought up, and I think is very relevant here, is these three-way relationships. Customers don’t want to have to deal with all of these different vendors, these different pieces of stack or different aspects of the value chain. They instead expect us as vendors to be working together. So HPE, PwC, Microsoft are all working together to make it easier for the customers to ultimately deliver the services they need to drive their business.

Low risk, all reward

Gardner: So speed-to-value, super important; common solution cooperation and collaboration among the partners, super important. But another part of this is doing it at low risk, because no one wants to be in a transition from public to private, or across the full hybrid spectrum, and then suffer performance issues, lost data, and unhappy end customers.

PwC has been focused on governance, risk management and compliance (GRC) in trying to bring about better end-to-end hybrid IT control. What is it that you bring to this particular problem that is unique? It seems that each enterprise is doing this anew, but you have done it for a lot of others and experience can be very powerful that way.

Antao: Absolutely! The move to hybrid IT is a fundamental shift in governance models, in how you address certain risks, the emergence of new risks, and new security challenges. A lot of what we have been doing in this space has been helping IT organizations accelerate the paradigm shift they have to make.

In that context, we have been working very closely with HPE to understand what the requirements of that new world are going to look like. We can build and bring to the table solutions that support those needs.

Won: It’s absolutely critical — this experience that PwC has is huge. We always come up with new technologies; every few years you have something new. But it’s that experience that PwC has to bring to the table that’s incredibly helpful to our customer base.

There’s this whole journey getting to that hybrid IT state and having the governing mechanisms around it.

Antao: So often when we think of governance, it's more in terms of the steady state and the runtime. But there's this whole journey from where we are today to that hybrid IT state — and having the governing mechanisms around it — so that they can do it in a way that doesn't expose their business to too much risk. There is always risk involved in these large-scale transformations, but how do you manage and govern that process of getting to that hybrid IT state? That's where we also spend a lot of time as we help clients through this transformation.

Gardner: For IT shops that are heavily Microsoft-focused, is there a way for them to master Azure Stack, the people, process and technology that will then be an accelerant for them to go to a broader hybrid IT capability? I’m thinking of multi-cloud, and even being able to develop with DevOps and SecOps across a multiple cloud continuum as a core competency.

Is Azure Stack for many companies a stepping-stone to a wider hybrid capability, Ro?

Managed multi-cloud continuum

Antao: Yes. And I think in many cases that's inevitable. When you look at most organizations today, generally speaking, they have at least two public cloud providers that they use. They consume several software-as-a-service (SaaS) applications. They have multiple data center locations. The role of IT now is to become the broker and integrator of multi-cloud environments, among and between on-premises and the public cloud. That's where we see them evolve their management practices, their processes, and their talent — to be able to abstract these different pools and focus on the business.


Won: We see that as well at HPE as this whole multi-cloud strategy is being implemented. More and more, the challenge that organizations are having is that they have these multiple clouds, each of which is managed by a different team or via different technologies with different processes.

So there is huge value to the customer in bringing these together — for example, bringing Azure Stack and Azure [public cloud] together. They may have multiple Azure Stack environments, perhaps in different data centers, in different countries, in different locales. We need to help them align their processes to run much more efficiently and effectively. And we need to engage with them not only from an IT standpoint, but also from the developer standpoint, so they can use common services to develop an application and deploy it in multiple places in the same way.

Antao: What’s making this whole environment even more complex these days is that a couple of years ago, when we talked about multi-cloud, it was really the capability to either deploy in one public cloud versus another.

Within a given business workflow, how do you leverage different clouds, given their unique strengths and weaknesses?

A few years later, it evolved into being able to port workloads seamlessly from one cloud to another. Today, the multi-cloud strategy a lot of our clients are exploring is this: Within a given business workflow, depending on the unique characteristics of different parts of that business process, how do you leverage different clouds given their unique strengths and weaknesses?

There might be portions of a business process that, to your point earlier, Ken, are highly confidential, where you are dealing with a lot of compliance requirements; those you may want to consume from an internal private cloud. There are other parts where you are looking for immense scale, to deal with the peaks when that particular business process gets hit hard; that is where the public cloud has a strong track record. In a third case, it might be enterprise-grade workloads.

So that’s where we are seeing multi-cloud evolve, into where in one business process could have multiple sources, and so how does an IT organization manage that in a seamless way?

Gardner: It certainly seems inevitable that the choice of such a cloud continuum configuration model will vary and change. It could be one definition in one country or region, another definition in another country and region. It could even be contextual, such as by the type of end user who’s banging on the app. As the Internet of Things (IoT) kicks in, we might be thinking about not just individuals, but machine-to-machine (M2M), app-to-app types of interactions.

So quite a bit of complexity, but dealt with in such a way that the payoff could be monumental. If you do hybrid cloud and hybrid IT well, what could that mean for your business in three to five years, Ro?

Nimble, quick and cost-efficient

Antao: Clearly there is the agility aspect, of being able to seamlessly leverage these different clouds to allow IT organizations to be much more nimble in how they respond to the business.

From a cost standpoint, there is a great example from a large-scale migration to the public cloud that we are currently doing. The IT organization found that it had consumed close to 70 percent of its migration budget for only 30 percent of the progress made.

And a large part of that was because the minute your workloads are sitting on a public cloud, the clock is ticking — even for development workloads that are not yet technically providing value. Allowing for a hybrid environment — where you do a lot of that development, get the application almost production-ready, and then move it to the public cloud when the time is right to drive value from it — yields huge cost savings right there.

Clients that have managed to balance those two paradigms are the ones who are also seeing a lot of economic efficiencies.
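Back-of-the-envelope arithmetic shows why the develop-privately, run-publicly pattern Antao describes saves money. All figures below are invented for illustration:

```python
# Compare paying public-cloud rates through a long development phase
# versus developing on fixed-cost private capacity and moving to the
# public cloud only at go-live. Rates and durations are hypothetical.

dev_months, run_months = 9, 12
public_rate = 10_000    # $/month for the workload in the public cloud
private_rate = 4_000    # $/month of amortized private-cloud capacity

all_public = public_rate * (dev_months + run_months)          # clock ticks from day one
hybrid = private_rate * dev_months + public_rate * run_months # pay public rates only at go-live
savings = all_public - hybrid
```

With these made-up numbers the hybrid path saves $54,000 — the "clock is ticking" cost of idling development workloads on public infrastructure.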

Won: The most important thing that people see value in is that agility. The ability to respond much faster to competitive actions or to new changes in the market, the ability to bring applications out faster, to be able to update applications in months — or sometimes even weeks — rather than the two years that it used to take.

It’s that agility to allow people to move faster and to shift their capabilities so much quicker than they have ever been able to do – that is the top reason why we’re seeing people moving to this hybrid model. The cost factor is also really critical as they look at whether they are doing CAPEX or OPEX and private cloud or public cloud.

One of the things we have been doing at HPE through our Flexible Capacity program is enabling customers who are acquiring hardware to run these private clouds to pay for it on a pay-as-you-go basis. This allows them to better align cost to usage — taking the whole pay-as-you-go concept we see in the public cloud and bringing it into a private cloud environment.
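A consumption program of the kind Won describes can be sketched as metered billing against a reserved minimum. The unit rate and commitment below are hypothetical, not actual Flexible Capacity terms:

```python
# Pay-per-use billing for on-premises capacity: the customer commits to
# a reserved minimum and is billed on metered usage above it, instead of
# buying the hardware up front. All rates here are invented.

def monthly_bill(used_units, reserved_units=100, unit_rate=2.5):
    """Charge for actual usage, but never below the reserved commitment."""
    billable = max(used_units, reserved_units)
    return billable * unit_rate

# The bill tracks consumption month to month rather than a fixed CAPEX outlay:
bills = [monthly_bill(u) for u in (80, 120, 150)]
```

The reserved floor is what distinguishes this from pure public-cloud metering: the provider keeps capacity on the customer's floor, so some commitment is priced in.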


Antao: That's a great point. From a cost standpoint, there is an efficiency discussion. But in today's world we are also depending a lot more on edge computing. I was talking to the CIO of a large park the other day, and his comment to me was that, yes, they would love to use the public cloud, but they cannot afford any kind of latency or disruption of services — because with thousands of visitors and guests in his park, there is too much dependency on the technology.

And so part of it is also a revenue-impact discussion: using the public cloud in a way that allows you to manage some of those risks, by keeping the analytical power and computing power you need closer to the edge — closer to your internal systems.

Gardner: Microsoft Azure Stack is reinforcing the power and capability of hybrid cloud models, but Azure Stack is not going to be the same for each individual enterprise. How they differentiate, how they use and take advantage of a hybrid continuum will give them competitive advantages and give them a one-up in terms of skills.

It seems to me that the continuum of Azure Stack, of a hybrid cloud, is super-important. But how your organization specifically takes advantage of that is going to be the key differentiator. And that’s where an ecosystem solutions approach can be a huge benefit.

Let’s look at what comes next. What might we be talking about a year from now when we think about Microsoft Azure Stack in the market and the impact of hybrid cloud on businesses, Ken?

Look at clouds from both sides now

You will see that as a break in the boundary of private cloud versus public cloud, so think of it as a continuum.

Won: You will see organizations shifting from a world of using multiple clouds, with different applications or services on each cloud, to an environment where individual services are based on multiple clouds. With the new cloud-native applications, you'll be running different aspects of those services in different locations based on the requirements of each particular microservice.

So a service may be partially running in Azure, part of it may be running in Azure Stack. You will certainly see that as a kind of break in the boundary of private cloud versus public cloud, and so think of it as a continuum, if you will, of different environments able to support whatever applications they need.

Gardner: Ro, as people get more into the weeds with hybrid cloud, maybe using Azure Stack, how will the market adjust?

Antao: I completely agree with Ken in terms of how organizations are going to evolve their architecture. At PwC we have this term called the Configurable Enterprise, which essentially focuses on how the IT organization consumes services from all of these different sources to be able to ultimately solve business problems.

To that point, the market trend we see is in the hybrid IT space — the adoption of that continuum. One of the big pressures IT organizations face is how they are going to evolve their operating model to be successful in this new world. CIOs, especially the forward-thinking ones, are starting to ask that question. We are going to see a lot more pressure in that space in the next 12 months.

Gardner: These are, after all, still early days of hybrid cloud and hybrid IT. Before we sign off, how should organizations that might not yet be deep into this prepare themselves? Are there operations, culture, and skills considerations? How might you put yourself in a good position to take advantage of this when you do take the plunge?

Plan to succeed with IT on board

Won: One of the things we recommend is a workshop where we sit down with the customer and think through their company strategy. What is their IT strategy? How does that relate or map to the infrastructure that they need in order to be successful?

This makes the connection between the value they want to offer as a company, as a business, to the infrastructure. It puts a plan in place so that they can see that direct linkage. That workshop is one of the things that we help a lot of customers with.

We also have innovation centers that we’ve built with Microsoft where customers can come in and experience Azure Stack firsthand. They can see the latest versions of Azure Stack, they can see the hardware, and they can meet with experts. We bring in partners such as PwC to have a conversation in these innovation centers with experts.

Gardner: Ro, how to get ready when you want to take the plunge and make the best and most of it?


Antao: We are at a stage right now where these transformations can no longer be done to the IT organization; the IT organization has to come along on this journey. What we have seen work, especially in the early stages, is running pilot projects — involving the developers, the infrastructure architects, and the operations folks in pilot workloads — and learning how to manage this new model going forward.

You want to drive that from a top-down perspective, tying it to where it adds the most value to the business. From a grassroots perspective, you also need to create champions in the trenches who are going to manage this new environment. Combining those two efforts has been very successful for organizations as they embark on this journey.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.



Advanced IoT systems provide analysis catalyst for the petrochemical refinery of the future

The next BriefingsDirect Voice of the Customer Internet-of-Things (IoT) technology trends interview explores how IT combines with IoT to help create the refinery of the future.

We’ll now learn how a leading-edge petrochemical company in Texas is rethinking data gathering and analysis to foster safer environments and greater overall efficiency.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. 

To help us define the best of the refinery of the future vision is Doug Smith, CEO of Texmark Chemicals in Galena Park, Texas, and JR Fuller, Worldwide Business Development Manager for Edgeline IoT at Hewlett Packard Enterprise (HPE). The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What are the top trends driving this need for a new refinery of the future? Doug, why aren’t the refinery practices of the past good enough?

Smith: First of all, I want to talk about people. People are the catalysts who make this refinery of the future possible. At Texmark Chemicals, we spent the last 20 years making capital investments in our infrastructure, in our physical plant, and in the last four years we have put together a roadmap for our IT needs.

Through our introduction to HPE, we have entered into a partnership that is not just a client-customer relationship. It’s more than that, and it allows us to work together to discover IoT solutions that we can bring to bear on our IT challenges at Texmark. So, we are on the voyage of discovery together — and we are sailing out to sea. It’s going great.

Gardner: JR, it's always impressive when a new technology trend aids and abets a traditional business, and then that business can show through innovation what should come next in the technology. How is that back and forth working? Where should we expect IoT to go in terms of business benefits in the not-too-distant future?

Fuller: One of the powerful things about the partnership and relationship we have is that we each respect and understand each other's "swim lanes." I'm not trying to be a chemical company. I'm trying to understand what they do and how I can help them.

JR Fuller

And they're not trying to become an IT or IoT company. Their job is to make chemicals; our job is to figure out the IT. We're seeing in Texmark the transformation from an Old World economy-type business to a New World economy-type business.

This is huge, this is transformational. As Doug said, they’ve made huge investments in their physical assets and what we call Operational Technology (OT). They have done that for the past 20 years. The people they have at Texmark who are using these assets are phenomenal. They possess decades of experience.

Learn From Customers Who

Realize the IoT Advantage

 Read More

Yet IoT is really new for them. How to leverage that? They have said, “You know what? We squeezed as much as we can out of OT technology, out of our people, and our processes. Now, let’s see what else is out there.”

And through introductions to us and our ecosystem partners, we’ve been able to show them how we can help squeeze even more out of those OT assets using this new technology. So, it’s really exciting.

Gardner: Doug, let’s level-set this a little bit for our audience. They might not all be familiar with the refinery business, or even the petrochemical industry. You’re in the process of processing. You’re making one material into another and you’re doing that in bulk, and you need to do it on a just-in-time basis, given the demands of supply chains these days.

You need to make your business processes and your IT network mesh, to reach every corner. How does a wireless network become an enabler for your requirements?

The heart of IT 

Smith: In a large plant facility, we have many different pieces of equipment. One of them is the pump — by analogy, the heart of the process facility.

To your question regarding the wireless network: if we can put sensors on a pump and tie it into a mesh network, there are incredible cost savings for us. The physical wiring of a pump runs anywhere from $3,000 to $5,000 per pump, so we see a savings in that.

Doug Smith

Being able to have the information wirelessly right away — that gives us knowledge immediately that we wouldn't have otherwise. We have workers and millwrights at the plant who physically go out and inspect every single pump in our plant, and we have 133 pumps. If we can utilize our sensors through the wireless network, our millwrights can concentrate on the pumps that they know are having problems.

To have the information wirelessly right away — that gives us knowledge immediately that we wouldn’t have otherwise.
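The wiring figures Smith cites imply a sizable one-time saving across the plant. A quick check of the arithmetic, using only the numbers from the conversation:

```python
# Back-of-the-envelope check: hard-wiring runs $3,000-$5,000 per pump,
# and the plant has 133 pumps, so a wireless mesh avoids roughly this
# much one-time wiring cost across the facility.

pumps = 133
low_per_pump, high_per_pump = 3_000, 5_000
savings_range = (pumps * low_per_pump, pumps * high_per_pump)
```

That is roughly $400,000 to $665,000 in avoided wiring alone, before counting the monitoring benefits.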

Gardner: You're also able to track those individuals, those workers, so if there's a need to communicate, to locate them, or to make sure they're hearing the policy, that's another big part of IoT and people coming together.

Safety is good business

Smith: The tracking of workers is more of a safety issue — and safety is critical, absolutely critical in a petrochemical facility. We must account for all our people and know where they are in the event of any type of emergency situation.

Gardner: We have the sensors, we can link things up, we can begin to analyze devices and bring that data analytics to the edge, perhaps within a mini data center facility, something that’s ruggedized and tough and able to handle a plant environment.

Given this scenario, JR, what sorts of efficiencies are organizations like Texmark seeing? I know in some businesses, they talk about double digit increases, but in a mature industry, how does this all translate into dollars?

Fuller: We talk about the power of one percent. A one percent improvement at one of the major companies means billions of dollars saved. A one percent change is huge. And yes, at Texmark we're able to see larger, percentage-wise efficiencies, because they're actually very nimble.

It’s hard to turn a big titanic ship, but the smaller boat is actually much better at it. We’re able to do things at Texmark that we are not able to do at other places, but we’re then able to create that blueprint of how they do it.

You’re absolutely right, doing edge computing, with our HPE Edgeline products, and gathering the micro-data from the extra compute power we have installed, provides a lot of opportunities for us to go into the predictive part of this. It’s really where you see the new efficiencies.

Recently I was with the engineers out there, and we’re walking through the facility, and they’re showing us all the equipment that we’re looking at sensoring up, and adding all these analytics. I noticed something on one of the pumps. I’ve been around pumps, I know pumps very well.

I saw this thing, and I said, “What is that?”

“So that’s a filter,” they said.

I said, “What happens if the filter gets clogged?”

“It shuts down the whole pump,” they said.

“What happens if you lose this pump?” I asked.

“We lose the whole chemical process,” they explained.

“Okay, are there sensors on this filter?”

“No, there are only sensors on the pump,” they said.

There weren’t any sensors on the filter. Now, that’s just something that we haven’t thought of, right? But again, I’m not a chemical guy. So I can ask questions that maybe they didn’t ask before.

So I said, “How do you solve this problem today?”

“Well, we have a scheduled maintenance plan,” they said.

They don’t have a problem, but based on the scheduled maintenance plan that filter gets changed whether it needs to or not. It just gets changed on a regular basis. Using IoT technology, we can tell them exactly when to change that filter. Therefore IoT saves on the cost of the filter and the cost of the manpower — and those types of potential efficiencies and savings are just one small example of the things that we’re trying to accomplish.
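The filter story lends itself to a minimal condition-based check — for instance, watching the pressure drop across the filter rather than the calendar. The sensor readings and threshold below are hypothetical, sketched only to show the idea:

```python
# Condition-based filter replacement: a differential-pressure reading
# across the filter signals clogging, so the millwrights change it when
# it actually needs changing. Threshold and readings are invented.

CLOG_THRESHOLD_PSI = 12.0  # assumed pressure drop indicating a clogging filter

def needs_replacement(upstream_psi, downstream_psi):
    """Flag the filter when the pressure drop across it exceeds the threshold."""
    return (upstream_psi - downstream_psi) > CLOG_THRESHOLD_PSI

# A healthy filter passes; a clogging one is flagged before the pump starves.
readings = [(45.0, 41.0), (45.0, 30.5)]
flags = [needs_replacement(u, d) for u, d in readings]
```

The scheduled-maintenance approach replaces the filter on every cycle; this check replaces it only when the second kind of reading appears.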

Continuous functionality

Smith: It points to the uniqueness of the people-level relationship between the HPE team, our partners, and the Texmark team. We are able to have these conversations to identify things that we haven’t even thought of before. I could give you 25 examples of things just like this, where we say, “Oh, wow, I hadn’t thought about that.” And yet it makes people safer and it all becomes more efficient.


Gardner: You don’t know until you have that network in place and the data analytics to utilize what the potential use-cases can be. The name of the game is utilization efficiency, but also continuous operations.

How do you increase your likelihood or reduce the risk of disruption and enhance your continuous operations using these analytics?

Smith: To answer, I’m going to use the example of toll processing. Toll processing is when we would have a customer come to us and ask us to run a process on the equipment that we have at Texmark.

Normally, they would give us a recipe, and we would process a material. We take samples throughout the process, the production, and deliver a finished product to them. With this new level of analytics, with the sensoring of all these components in the refinery of the future vision, we can provide a value-add to the customers by giving them more data than they could ever want. We can document and verify the manufacture and production of the particular chemical that we’re toll processing for them.

Fuller: To add to that, as part of the process, sometimes you may have to do multiple runs when you’re tolling, because of your feed stock and the way it works.

By using advanced analytics and the predictive benefits of having all that data, we’re looking to gain efficiencies.

By using advanced analytics, and some of the predictive benefits of having all of that data available, we're looking to gain efficiencies that cut down the number of additional runs needed. If you take a process that would have taken three runs and knock that down to two, that's roughly a one-third decrease in total cost and expense. It also allows them to produce more product and get it out to people a lot faster.

Smith: Exactly. Exactly!

Gardner: Of course, the more insight that you can obtain from a pump, and the more resulting data analysis, that gives you insight into the larger processes. You can extend that data and information back into your supply chain. So there’s no guesswork. There’s no gap. You have complete visibility — and that’s a big plus when it comes to reducing risk in any large, complex, multi-supplier undertaking.

Beyond data gathering, data sharing

Smith: It goes back to relationships at Texmark. We have relationships with our neighbors that are unique in the industry, and so we would be able to share the data that we have.

Fuller: With suppliers.

Smith: Exactly, with suppliers and vendors. It’s transformational.

Gardner: So you're extending a common, standard, industry-accepted platform approach locally into an extended process benefit. And you can share that because you are using common, IT-industry-wide infrastructure from HPE.

Fuller: And that's very important. We have a three-phase project, and we've just finished the first two phases. Phase one was to put ubiquitous WiFi infrastructure in there, with location-based services and all of the things to enable that. Phase two was to upgrade the compute infrastructure with our Edgeline compute and put our HPE Micro Datacenter in there. So now they have some very robust compute.


With that infrastructure in place, it now allows us to do that third phase, where we’re bringing in additional IoT projects. We will create a data infrastructure with data storage, and application programming interfaces (APIs), and things like that. That will allow us to bring in a specialty video analytic capability that will overlay on top of the physical and logical infrastructure. And it makes it so much easier to integrate all that.

Gardner: You get a chance to customize the apps much better when you have a standard IT architecture underneath that, right?

Trailblazing standards for a new workforce

Smith: Well, exactly. What you are saying, Dana — and it gives me chills when I start thinking about what we're doing at Texmark within our industry — is the setting of standards, blazing a new trail. When we talk to our customers and our suppliers and tell them about this refinery of the future project we're initiating, all other business goes out the window. They want to know more about what we're doing with the IoT — and that's incredibly encouraging.

Gardner: I imagine that there are competitive advantages when you can get out in front and you’re blazing that trail. If you have the experience, the skills of understanding how to leverage an IoT environment, and an edge computing capability, then you’re going to continue to be a step ahead of the competition on many levels: efficiency, safety, ability to customize, and supply chain visibility.

Smith: It surely allows our Texmark team to do their jobs better. I use the example of the millwrights going out and inspecting pumps; they do that every day, and they do it very well. If we can give them tools to focus on what they do best over a lifetime of working with pumps, and to work only on the pumps that they need to, that's a great example.

I am extremely excited about the opportunities at the refinery of the future to bring new workers into the petrochemical industry. We have a large number of people within our industry who are retiring, and they're taking intellectual capital with them. Being able to show young people that we are using advanced technology in new and exciting ways is a real draw, and it will bring more young people into our industry.

Gardner: By empowering that facilities edge and standardizing IT around it, that also gives us an opportunity to think about the other part of this spectrum — and that’s the cloud. There are cloud services and larger data sets that could be brought to bear.

How does the linking of the edge to the cloud have a benefit?

Cloud watching

Fuller: Texmark Chemicals has one location, and they service the world from that location as a global leader in dicyclopentadiene (DCPD) production. So the cloud doesn’t have the same impact as it would for maybe one of the other big oil or big petrochemical companies. But there are ways that we’re going to use the cloud at Texmark and rally around it for safety and security.

Utilizing our location-based services and our compute, if there is an emergency — whether it's at Texmark or a neighbor — we can combine cloud-based information such as weather, humidity, and wind direction, all of which are constantly changing, to provide better-directed responses. That's one way we would be using cloud at Texmark.

When we start talking about the larger industry — and connecting multiple refineries together or upstream, downstream and midstream kinds of assets together with a petrochemical company — cloud becomes critical. And you have to have hybrid infrastructure support.

You don’t want to send all your video to the cloud to get analyzed. You want to do that at the edge. You don’t want to send all of your vibration data to the cloud, you want to do that at the edge. But, yes, you do want to know when a pump fails, or when something happens so you can educate and train and learn and share that information and institutional knowledge throughout the rest of the organization.
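Fuller's edge-versus-cloud split can be sketched as a routing rule: analyze high-volume telemetry locally, and forward only the significant events — a pump failure, say — to the cloud for fleet-wide learning. The event fields and vibration limit below are invented for illustration:

```python
# Edge/cloud routing: raw vibration telemetry stays and is processed at
# the edge; only anomaly events are escalated to the cloud. All names,
# thresholds, and readings here are hypothetical.

def route(reading, vibration_limit=7.0):
    """Analyze at the edge; escalate only anomalies to the cloud."""
    if reading["vibration"] > vibration_limit:
        return ("cloud", {"pump": reading["pump"], "event": "vibration-alarm"})
    return ("edge", None)   # processed and discarded locally

stream = [{"pump": "P-101", "vibration": 3.2},
          {"pump": "P-117", "vibration": 9.8}]
escalated = [msg for dest, msg in map(route, stream) if dest == "cloud"]
```

Only the alarm crosses the wire; the bulk of the data never leaves the plant — which is exactly the bandwidth and latency argument for edge compute.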

Gardner: Before we sign off, let’s take a quick look into the crystal ball. Refinery of the future, five years from now, Doug, where do you see this going?


Smith: The crystal ball is often kind of foggy, but it’s fun to look into it. I had mentioned earlier opportunities for education of a new workforce. Certainly, I am focused on the solutions that IoT brings to efficiencies, safety, and profitability of Texmark as a company. But I am definitely interested in giving people opportunities to find a job to work in a good industry that can be a career.

Gardner: JR, I know HPE has a lot going on with edge computing, making these data centers more efficient, more capable, and more rugged. Where do you see the potential here for IoT capability in refineries of the future?

Future forecast: safe, efficient edge

Fuller: You’re going to see the pace pick up. I have to give kudos to Doug. He is a visionary. Whether he admits that or not, he is actually showing an industry that has been around for many years how to do this and be successful at it. So that’s incredible. In that crystal ball look, that five-year look, he’s going to be recognized as someone who helped really transform this industry from old to new economy.

As far as edge computing goes, our converged Edgeline systems are our first generation; with their hardening, we've created the market space for converged edge systems. Now we're working on generation 2. We're going to get faster, smaller, and cheaper, and become more ubiquitous. I see our IoT infrastructure having a dramatic impact on what we can accomplish and on the workforce in five years. It will be more virtual and augmented, with all of these capabilities. It's going to be a lot safer for people, and it's going to be a lot more efficient.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.



Get ready for the post-cloud world

Just when cloud computing seems inevitable as the dominant force in IT, it’s time to move on because we’re not quite at the end-state of digital transformation. Far from it.

Now’s the time to prepare for the post-cloud world.

It's not that cloud computing is going away. It's that we need to be ready to make the best of IT productivity once cloud in its many forms becomes so pervasive as to be mundane, the place where all great IT innovations must go.



India Smart Cities Mission shows IoT potential for improving quality of life at vast scale

The next BriefingsDirect Voice of the Customer Internet-of-Things (IoT) transformation discussion examines the potential impact and improvement of low-power edge computing benefits on rapidly modernizing cities.

These so-called smart city initiatives are exploiting open, wide area networking (WAN) technologies to make urban life richer in services, safer, and far more responsive to residents' needs. We will now learn how such pervasively connected and data-driven IoT architectures are helping cities in India vastly improve the quality of life there.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to share how communication service providers have become agents of digital urban transformation are VS Shridhar, Senior Vice President and Head of the Internet-of-Things Business Unit at Tata Communications in the Chennai area of India, and Nigel Upton, General Manager of the Universal IoT Platform and Global Connectivity Platform and Communications Solutions Business at Hewlett Packard Enterprise (HPE). The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tell us about India’s Smart Cities mission. What are you up to and how are these new technologies coming to bear on improving urban quality of life?

Shridhar: The government is clearly focusing on Smart Cities as part of its urbanization plan, as it believes Smart Cities will not only improve the quality of living, but also generate employment and take the whole country forward in terms of embracing technology and improving the quality of life.

So with that in mind, the Government of India has launched 100 Smart Cities initiatives. It's quite interesting because each city that aspired to be included had to make a plan and its own strategy for how it was going to evolve, and then execute it, present it, and get selected. There was a proper selection process.

Many of the cities made it, and of course some of them didn’t make it. Interestingly, some of the cities that didn’t make it are developing their own plans.

IoT Solutions for Communications Service Providers and Enterprises from HPE

Learn More

There is a lot of excitement and curiosity, as well as action, in the Smart Cities project. Admittedly, it's a slow process; it's not something you can do in the blink of an eye, and Rome wasn't built in a day, but I definitely see a lot of progress.

Gardner: Nigel, it seems that the timing for this is auspicious, given that there are some foundational technologies that are now available at very low cost compared to the past, and that have much more of a pervasive opportunity to gather information and make a two-way street, if you will, between the edge and central administration. How is the technology evolution synching up with these Smart Cities initiatives in India?

Upton: I am not sure whether it’s timing or luck, or whatever it happens to be, but adoption of the digitization of city infrastructure and services is to some extent driven by economics. While I like to tease my colleagues in India about their sensitivity to price, the truth of the matter is that the economics of digitization — and therefore IoT in smart cities — needs to be at the right price, depending on where it is in the world, and India has some very specific price points to hit. That will drive the rate of adoption.

And so, we're very encouraged that innovation is continuing to drive price points down to the point that mass adoption can take hold, and the benefits can be realized by a much broader spectrum of the population. Working with Tata Communications has really helped HPE understand this, continue to evolve the technology, and be part of the partner ecosystem, because it does take a village to raise an IoT smart city. You need a lot of partners to make this happen, and that combination of partnership, willingness to work together, and driving the economic price points to the point of adoption has been absolutely critical in getting us to where we are today.

Balanced Bandwidth

Gardner: Shridhar, we have some very important optimization opportunities around things like street lighting, waste removal, public safety, and water quality; and, of course, the pervasive need for traffic and parking monitoring and improvement.

How do things like low-power Internet specifications, network gateways, and low-power WANs (LPWANs) create a new technical foundation to improve these services? How do we connect the services and the technology for an improved outcome?

Shridhar: If you look at human interaction to the Internet, we have a lot of technology coming our way. We used to have 2G, that has moved to 3G and to 4G, and that is a lot of bandwidth coming our way. We would like to have a tremendous amount of access and bandwidth speeds and so on, right?

VS Shridhar

Shridhar

So the human interaction and experience is improving vastly, given the networks that are growing. On the machine-to-machine (M2M) side, it's going to be different. Machines don't need oodles of bandwidth. About 80 to 90 percent of all machine interactions are going to be very, very low bandwidth – and, of course, low power. I will come to the low power in a moment, but the bandwidth requirement is going to be very low.

In order to switch off a streetlight, how much bandwidth do you actually require? Or, in order to sense temperature or air quality or water and water quality, how much bandwidth do you actually require?

When you ask these questions, you get an answer that the machines don’t require that much bandwidth. More importantly, when there are millions — or possibly billions — of devices to be deployed in the years to come, how are you going to service a piece of equipment that is telling a streetlight to switch on and switch off if the battery runs out?

Machines are different from humans in terms of interactions. When we deploy machines that require low bandwidth and low power consumption, a battery can enable such a machine to communicate for years.

Aside from heavy video streaming applications or constant security monitoring, where low-bandwidth, low-power technology doesn’t work, the majority of the cases are all about low bandwidth and low power. And these machines can communicate with the quality of service that is required.

When it communicates, the network has to be available. You then need to establish a network that is highly available, consumes very little power, and provides the right amount of bandwidth. Studies show that less than 50 kbps of connectivity should suffice for the majority of these requirements.

Machine interaction also means that you collect all of the data into a platform and act on it. It's not just about sensing it; it's measuring it, analyzing it, and acting on it.

Low-power to the people

So the whole stack consists of more than connectivity alone. It's LPWAN technology that is emerging now and becoming a de facto standard as more and more countries start embracing it.

At Tata Communications we have embraced LPWAN technology from the LoRa Alliance, a consortium of more than 400 partners who have come together to drive standards. We are building this network across India over the next 18 to 24 months. It is available right now in four cities, and by March 2018 it will reach almost 60 cities across India.

Gardner: Nigel, how do you see the opportunity, the market, for a standard architecture around this sort of low-power, low-bandwidth network? This is a proof of concept in India, but what’s the potential here for taking this even further? Is this something that has global potential?


Upton: The global potential is undoubtedly there, and there is an additional element we didn't talk about, which is that not all devices require the same amount of bandwidth. We have talked about video surveillance requiring higher bandwidth, and about low-power, low-bandwidth devices that will essentially be deployed once and forgotten, expected to last 5 or 10 years.

Nigel Upton

Upton

We also need to add in the aspect of security, and that really gave HPE and Tata the common ground of understanding that the world is made up of a variety of network requirements, some of which will be met by LPWAN, some of which will require more bandwidth, maybe as high as 5G.

The real advantage of using a common architecture to take the data from these devices is having things like common management, common security, and a common data model, so that you really have the power to take data from all of these different types of devices and pull it into a common platform that is based on a standard.

In our case, we selected the oneM2M standard as the best standard available for building that common data model. That's why we deployed the oneM2M model within the Universal IoT Platform: to get that consistency no matter what type of device, over no matter what type of network.
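The value of a common data model, as Upton describes it, is that readings from very different devices arriving over very different networks are normalized into one envelope before they reach the platform. The sketch below illustrates that idea only; the field names, payload shape, and `normalize_lorawan_uplink` helper are assumptions for illustration, not the actual oneM2M resource schema or any Tata/HPE API.

```python
# Illustrative-only sketch of a "common data model": heterogeneous
# device readings (a streetlight on LPWAN, a sensor on 4G, etc.) are
# mapped into one normalized record before analytics. Field names and
# the raw payload format are hypothetical, not the oneM2M schema.

from dataclasses import dataclass, asdict

@dataclass
class Reading:
    device_id: str      # unique across all device types
    device_type: str    # e.g. "streetlight", "air-quality"
    network: str        # transport is abstracted away: "lorawan", "4g", ...
    metric: str
    value: float
    unit: str
    ts: int             # epoch seconds

def normalize_lorawan_uplink(payload: dict) -> Reading:
    """Map a raw (hypothetical) LoRaWAN uplink into the common model."""
    return Reading(
        device_id=payload["devEUI"],
        device_type="streetlight",
        network="lorawan",
        metric="brightness_pct",
        value=float(payload["data"]["level"]),
        unit="%",
        ts=payload["ts"],
    )

# A hypothetical uplink reporting a streetlight dimmed to 40 percent.
uplink = {"devEUI": "70b3d5e75e001234", "data": {"level": 40}, "ts": 1700000000}
print(asdict(normalize_lorawan_uplink(uplink)))
```

Once every network-specific payload is flattened into the same record shape, management, security policy, and analytics can be written once against that shape, which is the consistency the oneM2M choice is aiming at.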

Gardner: It certainly sounds like this is an unprecedented opportunity to gather insight and analysis into areas that you just really couldn’t have measured before. So going back to the economics of this, Shridhar, have you had any opportunity through these pilot projects in such cities as Jamshedpur to demonstrate a return on investment, perhaps on street lighting, perhaps on quality of utilization and efficiency? Is there a strong financial incentive to do this once the initial hurdle of upfront costs is met?

Data-driven cost reduction lights up India

Unless the customer sees that there is a scope for either reducing the cost or increasing the customer experience, they are not going to buy these kinds of solutions.

Shridhar: Unless the customer sees scope for either reducing cost or improving the customer experience, they are not going to buy these kinds of solutions. So if you look at how things have been progressing, I will give you a few examples of how the costs have started playing out. One, of course, is having devices that meet a certain price point. As Nigel was remarking, the Indian market is very cost-conscious, but that's important: once we deliver to a certain cost in India, we believe we can deliver to the global market at scale as well.

Let's take the streetlight example specifically and see what kind of benefits it would give. When a streetlight operates for about 12 hours a day, it costs about Rs. 12, which is about $0.15. When you start optimizing, say by moving a streetlight currently running on halogen to LED, it brings a cost saving, in some cases a significant one. India is going through an LED revolution, as you may have read in the newspapers; those streetlights are being converted, and that's one distinct cost advantage.

Now they are looking at driving the usage and the electricity bills even lower by optimizing further. Let's say you sync the light with the astronomical clock, so it comes on at 6:30 in the evening and shuts down at 6:30 in the morning; you can do that because you are now connecting this controller to the Internet.

The second thing you would do is keep it at the brightest during busy hours, let's say between 7:00 and 10:00, and after that start dimming it. You can control it down in 10 percent increments.

The point I am making is, you basically deliver intensity of light to the kind of requirement that you have. If it is busy, or if there is nobody on the street, or if there is a safety requirement — a sensor will trigger up a series of lights, and so on.

So your ability to deliver streetlight intensity matched to the requirement is so high that it brings down total cost. The $0.15 you would spend per streetlight could be brought down to $0.05. That's the kind of advantage you get by better controlling the streetlights. The business case builds up: a customer can save 60 to 70 percent just by doing this. Obviously, the business case then stands out.
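The arithmetic behind those figures can be sketched in a few lines. The wattages, dimming level, and tariff below are assumptions chosen only to illustrate how LED conversion plus schedule-based dimming gets from roughly $0.15 per light per day down to about $0.05; they are not figures from the actual deployment.

```python
# Illustrative cost model for one streetlight, lit 12 hours a night.
# All numbers are assumptions for the sketch, not deployment data.

TARIFF_USD_PER_KWH = 0.08          # assumed electricity tariff

def daily_cost(watts_by_hour):
    """Cost of one light for a day, given its draw (W) in each lit hour."""
    kwh = sum(watts_by_hour) / 1000.0
    return kwh * TARIFF_USD_PER_KWH

# Baseline: assumed 150 W halogen fixture at full power for all 12 hours.
baseline = daily_cost([150] * 12)

# Optimized: assumed 75 W LED at full brightness during the busy
# 7:00-10:00 pm window, then dimmed to 50% for the other 9 lit hours.
led_full = 75
led_dim = led_full * 0.5
optimized = daily_cost([led_full] * 3 + [led_dim] * 9)

savings_pct = 100 * (1 - optimized / baseline)
print(f"baseline ${baseline:.3f}/day, optimized ${optimized:.3f}/day, "
      f"savings {savings_pct:.0f}%")
```

With these assumed inputs the baseline comes out near $0.14 a day and the optimized schedule near $0.045, a saving of roughly 69 percent, consistent with the 60 to 70 percent range Shridhar cites.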

The question you are asking is an interesting one, because each application has its own way of returning the investment while resources are being optimized. There is also a collateral positive benefit for the environment. So not only do I gain business savings and business optimization, but I also pass on a bigger message of a green environment. Environment and safety are the two biggest benefits of implementing this, and they really appeal to our customers.

Gardner: It’s always great to put hard economic metrics on these things, but Shridhar just mentioned safety. Even when you can’t measure in direct economics, it’s invaluable when you can bring a higher degree of safety to an urban environment.

It opens up for more foot traffic, which can lead to greater economic development, which can then provide more tax revenue. It seems to me that there is a multiplier effect when you have this sort of intelligent urban landscape that creates a cascading set of benefits: the more data, the more efficiency; the more efficiency, the more economic development; the more revenue, the more data and so on. So tell us a little bit about this ongoing multiplier and virtuous adoption benefit when you go to intelligent urban environments?

Quality of life, under control

Upton: Yes, and it's important to note that it differs country to country, and even region to region within countries. The interesting challenge with smart cities is that often you're dealing with elected officials rather than hard-nosed businesspeople interested only in the financial return. Because you're dealing with politicians, who represent the citizens of their city, town, or region, their priorities are not always the same.

There is quite a variation in the particular social and quality-of-life challenges in each of the areas they work in. So things like personal safety are a very big deal in some regions. I am currently in Tokyo, where there is much more concern around quality of life and mobility with a rapidly aging population; their challenges are somewhat different.


But in India, the set of opportunities and challenges is that combination of economic as well as social. If you solve them, and you essentially give citizens more peace of mind and more ability to move freely and take part in the economic interaction within their area, then undoubtedly that leads to greater growth. But it is worth bearing in mind that it does vary almost city by city and region by region.

Gardner: Shridhar, do you have any other input into a cascading, ongoing set of benefits when you get more data and more network opportunity? I am trying to understand the longer-term objective: being intelligent and data-driven has an ongoing set of benefits, so what might those be? How can this be a long-term data and analytics treasure trove for providing better urban experiences?

Home/work help

Shridhar: From our perspective, when we looked at the customer benefits there is a huge amount of focus around the smart cities and how smart cities are benefiting from a network. If you look at the enterprise customers, they are also looking at safety, which is an overlapping application that a smart city would have.

So the enterprise wants to provide safety to its workers, for example in mines or in difficult terrain and environments. Or women's safety, which as you know is a big concern in India as well: how do you provide a device that is not very obvious, yet gives women all the safety they need?

So all of this in some form is providing data. One thing that comes to mind when you ask how data-driven these services can be, and what kind of quality they would give, is customer-service devices. For example, a household could have a multi-button device on which someone can order a service.

Depending on which service button is pressed, aggregated across households in India, you would know the trends and direction of a certain service. Mind you, it could be as simple as a three-button device that says Service A, Service B, Service C, and it could be a consumer service, extended to a particular household, that we sell as a service.

So you could get lots of trends and patterns emerging from that, and we believe the customer experience is going to change, because a customer no longer has to remember phone numbers or apps to place an order; you give them the convenience of a service at the press of a button. That immediately comes to my mind.

Feedback fosters change

The second one is in terms of feedback. You use the same three-button device to rate the quality of service of the multiple utilities you are using. There is a toilet revolution in India, for example; you put these buttons out there and they will tell you, at any given point in time, what the user satisfaction is, and so on.

So all of this data is getting gathered, and while it is early days for us to put out analytics and claim distinct benefits, some of the things customers are already looking at are which geographies and which segments are involved, and the profile of the customers using this, and so on. That kind of information is going to come out very, very distinctly.

Smart Cities are all about experience. Enterprises are now looking at the data that is coming out and seeing how they can use it to better segment and provide better customer experience, which would obviously mean both adding to their top line and helping them manage their bottom line. So it's beyond safety; it's getting into the realm of managing customer experience.

Gardner: From a go-to-market perspective, or a go-to-cities perspective, these are very complex undertakings, with lots of moving parts and lots of different technologies and standards. How are Tata and HPE coming together — along with other service providers, Pointnext for example? How do you put this into a package that can actually be managed and put in place? How do we make this appealing not only in terms of its potential, but actionable as well when it comes to different cities and regions?

Upton: The concept of Smart Cities has been around for a while and various governments around the world have pumped money into their cities over an extended period of time.

We now have the infrastructure in place, we have the price points and we have IoT becoming mainstream.

As usual, these things always take more time than you think, and I do not believe that today we have a technology challenge on our hands. We have much more of a business-model challenge. Deploying technology to bring benefits to citizens is finally much better understood. There has been very rapid innovation at the device level, whether it's streetlights or the ability to measure water quality, sound quality, or humidity; all of these metrics are available to us now. And there has been equally rapid innovation in the economics of producing those devices at a price that enables widespread deployment.

All of that has been happening rapidly over the last few years, getting us to the point where we now have the infrastructure in place, we have the price points in place, and IoT has become mainstream enough that it is entering the manufacturing process of all sorts of devices: streetlights, personal security devices, and track-and-trace devices built into the manufacturing of goods.

That is now reaching the mainstream, and we are able to take advantage of the massive amounts of data being produced to build even more efficient and smarter cities, and to make them safer places for our citizens.

Gardner: Last word to you, Shridhar. If people wanted to learn more about the pilot proofs of concept (PoCs) that you are doing at Jamshedpur and other cities through the Smart Cities Mission, where might they go? Are there any resources? How would you provide more information to those interested in pursuing these technologies?

Pilot projects take flight

Shridhar: I would be very happy to help them look at the PoCs we are doing. I would classify them into buckets: one is safety; energy management, which we talked about, is another big one; then the customer service I spoke about; and the fourth, I would say, is more on the utility side. Gas and water are two big applications where customers are looking at these PoCs very seriously.

And there is one very interesting application: a customer wanted sensors in his mouse traps for pest control, so that at any point in time they would know whether there was a rat in the trap, which I thought was very interesting.


There are multiple streams, and we have done multiple PoCs. We, as the Tata Communications team, would be very happy to provide more information, and the HPE folks are in touch with us.

You could write to us, to me in particular, for some period of time. We are also putting information on our website. We have marketing collateral that describes this. We will do some joint workshops with HPE as well.

So there are multiple ways to reach us, and one of the best is obviously through our website. We are always there to provide more information and help, and we believe we can't do it all alone; it's about the ecosystem getting to know the technology and getting to work on it.

While we have partners like HPE at the platform level, we also have partners such as Semtech, who established a Center of Excellence in Mumbai along with us. So access to the ecosystem, from HPE as well as from our other partners, is available, and we are happy to work together and co-create solutions going forward.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.



How confluence of cloud, UC and data-driven insights newly empowers contact center agents

The next BriefingsDirect customer experience insights discussion explores how contact-center-as-a-service (CCaaS) capabilities are becoming more powerful as a result of leveraging cloud computing, multi-mode communications channels, and the ability to provide optimized and contextual user experiences.

More than ever, businesses have to make difficult and complex decisions about how to best source their customer-facing services. Which apps and services, what data and resources should be in the cloud or on-premises — or in some combination — are among the most consequential choices business leaders now face. As the confluence of cloud and unified communications (UC) — along with data-driven analytics — gain traction, the contact center function stands out.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

We’ll now hear why traditional contact center technology has become outdated, inflexible and cumbersome, and why CCaaS is becoming more popular in meeting the heightened user experience requirements of today.

Here to share more on the next chapter of contact center and customer service enhancements, is Vasili Triant, CEO of Serenova in Austin, Texas. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What are the new trends reshaping the contact center function?

Triant: What’s changed in the world of contact center and customer service is that we’re seeing a generational spread — everything from baby boomers all the way now to Gen Z.

With the proliferation of smartphones through the early 2000s, and new technologies and new channels — things like WeChat and Viber — all these customers are now potential inbound discussions with brands. And they all have different mediums that they want to communicate on. It’s no longer just phone or e-mail: It’s phone, e-mail, web chat, SMS, WeChat, Facebook, Twitter, LinkedIn, and there are other channels coming around the corner that we don’t even know about yet.

Vasili Triant

Vasili Triant

When you take all of these folks — customers and brands — and all of the technologies that consumers want to engage with across all of these different channels, it's simple: they want to be heard. It's now the responsibility of brands to determine the best way to respond, and it's not always one-to-one.

So it’s not a phone call for a phone call, it’s maybe an SMS to a phone call, or a phone call to a web chat — whatever those [multi-channels] may be. The complexity of how we communicate with customers has increased. The needs have changed dramatically. And the legacy types of technologies out there, they can’t keep up — that’s what’s really driven the shift, the paradigm shift, within the contact center space.

Gardner: It’s interesting that the new business channels for marketing and capturing business are growing more complex. They still have to then match on the back end how they support those users, interact with them, and carry them through any sort of process — whether it’s on-boarding and engaging, or it’s supporting and servicing them.

What we’re requiring then is a different architecture to support all of that. It seems very auspicious that we have architectural improvements right along with these new requirements.

Triant: We have two things that have collided at the same time – cloud technologies and the growth of truly global companies.

Most of the new channels that have rolled out are in the cloud. I mean, think about it — Facebook is a cloud technology, Twitter is a cloud technology. WeChat, Viber, all these things, they are all cloud technologies. It’s becoming a Software-as-a-Service (SaaS)-based world. The easiest and best way to integrate with these other cloud technologies is via the cloud — versus on-premises. So what began as the shift of on-premises technology to cloud contact center — and that really began in 2011-2012 – has rapidly picked up speed with the adoption of multi-channels as a primary method of communication.

The only way to keep up with the pace of development of all these channels is through cloud technologies because you need to develop an agile world, you need to be able to get the upgrades out to customers in a quick fashion, in an easy fashion, and in an inexpensive fashion. That’s the core difference between the on-premises world and the cloud world.

At the same time, we are no longer talking about a United States company, an Australia company, or a UK company — we are talking about everything as global brands, or global businesses. Customer service is global now, and no one cares about borders or countries when it comes to communication with a brand.

Customer service is global now, and no one cares about borders or countries when it comes to communications with a brand.

Gardner: We have been speaking about this through the context of the end-user, the consumer. But this architecture and its ability to leverage cloud also benefit the agent, the person who is responsible for keeping that end-user happy and providing them with the utmost in intelligent services. So how does the new architecture also aid and abet the agent?

Triant: The agent is frankly one of the most important pieces to this entire puzzle. We talk a lot about channels and how to engage with the customer, but that’s really what we call listening. But even in just simple day-to-day human interactions, one of the most important things is how you communicate back. There has been a series of time-and-motion studies done within contact centers, within brands — and you can even look at your personal experiences. You don’t have to read reports to understand this.

The baseline for how an interaction will begin and end and whether that will be a happy or a poor interaction with the brand, is going to be dependent on the agents’ state of mind. If I call up and I speak to “Joe,” and he starts the conversation, he is in a great mood and he is having a great day, then my conversation will most likely end in a positive interaction because it started that way.

But if someone is frustrated, they had a rough day, they can’t find their information, their computers have been crashing or rebooting, then the interaction is guaranteed to end up poor. You hear this all the time, “Oh, can you wait a moment, my systems are loading. Oh, I can’t get you an answer, that screen is not coming up. I can’t see your account information.” The agents are frustrated because they can’t do their job, and that frustration then blends into your conversation.

So using the technology to make it easy for the agent to do their job is essential. If they have to go from one screen to another screen to conduct one interaction with the customer — they are going to be frustrated, and that will lead to a poor experience with the customer.

The cloud technologies like Serenova, which is web-based, are able to bring all those technologies into one screen. The agent can have all the information brought to them easily, all in one click, and then be able to answer all the customer needs. The agent is happy and that adds to the customer satisfaction. The conclusion of the call is a happy customer, which is what we all want. That’s a great scenario and you need cloud technology to do that because the on-premises world does not deliver a great agent experience.

One-stop service

Gardner: Another thing that the older technologies don’t provide is the ability to have a flexible spectrum to move across these channels. Many times when I engage with an organization I might start with an SMS or a text chat, but then if that can’t satisfy my needs, I want to get a deeper level of satisfaction. So it might end up going to a phone call or an interaction on the web, or even a shared desktop, if I’m in IT support, for example.

The newer cloud technology allows you to interact via different types of channels, but you can also escalate and move between and among them seamlessly. Why is that flexibility a benefit to the end user as well as to the agent?

Triant: I always tell companies and customers of ours that you don't have to over-think this; all you have to do is look at your personal life. With the most common companies that we as users deal with — cell phone companies, cable companies, airlines — you can get onto any of their websites and begin chatting, but you can find that your interaction isn't going well. Before I started at Serenova, I had these experiences where I was dealing with the cable company and — chat, chat, chat — trying to solve my problem. But we couldn't get there, and so then we needed to get on the phone. But they said, "Here is our 800 number, call in." I'd call in, but I'd have to start a whole new interaction.

Basically, I'd have to re-explain my entire situation. Then I am talking with one person, and they have to turn around and send me an email, but I am not going to get that email for 30 to 45 minutes because they have to get off the phone, get into another system, and send it off. In the meantime, I am frustrated, I am ticked off — and guess what I have done now? I have left that brand. This happens across the board. I can even have two totally different types of interactions with the same company.

You can use a major airline brand as an example. One of our employees called on the phone trying to resolve an issue that was caused by the airline. They basically said, "No, no, no." It made her very frustrated, and she decided she was going to fly with a different airline. She then sent a social post to that effect, and the airline's VP of Customer Service answered it, and within minutes they had resolved her issue. But she had already spent three hours on the phone being pushed off to yet another channel, because it was a totally different group, a totally different experience.

By leveraging technologies where you can pivot from one channel to another, everyone gets answers quicker. I can be chatting with you, Dana, and realize that we need to escalate to a voice conversation, for example, and I, as the agent, can then turn that conversation into a voice call. You don't have to re-explain yourself, and you are like, "Wow, that's cool! Now I'm on the phone," and we are able to handle our business.

As the agent, I can also pivot simultaneously to an email channel to send you something as simple as a user guide or a series of knowledge-base articles that I may have at my fingertips, while you and I are still on the phone call. Even better, after the fact, as a business, I have all the analytics and the business intelligence to say that I had one interaction with Dana that started as a web chat, pivoted to a phone call, and simultaneously included a knowledge-base article about the issue, and I can report on it all at once. Not three separate interactions, not three separate events, and I have made you a happy customer.
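The unified record Triant describes, one interaction that pivots across channels but reports as a single event, can be sketched as a simple data structure. This is only an illustration; the class and field names here are invented, not Serenova's actual API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Interaction:
    """One customer interaction, however many channels it spans."""
    customer: str
    events: list = field(default_factory=list)

    def pivot(self, channel: str, note: str = "") -> None:
        # Each channel change is an event on the SAME interaction,
        # not a brand-new ticket the customer must re-explain.
        self.events.append({
            "channel": channel,
            "note": note,
            "at": datetime.now(timezone.utc),
        })

    def channels_used(self) -> list:
        return [e["channel"] for e in self.events]

# A chat that escalates to voice, with a knowledge-base email sent mid-call:
ix = Interaction(customer="Dana")
ix.pivot("chat", "customer cannot resolve issue via chat")
ix.pivot("voice", "agent escalates to a call")
ix.pivot("email", "agent sends a knowledge-base article during the call")

# Analytics see ONE interaction spanning three channels, not three tickets.
assert ix.channels_used() == ["chat", "voice", "email"]
```

The design point is that reporting hangs off the interaction, not off each channel, which is what makes the "one event, three channels" analytics view possible.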

Gardner: We are clearly talking about enabling the agent to be a super-agent, and they can, of course, be anywhere. I think this is really important now because the function of an agent, and we are already seeing the beginnings of this, is certainly going to increasingly include artificial intelligence (AI), machine learning, and associated data analytics. The agent then might be a combination of human and AI functions and services.

So we need to be able to integrate at a core communications basis. Without going too far down this futuristic route, isn’t it important for that agent to be an assimilation of more assets and more services over time?

Artificial Intelligence plus human support

Triant: I'm glad you brought up AI and these other technologies. The reality is that we've been through a number of cycles around what this technology is going to do and how it is going to interact with an agent. In my view, and I have been in this world for a while, the agent is the most important piece of customer service and brand engagement. But you have to be able to bring information to them, and you have to be able to give information to your customers, so that if there is something simple, you get it to them as quickly as possible, but also bring all the relevant information to the agent.

AI has had multiple forms; it has existed for a long time. Sometimes people get confused because of marketing schemes and sales tactics [and view AI] as a way for cost avoidance, to reduce agents and eliminate staff by implementing these technologies. Really the focus is how to create a better customer experience, how to create a better agent experience.

We have had AI in our product for the last three years, and we are re-releasing some components that will bring business intelligence to the forefront around the end of the year. What it essentially does is allow you to see what a user is doing out on the Internet and within these technologies. I can see that you have been looking for knowledge-base articles around, for example, "why my refrigerator keeps freezing up and how can I defrost it." You can see such things on Twitter and you can see them on Facebook. The amount of information that exists out there is phenomenal, and it's in real time. I can now gather that information … and I can proactively, as a business, make decisions about what I want to do with you as a potential consumer.

I can even identify you as a consumer within my business, know how many products you have acquired from me, and whether you’re a “platinum” customer or even a basic customer, and then make a decision.

For example, I have TVs, refrigerators, washer-dryers and other appliances all from the same manufacturer. So I am a large consumer to that one manufacturer because all of my components are there. But I may be searching a knowledge-based article on why the refrigerator continues to freeze up.

Now I may call in about just the refrigerator, but wouldn't it be great for that agent to know that I own 22 other products from that same company? I'm not just calling about the refrigerator; I am technically calling about the entire brand. My experience with the refrigerator freezing up may change my entire brand decision going forward. That information may prompt me to decide that I want to route that customer to a different pool of agents, based on their total lifetime value as a brand-level consumer.
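The routing decision Triant sketches, sending a high-lifetime-value customer to a different agent pool, reduces to a simple rule. A minimal sketch follows; the tiers, thresholds, and pool names are hypothetical, and a real system would score lifetime value from purchase history rather than a raw product count.

```python
def route_by_lifetime_value(products_owned: int) -> str:
    """Pick an agent pool from a rough lifetime-value signal.

    Thresholds and pool names are invented for illustration only.
    """
    if products_owned >= 20:
        return "platinum-pool"   # dedicated senior agents
    if products_owned >= 5:
        return "priority-pool"
    return "general-pool"

# The caller owns 22 appliances from the brand, so the refrigerator
# question is really a brand-level conversation:
assert route_by_lifetime_value(22) == "platinum-pool"
assert route_by_lifetime_value(1) == "general-pool"
```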

Through AI, by leveraging all this information, I can be a better steward to my customer and to the agent, because I will tell you, an agent will act differently if they understand the importance of that customer, or if they know that I, Vasili, have spent the last two hours searching online for information, which I posted on Facebook and on Twitter.

Through AI, by leveraging all this information, I can be a better steward to the customer and to the agent.

At that point, the level of my frustration already has reached a certain height on a scale. As an agent, if you knew that, you might treat me differently because you already know that I am frustrated. The agent may be able to realize that you have been looking for some information on this, realize you have been on Facebook and Twitter. They can then say: “I am really sorry, I’m not able to get you answers. Let me see how I can help you, it seems that you are looking online about how to keep the refrigerator from freezing up.”

If I start the conversation that way, I've now defused a lot of the frustration of the customer. The agent has already started that interaction better. Bringing that information to that person, that's powerful, that's business intelligence — and that's creating action from all that information.

Keep your cool

Gardner: It’s fascinating that that level of sentiment analysis brings together the best of what AI and machine learning can do, which is to analyze all of these threads of data and information and determine a temperature, if you will, of a person’s mood and pass that on to a human agent who can then have the emotional capacity to be ready to help that person get to a lower temperature, be more able to help them overall.

It’s becoming clear to me, Vasili, that this contact center function and CCaaS architectural benefits are far more strategic to an organization than we may have thought, that it is about more than just customer service. This really is the best interface between a company — and all the resources and assets it has across customer service, marketing, and sales interactions. Do you agree that this has become far more strategic because of these new capabilities?

Triant: Absolutely, and as brands begin to realize the power of what the technology can do for their overall business, it will continue to evolve, and gain pace around global adoption.

As brands begin to realize the power of what the technology can do for their overall businesses, it will continue to evolve and gain global adoption.

We have only scratched the surface on adoption of these cloud technologies within organizations. A majority of brands out there look at these interactions as a cost of doing business. They still seek to reduce that cost rather than weigh it against the lifetime value of both the consumer and the agent experience. This will shift, it is shifting, and there are companies that are thriving by recognizing that entire equation and how to leverage the technologies.

Technology is nothing without action and result. There have been some really cool things that have existed for a while, but they don't ever produce any result that's meaningful to the customer, so they never get adopted and deployed and never reach any kind of mass proliferation of results.

Gardner: You mentioned cost. Let's dig into that. For organizations that are attracted to the capabilities and the strategic implications of CCaaS, how do we evaluate it in terms of cost? The old CapEx approach often had a high upfront cost, and then high operating costs if you have an inefficient call center. Other costs involve losing your customers, losing brand affinity, and losing your perception in the market. So when you talk to a prospect or customer, how do you help them see a pay-as-you-go service as highly efficient? Does the highly empowered agent approach save money, or even make money, so that CCaaS becomes not a cost center but a revenue generator?

Cost consciousness

Triant: Interesting point, Dana. When I started at Serenova about five years ago, customers would say all the time, "What's the cost of owning the technology?" And, "Oh, my on-premises stuff has already depreciated and I already own it, so it's cheaper for me to keep it." That was the conversation pretty much every day. Beginning in 2013, it rapidly started shifting. The shift was mainly driven by the fact that organizations started realizing that consumers want to engage on different channels, and the on-premises guys couldn't keep up with that demand.

The cost of ownership no longer matters. What matters is that the on-premises guys literally could not deliver the functionality. And so, whether that's Cisco, Avaya, or ShoreTel, they quickly started falling out of consideration for companies that were looking to deploy applications to meet these needs.

The cost of ownership quickly disappeared as the main discussion point. Instead it came around to, “What is the solution that you’re going to deliver?” Customers that are looking for contact center technologies are beginning to take a cloud-first approach. And once they see the power of CCaaS through demonstration and through some trials of what an agent can do – and it’s all browser-based, there is no client install, there is no equipment on-premises – then it takes on a life of its own. It’s about, “What is the experience going to be? Are these channels all integrated? Can I get it all from one manufacturer?”

Following that, organizations focus on other intricacies: Can it scale? Can it be redundant? Is it global? But those become architectural concerns for the brands themselves. There is a chunk of the industry that is not looking at these technologies; they are stuck in brand euphoria, or feel they have to stay with on-premises infrastructure or with a certain vendor because of its name, or because they believe that vendor will get there someday.

As we have seen, Avaya has declared bankruptcy. Avaya does not have cloud technologies despite their marketing message. So the customers that are in those technologies now realize they have to find a path to keep up with the basic customer service at a global scale. Unfortunately, those customers have to find a path forward and they don’t have one right now.

It’s less about cost of ownership and it’s more about the high cost of not doing anything. If I don’t do anything, what’s going to be the cost? That cost ultimately becomes – I’m not going to be able to have engagement with my customers because the consumers are changing.

It’s less about cost of ownership and it’s more about the high cost of not doing anything.

Gardner: What about this idea of considering your contact center function not just as a cost center, but also as a business development function? Am I being too optimistic?

It seems to me that as AI and the best of what human interactions can do combine across multichannels, that this becomes no longer just a cost center for support, a check-off box, but a strategic must-do for any business.

Multi-channel customer interaction

Triant: When an organization reaches the pinnacle of happiness within what these technologies can do, they will realize that no longer do you need to have delineation between a marketing department that answers social media posts, an inside sales department that is only taking calls for upgrades and renewals, and a customer service department that’s dealing with complaints or inbound questions. They will see that you can leverage all the applications across a pool of agents with different skills.

I may have a higher skill around social media than around voice, or a higher skill level around sales or renewal activity than around customer service problems. I should be able to do any interaction. And potentially one day it'll just be a customer interaction department, and the channels will just be a medium of inbound and outbound choice for a brand.

But you can now take information from whatever you see the customer doing. Each of their actions has a leading indicator; everything has a predictive action prior to the inbound touch, everything does. Now that a brand can see that, it will be able to have "consumer interaction departments," and each interaction will be properly routed to the right person based on that information. You'll be able to bring information to that agent that will allow them to answer the customer's questions.

Gardner: I can see how that agent’s job would be very satisfying and fulfilling when you are that important, when you have that sort of a key role in your organization that empowers people. That’s good news for people that are trying to find those skills and fill those positions.

Vasili, we only have a few minutes left, but I’d love to hear about a couple of examples. It’s one thing to tell, it’s another thing to show. Do we have some examples of organizations that have embraced this concept of a strategic contact center, taken advantage of those multi-channels, added perhaps some intelligence and improved the status and capability of the agents — all to some business benefit? Walk us through a couple of actual use cases where this has all come together.

Cloud communication culture shift

Triant: No one has reached that level of euphoria per se, but there are definitely companies that are moving in that direction.

It is a culture change, so it takes time. I know as well as anybody what it takes to shift a culture, and it doesn't happen overnight. As an example, there is a ride-hailing company that engages in a different way with their consumers, and their consumers might be different than what you would think from the way I am describing it. They use voice systems and SMS and often want to pivot between the two. Our technology allows the agents to make that decision even if they aren't physically in the same country. They are dynamically spread across multiple countries to answer any question they may need to answer based on time and day.

But they can pivot from what’s predominantly an SMS inbound and outbound communication into a voice interaction, and then they can also follow up with an e-mail, and that’s already happened. Now, it initially started with some SMS inbound and outbound, then they added voice – an interesting move as most people think adding voice is what people are getting away from. What everyone has begun to realize is that live communication ultimately is what everybody looks for in the end to solve the more complex problems.

What everyone has begun to realize is that live communication ultimately is what everybody looks for in the end to solve the more complex problems.

That’s one example. Another company that provides the latest technology in food order and delivery initially started with voice-only to order and deliver food. Now they’ve added SMS confirmations automatically, and e-mail as well for confirmation or for more information from the inbound voice call. And now, once they are an existing customer, they can even start an order from an SMS, and pivot back to a voice call for confirmation — all within one interaction. They are literally one of the fastest growing alternative food delivery companies, growing at a global scale.

They are deploying agents globally across one technology. They would not be able to do this with legacy technologies because of the expense. When you get into these kinds of high-volume, low-margin businesses, cost matters. When you can have an OpEx model that will scale, you are adding better customer service to the applications, and you are able to allow them to build a profitable model because you are not burning them with high CapEx processes.

Gardner: Before we sign off, you mentioned your pipeline of products and services, such as engaging more with AI capabilities toward the end of the year. Could you give us a level-set on your roadmap? Where are your products and services now? Where do you go next?

A customer journey begins with insight

Triant: We have been building cloud technologies in the contact center space for 16 years. We released our latest CCaaS platform, called CxEngage, in March 2016. We then had a major upgrade to the platform in March of this year that takes the agent experience to the next level. It's really our leapfrog in the agent interface: making it easier and bringing more information to the agent.

Where we are going next is around the customer journey — predictive interactions. Some people call it AI, but I will call it "customer journey mapping with predictive action insights." That's going to be a big cornerstone of our product, including business analytics. It's focused on looking at a combination of speech, data, and text, all simultaneously creating predictive actions. That is another core area we are going into, and we continue to expand the reach of our platform on a global scale.

At this point, we are a global company. We have the only global cloud platform built on a single software stack with one data pipeline. We now have more users on a pure cloud platform than any of our competitors globally. I know that's a big statement, but when you look at a pure cloud infrastructure, you're talking in a whole different realm of what services you are able to offer to customers. In our ability to provide broad reach, including to Europe, South Africa, Australia, India, and Singapore, and still deliver good cloud quality at a reasonable cost and in a redundant fashion, we are second to none in that space.

Gardner: I’m afraid we will have to leave it there. We have been listening to a sponsored BriefingsDirect discussion on how CCaaS capabilities are becoming more powerful as a result of cloud computing, multimode communications channels, and the ability to provide optimized and contextual user experiences.

And we’ve learned how new levels of insight and intelligence are now making CCaaS approaches able to meet the highest user experience requirements of today and tomorrow. So please join me now in thanking our guest, Vasili Triant, CEO of Serenova in Austin, Texas.

Triant: Thank you very much, Dana. I appreciate you having me today.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing series of BriefingsDirect discussions. A big thank you to our sponsor, Serenova, as well as to you, our audience. Do come back next time and thanks for listening.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Serenova.

Transcript of a discussion on how contact center-as-a-service capabilities are becoming more powerful to provide optimized and contextual user experiences for agents and customers. Copyright Interarbor Solutions, LLC, 2005-2017. All rights reserved.


Women in business leadership — networking their way to success

The next BriefingsDirect digital business insights panel discussion focuses on the evolving role of women in business leadership. We’ll explore how pervasive business networks are impacting relationships and changes in business leadership requirements and opportunities for women.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: SAP Ariba.

To learn more about the transformation of talent management strategies as a result of digital business and innovation, please join me in welcoming our guests, Alicia Tillman, Chief Marketing Officer at SAP Ariba, and Lisa Skeete Tatum, Co-founder and CEO of Landit in New York. The panel was recorded in association with the recent 2017 SAP Ariba LIVE conference in Las Vegas, and is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Alicia, looking at a confluence of trends, we have the rise of business networks and we have an advancing number of women in business leadership roles. Do they have anything to do with one another? What’s the relationship?

Tillman: It is certainly safe to say that there is a relationship between the two. Networks historically connected businesses mostly from a transactional standpoint. But networks today are much more about connecting people, and not only connecting them in a business context, but from a relationship standpoint as well.

Alicia Tillman

Tillman

There is as much networking and influence happening in a digital network as there is from meeting somebody at an event, conference, or forum. It has really taken off in recent years as a way to connect quickly and broadly — across geographies and industries. There is nothing that brings you speed like a network, and that's why I think there is such a strong correlation between how digital networking has taken off and what a true technical network platform can allow.

Gardner: When people first hear “business networks,” they might think about transactions and applications talking to applications. But, as you say, this has become much broader in the last few years; business networks are really about social interactions, collaboration, and even joining companies culturally.

How has that been going? Has this been something that’s been powerful and beneficial to companies?

Tillman: It's incredibly powerful and beneficial. If you think about buying habits these days, buyers are very particular about the goods that they are interested in and, frankly, the people that they source from.

If I look at my buying population in particular at SAP Ariba, there is a tremendous movement toward sustainability and fair-trade types of responsibilities: wanting to source goods from minority-owned businesses, wanting to source only organic or fair-trade products, and wanting to partner only with organizations where they know that, within their supply chain, the distribution of their product is coming from locations in the world where the working conditions are safe and employees are paid fairly.

A network allows for that; the SAP Ariba Network certainly allows for that, as we can match suppliers directly with what those incredibly diverse buyer needs are in today’s environment.

Gardner: Lisa, we just heard from Alicia about how it’s more important that companies have a relationship with one another and that they actually look for culture and character in new ways. Tell us about Landit, and how you’re viewing this idea of business networks changing the way people relate to their companies and even each other?

Skeete Tatum: Our goal at Landit is to democratize career success for women around the globe. We have created a technology platform that not only increases the success and engagement of women in the workplace, but it also enables companies in this new environment to attract, develop, and retain high-potential diverse talent.

Our goal at Landit is to democratize career success for women around the globe.

Lisa Skeete Tatum

Skeete Tatum

We do that by providing each woman with a personalized playbook in the spirit of one-size-fits-one. That empowers them with access to the tools, the resources, the know-how, and, yes, the human connections that they need to more successfully navigate their paths.

It’s really in response to the millions of women who will find themselves at an inflection point; whether they are in a company that they love but are just trying to figure out how to more successfully navigate there, or they may be feeling a little stuck and are not sure how to get out. The challenge is: “I am motivated, I have the skills, I just don’t know where to start.”

We have really focused on knitting what we believe are those key elements together — leveraged by technology that actually guides them. But we find that companies in this new environment are often overwhelmed and trying to figure out a way to manage this new diverse workforce in this era of connectedness. So we give them a turnkey, one-size-fits-one solution, too.

As Alicia mentioned, in this next stage of collaborative business, there are really two things. One, we are more networked and more visible than ever before, which is great, because it has created more opportunity and flexibility than we have seen, not to mention more access. However, those opportunities are highly dependent on how someone showcases their value, their contribution, and their credibility, which makes it even more important to cultivate both your brand and your network. It goes beyond individual capability to gaining the sponsorship and support of a strong network.

The second thing I would say, that Alicia also mentioned, is that today’s business environment — which is more global, more diverse in its tapestry — requires businesses to create an environment where everyone feels valued. People need to feel like they can bring the full measure of their talent and passion to the workplace. Companies want amazing talent to find a place at their company.

Gardner: If I’m at a company looking to be more diverse, how would I use Landit to accomplish that? Also, if I were an individual looking to get into the type of company that I want to be involved with, how would I use Landit?

Connecting supply and demand for talent

Skeete Tatum: As an individual, when you come on to Landit, we actually give you one of the key ingredients for success. Because we often don’t know what we don’t know, we knit together the first step, of “Where do I fit?” If you are not in a place that fits with your values, it’s not sustainable.

So we help you figure out what is it that fits with “all of me,” and we then connect you to those opportunities. Many times with diversity programs, they are focused just on the intake, which is just one component. But you want people to thrive when they get there.

Many times with diversity programs, they are focused just on the intake, which is just one component. But you want people to thrive when they get there.

And so, whether it is building your personal brand, or building your board of advisors, or continuing with your skill development in a personalized, relevant way — or access to coaching, because often many of us don't have that unless we are in the C-suite or on the way — we are able to knit that together in a way that is relevant and right-sized for the individual.

For the company, we give them a turnkey solution to invest in a scalable way, to touch more lives across their company, particularly in a more global environment. Rather than having to place multiple bets, they place one bet with Landit. We leverage that one-size-fits-one capability with things that we all know are keys to success. We then enable them to deliver that, whether it is to newly minted managers, to people they have just acquired, or to leaders they want to continue to invest in. And we enable them to do it in a measurable way, so that they can see the engagement, the success, and the productivity.

Gardner: Alicia, I know that SAP Ariba is already working to provide services to those organizations that are trying to create diversity and inclusion within their supply chains. How do you see Landit fitting into the business network that SAP Ariba is building around diversity?

Tillman: First, the SAP Ariba Network is the largest business-to-business (B2B) network on the planet. We connect more than 2.5 million companies that transact over $1 trillion in commerce annually. As you can imagine, there is incredible diversity in the buying requirements that exist among those companies, which are located in all parts of the world and work in virtually every industry.

One of the things that we offer as an organization is a Discovery tool. When you have a network that is so large, it can be difficult and a bit daunting for a buyer to find the supplier that meets their business requirements, and for a supplier to find their ideal buyer. So our SAP Ariba Discovery application is a matching service, if you will, that enables a buyer to list their requirements and then lets the tool work for them, matching them to the suppliers that best meet those requirements, whatever they may be.
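A buyer-supplier match of the kind Tillman describes is, at its core, a requirements filter. The toy sketch below illustrates the idea only; the supplier names and attribute fields are invented, not SAP Ariba Discovery's actual schema.

```python
def match_suppliers(requirements: set, suppliers: dict) -> list:
    """Return suppliers whose attributes cover every buyer requirement."""
    return sorted(
        name for name, attrs in suppliers.items()
        if requirements <= attrs  # set containment: all requirements met
    )

# Hypothetical supplier directory with sourcing attributes:
suppliers = {
    "Acme Organics": {"organic", "fair-trade", "minority-owned"},
    "Global Goods":  {"fair-trade"},
    "GreenCo":       {"organic", "fair-trade"},
}

# A buyer who requires organic AND fair-trade sourcing:
assert match_suppliers({"organic", "fair-trade"}, suppliers) == [
    "Acme Organics", "GreenCo",
]
```

A production matching service would add ranking, geography, and capacity signals, but the subset test captures the basic "list your requirements and let the tool match you" mechanic.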

I'm very proud to have Lisa present at our Women in Leadership Forum at SAP Ariba LIVE 2017. I am showcasing Lisa not only because of her entrepreneurial spirit and the success that she has had in her career, which I think will be very inspirational and motivational to women who are looking to continue to develop their careers, but also because she has created a powerful platform with Landit. For women, it provides a digital environment that allows them to harness precisely what is important to them when it comes to career development, and then offers coaching within the Landit environment to enable that.

For women, it helps provide a digital environment that allows them to harness precisely what it is that’s important to them when it comes to career development.

Landit also offers companies an ability to support their goals around gender diversity. They can look at the Landit platform and source talent that is not only very focused on careers — but also supports a company in their diversity goals. It’s a tremendous capability that’s necessary and needed in today’s environment.

Gardner: Lisa, what has changed in the past several years that has prompted this changed workforce? We have talked about the business network as an enabler, and we have talked about social networks connecting people. But what’s going to be different about the workforce going forward?

Collaborative visibility via networking

Skeete Tatum: There are three main things. First, there is a recognition that diversity is not a “nice to have,” it’s a “must-have” from a competitive standpoint, to acquire the best ideas and gain a better return on capital. So it’s a business imperative to invest in and value diversity within one’s workforce. Second, businesses are continuing to shift toward matching opportunities with the people who are best able to do that job, but in a less-biased way. Third, business-as-usual isn’t going to work in this new reality of career management.

Business-as-usual isn’t going to work in this new reality of career management.

It’s no longer one- or bi-directional, where it’s just the manager or the employee. It’s much more collaborative and driven by the individual. And so all of these things … where there is much more opportunity, much more freedom. But how do you anchor that with a problem and a framework and a connectivity that enables someone to more successfully navigate the new environment and new opportunities? How do you leverage and build your network?  Everyone knows they need to do it, but many people don’t know how to do it. Or when your brand is even more important, visibility is more important, how do you develop and communicate your accomplishments and your value? It is the confluence of those things coming together that creates this new world order.

Gardner: Alicia, one of the biggest challenges for most businesses is getting the skills that they need in a timely fashion. How do we get past the difficulty of best matching hiring?  How do we use business networks to help solve that?

Tillman: This is the beauty of technology. Technology is an enabler in business to form relationships more quickly, and to transact more quickly. Similarly, technology also provides a network to help you grow from a development standpoint. Lisa’s organization, Landit, is one example of that.

Within SAP Ariba we are very focused on closing the gap in gaining the skills that are necessary to be successful in today’s business environment. I look at the offering of SAP SuccessFactors – which is focused on empowering the human capital management (HCM) organization to lead performance management and career development. And SAP Fieldglass helps companies find and source the right temporary labor that they need to service their most pressing projects. Combine all that with a business network, and there is no better place in today’s environment to find something you need — and find it quickly.

But it all comes down to the individual’s desire to want to grow their skills, or find new skills, to get out of their comfort zone and try something new. I don’t believe there is a shortage of tools or applications to help enable that growth and talent. It comes down to the individual’s desire to want to grab it and go after it.

Maximize your potential with technology

Skeete Tatum: I couldn’t agree more. The technology and the network are what create the opportunity. In the past, there may have been a skills gap, but you have to be able to label it, you have to be able to identify it in a way that is relevant to the individual. As Alicia said, there are many opportunities out there for development, but how do you parse that down and deliver it to the individual in a way that is relevant — and that’s actionable? That’s where a network comes in and where the power of one can be leveraged in a scalable way.

Now is probably one of the best times to invest in and have an individual grow to reach their full potential. The desire to meet their goals can be leveraged by technology in a very personal way.

Gardner: As we have been hearing here at SAP Ariba LIVE 2017, more and more technologies along the lines of artificial intelligence (AI) and machine learning (ML) – which take advantage of all the data, analyze it, and make it actionable – can now be brought to bear on this set of issues of matching workforce requirements with skill sets.

Where should we expect to see these technologies reduce the complexity and help companies identify the right workforce, and the workforce identify the right companies?

Skeete Tatum: Having the data and being able to quantify and qualify it gives you the power to set a path forward. The beauty is that it actually enables everyone to have the opportunity to contribute, the opportunity to grow, and to create a path and a sense of belonging by having a way to get there. From our perspective, it is that empowerment and that ownership — but with the support of the network from the overall organization — that enables someone to move forward. And it enables the organization to be more successful and more embracing of this new workforce, this diverse talent.

Tillman: Individuals should feel more empowered today than ever before to really take their career development to unprecedented levels. There are so many technologies, so many applications out there to help coach you on every level. It’s up to the individual to truly harness what is standing in front of them and to really grab hold of it — and use it to their advantage to reach their career goal.

Gardner: Lisa, what should you be thinking about from a personal branding perspective when it comes to making the best use of tools like Landit and business networks?

Skeete Tatum: The first thing is that people actually have to think of themselves as a brand, as opposed to thinking that they are bragging or being boastful. The most important brand you have is the brand of you.

Second, people have to realize that this notion of building your brand is something that you nurture and it develops over time. What we believe is important is that we have to make it tangible, we have to make it actionable, and we have to make it bite-size, otherwise it seems overwhelming.

So we have defined what we believe are the 12 key elements for anyone to have a successful brand, such as have you been visible, do you have a strategic plan of you, are you seeking feedback, do you have a regular cadence of interaction with your network, et cetera. Knowing what to do and how to do it and at what cadence and at what level is what enables someone to move forward. And in today’s environment, again, it’s even more important.

Pique their curiosity by promoting your own

Tillman: Employers want to be sure that they are attracting candidates and employing candidates that are really invested in their own development. An employer operates in the best interest of the employee in terms of helping to enable tools and allow for that development to occur. At the same time, where candidates can really differentiate themselves in today’s work environment is when they are sitting across the table and they are in that interview. It’s really important for a candidate to talk about his or her own development and what are they doing to constantly learn and support their curiosity.

Employers want curious people. They want those who are taking advantage of development, tools, and learning. These are the things that set people apart from one another: knowing that individually they are going to go after learning opportunities and push themselves out of their comfort zone to take themselves – and ultimately the companies that employ them – to the next level.

Gardner: Before we close out, let’s take a peek into the crystal ball. What, Alicia, would be your top two predictions given that we are just on sort of an inflection point with this new network, with this new workforce and the networking effect for it?

Tillman: First, technology is only going to continue to improve. Networks have historically enabled buyers and sellers to come together and transact to build their organizations and support growth, but networks are taking on a different form.

Technology is going to continue to enable priorities professionally and priorities personally. Technology is going to become a leading enabler of a person’s professional development.

Second, individuals are going to set themselves apart from others by their desire and their hunger to really grab hold of that technology. When you think about decision-making among companies in terms of candidates they hire and candidates they don’t, employers are going to report back and say, “One of the leading reasons why I selected one candidate over another is because of their desire to learn and their desire to grab hold of technologies and networks that were standing in front of them to bring their careers to an unprecedented level.”

Gardner: Lisa, what are your top two predictions for the new workforce and particularly for diversity playing a bigger role?

Skeete Tatum: Technology is the ultimate leveler of the playing field. It enables companies as well as the individual to make decisions based on things that matter. That is what enables people to bring their full selves, the full measure of their talent, to the workplace.

In terms of networks in particular, they have always been a key element to success but now they are even more important. It actually poses a special challenge for diverse talent. They are often not part of the network, and they may have competing personal responsibilities that make the investment of the time and the frequency in those relationships a challenge.

Sometimes there is a discomfort with how to do it. We believe that through technology people will have to get comfortable with being uncomfortable. They need to learn not only how to codify their network, but also how to have the right access to the right person at the right cadence — and access to that know-how, that guidance, can be delivered through technology.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: SAP Ariba.


The next line of defense—How new security leverages virtualization to counter sophisticated threats

When it comes to securing systems and data, the bad guys are constantly upping their games — finding new ways to infiltrate businesses and users. Those who protect systems from these cascading threats must be ever vigilant for new technical advances in detection and protection. In fact, they must out-innovate their assailants.

The next BriefingsDirect security insights discussion examines the relationship between security and virtualization. We will now delve into how adaptive companies are finding ways to leverage their virtualization environments to become more resilient, more intelligent, and how they can protect themselves in new ways.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn how to ensure that virtualized data centers do not pose risks — but in fact prove more defensible — we are joined by two security-focused executives, Kurt Roemer, Chief Security Strategist at Citrix, and Harish Agastya, Vice President for Enterprise Solutions at Bitdefender. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Kurt, virtualization has become widespread and dominant within data centers over the past decade. At that same time, security has risen to the very top of IT leadership’s concerns. What is it about the simultaneous rise of virtualization and the rise of security concerns? Is there any intersection? Is there any relationship that most people may miss?

Roemer: The rise of virtualization and security has been concurrent. A lot of original deployments for virtualization technologies were for remote access, but they were also for secure remote access. The apps that people needed to get access to remotely were usually very substantial applications for the organization —  things like order processing or partner systems; they might have been employee access to email or internal timecard systems. These were things that you didn’t really want an attacker messing with — or arbitrary people getting access to.

Kurt Roemer

Roemer

Security has grown from just providing basic access to virtualization to really meeting a lot of the risks of these virtualized applications being exposed to the Internet in general, as well as now expanding out into the cloud. So, we have had to grow security capabilities to be able to not only keep up with the threat, but try to keep ahead of it as well.

Gardner: Hasn’t it historically been true that most security prevention technologies have been still focused at the operating system (OS)-level, not so much at the virtualization level? How has that changed over the past several years?

Roemer: That’s a good question. There have been a lot of technologies that are associated with virtualization, and as you go through and secure and harden your virtual environments, you really need to do it from the hardware level, through the hypervisor, through the operating system level, and up into the virtualization system and the applications themselves.

We are now seeing people take a much more rigorous approach at each of those layers, hardening the virtualization system and the OS and integrating in all the familiar security technologies that we’re used to, like antivirus, but also going through and providing for application-specific security.

So if you have an SAP system or something else where you need to protect some very sensitive company data, and you don’t want that data to be accessed outside the office arbitrarily, you can provide very set interfaces into that system: being able to control the clipboard or copy and paste, and what peripherals the application can interface with — i.e., turn off the camera, turn off the microphone if it’s not needed — and even get down to the browser level, such as whether things like JavaScript are enabled or Flash is available.

So it helps to harden the overall environment and cut down on a lot of the vulnerabilities that would be inherent by just leaving things completely wide open. One of the benefits of virtualization is that you can get security to be very specific to the application.

Gardner: Harish, now that we are seeing this need for comprehensive security, what else is it that people perhaps don’t understand that they can do in the virtualization layer? Why is virtualization still uncharted territory as we seek to get even better security across the board?

Let’s get better than physical

Agastya: Customers often don’t differentiate between dealing with security in physical and in virtual environments. The opportunity that virtual environments provide is the ability to take security to a higher level than physical-only. So better than physical is, I think, a key value proposition that they can benefit from — and the technology innovation of today has enabled that.

Harish Agastya

Agastya

There is a wave of innovation among security vendors in this space. How do we run resource-intensive security workloads in a way that does not compromise the service-level agreements (SLAs) that those information technology operations (IT Ops) administrators need to deliver?

There is a lot of work happening to offload security-scanning mechanisms on to dedicated security virtual appliances, for example. Bitdefender has been working with partners like Citrix to enable that.

Now, the huge opportunity is to take that story further in terms of being able to provide higher levels of visibility, detection, and prevention from the attacks of today, which are advanced persistent threats. We seek to detect how they manifest in the data center and — in a virtual environment — what you have the opportunity to do, and how you can respond. That game is really changing now.

Gardner: Kurt, is there something about the ability to spin up virtualized environments, and then take them down that provides a risk that the bad guys can target or does that also provide an opportunity to start fresh: To eliminate vulnerabilities, or learn quickly and adapt quickly? Is there something about the rapid change that virtualization enables that is a security plus?

Persistent protection anywhere

Roemer: You really hit on the two sides of the coin. On one side, virtualization does oftentimes provide an image of the application or the applications plus OS that could be fairly easy for a hacker to steal and be able to spin up offline and be able to get access to secrets. So you want to be able to protect your images, to make sure that they are not something that can be easily stolen.

On the other side, having the ability to define persistence — what you want to have persist between reboots versus what’s non-persistent — allows you to have a constantly refreshed system. So when you reboot it, it’s exactly back to the golden image — and everything is as it should be. As you patch and update, you are working with a known quantity, as opposed to an endpoint where somebody might have administrative access and has installed personal applications, browser plug-ins, and other things that you may or may not want to have in place.

The nice thing with virtualization is that it’s independent of the OS, the applications, the endpoints, and the varied situations that we all access our apps and data from.

Layering also comes into play and helps to make sure that you can dynamically layer in applications or components of the OS, depending on what’s needed. So if somebody is accessing a certain set of functionality in the office, maybe they have 100% functionality. But when they go home, because they are no longer in a trusted environment or maybe not working on a trusted PC from their home system, they get a degraded experience, seeing fewer applications and having less functionality layered onto the OS. Maybe they can’t save to local drives or print to local printers. All of that’s defined by policy. The nice thing with virtualization is that it’s independent of the OS, the applications, the endpoints, and the varied situations that we all access our apps and data from.

Gardner: Harish, with virtualization there is a certain level of granularity as to how one can manage their security environment parameters. Can you expand on why having that granular capability to manage parameters is such a strong suit, and why virtualization is a great place to make that happen?

On the move, virtually

Agastya: That is one of the opportunities and challenges that security solutions need to be able to cope with.

As workloads are moving across different subgroups, sub-networks, that virtual machine (VM) needs to have a security policy that moves with it. It depends on what type of application is running, and it is not specific to the region or sub-network that that particular VM is resident on. That is something that security solutions that are designed to operate in virtual environments have the ability to do.

Security moves with the workload, as the workload is spawned off and new VMs are created. The same set of security policies associated with that workload now can protect that workload without needing to have a human step in and determine what security posture needs to belong to that VM.
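The idea of policy following the workload rather than the network location can be sketched in a few lines. This is a hedged illustration only — the class names, policy fields, and workload tags below are hypothetical, not any vendor's actual API; the point is simply that the lookup is keyed on what the workload *is*, not where it runs:

```python
# Illustrative sketch: a security policy bound to a workload tag, so it
# follows the VM across hosts and applies automatically to spawned clones.
from dataclasses import dataclass

@dataclass
class SecurityPolicy:
    memory_introspection: bool
    allowed_ports: set

# Hypothetical policy table keyed by workload type, not by subnet or host.
POLICIES = {
    "web-frontend": SecurityPolicy(memory_introspection=True, allowed_ports={80, 443}),
    "database":     SecurityPolicy(memory_introspection=True, allowed_ports={5432}),
}

@dataclass
class VM:
    name: str
    workload_tag: str
    host: str  # changes on migration; the policy does not

def policy_for(vm: VM) -> SecurityPolicy:
    # Lookup depends purely on the workload tag, so migrating or cloning
    # the VM (new host, new subnet) yields the same protections with no
    # human stepping in to reassign a security posture.
    return POLICIES[vm.workload_tag]

vm = VM("web-01", "web-frontend", host="hypervisor-a")
clone = VM("web-02", vm.workload_tag, host="hypervisor-b")  # spawned elsewhere
assert policy_for(vm) == policy_for(clone)
```

The design choice this models is the one Agastya describes: because the policy travels with the workload identity, a newly spawned VM is protected the instant it exists.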

See the IDC White Paper, Hypervisor Introspection: A Transformative Approach to Advanced Attack Detection.

That is the opportunity that virtualization provides. But it’s also a challenge: previous generations of security solutions predated all of this, and we now need to address that.

We love the fact that virtualization is happening and that it has become a very elastic software-defined mechanism that moves around and gives the IT operations people so much more control. It allows an opportunity to be able to sit very well in that environment and provide security that works tightly integrated with the virtualization layer.

Gardner: I hear this so much these days that IT operations people are looking for more automation, and more control.

Kurt, I think it’s important to understand that when we talk about security within a virtualization layer, that doesn’t obviate the value of security that other technologies provide at the OS level or network level. So this isn’t either-or, this is an augmentation, isn’t that correct, when we talk about virtualization and security?

The virtual focus

Roemer: Yes, that’s correct. Virtualization provides some very unique assets that help extend security, but there are some other things that we want to be sure to focus on in terms of virtualization. One of them is Bitdefender Hypervisor Introspection (HVI). It’s the ability for the hypervisor to provide a set of direct inspect application programming interfaces (APIs) that allow for inspection of guest memory outside of the guest.

When you look at Windows or Linux guests that are running on a hypervisor, typically when you have tried to secure those it’s been through technology installed in the guest. So you have the guest that’s self-protecting, and they are relying on OS APIs to be able to effect security. Sometimes that works really well and sometimes the attackers get around OS privileges and are successful, even with security solutions in place.

One of the things that HVI does is look for the techniques that would be associated with an attack against the memory of the guest, from outside the guest. It’s not relying on the OS APIs and can therefore catch attacks that otherwise would have slipped past the OS-based security functionality.
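The mechanism Roemer describes — watching guest memory from outside the guest, without trusting in-guest OS APIs — can be modeled with a toy example. This is a deliberately simplified sketch under stated assumptions: real direct inspect APIs operate on hardware page-table (EPT) permissions and are far more involved, and the class names here are invented for illustration:

```python
# Toy model of out-of-guest memory introspection: the "hypervisor" tracks
# each guest page's permissions and flags a write into an executable page
# (a classic code-injection signature) with no agent running in the guest.

class GuestPage:
    def __init__(self, addr, executable=False):
        self.addr = addr
        self.executable = executable
        self.data = bytearray(16)

class Hypervisor:
    def __init__(self, pages):
        self.pages = {p.addr: p for p in pages}
        self.alerts = []

    def guest_write(self, addr, payload):
        page = self.pages[addr]
        if page.executable:
            # A write into executable memory is suspicious regardless of
            # what the in-guest OS reports: record the alert and block it.
            self.alerts.append(("exec-page-write", addr))
            return False
        page.data[: len(payload)] = payload
        return True

hv = Hypervisor([GuestPage(0x1000, executable=True), GuestPage(0x2000)])
assert hv.guest_write(0x2000, b"data") is True        # normal data write
assert hv.guest_write(0x1000, b"\x90\x90") is False   # blocked injection
assert hv.alerts == [("exec-page-write", 0x1000)]
```

Because the check lives outside the guest, malware that has gained root inside the VM cannot disable it — which is the blind spot Agastya describes later in the discussion.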

Gardner: Harish, maybe you can tell us about how Citrix and Bitdefender are working together?

Step into the breach, together

Agastya: The solution is Bitdefender HVI. It works tightly with Citrix’s XenServer hypervisor, and it has been available in a controlled release for the last several months. We have had some great customer traction on it. At Citrix Synergy this year we will be making that solution generally available.

We have been working together for the last four years to bring this groundbreaking technology to the market.

What is the problem we are trying to solve? It is the issue of advanced attacks that hit the data center when, as Kurt mentioned, advanced attackers are able to skirt past endpoint security defense mechanisms by having root access and operating at the same level of privilege as the endpoint security that may be running within the VM.

They can then essentially create a blind spot where the attackers can do anything they want while the endpoint security solution continues to run.

See the IDC White Paper, Hypervisor Introspection: A Transformative Approach to Advanced Attack Detection.

These types of attacks stay in the environment and the customer suffers on average 200 days before a breach is discovered. The marketplace is filled with stories like this and it’s something that we have been working together with Citrix to address.

The fundamental solution leverages the power of the hypervisor to monitor attacks that modify memory. It does that by looking for the common attack mechanisms that all these attackers use, whether it’s buffer overflows or heap spraying; the list goes on.

They all result in memory modification that the endpoint security solution within the VM is blind to. However, if you are leveraging the direct inspect APIs that Kurt talked about — available as part of Citrix’s XenServer solution — then we have the ability to look into that VM without having a footprint in there. It is a completely agentless solution that runs from the security virtual appliance, outside the guest. It monitors all of the VMs in the data center against these types of attacks. It allows you to take action immediately, reduces the time to detection, and blocks the attack.

Gardner: Kurt, what are some of the major benefits for the end-user organization in deploying something like HVI? What is the payback in business terms?

Performance gains

Roemer: Hypervisor Introspection, which we introduced in XenServer 7.1, allows an organization to deploy virtualization with security technologies behind it at the hypervisor level. What that means for the business is that every guest you bring up has protection associated with it. Even if it’s a new version of Linux that you haven’t previously tested and you really don’t know which antivirus you would have integrated with it; or something that you are working on from an appliance perspective — anything that can run on XenServer would be protected through these direct inspect APIs, and the Bitdefender HVI solution. That’s really exciting.

It also has performance benefits because you don’t have to run antivirus in every guest at the same level. By knowing what’s being protected at the hypervisor level, you can configure for a higher level of performance.

Now, of course, we always recommend having antivirus in guests, as you still have file-based access and so you need to look for malware, and sometimes files get emailed in or out or produced, and so having access to the files from an anti-malware perspective is very valuable.

So for the business, HVI gives you higher security, it gives you better performance, and the assurance that you are covered.

But you may need to cut down some of the scanning functionality and be able to meet much higher performance objectives.

Gardner: Harish, it sounds like this ability to gain introspection into that hypervisor is wonderful for security and does it in such a way that it doesn’t degrade performance. But it seems to me that there are also other ancillary benefits in addition to security, when you have that ability to introspect and act quickly. Is there more than just a security benefit, that the value could go quite a bit further?

The benefits of introspection

Agastya: That’s true. The ability to introspect into memory has huge potential in the market. First of all, with this solution right now, we address the ability to detect advanced attacks, which is a very big problem in the industry — where you have everything from nation-sponsored attacks to malicious attack components on the deep, dark web, available to ordinary citizens who can do bad things with them.

The capability to reduce that window to advanced attack detection is huge. But now with the power of introspection, we also have the ability to inject additional tools into the VM, on the fly, that can do deep forensics and monitor network operations — and the technology can expand to cover more. The future is bright for where we can take this between our companies.

Gardner: Kurt, anything to add on the potential for this memory introspection capability?

Specific, secure browsers

Roemer: There are a couple of things to add. One is taking a look at the technologies and just rolling back through a lot of the exploits that we have seen, even throughout the last three months. There have been exploits against Microsoft Windows, exploits against Internet Explorer and Edge, and against hypervisors; there’s been EternalBlue and the Server Message Block (SMB) exploits. You can go back and try these out against the solution to see exactly how it would catch them, and what would have happened to your system had those exploits actually taken effect.

If you have a team that is doing forensics and trying to go through and determine whether systems had previously been exploited, you are giving that team additional functionality to be able to look back and see exactly how the exploits would have worked. Then they can understand better how things would have happened within their environment. Because you are doing that outside of the guest, you have a lot of visibility and a lot of information you otherwise wouldn’t have had.

One big expanded use-case here is to get the capability for HVI between Citrix and Bitdefender in the hands of your security teams, in the hands of your forensics teams, and in the hands of your auditors — so that they can see exactly what this tool brings to the table.

See the IDC White Paper, Hypervisor Introspection: A Transformative Approach to Advanced Attack Detection.

Something else you want to look at is the use-case that allows users to expand what they are doing and makes their lives easier — and that’s secured browsing.

Today, when people go out and browse the Internet or hit a popular application like Facebook or Outlook Web Access — or if you have an administrator who is hitting an administrative console for your Domain Name System (DNS) environment, your routers, your Cisco, Microsoft environments, et cetera, oftentimes they are doing that via a web browser.

One big expanded use-case here is to get the capability for HVI between Citrix and Bitdefender in the hands of your security teams.

Well, if that’s the same web browser that they use to do everything else on their PC, it’s over-configured, it presents excessive risk, and you now have the opportunity with this solution to publish browsers that are very specific to each use.

For example, you publish one browser specifically for administrative access, and you know that you have advanced malware detection. Even if somebody is trying to target your administrators, you are able to thwart their ability to get in and take over the environments that the administrators are accessing.

As more things move to the browser — and more very sensitive and critical applications move to the cloud — it’s extremely important to set up secured browsing. We strongly recommend doing this with XenServer and HVI along with Bitdefender providing security.

Agastya: The problem in the market with respect to the human who is sitting in front of the browser being the weakest link in the chain is a very important one. Many, many different technology approaches have been taken to address this problem — and most of them have struggled to make it work.

The value of XenApp coming in with its secured browser model is this: You can stream your browser and you are just presenting, rendering an interface on the client device, but the browser is actually running in the backend, in the data center, running on XenServer, protected by Bitdefender HVI. This model not only allows you to shift the threat away from the client device, but also kill it completely, because that exploit which previously would have run on the client device is not on the client device anymore. It’s not even on the server anymore because HVI has gotten to it and stopped it.

Roemer: I bring up the browser benefit as an example because when you think of the lonely browser today, it is the interface to some of your most critical applications. A browser, at the same time, is also connected to your file system, your network, your Windows registry, your certificate chain and keys — it’s basically connected to everything you do and everything you have access to in most OSes.

What we are talking about here is publishing a browser that is very specific to purpose and configured for an individual application. Just put an icon out there; users click on it and everything works for them silently in the background. By being able to redirect hyperlinks over to the new joint XenServer-Bitdefender solution, you are not only protecting the known applications and sites that you would utilize — you can also redirect arbitrary links.

Even if you tell people, “don’t click on any links”, you know every once in a while it’s going to happen. When that one person clicks on the link and takes down the entire network, it’s awful. Ransomware attacks happen like that all the time. With this solution, that arbitrary link would be redirected over to a one-time use browser. Bitdefender would come up and say, “Hey, yup, there’s definitely a problem here, we are going to shut this down,” and the attack never would have had a chance to get anywhere.
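
The redirection Roemer describes can be pictured as a small client-side handler that refuses to open web links locally and instead hands them off to a published, one-time-use browser session. The sketch below is an illustration only: `launch-published-browser` is a placeholder command standing in for the real Citrix launch mechanism, not an actual CLI.

```python
import shlex
import subprocess
from urllib.parse import urlparse

# Placeholder for the command that launches a published, one-time-use
# browser session in the data center (hypothetical, for illustration).
PUBLISHED_BROWSER_CMD = "launch-published-browser"

def redirect_link(url: str, dry_run: bool = False):
    """Send an arbitrary hyperlink to the published browser instead of
    opening it in the locally installed, fully configured browser."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        # Refuse anything that is not an ordinary web link.
        raise ValueError(f"refusing non-web scheme: {parsed.scheme!r}")
    cmd = [PUBLISHED_BROWSER_CMD, url]
    if dry_run:
        # Return the command that would run, for inspection/testing.
        return " ".join(shlex.quote(part) for part in cmd)
    subprocess.run(cmd, check=True)
```

With a handler like this registered as the default link opener, the exploit behind a malicious link detonates (and is stopped) in the backend session rather than on the endpoint.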

The organization is notified and can take additional remediation actions. It's a great opportunity to really change how people are working, and to take this arbitrary-link problem and the ransomware problem and neutralize them.

Gardner: It sounds revolutionary rather than evolutionary when it comes to security. It's quite impressive. I have learned a lot in just the last week or two in looking into this. Harish, you mentioned earlier that before general availability was announced in May for Bitdefender HVI on XenServer, you had this in beta. Do you have any results from that? Can you offer any metrics of what's happened in the real world when people deploy this? Are the results as revolutionary as they sound?

Real-world rollout

Agastya: The product was first in beta and then released in controlled availability mode, so it is actually in production deployment at several companies in both North America and Europe. We have a few financial services companies, and we have some hospitals. We have put the product to use in virtual desktop infrastructure (VDI) deployments where the customers are running XenApp and XenDesktop on top of XenServer with Bitdefender HVI.

We have server workloads running straight on XenServer, too. These are typically application workloads that the financial services companies or the hospitals need to run. We have had some great feedback from them. Some of them have become references as well, and we will be talking more about it at Citrix Synergy 2017, so stay tuned. We are very excited about the fact that the product is able to provide value in the real world.

Roemer: We have a very detailed white paper on how to set up the secured browsing solution, the joint solution between Citrix and Bitdefender. Even if you are running other hypervisors in your environment, I would recommend that you set up this solution and try redirecting some arbitrary hyperlinks over to it, to see what value you are going to get in your organization. It’s really straightforward to set up and provides a considerable amount of additional security visibility.

See the IDC White Paper, Hypervisor Introspection: A Transformative Approach to Advanced Attack Detection.

Bitdefender also has some really amazing videos that show exactly how the solution can block some of the more popular exploits from this year. They are really impressive to watch.

Gardner: Kurt, we are about out of time, but I was curious, what's the low-hanging fruit? Harish mentioned government, VDI, healthcare. Is it the usual suspects with compliance issues hanging over their heads, or are there other organizations that would be ripe to enjoy the benefits?

Roemer: I would say compliance environments and anybody with regulatory requirements would very much be low-hanging fruit for this, but also anybody who has sensitive applications or very sensitive use cases. Oftentimes, we hear outsourcing cited as one of the more sensitive use cases, because you have external third parties who are getting in and either developing code for you, administering part of the operating environment, or something else.

We have also seen a pretty big uptick in terms of people being interested in this for administering the cloud. As you move up to cloud environments and you are defining new operating environments in the cloud while putting new applications up in the cloud, you need to make sure that your administrative model is protected.

Oftentimes, you use a browser directly to provide all of the security interfaces for the cloud, and by publishing that browser and putting this solution in front of it, you can make sure that malware is not interrupting your ability to securely administer the cloud environment.

Gardner: Last question to you, Harish. What should organizations do to get ready for this? I hope we have enticed them to learn more about it. For those organizations that actually might want to deploy, what do they need to think about in order to be in the best position to do that?

A new way of life

Agastya: Organizations need to think about secure virtualization as a way of life within organizational behavior. As a result, I think we will start to see more people with titles like Security DevOps (SecDevOps).

As far as specifically using HVI, organizations should be worried about how advanced attacks could enter their data center and potentially result in a very, very dangerous breach and the loss of confidential intellectual property.

If you are worried about that, you are worried about ransomware, because an end-user sitting in front of a client browser is potentially exposing your environment. You will want to think about a technology like HVI. The first step is to talk to us; there is also a lot of information on the Bitdefender website as well as on Citrix's website.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Bitdefender.


SAP Ariba and MercadoLibre to consumerize business commerce in Latin America

The next BriefingsDirect global digital business panel discussion explores how the expansion of automated tactical buying for business commerce is impacting global markets, and what’s in store next for Latin America.

We’ll specifically examine how “spot buy” approaches enable companies to make time-sensitive and often mission-critical purchases, even in complex and dynamic settings, like Latin America.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn more about the rising tide of such tactical business buying improvements, please join our guests, Karen Bruck, Corporate Sales Director at MercadoLibre.com in Buenos Aires, Argentina; Diego Cabrera Canay, Director of Financial Planning at MercadoLibre, and Tony Alvarez, General Manager of SAP Ariba‘s Spot Buy Business. The panel was recorded at the recent 2017 SAP Ariba LIVE conference in Las Vegas, and is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: SAP Ariba Spot Buy has been in the market a few years. Tell us about where it has rolled out so far, why certain markets are being approached, and then about Latin America specifically.

Alvarez: The concept is a few years old, but we’ve been delivering SAP Ariba Spot Buy for about a year. We began in the US, and over the past 12 months the concept of Spot Buy has progressed because of our customer base. Our customer base has pushed us in a direction that is, quite frankly, even beyond Spot Buy — and it’s getting into trusted, vetted content.

Tony Alvarez

Alvarez

We are approaching the market with a two-pronged strategy of, yes, we have the breadth of content so that when somebody goes into an SAP Ariba application they can find what they are looking for, but we also now have parameters and controls that allow them to vet that content and to put a filter on it.

Over the last 12 months, we’ve come a long way. We are live in the US, and with early access in the UK and Germany. We just went live in Australia, and now we are very much looking forward to going live and moving fast into Latin America with MercadoLibre.

Gardner: Spot buying, or tactical buying, is different from strategic or more organized long-term buying. Tell us about this subset of procurement.

Alvarez: SAP Ariba is a 20-year-old company, and its roots are in that rigorous, sourced approach. We do hundreds of billions of dollars through contract catalogs on the Ariba Network, but there's a segment — and we believe it's upward of 15% of spend — that is spot buy spend. The procurement professional often has no idea what's being bought. And I think there are two approaches to that — either ignorance is bliss and they are glad that it's out of their purview, or it keeps them up at night.

SAP Ariba Spot Buy allows them to have visibility into that spend. By partnering with providers like MercadoLibre, they have content from trusted and vetted sellers to bring to the table – so it’s a really nice match for procurement.

Liberating limits

Gardner: The trick is to allow for flexibility and being dynamic, but also putting in enough rules and policies so that things don’t go off-track.

Alvarez: Exactly. For example, it's like putting a filter on your kids' smartphone. You want them to be free to go and do as they please — but not to go off the guardrails.

Gardner: Karen, tell us about MercadoLibre and why Latin America might be a really interesting market for this type of Spot Buy service.

Bruck: MercadoLibre is a leading e-commerce platform in Latin America, where we provide the largest marketplaces in 16 different countries. Our main markets are Brazil, Mexico, and Argentina, and that's where we are going to start this partnership with SAP Ariba.

Karen Bruck

Bruck

We have upward of 60 million items listed on our platform, and this breadth of supply will make purchasing very exciting. Latin America is a complicated market — and we like this complexity. We do very well here.

It’s complicated because there are different rates of inflation in different countries, and so contracts can be hard to complete. What we bring to the table is an assortment of great payment and shipping solutions that make it easy for companies to purchase items. As Tony was saying, these are not under long-term contracts, but we still get to make use of this vast supply.

Gardner: Tony mentioned that maybe 15% of spend is in this category. Diego, do you think that that number might be higher in some of the markets that you serve?

Cabrera Canay: That’s probably the number — but that is a big number in terms of the spend within companies. So we have to get there and see what happens.

Progressive partnership

Gardner: Tony, tell us about the partnership. What is MercadoLibre.com bringing to the table? What is Ariba bringing to the table? How does this fit together for a whole that is greater than the sum of its parts?

Alvarez: It really is a well-matched partnership. SAP Ariba is the leading cloud procurement platform, period. When you look in Latin America, our penetration with SAP Enterprise Resource Planning (ERP) is even greater. We have a very strong installed base with SAP ERP.

Our plan is to take the SAP Ariba Spot Buy content and make it available to the SAP installed base. So this goes way beyond just SAP Ariba. And when you think about what Karen mentioned — difficulties in Latin America with high inflation — the catalog approach is not used as much in Latin America because everything is so dynamic.

For example, you might sign a contract, but in just a couple of weeks that contract may be obsolete, or unfavorable because of a change in pricing. But once we build controls and parameters in SAP Ariba Spot Buy, you can layer that on top of MercadoLibre content, which is super-broad. If you're looking for it, you're going to find it, and that content is constantly updated. You gain real-time access to the latest information, and then the procurement person gets the benefit of control.

So I’m very optimistic. As Diego mentioned, I think 15% is really on the low-end in Latin America for this type of spend. I think this will be a really nice way to put digital catalog buying in the hands of large enterprise buyers.

Gardner: Speaking of large enterprise buyers, if I'm a purchasing official in one of your new markets, how should I be thinking about the ways this is going to benefit me?

Transparent, trusted transactions

Bruck: Let me talk about this from experience. As a country manager at MercadoLibre, I had to do a lot of the procurement, together with our procurement officers. It was really frustrating at times because all of these purchases had to be one-off engagements, with a different vendor every time. That takes a lot of time. You also have to bring in price comparisons, and that’s not always a simple process.

So what this platform gives you is the ability to be very transparent about prices among different suppliers. That makes it very easy to buy every time, without having to call each vendor and get them onto your own buying platform.

It saves a lot of time, it makes the comparison very transparent, and you are able to control the different options. Overall, it’s a win-win. So I do believe this is a partnership, a match made in heaven.

We were also very interested in business-to-business (B2B) industries. When Tony and SAP Ariba came to our offices to offer this partnership, we thought this would be a great way to leverage their needs with our supply and make it work.

Gardner: For sellers, this enables them to do repeated business more easily, in a more automated way, and at scale. For buyers, the transparency gives them more insight into getting the best prices and the best terms of delivery. Let's expand on that win-win. Diego, tell us about the business benefits for all parties.

Big and small, meet at the mall 

Cabrera Canay: In the past few years, we have been working to make MercadoLibre the biggest “mall” in e-commerce. We have the most important brands and the most important retailers selling through MercadoLibre.

Diego Cabrera Canay

Cabrera Canay

What differentiates us is that we are confident we have the best prices — and also other great services such as free shipping, easy payments, and financing. We are sure that we can offer the buyers better purchasing.

Obviously, from the side of sellers, this all provides higher demand, it raises the bar in terms of having qualified buyers, and then giving the best services. That’s very exciting for us.

Gardner: Tony, we mentioned large enterprises, but this cuts across a great deal more of the economy, such as small- to medium-sized businesses (SMBs). Tell us how this works across diverse economies where there are large players but lots of small ones, too.

Alvarez: On the sales side, this gives really small businesses opportunity to reach large enterprise buyers that probably weren’t there before.

Diego was being modest, but MercadoLibre’s payment structure, MercadoPago, is incredibly robust, and it’s incredibly valuable to that end-seller, and also to the buyer.

Just having that platform and then connecting — you are basically taking two populations, the large and small sellers, and the large and small buyers, and allowing them to commingle more than they ever had in the past.

Gardner: Karen, as you mentioned from your own experience, when you’re dealing with paper, and you are dealing with one-offs, it’s hard to just keep track of the process, never mind to analyze it. But when we go digital, when we have a platform, when we have business networks at work, then we can start to analyze things for companies — and more broadly into markets.

How do you see this partnership accelerating the ability to leverage analytics, leverage some of the back-end platform technologies with SAP HANA and SAP Ariba, and making more strides toward productivity for your customers?

Data discoveries

Bruck: Right. When everything is tracked, as this will be, because every single purchase will be inside their SAP Ariba platform, it is all part of your "big data." So then you can actually tap it, control it, analyze it, and say, "Hey, maybe these particular purchases mean that we should have long-term contracts, or that our long-term contracts were not priced correctly," and maybe that's an opportunity to save money and lower costs.

So once you can track data, you can do a lot of things, and discover new opportunities for either being more efficient or reducing costs – and that’s ultimately what we all want in all the departments of our companies.

Gardner: And for those listeners and readers who are interested in taking advantage of these services, and ultimately that great ability to analyze, what should they be doing now to get ready? Are there some things they could do culturally, organizationally, in order to become that more digital business when these services are available to them?

Cabrera Canay: I can talk about our own case, where we are rebuilding our purchase processes. Paper is terrible for companies; you have to rethink your purchase processes in a digital way. Once you do, SAP Ariba is a great solution, and with SAP Ariba Spot Buy we will have the best conditions for buyers.

Bruck: It’s a natural process. People are going digital and embracing these new trends and technologies. It will make them more efficient. If they get up to speed quickly, it will become less about controlling stuff that they don’t need to control. They will really understand the benefits, so it will be a natural adoption.

Gardner: Tony, coming back full circle, as you have rolled SAP Ariba Spot Buy out from North America to Europe to Asia-Pacific, and now to Latin America — what have you learned in the way people use it?

Alvarez: First, at a macro level, people have found this to be a useful tool to replace some of the contracts that were less important, and so they can rely on marketplaces.

Second, we’ve really found as we’ve deployed in the US that a lot of times multinational companies are like, “Hey, that’s great, I love this, but I really want to use this in Latin America.” So they want to go and get visibility elsewhere.

Turn-key technique

Third, they want a tool that doesn’t require any training. If I’m a procurement professional, I want my users to already be expert at using the tool. We’ve designed this in the process context, and in concert with the content partners. You can just walk up and start using it. You don’t have to be an expert, and it keeps you within the guardrails without even thinking about it.

Gardner: And being a cloud-based, software-as-a-service (SaaS) solution you’re always analyzing how it’s being used — going after that ultimate optimized user experience — and then building those improvements back in on a constant basis?

Alvarez: Exactly. Always.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: SAP Ariba.


Awesome Procurement —Survey shows how business networks fuel innovation and business transformation

The next BriefingsDirect digital business insights interview explores the successful habits, practices, and culture that define highly effective procurement organizations.

We’ll uncover unique new research that identifies and measures how innovative companies have optimized their practices to overcome the many challenges facing business-to-business (B2B) commerce.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn more about the traits and best practices of the most successful procurement organizations, please join Kay Ree Lee, Director of Business Analytics and Insights at SAP Ariba. The interview was recorded at the recent 2017 SAP Ariba LIVE conference in Las Vegas, and is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Procurement is more complex than ever, supply chains stretch around the globe, regulation is on the rise, and risk is heightened across many fronts. Despite these, innovative companies have figured out how to overcome their challenges, and you have uncovered some of their secrets through your Annual Benchmarking Survey. Tell us about your research and your findings.

Lee: Every year we conduct a large benchmark program benefiting our customers that combines a traditional survey with data from the procurement applications, as well as from the business network.

Kay Ree Lee

Lee

This past year, more than 200 customers participated, covering more than $400 billion in spend. We analyzed the quantitative and qualitative responses of the survey and identified the intersection between those responses for top performers compared to average performers. This has allowed us to draw correlations between what top performers did well and the practices that drove those achievements.

Gardner: What’s changed from the past? What are you seeing as long-term trends?

Lee: There are three things that are quite different from when we last talked about this a year ago.

The number one trend that we see is that digital procurement is gaining momentum quickly. A lot of organizations are now offering self-service tools to their internal stakeholders. These self-service tools enable the user to evaluate and compare item specifications and purchase items in an electronic marketplace, which allows them to operate 24×7, around-the-clock. They are also utilizing digital networks to reach and collaborate with others on a larger scale.

The second trend that we see is that while risk management is generally acknowledged as important and critical, for the average company, a large proportion of their spend is not managed. Our benchmark data indicates that an average company manages 68% of their spend. This leaves 32% of spend that is unmanaged. If this spend is not managed, the average company is also probably not managing their risk. So, what happens when something unexpected occurs to that non-managed spend?

The third trend that we see is related to compliance management. We see compliance management as a way for organizations to deliver savings to the bottom line. Capturing savings through sourcing and negotiation is a good start, but at the end of the day, eliminating loopholes through a focus on implementation and compliance management is how organizations deliver and realize negotiated savings.

Gardner: You have uncovered some essential secrets — or the secret sauce — behind procurement success in a digital economy. Please describe those.

Five elements driving procurement processes

Lee: From the data, we identified five key takeaways. First, we see that procurement organizations continue to expand their sphere of influence to greater depth and quality within their organizations. This is important because it shows that the procurement organization and the work that procurement professionals are involved in matters and is appreciated within the organization.

The second takeaway is that – while cost reduction savings is near and dear to the heart of most procurement professionals — leading organizations are focused on capturing value beyond basic cost reduction. They are focused on capturing value in other areas and tracking that value better.

The third takeaway is that digital procurement is firing on all cylinders and is front and center in people’s minds. This was reflected in the transactional data that we extracted.

The fourth takeaway is related to risk management. We see risk management becoming a key focus area in its own right, not just the tracking of news related to your suppliers.

The fifth takeaway is that compliance management, closing the purchasing loopholes, is what will help procurement deliver bottom-line savings.

Gardner: Next, what are some of the best practices that are driving procurement organizations to have a strategic impact at their companies, culturally?

Lee: To have a strategic impact in the business, procurement needs to be proactive in engaging the business. They should have a mentality of helping the business solve business problems as opposed to asking stakeholders to follow a prescribed procurement process. Playing a strategic role is a key practice that drives impact.

They should also focus on broadening the value proposition of procurement. We see leading organizations placing emphasis on contributing to revenue growth, or increasing their involvement in product development, or co-innovation that contributes to a more efficient and effective process.

Another practice that drives strategic impact is the ability to utilize and adopt technology to your advantage through the use of digital networks, system controls to direct compliance, automation through workflow, et cetera.

These are examples of practices and focus areas that are becoming more important to organizations.

Using technology to track technology usage

Gardner: In many cases, we see the use of technology having a virtuous adoption cycle in procurement. So the more technology used, the better they become at it, and the more technology can be exploited, and so on. Where are we seeing that? How are leading organizations becoming highly technical to gain an advantage?

Lee: Companies that adopt new technology capabilities are able to elevate their performance and differentiate themselves through their capabilities. This is also just a start. Procurement organizations are pivoting towards advanced and futuristic concepts, and leaving behind the single-minded focus on cost reduction and cost efficiency.

Digital procurement utilizing electronic marketplaces, virtual catalogs, gaining visibility into the lifecycle of purchase transactions, predictive risk management, and utilizing large volumes of data to improve decision-making – these are key capabilities that benefit the bold and the future-minded. This enables the transformation of procurement, and forms new roles and requirements for the future procurement organization.

Gardner: We are also seeing more analytics become available as we have more data-driven and digital processes. Is there any indication from your research that procurement people are adopting data-scientist-ways of thinking? How are they using analysis more now that the data and analysis are available through the technology?

Lee: You are right. The users of procurement data want insights. We are working with a couple of organizations on co-innovation projects. These organizations actively research, analyze, and use their data to answer questions such as:

  • How does an organization validate that the prices they are paying are competitive in the marketplace?
  • After an organization conducts a sourcing event and implements the categories, how do they actually validate that the price paid is what was negotiated?
  • How do we categorize spend accurately, particularly if a majority of spend is services spend where the descriptions are non-standard?
  • Are we using the right contracts with the right pricing?

As you can imagine, when people enter transactions in a system, not all of it is contract-based or catalog-based. There is still a lot of free-form text. But if you extract all of that data, cleanse it, mine it, and make sense out of it, you can then make informed business decisions and create valuable insights. This goes back to the managing compliance practice we talked about earlier.
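
As a toy illustration of the cleansing and categorization Lee describes, a first pass over free-form line-item text can be a simple keyword match. The categories and keywords below are invented for the sketch; production systems use full taxonomies (such as UNSPSC) and typically supervised learning.

```python
import re

# Illustrative-only category keyword map (not a real taxonomy).
CATEGORY_KEYWORDS = {
    "IT Services": ["software", "license", "cloud", "hosting"],
    "Facilities": ["cleaning", "maintenance", "repair", "hvac"],
    "Professional Services": ["consulting", "legal", "audit"],
}

def categorize(description: str) -> str:
    """Assign a spend category to a free-form line-item description
    by keyword match; fall back to 'Uncategorized'."""
    tokens = set(re.findall(r"[a-z]+", description.lower()))
    for category, keywords in CATEGORY_KEYWORDS.items():
        if tokens & set(keywords):
            return category
    return "Uncategorized"
```

For example, `categorize("Annual software license renewal")` lands in "IT Services", while text matching no keywords falls through to "Uncategorized", flagging it for manual review or a smarter model.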

They are also looking to answer questions like, how do we scale supplier risk management to manage all of our suppliers systematically, as opposed to just managing the top-tier suppliers?

These two organizations are taking data analysis further in terms of creating advantages that begin to imbue excellence into modern procurement and across all of their operations.

Gardner: Kay Ree, now that you have been tracking this Benchmark Survey for a few years, and looking at this year’s results, what would you recommend that people do based on your findings?

Future focus: Cost-reduction savings and beyond

Lee: There are several recommendations that we have. One is that procurement should continue to expand their span of influence across the organization. There are different ways to do this but it starts with an understanding of the stakeholder requirements.

The second is about capturing value beyond cost-reduction savings. From a savings perspective, the recommendation we have is to continue to track sourcing savings — because cost-reduction savings are important. But there are other measures of value to track beyond cost savings. That includes things like contribution to revenue, involvement in product development, et cetera.

The third recommendation relates to adopting digital procurement by embracing technology. For example, SAP Ariba has recently introduced some innovations. I think the user really has an advantage in terms of going out there, evaluating what is out there, trying it out, and then seeing what works for them and their organization.

As organizations expand their footprint globally, the fourth recommendation focuses on transaction efficiency. The way procurement can support organizations operating globally is by offering self-service technology so that they can do more with less. With self-service technology, no one in procurement needs to be there to help a user buy. The user goes on the procurement system and creates transactions while their counterparts in other parts of the world may be offline.

The fifth recommendation is related to risk management. When a lot of organizations say "risk management," they are really only tracking news related to their suppliers. But risk management includes things like predictive analytics, predictive risk measures beyond your strategic suppliers, looking deeper into supply chains, and across all your vendors. If you can measure risk for your suppliers, why not make it systematic? We now have the ability to manage a larger volume of suppliers, in fact to manage all of them. The ones that bubble to the top, the ones that are the most risky, are the ones that you create contingency plans for. That helps organizations really prepare to respond to disruptions in their business.
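
The "score every supplier and let the riskiest bubble to the top" idea can be sketched in a few lines. The factor names and weights here are illustrative assumptions, not SAP Ariba's actual model.

```python
# Weighted risk factors, each normalized to a 0-1 scale (assumed).
WEIGHTS = {"financial": 0.4, "geographic": 0.3, "single_source": 0.3}

def risk_score(supplier: dict) -> float:
    """Weighted sum of a supplier's normalized risk factors."""
    return sum(WEIGHTS[f] * supplier.get(f, 0.0) for f in WEIGHTS)

def riskiest(suppliers: list, top_n: int = 2) -> list:
    """Rank the full supplier base, not just the strategic tier, and
    surface the names that bubble to the top for contingency planning."""
    ranked = sorted(suppliers, key=risk_score, reverse=True)
    return [s["name"] for s in ranked[:top_n]]
```

Because the scoring is systematic, it scales to the whole vendor base; only the top-ranked suppliers then need hands-on contingency plans.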

The last recommendation is around compliance management, which includes internal and external compliance. So, internal adherence to procurement policies and procedures, and then also external following of governmental regulations. This helps the organization close all the loopholes and ensure that sourcing savings get to the bottom line.

Be a leader, not a laggard

Gardner: When we examine and benchmark companies through this data, we identify leaders, and perhaps laggards — and there is a delta between them. In trying to encourage laggards to transform — to be more digital, to take upon themselves these recommendations that you have — how can we entice them? What do you get when you are a leader? What defines the business value that you can deliver when you are taking advantage of these technologies, following these best practices?

Lee: Leading organizations see higher cost reduction savings, process efficiency savings and better collaboration internally and externally. These benefits should speak for themselves and entice both the average and the laggards to strive for improvements and transformation.

From a numbers perspective, top performers achieve 9.7% savings as a percent of sourced spend. This translates to approximately $20M higher savings per $1B in spend compared to the average organization.

We talked about compliance management earlier. A 5% increase in compliance increases realized savings by $4.4M per $1B in spend. These are real hard-dollar savings that top performers are able to achieve.
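To make those benchmark figures concrete, here is a quick back-of-the-envelope check in Python; the rates and the $1B spend base come from the numbers above, while the helper names are ours:

```python
def savings(spend, rate):
    """Realized sourcing savings for a given spend base and savings rate."""
    return spend * rate

BILLION = 1_000_000_000

top = savings(BILLION, 0.097)   # top performers: 9.7% of sourced spend
avg = top - 20_000_000          # benchmark delta: roughly $20M less per $1B
print(f"Top performer savings per $1B: ${top:,.0f}")
print(f"Implied average-performer rate: {avg / BILLION:.1%}")

# A 5-point compliance gain is worth $4.4M per $1B in spend:
per_point = 4_400_000 / 5
print(f"Each point of compliance is worth about ${per_point:,.0f} per $1B")
```

So the 9.7% figure implies an average-performer rate of roughly 7.7%, and each single point of compliance improvement is worth on the order of $880K per $1B of spend.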

In addition, top performers are able to attract a talent pool that will help the procurement organization perform even better. If you look at some of the procurement research, industry analysts and leaders are predicting that there may be a talent shortage in procurement. But, as a top performer, if you go out and recruit, it is easier to entice talent to the organization. People want to do cool things and they want to use new technology in their roles.

Gardner: Wrapping up, we are seeing some new and compelling technologies here at Ariba LIVE 2017 — more use of artificial intelligence (AI), and increased use of predictive tools brought into context so that they can be of value to procurement across the life-cycle of a process.

As we think about the future, and more of these technologies become available, what is it that companies should be doing now to put themselves in the best position to take advantage of all of that?

Curious org

Lee: It’s important to be curious about the technology available in the market and perhaps structure the organization in such a way that there is a team of people on the procurement team who are continuously evaluating the different procurement technologies from different vendors out there. Then they can make decisions on what best fits their organization.

It helps to have people who can look ahead, gather requirements from stakeholders, understand the architecture, and evaluate what's out there and what would make sense for them in the future. This is a complex role. He or she has to understand the current architecture of the business, the requirements from the stakeholders, and then evaluate what technology is available. They must then determine whether it will assist the organization in the future, and whether adopting these solutions provides a return on investment and ongoing payback.

So I think being curious, understanding the business really well, and then wearing a technology hat to understand what’s out there are key. You can then be helpful to the organization and envision how adopting these newer technologies will play out.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: SAP Ariba.


Experts define new ways to manage supply chain risk in a digital economy

The next BriefingsDirect digital business thought leadership panel discussion explores new ways that companies can gain improved visibility, analytics, and predictive responses to better manage supply chain risk in the digital economy.

The panel examines how companies such as Nielsen are using cognitive computing search engines, and even machine learning and artificial intelligence (AI), to reduce risk in their overall buying and acquisitions.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn more about the exploding sophistication around gaining insights into advanced business commerce, we welcome James Edward Johnson, Director of Supply Chain Risk Management and Analysis at Nielsen; Dan Adamson, Founder and CEO of OutsideIQ in Toronto, and Padmini Ranganathan, Vice President of Products and Innovation at SAP Ariba.

The panel was assembled and recorded at the recent 2017 SAP Ariba LIVE conference in Las Vegas. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Padmini, we heard at SAP Ariba LIVE that risk is opportunity. That stuck with me. Are the technologies really now sufficient that we can fully examine risks to such a degree that we can turn that into a significant business competitive advantage? That is to say, those who take on risk seriously, can they really have a big jump over their competitors?

Padmini Ranganathan

Ranganathan

Ranganathan: I come from Silicon Valley, so we have to take risks for startups to grow into big businesses, and we have seen a lot of successful entrepreneurs do that. Clearly, taking risks drives bigger opportunity.

But in this world of supplier and supply chain risk management, it’s even more important and imperative that the buyer and supplier relationships are risk-aware and risk-free. The more transparent that relationship becomes, the more opportunity for driving more business between those relationships.

That context of growing business — as well as growing the trust and the transparent relationships — in a supply chain is better managed by understanding the supplier base. Understanding the risks in the supplier base, and then converting them into opportunities, allows mitigating and solving problems jointly. By collaborating together, they form partnerships.

Gardner: Dan, it seems that what was once acceptable risk can now be significantly reduced. How do people in procurement and supply chain management know what acceptable risk is — or maybe they shouldn’t accept any risk?

Adamson: My roots are also in Silicon Valley, and I think you are absolutely right that at times you should be taking risks — but not unnecessarily. What the procurement side has struggled with — and I say this having moved from financial institutions, where they treat risk very differently, into procurement — is weighing risk against the price point of avoiding that risk. That's traditionally been the big problem.

Dan Adamson

Adamson

If for every vendor that you on-board you have to pay $1,000 for a due-diligence report, that's really not cost-effective. But if you can maintain and monitor that vendor on a regular basis at an acceptable cost — then there's a real risk-versus-reward benefit in there.

What we are helping to drive are a new set of technology solutions that enable a deeper level of due diligence through technology, through cognitive computing, that wasn’t previously possible at the price point that makes it cost-effective. Now it is possible to clamp down and avoid risk where necessary.

Gardner: James, as a consumer of some of these technologies, do you really feel that there has been a significant change in that value equation, that for less money output you are getting a lot less risk?

Knowing what you’re up against

Johnson: To some degree that value was always there; it was just difficult to help people see that value. Obviously tools like this will help us see that value more readily.

It used to be that in order to show the value, you actually had to do a lot of work, and it was challenging. What we are talking about here is that we can begin to boil the ocean. You can test these products, and you can do a lot of work just looking at test results.

And, it’s a lot easier to see the value because you will unearth things that you couldn’t have seen in the past.

James Edward Johnson

Johnson

Whereas it used to take a full-blown implementation to begin to grasp those risks, you can now just test your data and see what you find. Most people, once they have their eyes wide open, will be at least a little more fearful.  But, at the same time — and this goes back to the opportunity question you asked — they will see the opportunity to actually tackle these risks. It’s not like those risks didn’t exist in the past, but now they know they are there — and they can decide to do something about it, or not.

Gardner: So rather than avoid the entire process, now you can go at the process but with more granular tools to assess your risks and then manage them properly?

Johnson: That’s right. I wouldn’t say that we should have a risk-free environment; that would cost more money than we’re willing to pay. That said, we should be more conscious of what we’re not yet willing to pay for.

Rather than just leaving the risk out there and avoiding business where you can’t access information about what you don’t know — now you’ll know something. It’s your choice to decide whether or not you want to go down the route of eliminating that risk, of living with that risk, or maybe something in between. That’s where the sweet spot is. There are probably a lot of intermediate actions that people would be taking now that are very cheap, but they haven’t even thought to do so, because they haven’t assessed where the risk is.

Gardner: Padmini, because we’re looking at a complex landscape — a supply chain, a global supply chain, with many tiers — when we have a risk solution, it seems that it’s a team sport. It requires an ecosystem approach. What has SAP Ariba done, and what is the news at SAP Ariba LIVE? Why is it important to be a team player when it comes to a fuller risk reduction opportunity?

Teamwork

Ranganathan: You said it right. The risk domain world is large, and it is specialized. The language that the compliance people use in the risk world is somewhat similar to the language that the lawyers use, but very different from the language that the information technology (IT) security and information security risk teams use.

The reason you can’t see many of the risks is partly because the data, the information, and the fragmentation have been too broad, too wide. It’s also because the type of risks, and the people who deal with these risks, are also scattered across the organization.

So a platform that supports bringing all of this together is number one. Second, the platform must support the end-to-end process of managing those supply chain relationships, and managing the full supply chain and gain the transparency across it. That’s where SAP Ariba has headed with Direct Materials Sourcing and with getting more into supply chain collaboration. That’s what you heard at SAP Ariba LIVE.

We all understand that supply chain much better when we are in SAP Ariba, and then you have this ecosystem of partners and providers. You have the technology with SAP and HANA to gain the ability to mash up big data, set it in context, and understand the patterns. We also have the open ecosystem and the open-source platform to allow us to take that even wider. And last but not least, there is the business network.

So it’s not just between one company and another company, it’s a network of companies operating together. The momentum of that collaboration allows users to say, “Okay, I am going to push for finding ethical companies to do business with,” — and then that’s really where the power of the network multiplies.

Gardner: Dan, when a company nowadays buys something in a global supply chain, they are not just buying a product — they are buying everything that’s gone on with that product, such as the legacy of that product, from cradle to PO. What is it that OutsideIQ brings to the table that helps them get a better handle on what that legacy really is?

Dig deep, reduce risk, save time

Adamson: Yes, and they are not just buying from that seller, they are buying from the seller that sold it to that seller, and so they are buying a lot of history there — and there is a lot of potential risk behind the scenes.

That’s why this previously has been a manual process, because there has been a lot of contextual work in pulling out those needles from the haystack. It required a human level of digging into context to get to those needles.

The exciting thing that we bring is a cognitive computing platform that’s trainable — and it’s been trained by FinCrime’s experts and corporate compliance experts. Increasingly, supply management experts help us know what to look for. The platform has the capability to learn about its subject, so it can go deeper. It can actually pivot on where it’s searching. If it finds a presence in Afghanistan, for example, well then that’s a potential risk in itself, but it can then go dig deeper on that.

And that level of deeper digging is something that a human really had to do before. This is the exciting revolution that’s occurring. Now we can bring back that data, it can be unstructured, it can be structured, yet we can piece it together and provide some structure that is then returned to SAP Ariba.

The great thing about the supply management risk platform or toolkit that’s being launched at SAP Ariba LIVE is that there’s another level of context on top of that. Ariba understands the relationship between the supplier and the buyer, and that’s an important context to apply as well.

How you determine risk scores on top of all of that is very critical. You need to weed out all of the noise, otherwise it would be a huge data science exercise and everyone would be spinning his or her wheels.

This is now a huge opportunity for clients like James to truly get some low-hanging fruit value, where previously it would have been literally a witch-hunt or a huge mining expedition. We are now able to achieve this higher level of value.

Gardner: James, Dan just described what others are calling investigative cognitive computing brought to bear on this supply chain risk problem. As someone who is in the business of trying to get the best tools for their organization, where do you come down on this? How important is this to you?

Johnson: It’s very important. I have done the kinds of investigations that he is talking about. For example, if I am looking at a vendor in a high-risk country — particularly a small vendor that doesn’t have an international presence — that is problematic for most supplier investigations. What do I do? I will go and do some of the investigation that Dan is talking about.

Now I’m usually sitting at my desk in Chicago. I’m not going out in the world. So there is a heightened level of due-diligence that I suspect neither of us are really talking about here. With that limitation, you want to look up not only the people, you want to look up all their connections. You might have had a due-diligence form completed, but that’s an interested party giving you information, what do you do with it?

Well, I can run the risk search on more than just the entity that I’m transacting with.  I am going to run it on everyone that Dan mentioned. Then I am going to look up all their LinkedIn profiles, see who they are connected to. Do any of those people show any red flags? I’d look at the bank that they use. Are there any red flags with their bank?

I can do all that work, and I can spend several hours doing all that work. As a lawyer I might dig a little deeper than someone else, but in the end, it’s human labor going into the effort.

Gardner: And that really doesn’t scale very well.

Johnson: That does not scale at all. I am not going to hire a team of lawyers for every supplier. The reality here is that now I can do some level of that time-consuming work with every supplier by using the kind of technology that Dan is talking about.

The promise of OutsideIQ technology is incredible. It is an early and quickly expanding opportunity. It’s because of relationships like the one between SAP Ariba and OutsideIQ that I see a huge opportunity between Nielsen and SAP Ariba. We are both on the same roadmap.

Nielsen has a lot of work to do, SAP Ariba has a lot of work to do, and that work will never end, and that’s okay. We just need to be comfortable with it, and work together to build a better world.

Gardner: Tell us about Nielsen. Then secondarily, what part of your procurement, your supply chain, do you think this will impact best first?

Automatic, systematic risk management

Johnson: Nielsen is a market research company. We answer two questions: what do people watch? And what do people buy? It sounds very simple, but when you cover 90% of the world’s population, which we do – more than six billion people — you can imagine that it gets a little bit more complicated.

We house about 54 petabytes of database data. So the scale there is huge. We have 43,000 employees. It’s not a small company. You might know Nielsen for the set-top boxes in the US that tell what the ratings were overnight for the Super Bowl, for example, but it’s a lot more than that. And you can imagine, especially when you’re trying to answer what people buy in developing countries with emerging economies, that you are touching some riskier things.

In terms of what this SAP Ariba collaboration can solve for us, the first quick hit is that we will no longer have to leverage multiple separate sources of information. I can now leverage all the sources of information at one time through one interface. It is already being used to deliver information to people who are involved in the procurement chain. That’s the huge quick win.

The secondary win is from the efficiency that we get in doing that first layer of risk management. Now we can start to address that middle tier that I mentioned. We can respond to certain kinds of risk that, today, we are doing ad-hoc, but not systematically. There is that systematic change that will allow us to not only target the 100 to 200 vendors that we might prioritize — but the thousands of vendors that are somewhere in our system, too.

That’s going to revolutionize things, especially once you fold in the environmental, social and governance (ESG) work that, today, is very focused for us. If I can spread that out to the whole supply chain, that’s revolutionary. There are a lot of low-cost things that you can do if you just have the information.

So it’s not always a question of, “am I going to do good in the world and how much is it going to cost me?” It’s really a question of, “What is the good in the world that’s freely available to me, that I’m not even touching?” That’s amazing! And, that’s the kind of thing that you can go to work for, and be happy about your work, and not just do what you need to do to get a paycheck.

Gardner: It’s not just avoiding the bad things; it’s the false positives that you want to remove so that you can get the full benefit of a diverse, rich supplier network to choose from.

Johnson: Right, and today we are essentially wasting a lot of time on suspected positives that turn out to be false. We waste time on them because we go deeper with a human than we need to. Let’s let the machines go as deep as they can, and then let the humans come in to take over where we make a difference.

Gardner: Padmini, it’s interesting to me that he is now talking about making this methodological approach standardized, part of due-diligence that’s not ad-hoc, it’s not exception management. As companies make this a standard part of their supply chain evaluations, how can we make this even richer and easier to use?

Ranganathan: The first step was the data. It’s the plumbing; we have to get that right. It’s about the way you look at your master data, which is suppliers; the way you look at what you are buying, which is categories of spend; and where you are buying from, which is all the regions. So you already have the metrics segmentation of that master data, and everything else that you can do with SAP Ariba.

The next step is then the process, because it’s really not a one-size-fits-all. It cannot be a one-size-fits-all, where every supplier that you on-board you are going to ask them the same set of questions, check the box and move on.

I am going to use the print service vendor example again, which is my favorite. For marketing-materials printing, you have a certain level of risk, and that’s all you need to look at. But you still want, of course, to look at them for any adverse media incidents, or whether they suddenly got on a watch-list for something — you do want to know that.

But when one of your business units begins to use them for customer-confidential data and statement printing — the level of risk shoots up. So the intensity of risk assessments and the risk audits and things that you would do with that vendor for that level of risk then has to be engineered and geared to that type of risk.

So it cannot be one-size-fits-all; it has to go beyond the standard. The standardization is not in the process; the standardization is in the way you look at risk, so that you can determine how much of the process you need to apply and stay in tune.

Gardner: Dan, clearly SAP Ariba and Nielsen, they want the “dials,” they want to be able to tune this in. What’s coming next, what should we expect in terms of what you can bring to the table, and other partners like yourselves, in bringing the rich, customizable inference and understanding benefits that these other organizations want?

Constructing cognitive computing by layer

Adamson: We are definitely in early days on the one hand. But on the other hand, we have seen historically many AI failures, where we fail to commercialize AI technologies. This time it’s a little different, because of the big data movement, because of the well-known use cases in machine learning that have been very successful, the pattern matching and recommending and classifying. We are using that as a backbone to build layers of cognitive computing on top of that.

And I think as Padmini said, we are providing a first layer, where it’s getting stronger and stronger. We can weed out up to 95% of the false-positives to start from, and really let the humans look at the thorny or potentially thorny issues that are left over. That’s a huge return on investment (ROI) and a timesaver by itself.
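That "weed out the false positives, let humans look at what's left" idea can be sketched as a simple two-stage triage. The match scores and the clearance threshold below are purely illustrative assumptions, not anything from a specific screening product:

```python
# Illustrative two-stage triage: the machine clears the obvious noise,
# and humans review only what remains. Scores and threshold are assumed.
alerts = [
    {"id": 1, "match_score": 0.05},   # likely a simple name collision
    {"id": 2, "match_score": 0.10},
    {"id": 3, "match_score": 0.92},   # strong hit: escalate to a human
    {"id": 4, "match_score": 0.20},
]

AUTO_CLEAR = 0.30   # below this, the machine marks the alert a false positive

human_queue = [a for a in alerts if a["match_score"] >= AUTO_CLEAR]
cleared = len(alerts) - len(human_queue)
print(f"Auto-cleared {cleared}/{len(alerts)}; {len(human_queue)} left for review")
```

The ROI Adamson describes comes from that ratio: when most alerts are cleared automatically, the expensive human attention is spent only on the genuinely thorny cases.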

But on top of that, you can add in another layer of cognitive computing, and that might be at the workflow layer that recognizes that data and says, “Jeez, just a second here, there’s a confidentiality potential issue here, let’s treat this vendor differently and let’s go as far as plugging in a special clause into the contract.” This is, I think, where SAP Ariba is going with that. It’s building a layer of cognitive computing on top of another layer of cognitive computing.

Actually, human processes work like that, too. There is a lot of fundamental pattern recognition at the basis of our cognitive thought, and on top of that we layer on top logic. So it’s a fun time to be in this field, executing one layer at a time, and it’s an exciting approach.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: SAP Ariba.


How SAP Ariba became a first-mover as Blockchain comes to B2B procurement

The next BriefingsDirect digital business thought leadership panel discussion examines the major opportunity from bringing Blockchain technology to business-to-business (B2B) procurement and supply chain management.

We will now explore how Blockchain’s unique capabilities can provide comprehensive visibility across global supply chains and drive simpler verification of authenticity, security, and ultimately control.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn more about how Blockchain is poised to impact and improve supply chain risk and management, we’re joined by Joe Fox, Senior Vice President for Business Development and Strategy at SAP Ariba, and Leanne Kemp, Founder and CEO of Everledger, based in London.

The panel was assembled and recorded at the recent 2017 SAP Ariba LIVE conference in Las Vegas. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Joe, Blockchain emerged as the network methodology underpinning the cryptocurrency Bitcoin, which is how most people know it. It’s a digitally shared record of transactions maintained by a network of computers, not necessarily with a centralized authority. What could this be used for powerfully when it comes to gaining supply chain integrity?

Fox: Blockchain did start in the Bitcoin area, as peer-to-peer consumer functionality. But a lot of the capabilities of Blockchain have been recognized as important for new areas of innovation in the enterprise software space.

Joe Fox

Fox

Those areas of innovation are around “trusted commerce.” Trusted commerce allows buyers and sellers, and third parties, to gain more visibility into asset-tracking. Not just asset tracking in the context of the buyer receiving and the seller shipping — but in the context of where is the good in transit? What do I need to do to protect that good? What is the transfer of funds associated with that important asset? There are even areas of other applications, such as an insurance aspect or some kind of ownership-proof.

Gardner: It sounds to me like we are adding lot of metadata to a business process. What’s different when you apply that through Blockchain than if you were doing it through a platform?

Inherit the trust

Fox: That’s a great question. Blockchain is like the cloud from the perspective of it’s an innovation at the platform layer. But the chain is only as valuable as the external trust that it inherits. That external trust that it inherits is the proof of what you have put on the chain digitally. And that includes that proof of who has taken it off and in what way they have control.

As we associate a chain transaction, or a posting to the ledger with its original transactions within the SAP Ariba Network, we are actually adding a lot of prominence to that single Blockchain record. That’s the real key, marrying the transactional world and the B2B world with this new trusted commerce capability that comes with Blockchain.

Gardner: Leanne, we have you here as a prime example of where Blockchain is being used outside of its original adoption. Tell us first about Everledger, and then what it was you saw in Blockchain that made you think it was applicable to a much wider business capability.

Kemp: Everledger is a fast-moving startup using the best of emerging technology to assist in the reduction of risk and fraud. We began in April of 2015, so it’s actually our birthday this week. We started in the world of diamonds where we apply blockchain technology to bring transparency to a once opaque market.

Leanne Kemp

Kemp

And what did I see in the technology? At the very core of cryptocurrency, they were solving the problem of double-spend. They were solving the problem of transfer of value, and we could translate those two very powerful concepts into the diamond industry.

At the heart of the diamond industry, beyond the physical object itself, is certification, and certificates in the diamond industry are the currency of trade. Diamonds are listed on websites around the world, and they are mostly sold on the merit of the certification. We were able to see the potential of the cryptocurrency, but we could decouple the currency from the ledger, and we were able to then use the synthesis of the currency as a way to transfer value, or transfer ownership or custody. And, of course, diamonds are a girl’s best friend, so we might as well start there.

Dealing with diamonds

Gardner: What was the problem in the diamond industry that you were solving? What was not possible that now is?

Kemp: The diamond industry boasts some pretty impressive numbers. First, it’s been around for 130 years. Most of the relationships among buyers and sellers have survived generation upon generation based on a gentleman’s handshake and trust.

The industry itself has been bound tightly with those relationships. As time has passed and generations have passed, what we are starting to see is a glacial melt. Some of the major players have sold off entities into other regions, and now that gentleman’s handshake needs to be transposed into an electronic form.

Some of the major players in the market, of course, still reside today. But most of the data under their control sits in a siloed environment. Even the machines that are on the pipeline that help provide identity to the physical object are also black-boxed in terms of data.

We are able to bring a business network to an existing market. It’s global. Some 81 countries around the world trade in rough diamonds. And, of course, the value of the diamonds increases as they pass through their evolutionary chain. We are able to bring an aggregated set of data. Not only that, we transpose the human element of trust — the gentleman’s handshake, the chit of paper and the promise to pay that has largely existed and has built 130 years of trade.

We are now able to transpose that into a set of electronic-form technologies — Blockchain, smart contracts, cryptography, machine vision — and we are able to take forward a technology platform that will see transactional trust being embedded well beyond my lifetime — for generations to come.

Gardner: Joe, we have just heard how this is a problem-solution value in the diamond industry. But SAP Ariba has its eyes on many industries. What is it about the way things are done now in general business that isn’t good enough but that Blockchain can help improve?

Fox: As we have spent years at Ariba solving procurement problems, we identified some of the toughest. When I saw Everledger, it occurred to me that they may have cracked the nut on one of the toughest areas of B2B trade — and that is true understanding, visibility, and control of asset movement.

It dawned on me, too, that if you can track and trace diamonds, you can track and trace anything. I really felt like we could team up with this young company and leverage the unique way they figured out how to track and trace diamonds and apply that across a huge procurement problem. And that is, how do a supplier and a buyer manage the movement of any asset after they have purchased it? How do we actually associate that movement of the asset back to its original transactions that approved the commit-to-pay? How do you associate a digital purchase order (PO) with a digital movement of the asset, and then to the actual physical asset? That’s what we really are teaming up to do.

That receipt of the asset has been a dark space in the B2B world for a long time. Sure, you can get a shipping notice, but most businesses don’t do goods receipts. And as the asset flows through the supply chain — especially the more expensive the item is — that lack of visibility and control causes significant problems. Maybe the most important one is: overpaying for inventory to cover actual lost supply chain items in transit.

I talked to a really large UK-based telecom company and they told me that what we are going to do with Everledger, with just their fiber optics, they could cut their buying in half. Why? Because they overbuy their fiber optics to make sure they are never short on fiber optic inventory.

That precision of buying and delivery applies across the board to all merchants and all supply chains, even middle of the supply chain manufacturers. Whenever you have disruption to your inbound supply, that’s going to disrupt your profitability.

Gardner: It sounds as if what we are really doing here is getting a highly capable means — that’s highly extensible — to remove the margin of error from the tracking of goods, from cradle to grave.

Chain transactions

Fox: That’s exactly right. And the Internet is the enabler, because Blockchain is everywhere. Now, as the asset moves, you have the really cool stuff that Everledger has done, and other things we are going to do together – and that’s going to allow anybody from anywhere to post to the chain the asset receipt and asset movement.

For example, with a large container coming from overseas, you will have the chain record of every place that container has been. If it doesn’t show up at a dock, you now have visibility as the buyer that there is a supply chain disruption. That chain being out on the Internet, at a layer that’s accessible by everyone, is one of the keys to this technology.

We are going to be focusing on connecting the fabric of the chain together with Hyperledger. Everledger builds on the Hyperledger platform. The fabric that we are going to tie into is going to directly connect those block posts back to the original transactions, like the purchase order, the invoice, the ship notice. Then the companies can see not only where their asset is, but also view it in context of the transactions that resulted in the shipment.
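As a rough illustration of the linkage Fox describes, here is a minimal hash-chained ledger sketch in Python, where each asset-movement entry carries references back to its purchase order, invoice, and ship notice. This is a toy model, not Hyperledger itself, and all identifiers are hypothetical:

```python
import hashlib
import json

def make_block(prev_hash, event):
    """Append-only ledger entry: an event plus a hash chaining it to its predecessor."""
    payload = json.dumps(event, sort_keys=True)
    block_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    return {"prev_hash": prev_hash, "event": event, "hash": block_hash}

def verify(chain):
    """Any party can confirm the chain has not been altered."""
    for prev, cur in zip(chain, chain[1:]):
        if cur["prev_hash"] != prev["hash"]:
            return False
        payload = json.dumps(cur["event"], sort_keys=True)
        expected = hashlib.sha256((cur["prev_hash"] + payload).encode()).hexdigest()
        if cur["hash"] != expected:
            return False
    return True

# A container movement, linked back to its originating trade documents.
genesis = make_block("0" * 64, {"type": "genesis"})
movement = make_block(genesis["hash"], {
    "type": "asset_movement",
    "asset_id": "CONTAINER-1138",  # hypothetical identifiers
    "location": "Port of Rotterdam",
    "links": {"po": "PO-4500012345", "invoice": "INV-98765", "asn": "ASN-555"},
})

print(verify([genesis, movement]))  # True for an untampered chain
```

Because each block's hash covers its predecessor's hash, tampering with any recorded movement invalidates every later entry, which is what makes the record auditable by buyer and seller alike.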

Gardner: So the old adage — trust but verify — we can now put that to work and truly verify. There’s news taking place here at SAP Ariba LIVE between Everledger and SAP Ariba. Tell us about that, and how the two companies — one quite small, one very large — are going to work together.

Fox: Ariba is all-in on transforming the procurement industry, the procurement space, the processes of procurement for our customers, buyers and sellers, and we are going to partner heavily with key players like Everledger.

Part of the announcement is this partnership with Everledger around track and trace, but it is not limited to track and trace. We will leverage what they have learned across our platform of $1 trillion a year in spend, with 2.5 million companies trading assets with each other. We are going to apply this partnership to many other capabilities within that.

Kemp: I am very excited. It’s a moment in time that I think I will remember for years to come. In March we also made an important announcement with IBM on some of the work that we have done beyond identifying objects. And that is to take the next step around ensuring that we have an ethical trade platform, meaning one that is grounded in cognitive compliance.

We will be able to identify the asset, but also know, for example in the diamond industry, that a diamond has passed through the right channels, that the duties and taxes due as part of an international trade platform have been paid, and that all compliance is hardened within the chain.

I am hugely excited about the opportunity that sits before me. I am sincerely grateful that such a young company has been afforded the opportunity to really show how we are going to shine.

If you think about it, Blockchain is an evolution of the Internet.

Gardner: When it comes to open trade, removing friction from commerce, these have been goals for hundreds of years. But we really seem to be onto something that can make this highly scalable, very rich — almost an unlimited amount of data applied to any asset, connected to a ledger that’s a fluid, movable, yet tangible resource.

Fox: That’s right.

Gardner: So where do we go next, Joe? If the sky is the limit, describe the sky for me? How big is this, and where can you take it beyond individual industries? It sounds like there is more potential here.

Reduced friction costs

Fox: There is a lot of potential. If you think about it, Blockchain is an evolution of the Internet; we are going to be able to take advantage of that.

The new evolution is that it’s a structured capability across the Internet itself. It’s going to be open, and it’s going to be able to allow companies to ledger their interactions with each other. They are going to be able, in an immutable way, to track who owns which asset, where the assets are, and be able to then use that as an audit capability.

That’s all very important to businesses, and until now the Internet itself has not really had a structure for business. It’s been open, the Wild West. This structure for business is going to help with what I call trusted commerce because in the end businesses establish relationships because they want to do business with each other, not based on what technology they have.

Another key fact about Blockchain is that it’s going to reduce friction in global B2B. I always like to say if you just accelerated B2B payments by a few days globally, you would open up Gross Domestic Product (GDP), and economies would start growing dramatically. This friction around assets has a direct tie to how slowly money moves around the globe, and the overall cost and friction from that.

So how big could it go? Well, I think that we are going to innovate together with Everledger and other partners using the Hyperledger framework. We are going to add every buyer and seller on the Ariba Network onto the chain. They are just going to get it as part of our platform.

Then we are going to begin ledgering all the transactions that they think make sense between themselves. We are going to release a couple of key functions, such as smart contracts, so their contract business rules can be applicable in the flow of commerce — at the time commerce is happening, not locked up in some contract, or in some drawer or Portable Document Format (PDF) file. We are going to start with those things.
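As a rough sketch of what "contract business rules in the flow of commerce" can mean, the snippet below encodes hypothetical contract terms as data and evaluates an order against them at transaction time. The rules and field names are illustrative assumptions, not SAP Ariba's actual smart-contract implementation:

```python
from datetime import date

# Hypothetical contract terms, encoded as data instead of locked in a PDF.
CONTRACT = {
    "max_unit_price": 120.00,
    "allowed_currencies": {"USD", "EUR"},
    "valid_until": date(2018, 12, 31),
}

def evaluate_order(order, contract, today):
    """Apply the contract's business rules at the moment commerce happens."""
    violations = []
    if order["unit_price"] > contract["max_unit_price"]:
        violations.append("unit price exceeds contracted maximum")
    if order["currency"] not in contract["allowed_currencies"]:
        violations.append("currency not permitted by contract")
    if today > contract["valid_until"]:
        violations.append("contract has expired")
    return violations  # an empty list means the order clears the contract

order = {"unit_price": 125.00, "currency": "GBP"}
print(evaluate_order(order, CONTRACT, date(2018, 6, 1)))
# ['unit price exceeds contracted maximum', 'currency not permitted by contract']
```

The point of the sketch is the timing: the rules fire as the order flows through, rather than being checked against a filed document after the fact.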

I don’t know what applications we are going to build beyond that, but that’s the excitement of it. I think the fact that we don’t know is the big play.

Gardner: From a business person’s perspective, they don’t probably care too much that it’s Blockchain that’s enabling this, just like a lot of people didn’t care 20 years ago that it was the Internet that was allowing them to shop online or send emails to anybody anywhere. What is it that we would tease out of this, rather than what the technology is, what’s the business benefit that people should be thinking about?

Fox: Everybody wants digital trust, right? Leanne, why don’t you share some of the things you guys have been exploring?

Making the opaque transparent

Kemp: In the diamond industry, there is fraud related to document tampering. Typically paper certificates exist across the backbone, so it’s very easy to be able to transpose those into a PDF and make appropriate changes for self-gain.

Double-financing of the pipeline is a very real problem; with invoicing and accounts receivable, companies have the ability to have banks finance the same invoices two, three, four times.

We have issues with round-tripping of diamonds through countries, where transfer pricing isn’t declared correctly, along with the avoidance of tax and duties.

All of these issues are the dark side of the market. But now we have the ability to bring transparency around any object, particularly in diamonds — the one commodity that has yet to have true financial products wrapped around it. Now, what do I mean by that? It doesn’t have a futures market yet. It doesn’t have exchange-traded funds (ETFs). And yet diamonds have outperformed gold, platinum, and palladium.

Now, what does this mean? It means we can bring transparency to the once opaque, have the ability to know whether an object has gone through an ethical chain, and then realize the true value of that asset. This allows us to start to think about how new financial products can be formed around these assets.

We are hugely interested in rising asset classes beyond just the commodity section of the market. This platform shift is like going from the World Wide Web to the World Wide Ledger. Joe was absolutely correct when he mentioned that the Internet hasn’t been woven for transactional trust — but we have the ability to do this now.

So from a business perspective, you can begin to really innovate on top of this exponential set of technology stacks. A lot of companies quote Everledger as a Blockchain company. I have to correct them and I say that we are an emerging technology company. We use the very best of Blockchain and smart contracts, machine vision, sensorial data points, for us to be able to form the identity of objects.
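Kemp's notion of forming the identity of objects can be pictured as deriving a stable digital fingerprint from many measured attributes. The toy sketch below hashes a diamond's grading attributes; the attribute names are illustrative assumptions, not Everledger's actual schema:

```python
import hashlib
import json

def object_fingerprint(attributes):
    """Derive a stable identity for a physical object from its measured attributes."""
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

stone = {
    "carat": 1.01,
    "cut": "round brilliant",
    "color": "F",
    "clarity": "VS1",
    "girdle_profile": "a1b2c3",  # stand-in for machine-vision measurements
}

fp = object_fingerprint(stone)
# The same measurements always yield the same identity...
assert fp == object_fingerprint(dict(stone))
# ...while any alteration produces a different one.
assert fp != object_fingerprint({**stone, "carat": 0.99})
```

An identity computed from the object's own physical characteristics, rather than from a paper certificate, is what makes the Know Your Object idea resistant to document tampering.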

Now, why is that important? Most financial services companies have really been focused on Know Your Customer (KYC), but we believe that it’s Know Your Object (KYO) that really creates an entirely new context around it.

Now, that transformation and the relationship of the object have already started to move. When you think about the Internet of Things (IoT), mobile phones, and autonomous cars — these are largely devices connected to the fabric of the web. But are they connected to the fabric of the transactions and the identity around those objects?

Insurance companies have begun to understand this. My work in the last 10 years has been deeply involved in insurance. As you begin to build and understand the chain of trust and the chain of risk, then tectonic plate shifts in financial services begin to unfold.

Apps and assets, on and off the chain

Fox: It’s not just about the chain, it’s about the apps we build on top, and it’s really about what is the value to the buyer and the seller as we build those apps on top.

To Leanne’s point, it’s first going to be about the object. The funny thing is, we have struggled to provide visibility and control of an object in a digital way, and this is going to fix that. In the end, B2B, which is where SAP Ariba is, is about somebody getting something and paying for it. And that physical asset that they are getting is being paid for with another asset. They are just two different forms. By digitizing both and keeping that in a ledger that really cannot be altered — it will be the truth, and it’s open to everyone, buyers and sellers.

Businesses will have to invent ways to control how frictionless this is going to be. I will give you a perfect example. In the past if I told you I could do an international payment of $1 million to somebody in two minutes, you would have told me I was crazy. With Blockchain, one corporation can pay another corporation $1 million in two minutes, internationally.

And on the chain, companies like Everledger can build capabilities that do the currency translation on the fly, as it’s passing through, and that doesn’t disintermediate the banks, because how did the $1 million get onto the chain in the first place? Someone put it on the chain through a bank. The bank is backing that digital version. How does it get off the chain so you can actually do something with it? It goes through another bank. It’s actually going to make the banks more important. Again, Blockchain is only as good as the external trust that it inherits.

I really think we have to focus on getting the chain out there and really building these applications on top.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: SAP Ariba.



Inside story of building a global security operations center for cyber defense

The next BriefingsDirect inside story examination of security best practices focuses on the building of a global security operations center (SOC) for cyber defense.

Learn here how Zayo Group in Boulder, Colorado built a state-of-the-art SOC as it expanded its international managed security service provider practice.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript.

Hear directly from Mike Vamvakaris, Vice President of Managed Cyber Security at Zayo Group, on the build-out, best practices, and end-results from this impressive project. The moderator is Serge Bertini, Vice President of Sales and General Manager of the Canada Security Division at Hewlett Packard Enterprise (HPE).

Serge Bertini: Mike, you and I have talked many times about the importance of managed security service providers (MSSPs) and global SOCs, but for our readers, I want to take them back on the journey that you and I went through to get into the SOC business, and what it took for you to build this up.

So if you could, please describe Zayo’s business and what made you decide to jump into the MSSP field.

Mike Vamvakaris: Thanks for the opportunity. Zayo Group is a global communications and infrastructure provider. We serve more than 365 markets. We have 61 international data centers on-net, off-net, and more than 3,000 employees.

Mike Vamvakaris

Vamvakaris

Zayo Canada required a SOC to serve a large government client that required really strict compliance, encryption, and correlational analysis.

Upon further expansion, the SOC we built in Canada became a global SOC, and now it can serve international customers as well. Inside the SOC, you will find compliance with the US Federal Information Processing Standard (FIPS) 140-2 security requirements. We do threat hunting and threat intelligence. We are also doing machine learning — all in a protected, five-zone SOC facility.

This facility was not easy to build; it was a journey, as we have talked about many times in person, Serge.

Holistic Security

Bertini: What you guys have built is a state-of-the-art facility. I can see how it helps you win more business, because not only do you have critical infrastructure in your MSSP, but you can also attract customers whose stringent security and privacy requirements must be met.

Vamvakaris: Zayo is in a unique position now. We have grown the brand aggressively through organic and inorganic activities, and we are able to offer holistic and end-to-end security services to our customers, both via connectivity and non-connectivity.

For example, within our facility, we will have multiple firewalling and distributed denial-of-service (DDoS) technologies — now all being protected and correlated by our state-of-the-art SOC, as you described. So this is a really exciting and new opportunity that began more than two years ago with what you at HPE have done for us. Now we have the opportunity to turn and pivot what we built here and take that out globally.

Bertini: What made you decide on HPE ArcSight, and what did you see in ArcSight that was able to meet your long-term vision and requirements?

Turnkey Solutions

Vamvakaris: That’s a good question. It wasn’t an easy decision. We have talked about this openly and candidly. We did a lot of benchmarking exercises, and obviously selected HPE ArcSight in the end. We looked at everyone, without going into detail. Your listeners will know who they are.

But we needed something that supported multi-tenancy — a single-pane-of-glass view. We are serving multiple customers all over the world, and ArcSight allowed us to scale without applying a tremendous amount of capital expenditure (CAPEX) investment and ongoing operational expenditure (OPEX) to support the infrastructure and the resources inside the SOC. It was key for me on the business side that the business case was well supported.

We had to meet very strict industry regulation in working with a large government customer, to be FIPS-compliant. So, out of the box, a lot of the vendors we were looking at didn’t even meet those requirements.

Another thing I really liked about ArcSight, when we did our benchmarking, is the event log filtration. There really wasn’t anyone else who could do that filtration at the throughput and capacity we needed. So that lent itself very well. Making sure that you are getting the salient events, while filtering the noncritical alerts out from those we still need to be looking at, was key for us.
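As a rough picture of what event log filtration does, the sketch below passes only salient, sufficiently severe events on for correlation. This is purely illustrative; ArcSight's filtering is configured in the product, and the categories and thresholds here are invented:

```python
# Hypothetical event records; a real SIEM ingests these at enormous throughput.
EVENTS = [
    {"severity": 9, "category": "auth_failure", "src": "203.0.113.7"},
    {"severity": 2, "category": "heartbeat", "src": "10.0.0.5"},
    {"severity": 7, "category": "malware", "src": "10.0.0.9"},
    {"severity": 3, "category": "auth_failure", "src": "10.0.0.5"},
]

SALIENT_CATEGORIES = {"auth_failure", "malware", "exfiltration"}
SEVERITY_FLOOR = 5

def filter_events(events):
    """Pass only salient, sufficiently severe events on to correlation."""
    return [
        e for e in events
        if e["category"] in SALIENT_CATEGORIES and e["severity"] >= SEVERITY_FLOOR
    ]

# Keeps the severity-9 auth failure and the severity-7 malware event.
print(filter_events(EVENTS))
```

The value of filtering at ingest is that the analysts and the correlation engine downstream see a small stream of meaningful events instead of the full firehose.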

Something that you and I have talked about is the strategic information and operations center (SIOC) service. As a company that knew we needed to build a SOC, to protect our own backbone, and to offer those services to our extended connectivity customers, we enlisted SIOC services very early to help us with everything from incident response management, to building up the Wiki, to hiring and helping us retain critical skill sets in the SOC.

From an end-to-end perspective, this is why we went with ArcSight and HPE. They offered us a turnkey solution, to really get us something that was running.

The Trifecta: People, Process, Technology

Bertini: In this market, what a lot of our customers see is that their biggest challenge is people; it takes a lot of people to set up an MSSP. The investment that you made is the big differentiator, because it’s not just the technology, it’s the people and process. When I look at the need in this market, there is a lack of talented people.

Serge Bertini

Bertini

How did you build your process and the people? What did you have to do yourself to build the strength of your bench? Later on we can talk a little bit more about Zayo and how HPE can help put all of this together.

Vamvakaris: We started as a single tenant, if you will, but ultimately we needed to go international very quickly. So we went from humble beginnings to an international capability. It’s a great story.

For us, you nailed it on the head. In a SOC, the technology obviously is pertinent; you have to understand your use cases and the policies you are using to protect your customers. We needed something very modular, and ArcSight worked for that.

But within the SOC, our customers require things like customized reporting and even customized incident-response plans that are tailored to meet their unique audits or industry regulations. It’s people, process, and tools or technology, as they say. That is the lifeline of your SOC.

One of the things we realized early on is that you have to focus on everything from your triage, to incident response, to your kill-chain processes. This is something we have invested in significantly, and this is where we believe we actually add a lot of value for our customers.

Bertini: So it’s not just a logging capability, you guys went way beyond providing just the eyes on the glass to the red team and the tiger team and everything else in between.

Vamvakaris: Let me give you an example. Within the SOC, we have SOC Level 1, all the way to Level 3, and then we have threat hunting. So inside we do threat intelligence. We are now using machine-learning technologies. We have threat hunting, predictive analytics, and we are moving into user behavior analysis.

Remember the way I talked about SOC Level 1, Level 2, and Level 3: this is a 24×7, 365-day facility. This is a five-zone SOC with enhanced access control, including mantraps with two-factor biometric access control. It’s a facility that we are very proud of and that we love showcasing.

Bertini: You are a very modest person, but in the span of two years you have done a lot. You started with probably one of the largest mammoth customers, but one thing that you didn’t really talk about is, you are also drinking your own champagne.

Tell us a little bit more about Zayo. It’s a large corporation, diverse and global. Tell us about the integration of Zayo into your own SOC, too.

Drinking your own Champagne

Vamvakaris: Customers always ask us about this. We have all kinds of fiber and Ethernet customers, large superhighway customers I call them, with massive data connectivity, and Zayo is well known in the industry for that; we are obviously one of the leaders.

The interesting part is that we are able to turn and pivot, not only to our customers, but we are also now securing our own assets — not just the enterprise, but on the backbone.

So you are right, we sip our own champagne. We protect our customers from threats and unauthorized data exfiltration, and we also do that for ourselves. So we are talking about a global multinational backbone environment.

Bertini: That’s pretty neat. What sort of threats are you starting to see in the market and how are you preventing those attacks, or at least how can you be aware in advance of what is coming down the pipe?

Vamvakaris: It’s a perpetual problem. We have invested in what’s called an ethical hacking team, the whole white hat/black hat piece.

In practice, we’re — I won’t say breaking into networks, but certainly testing the policies and the cyber frameworks that companies think they have. We go out of our way to make sure that is actually the case, and then we go back and do an analysis for them.

If you don’t know who is knocking at the door, how are you going to protect yourself, right?

So where do I see the market going? Well, we see a lot of ransomware; we see a lot of targeted spear phishing. Things are just getting worse, and I always talk about how this is no longer an IT issue, but it’s a business problem.

People now are using very crafty organizational and behavioral tactics to acquire identities and map them back to individuals in a company. They can achieve targeted data exfiltration by fooling or tricking users into giving up passwords or access, or into signing all types of waivers. You hear about this every day: somewhere, someone accidentally clicked on something, and the next thing you know they have wired money across the world to someone.

So we actually see things like that. Obviously we’re very private in terms of where we see them and how we see them, but we protect against those types of scenarios.

Gone are the days where companies are just worried about their customer provided equipment or even cloud firewalls. The analogy I say, Serge, is if you don’t know who is knocking at the door, how are you going to protect yourself, right?

You need to be able to understand who is out there, what they are trying to do, to be able to mitigate that. That’s why I talk about threat hunting and threat intelligence.

Partners in Avoiding Crime

Bertini: I couldn’t agree more with you. To me, the partnership that we built between Zayo and HPE is a testament to how the business needs to evolve. What we have done is pretty unique in this market, and we truly act as partners; it’s not a vendor-relationship type of situation.

Can you describe how our SIOC was able to help you get to the next level, because it’s about time-to-market, at the end of the day. Talk about best practices that you have learned, and what you have implemented.

Vamvakaris: We grew out to be an international SOC, and that practice began with one large request for proposal (RFP) customer. So we had a compressed time-to-market. We needed to be up and running, fully turnkey, everything.

When we began this journey, we knew we couldn’t do it ourselves. We selected the technology, we benchmarked it, and we consulted the Gartner Magic Quadrant. We were impressed that HPE ArcSight had been in that Magic Quadrant for years, if not a decade. That mattered a great deal to us.

But what really stood out is the HPE SIOC.

We enlisted the SIOC services, essentially the consulting arm of HPE, to help us build out our world-class multizone SOC. That really did help us get to market. In this case, we would have been paying penalties if we weren’t up and running. That did not happen.

The SIOC came in and assessed everything that we talked about earlier; they stress-tested our triage model and incident response plan. They helped us on the kill chain; they helped us with the Wiki. What was really nice and refreshing was that they helped us find talent where our SOC is located. That for me was critical. Frankly, that was a differentiator. No one else was offering those types of services.

Bertini: How is all of this benefitting you at the end of the day? And where do you see the growth in your business coming for the next few years?

Ahead in the Cloud

Vamvakaris: We could not have done this on our own. We are fortunate enough that we have learned so much now in-house.

But we are living in an interconnected world. Like it or not, we are about to automate that world with the Internet of Things (IoT) and always-on mobile technologies, and everyone talks about pushing things to the cloud.

The opportunity for us is exciting. I believe in a complete, free, open digital world, which means we are going to need — for a long time — to protect the companies as they move their assets to the cloud, and as they continue to do mobile workforce strategies — and we are excited about that. We get to be a partner in this ecosystem of a new digital era. I think we are just getting started.

The timing is perfect, it’s exciting, and I think that we are going to see a lot of explosive growth. We have already started to see that, and now I think it’s just going to get more and more exciting as we go on.

It’s not just about having the human capabilities, but it’s also augmenting them with the right technologies and tools so they can respond faster, they can get to the issues.

Bertini: You have talked about automation, artificial intelligence (AI), and machine learning. How are those helping you to optimize your operations and then ultimately benefitting you financially?

Vamvakaris: As anyone out there who has built a SOC knows, you’re only as good as your people, processes, and tools. So we have our tools, we have our processes — but the people, that cyber security talent, is not cheap. The SOC analysts have a tough job, so the more we can automate, and the more help we can give them, the better. A big push now is for AI, which really is machine learning: automating and creating a baseline of activity from which you can recognize a pattern, if you will, of repeatable incidents, and then understand it all ahead of time.
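Baselining, in its simplest form, means learning what normal looks like and flagging sharp departures from it. The toy sketch below uses a mean-plus-three-standard-deviations threshold on hypothetical failed-login counts; real user-behavior analytics is far more sophisticated:

```python
import statistics

# Hourly failed-login counts observed during a normal period (hypothetical data).
baseline = [3, 5, 4, 6, 2, 5, 4, 3, 5, 4]

mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(count, sigmas=3):
    """Flag an observation that departs sharply from the learned baseline."""
    return abs(count - mean) > sigmas * stdev

print(is_anomalous(4))   # within the normal range -> False
print(is_anomalous(45))  # a possible brute-force burst -> True
```

Automating even this crude triage step is what frees Level 1 analysts from eyeballing every count and lets them concentrate on the genuinely unusual events.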

We are working with that technology. Obviously, HPE ArcSight is the engine of the SOC, for correlation analysis (specifically its Enterprise Security Manager, or ESM), but outside there are peripherals that tie into it.

It’s not just about having the human capabilities, but it’s also augmenting them with the right technologies and tools so they can respond faster, they can get to the issues; they can do a kill chain process quickly. From an OPEX perspective, we can free up the Level 1 and Level 2 talent and move them into the forensic space. That’s really the vision of Zayo.

We are working with technologies including HPE ArcSight to plug into that engine that actually helps us free up the incident-response and move that into forensics. The proactive threat hunting and threat intelligence — that’s where I see the future for us, and that’s where we’re going.

Bertini: Amazing. Mike, with what you have learned over the last few years, if you had to do this all over again, what would you do differently?

Practice makes perfect

Vamvakaris: I would beg for more time, but I can’t do that. It was tough, it was tough. There were days when we didn’t think we were going to make it. We are very proud and we love showcasing what we built — it’s an amazing, world-class facility.

But what would I do differently? We probably spent too much time second-guessing ourselves, trying to get everything perfect. Yet it’s never going to be perfect. A SOC is a living, breathing thing; it’s all about the people inside and the processes they use. The technologies work, and getting the right technology, and understanding your use cases and what you are trying to achieve, is key. Not trying to make it perfect, just getting it out there and then being flexible in making corrections, would have been better.

In our case, because it was a large government customer, with regulations that we had to meet, we built that capability right the first time. We built this from the ground up properly, and as painful as that was, we can now learn from it.

In hindsight, did we have to have everything perfect? Probably not. Looking back at the compressed schedule, being audited every quarter, that capability has nonetheless put us in a better place for the future.

Bertini: Mike, kudos to you and your team. I have worked with your team for the last two to three years, and what you have done is nothing short of a miracle. What you built is a top-class MSSP, meeting some of the most stringent requirements from the government, and it shows.

Now, when you guys talk, when you present to a customer, and when we do joint calls with customers, we are an extension of each other. We at HPE are just feeding you the technology, but the way you have implemented it and built it together with your people, process, and technology is fantastic.

So with that, I really thank you. I’m looking forward to the next few years together, to being successful, and bringing all our customers under your roof.

Vamvakaris: This is the partnership that we talked about. I think that’s probably the most important thing. If you do endeavor to do this, you really do need to bring a partner to the table. HPE helped us scale globally, with cost savings and an accelerated launch. That actually can happen with a world-class partnership. So I also look forward to working with you, and serving both of our customer bases, and bringing this great capability out into the market.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.



Diversity spend: When doing good leads to doing well

The next BriefingsDirect digital business thought leadership panel discussion focuses on the latest path to gaining improved diversity across inclusive supply chains.

The panel examines why companies are seeking to improve supplier diversity, the business and societal benefits, and the new tools and technologies that are making attaining inclusive suppliers easier than ever.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn more about the increasingly data-driven path to identifying and achieving the workforce that meets all requirements, please welcome Rod Robinson, Founder and CEO of ConnXus; Jon Stevens, Global Senior Vice President of B2B Commerce and Payments at SAP Ariba, and Quentin McCorvey, Sr., President of M and R Distribution Services.

The panel was assembled and recorded at the recent 2017 SAP Ariba LIVE conference in Las Vegas. The discussion was moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Jon, why is it important to seek diversity in procurement and across supply chains? What are the reasons for doing this?

Stevens: It’s a very good question. It’s for a few reasons. Number one, there is a global war for talent, and when you can get a diverse point of view, when you can include multiple different perspectives, that usually drives several other benefits, one of which is innovation.

We often see companies investing deeply inside their supply chain, working with a diverse set of suppliers, and they are gaining huge rewards from an innovation standpoint. When you look at the leading companies that leverage their suppliers to help drive new product innovation, it usually comes from these areas.

We also see companies more focused on longer-term relationships with their suppliers. Having a diverse perspective — and having a set of diverse suppliers — helps with those longer-term relationships, as both companies continue to grow in the process.

Gardner: Rod, what are you seeing in the marketplace as the major trends and drivers that have more businesses seeking more inclusivity and diversity in their suppliers?

Diversity benefits business

Robinson: As a former chief procurement officer (CPO), the one thing that I can definitely say that I have witnessed is that more diverse and inclusive supply chains are more innovative and deliver high value.

Rod Robinson

Robinson

I recently wrote a blog where I highlighted some statistics that I think every procurement professional should know. One is that 99.9% of all US firms fall into the small-business category. Women- and minority-owned businesses represent more than 50% of that total, a group responsible for employing around 140 million people.

This represents a significant portion of the workforce. As we all know, small businesses really are the engine of the economy; they are responsible for 65% of net new jobs.

At the end of the day, women and minorities represent more than 50% of all businesses, but they only represent about 6% of the total revenue generated.

The only thing that I would add is that diversity is a vitally important driver of our economy.

Gardner: Rod points out a rich new wellspring of skills, talent and energy coming up organically from the small to medium-sized businesses. On the other hand, major national and international brands are demanding more inclusivity and diversity from their suppliers. If you are in the middle of that supply chain, is this something that should interest you?

Targeting talent worldwide

Stevens: You are spot-on. We definitely see our leading customers looking across that landscape, whether they are a large- or medium-sized company. The war for talent is only going to increase. Companies will need to seek even more diverse sources of talent. They are really going to have to stretch themselves to look outside the walls of their country to find talent, whereas other companies may not be doing so. So you’re going to see rising diversity programs.

Jon Stevens

Stevens

We have several customers in emerging parts of the world; let’s take South Africa, for example. I spend a lot of time in South Africa, and one of our customers there, Nedbank, invests a lot of time and money in the growth and development of small businesses. In South Africa, the proportion of small companies is even greater than in the statistics Rod cited. So we are seeing that trend grow even faster outside of the US, and it’s definitely going to continue.

Gardner: Rod, you mentioned that there are statistics, studies and research out there indicating that this isn’t just a requirement, it’s really good business. I think McKinsey came out with a study, too, that found companies in the top quartile for gender, racial and ethnic diversity were more likely to have better financial returns. So this isn’t just the right thing to do; it’s also apparently demonstrated to be good business. Do you have any other insights into why the business case for this is so strong?

Diversity delivers innovation

Robinson: Speaking from first-hand experience, having been responsible for procurement and supplier diversity within a large company, there were many drivers. We had federal contracts that required us to commit to a certain level of engagement (and spending) with diverse suppliers. We had to report those stats and our progress on a monthly and/or quarterly basis. It was interesting that while we were bound by these contractual mandates — not only from the government but also from customers like Procter and Gamble, Macy’s, and others — we started to realize that this was really creating more competition within the categories we were taking to market. It was bringing value to the organization.

We had situations where we were subcontracting to diverse suppliers that were providing us with access to markets that we didn’t even realize we were missing. So again, to Jon’s point, it’s more than just checking a box. We began to realize that this is really a market imperative. This is something that is creating value for the organization.

The whole concept of supplier diversity started with the US government back in the late ’60s and early ’70s. That was the catalyst, but companies realized that it was delivering significant value to the organization, and it’s helped to introduce new, innovative companies across the supply chain.

At ConnXus, our big break came when McDonald’s gave us an opportunity five years ago. They took a chance on us when we were a start-up company of four.  We are now a company of 25. Obviously, revenues have grown significantly and we’ve been able to attract partners like SAP Ariba. That’s the way it should work. You always want to look for opportunities to identify new, innovative suppliers to introduce into a supply chain; otherwise we get stagnant.

Small but mighty

Stevens: I’ll add to what Rod said. This is just the sort of feedback we hear from our customers, the fact that a lot of the companies that are in this inclusive space are small — and we think that’s a big advantage.

Speed, quickness and flexibility are qualities you often see from diverse suppliers, or certainly smaller businesses, so a company that has them in its portfolio is more responsive to its customers’ needs than a supply chain built on very large organizations with large processes that take a while to respond to market needs. The quick in today’s world will be far more successful, and having a diverse set of suppliers allows you to respond incredibly quickly. There is obviously a financial benefit in doing so.

Gardner: A big item of conversation here at SAP Ariba LIVE is how to reduce risk across your supply chain. Just like any economic activity, if you have a diversified portfolio, with different sizes of companies, different geographic locations, and different workforce components — that can be a real advantage.

Now that we’ve established that there is a strong business case and rationale for seeking diversity, why do procurement professionals have trouble finding that diversity? Let’s go to Quentin. What’s holding back procurement professionals from finding the companies that they want?

McCorvey: Probably the biggest challenge is that the whole trend of supply chain optimization, of driving cost out of the supply chain, seems to be at odds with being inclusive, responsive, and bringing in diverse suppliers. A company may have had 20 to 30 suppliers of a product, and then they look to drive that down to just one or two suppliers. They negotiate prices for three-year contracts. That tends to weed out some of the smaller, more diverse organizations for several reasons.

Quentin McCorvey Sr.

McCorvey

For example, Rod talked about McDonald’s taking a chance on him. Well, they took a chance on him being a four-person organization; if he had to [grow first] he never would have had the opportunity.

For a company that requires a product in the market for every location nationally — as opposed to regionally — at a certain price, that tends to challenge a lot of the inclusion or the diversity in the supply chain.

Gardner: Right. Some companies have rules in place that don’t provide the flexibility to attract a richer supplier environment. What is being done from your perspective at SAP Ariba, Jon, to go after such a calcification of rules that leads to somewhat limited thinking in terms of where they can find choices?

Power through partnerships

Stevens: That short-term thinking that Quentin talked about is absolutely one of the big barriers, and that generally comes down to metrics. What are they trying to measure? What are they trying to accomplish?

The more thought-leading companies are able to look past something in the first year or two, and focus on not just driving cost out, as Quentin talked about, but discovering what else their suppliers can help with, whether it’s something from a regulatory standpoint or something from a product and innovation perspective.

Certainly, one challenge is that short-term thinking, the other is access to information. We see far too many procurement organizations that just aren’t thinking on a broader scale, whether it’s a diverse scale or a global scale. What SAP Ariba is now bringing to the table with our solutions is being able to include information about where to find diverse suppliers, where to search and locate suppliers, and we do that through many partnerships.

We have a solution in South Africa called Tradeworld, which addresses this very topic for that market. We have a solution called SAP Ariba Spot Buy, which allows us to bring diverse suppliers automatically into a catalog for procurement organizations to leverage. And at SAP Ariba LIVE 2017 we announced that we are partnering with Rod and his firm, ConnXus, to expand the diversity marketplace by linking the ConnXus database and the SAP Ariba Network, which opens the door to more opportunities for all of our customers.

Robinson: If I could add to Jon’s point, one thing I also look forward to as part of our partnership with SAP Ariba is thought leadership. There are opportunities for us to share best practices. We know companies that are doing it really well and companies that may be struggling with it, and within our joint customer portfolios we will be able to share some of those best practices.

For example, there may be situations where a company is doing a big maintenance, repair and operations (MRO) bid and there are some large players involved, such as W.W. Grainger. There may be opportunities to introduce Grainger to smaller suppliers that may provide fewer stock-keeping units (SKUs) but can be leveraged strategically across their accounts. I have been involved in a number of initiatives like that. Those are the types of insights that we will be able to bring to the table, and that really excites me about this partnership.

Gardner: Those insights, that data, and the ability to leverage a business network to automate and facilitate all of that at scale are key. From what we are hearing here at SAP Ariba LIVE, leveraging that business network is essential. Rod, tell us about ConnXus. What’s being announced here?

Seek and ye shall find in the connected cloud

Robinson: ConnXus is a next-generation procurement platform that specializes in making corporate supply chains more inclusive, transparent, and compliant. As I mentioned, we serve several global companies, many of which are also SAP Ariba customers. Our cloud-based platform makes it easy for companies to track, monitor, and report against their supplier diversity objectives.

One of the major features is our supplier database, which provides real-time searchable access to nearly two million vetted women-, minority- and veteran-owned businesses across hundreds of categories. We integrate with the SAP Ariba Network. That makes it simple for companies to identify vetted, diverse suppliers. They can also search on various criteria including certifications, category, and geography. We have local, national and global capabilities.  SAP Ariba already is in a number of markets that we are looking to penetrate.
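The multi-criteria lookup Rod describes, filtering vetted suppliers by certification, category, and geography, can be sketched as a simple in-memory filter. The field names and sample records below are hypothetical illustrations, not the actual ConnXus schema or API.

```python
# Hypothetical sketch of a multi-criteria supplier search; the fields,
# certification codes, and sample data are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Supplier:
    name: str
    category: str
    geography: str
    certifications: set = field(default_factory=set)

def search(suppliers, category=None, geography=None, certification=None):
    """Return suppliers matching every criterion that was supplied."""
    results = []
    for s in suppliers:
        if category and s.category != category:
            continue
        if geography and s.geography != geography:
            continue
        if certification and certification not in s.certifications:
            continue
        results.append(s)
    return results

suppliers = [
    Supplier("M and R Distribution", "MRO", "US", {"MBE"}),
    Supplier("Acme Logistics", "Logistics", "US", set()),
    Supplier("Veritas Supply", "MRO", "ZA", {"WBE"}),
]

# Find minority-certified MRO suppliers
matches = search(suppliers, category="MRO", certification="MBE")
```

A real implementation would of course query an indexed database rather than scan a list, but the filtering logic is the same idea.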

Gardner: I was really impressed when I looked at the ConnXus database, how rich and detailed it is, and not just ownership of companies but also the composition of those companies, where those people are located. So you would actually know where your inclusive supply chain is going to be, where the rubber hits the road on that, so to speak.

Jon, tell us about the news here on March 21, 2017, a marriage between SAP Ariba and ConnXus.

Stevens: The SAP Ariba Network has a community of over 2.5 million companies, and it’s companies like M and R Distribution Services that we have been able to help grow and foster over time, using some of the solutions I talked about and Ariba Discovery.

Adding to the information that Rod just talked about, we are greatly expanding it. We have the world’s largest, most global business network, and now we have the world’s most diverse business network, thanks to the partnership with ConnXus, which provides that information through various processes.

Fortune 2000 companies are looking all the time through requests for proposal (RFPs), through sourcing events, and analyzing supplier performance on the SAP Ariba Network. The partnership with ConnXus will allow us to provide a lot more education, a lot more awareness to them.

For the suppliers that are on our network and those who will be joining us as a part of being in ConnXus, we expect to drive a lot more business.

Gardner: If I am a purchasing agent or a procurement officer and I want to improve my supplier inclusion program, how would something like, say, SAP Ariba Spot Buy using the ConnXus database, benefit me?

Stevens: As you decide to search for a category, we will return to you several things, one of which is now the diverse supplier list that ConnXus has. One of the things we are going to be doing with SAP Ariba Spot Buy is to have a section that highlights the diversity category so that it’s front and center for a purchasing agent to use and to take advantage of.

Gardner: Clearly there is strong value and benefit here if you are a procurement officer to get involved with the ConnXus database and Ariba Network. Quentin, at M and R Distribution Services, tell us from the perspective of a small supplier like yourself, what you’re hearing about Ariba and ConnXus that interests you?

Be fruitful and multiply business opportunities 

McCorvey: You referenced a marriage between SAP Ariba and ConnXus, and part of a marriage is to be fruitful and multiply. So I want them to be fruitful so I can multiply my business opportunities. For a company like ours, it comes down to finding opportunities. It’s tougher for me to compete as a small business against a Grainger, or against a Fastenal, or against other larger companies like that.

So when I am going after opportunities like that, it’s going to be tough for me to win those large-scale RFPs. But if there is a targeted spot opportunity, or one within a region, that is something I can pursue if a company is looking for someone like me.

We’ve talked a lot about corporations and the benefit to corporations, but there is also a consumer benefit, because we are in an age where consumers are socially responsible and really want the companies they invest in or buy products from to have inclusive supply chains.

Folks are looking at that when they make their investment and consumer decisions. Every company has an extremely diverse consumer base, so why should it not have a diverse supplier base? When companies use business ethics and corporate social responsibility as driving tools for their organization, I want those Fortune companies to be able to find me. The relationship that ConnXus and SAP Ariba are driving really catalyzes these opportunities for me.

Gardner: Rod, if a company like M and R Distribution Services is not yet in your database and they want to be, how might they get going on that process and become vetted and be available to a global environment like the Ariba Network?

Robinson: It’s really simple. One of the things that we have striven to provide is a fantastic, simple user experience. It takes about six minutes to complete the initial supplier profile. Any supplier can complete a profile at no cost.

Many suppliers actually get into our database because of the services that we already provide to large enterprise customers. So if you are a McDonald’s supplier, for example, you are already going to be in our database because we scrub their vendor data on an annual basis. I think Quentin is already in because he happens to be a vendor of one of our customers, or of multiple customers.

There is a vetting process where we integrate with other third-parties to pull in data, and then you become discoverable by all of the buyers on our platform.

Gardner: Before we close out, let’s look to the future. Jon, when we think about getting this rich data, putting it in the hands of the people who can use it, we also are putting it in the hands of the machines that can use it, right?

So when we think about bots and artificial intelligence (AI) trends, what are some of your predictions for how the future will come about when it comes to procurement and inclusive supply chains?

The future is now

Stevens: You talked about trends. One is certainly around transparency and visibility; another one is around predictive analytics and intelligence. We believe that a third is around partnerships like this to drive more collaboration.

But predictive analytics, that’s not a future thing, that’s something we do today and some of the leading procurement companies are figuring out how to take advantage of it. So, for example, when a machine breaks down, you are not waiting for it. Instead, the machine is telling our systems, “Hey, wait a minute, I’ve got a problem.”

Not only that, but they are producing for the buyer the intelligence that they need to order something. We already know who the suppliers are, we already know what potentially should be done, and we are providing these decisions to procurement organizations.

The future is here. You see it in our personal lives, on our phones, when you get recommendations in the morning, on the news, and everything else. It’s here today through some of our solutions.

And this trend around diversity is also here. You mentioned SAP Ariba Spot Buy, and we also have other solutions, like SAP Ariba Discovery, where a procurement person starts to create a sourcing event. Our solutions can automatically recommend suppliers, and based on the goals of that procurement organization, we can pre-populate and recommend the diverse MRO suppliers you might want to consider for your program.

You’re seeing that today through the Ariba Network and through things like Guided Buying, where we are helping facilitate many of those steps for procurement organizations. So it’s really fun and the future in many respects is here right now.

Value-driven supply chains

Robinson: I envision a future in procurement of being able to make informed decisions on supplier selection. Procurement professionals are in a great position to change the world, and the CPOs of the future are going to be Millennials. They want more control, they want more transparency, and, to Quentin’s point, they want to buy from organizations that share their values.

Our partnership with SAP Ariba will create an environment where we can move closer to fulfilling this vision: whenever you put a specification into the system, you’ll be pushed supplier options, and you can actually configure your criteria to create the optimal supplier mix – whether diversity is important to you, green and environmental issues are important to you, or ethical practices are important to you. All of this can be built in and weighted within your selection. You will create an optimal supplier portfolio that balances all of the things that are important to you and your organization.

McCorvey: Why am I excited? This conversation has come full circle for me. I started off talking about supply chain optimization and some of the challenges it poses for businesses like mine. We know that people most often do business with people they know, like and appreciate. What I want to do is turn a digital connection into a digital handshake, and use predictive analytics and the connections between Jon and Rod to create opportunities for folks to know me, for me to grow as a new organization, and for me to be at the forefront of their minds. That is a challenge that this kind of supply chain optimization helps to overcome.

I’m really happy for where this is going to go in the future. In the end, there are going to be a lot of organizations both large and small that are going to benefit from this partnership. I look forward to the great things that are going to come from it, for not only both organizations — but for people like me across the country.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: SAP Ariba.

You may also be interested in:

How AI, IoT and blockchain will shake up procurement and supply chains

The next BriefingsDirect digital business thought leadership panel discussion focuses on how artificial intelligence (AI), the Internet of things (IoT), machine learning (ML), and blockchain will shake up procurement and supply chain optimization.

Stay with us now as we develop a new vision for how today’s cutting-edge technologies will usher in tomorrow’s most powerful business tools and processes. The panel was assembled and recorded at the recent 2017 SAP Ariba LIVE conference in Las Vegas. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn more about the data-driven, predictive analytics, and augmented intelligence approach to supply chain management and procurement, please welcome the executives from SAP Ariba:

Here are some excerpts:

Gardner: It seems like only yesterday we were content to have a single view of a customer, clean data, or maybe a single end-to-end business process. But now, we are poised to leapfrog the status quo by using words like predictive and proactive for many business functions.

Why are AI and ML such disrupters to how we’ve been doing business processes?

Shahane: If you look back, some of the technology that has made an impact on our private lives is now affecting our business lives. Think about the amount of data and signals that we are gathering; we call it big data.

We not only do transactions in our personal lives, we also have a lot of content pushed at us. Our phones record our location as we move, so we are wired and we are hyper-connected.

Dinesh Shahane

Shahane

Similar things are happening to businesses. Since we are so connected, a lot of data is created. Having all that big data – and it could be a problem from the privacy perspective — gives you an opportunity to harness that data, to optimize it and make your processes much more efficient, much more engaged.

If you think about dealing with big data, you try and find patterns in that data, instead of looking at just the raw data. Finding those patterns collectively as a discipline is called machine learning. There are various techniques, and you can find a regression pattern, or you can find a recommendation pattern — you can find all kinds of patterns that will optimize things, and make your experience a lot more engaging.

If you combine all these machine learning techniques with tools such as natural language processing (NLP), higher-level tools such as inference engines, and text-to-speech processing — you get things like Siri and Alexa. It was created for the consumer space, but the same thing could be available for your businesses, and you can train that for your business processes. Overall, these improve efficiency, give delight, and provide a very engaging user experience.
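As a rough illustration of the two pattern types Shahane names, a regression pattern and a recommendation pattern, here is a minimal sketch in plain Python. The data and function names are illustrative and not tied to any SAP Ariba product.

```python
# Two tiny examples of "finding patterns in data" rather than reading raw rows.
# All data here is toy data chosen for illustration.

def fit_line(xs, ys):
    """Regression pattern: least-squares slope and intercept for y ~ a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

def recommend(order_history, current_item):
    """Recommendation pattern: rank items most often co-ordered with current_item."""
    counts = {}
    for order in order_history:
        if current_item in order:
            for item in order:
                if item != current_item:
                    counts[item] = counts.get(item, 0) + 1
    return sorted(counts, key=counts.get, reverse=True)

# Spend (units) vs. invoice count, deliberately linear: y = 2x + 1
slope, intercept = fit_line([1, 2, 3, 4], [3, 5, 7, 9])

orders = [{"gloves", "goggles"}, {"gloves", "goggles", "boots"}, {"boots"}]
top = recommend(orders, "gloves")  # "goggles" co-occurs most often
```

Production systems use far richer models, of course, but the shape is the same: learn a compact pattern from history, then use it to predict or suggest.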

Gardner: Sanjay, from the network perspective it seems like we are able to take advantage of really advanced cloud services, put that into a user experience that could be conversational, like we do with our personal consumer devices.

What is it about the cloud services in the network, however, that are game-changers when it comes to applying AI and ML to just good old business processes?

Multiple intelligence recommended

Almeida: Building on Dinesh’s comment, we have a lot of intelligent devices in our homes. When we watch Netflix, there are a lot of recommendations that happen. We control devices through voice. When we get home the lights are on. There is a lot of intelligence built into our personal lives. And when we go to work, especially in an enterprise, the experience is far different. How do we make sure that your experience at home carries forward to when you are at work?

Sanjay Almeida

Almeida

From the enterprise and business networks perspective, we have a lot of data; a lot of business data about the purchases, the behaviors, the commodities. We can use that data to make the business processes a lot more efficient, using some of the models that Dinesh talked about.

How do we actually do a recommendation so that we move away from traditional search, and take action on rows and columns, and drive that through a voice interface? How do we bring that intelligence together, and recommend the next actions or the next business process? How do we use the data that we have and make it a more recommended-based interaction versus the traditional forms-based interaction?

Gardner: Sudhir, when we go out to the marketplace with these technologies, and people begin to use them for making better decisions, what will that bring to procurement and supply chain activities? Are we really talking about letting the machines make the decisions? Where does the best of what machines do and the best of what people do meet?

Bhojwani: Quite often I get this question, What will be the role of procurement in 2025? Are the machines going to be able to make all the decisions and we will have no role to play? You can say the same thing about all aspects of life, so why only procurement?

Sudhir Bhojwani

Bhojwani

I think human intelligence is still here to stay. I believe, personally, it can be augmented. Let’s take a concrete example to see what that means. At SAP Ariba, we are working on a product called product sourcing. Essentially this product takes a bill of materials (BOM) and tells you the impact. So what is so cool about it?

One of our customers has a BOM that is an eight-level-deep tree with 10 million nodes in it. In this 10-million-node commodity tree, or BOM, a person is responsible for managing all the items. But how does he or she know the impact of a delay on the entire tree? How do you visualize that? I think humans are very poor at visualizing a 10-million-node tree; machines are really good at it. Where the human is still going to be required is that eventually you have to make a decision. Are we comfortable letting the machine alone make a decision? Only time will tell. I continue to think that this kind of augmented intelligence is what we are looking for, not some machine making complete decisions on our behalf.

Gardner: Dinesh, in order to make this more than what we get in our personal consumer space, which in some cases is nice to have, it doesn’t really change the game. But we are looking for higher productivity in business. The C-Suite is looking for increased margins; they are looking for big efficiencies. What is it from a business point of view that these technologies can bring? Is this going to be just lipstick on a pig, so to speak, or do we really get to change how business productivity comes about?

Humans and machines working together

Shahane: I truly believe it will change productivity. The whole intelligence advantage, if you look at it from the highest perspective, such as enhanced user experience, provides the ability to help you make your decisions.

When you make decisions with this augmented assistant helping you along the way, while it handles large amounts of data for business benefit, I think it will make a huge impact.

Let me give you an example. Think about supplier risk. Today, you look first at the suppliers on the network that you are directly doing business with. You want to know everything about them, their profile, and you care about them being a good business partner to you.

But think about your second-, third- and fourth-tier suppliers; things can happen there that are not so visible to your business. All that information for those further tiers is not directly available on the network; it is distant. But if those signals can be captured and somehow surfaced in your decision-making, it can really reduce risk.

Reducing risk means more productivity, more benefits to your businesses. So that is one advantage I could see, but there will be a number of advantages. I think we’ll run out of time if we start talking about all of those.

Gardner: Sanjay, help us better understand. When we take these technologies and apply them to procurement, what does that mean for the procurement people themselves?

Almeida: There are two inputs you need to make strategic decisions, and one is the data. You look at that data and you try to make sense of it. As Sudhir mentioned, there is a limit to how much data processing human beings can do — and that’s where some of these technologies will help quite a bit to make better decisions.

The other part is personal bias; eliminating personal biases by using the data will improve the accuracy of your strategic decisions. A combination of those two will help make better, faster decisions, and procurement groups can focus on the right stuff, versus being busy with day-to-day tasks.

Using these technologies, the data, and the power of the data from computational excellence — that’s taking the personal biases out of making decisions. That combination will really help them make better strategic decisions.

Bhojwani: Let me add something to what Sanjay said. One of the biggest things we’re seeing now in procurement, and in enterprise software in general, is that people’s expectations have clearly gone up based on their personal experience outside. I mean, 10 years back I could not have imagined that I would never go to a store to buy shoes. I thought, who buys shoes online? Now I never go to stores. I don’t know when I last bought shoes anywhere but online; it’s been a few years, in fact. Now, think about that expectation on procurement software.

Currently, procurement is looked upon as a gatekeeper, ensuring that nobody does anything wrong. The problem with that approach is it is a “stick” model; there is no “carrot” behind it. What users want is, “Hey, show me the benefit and I will follow the rules.” We can’t punish the entire company because of a couple of bad apples.

By and large, most people want to follow the rules. They just don’t know what the rules are; they don’t have a platform that makes that decision-making easy, that enables them to get the job done sooner, faster, better. And that happens when the user experience is acceptable and where procurement is no longer looked down upon as a gatekeeper. That is the fundamental shift that has to happen, procurement has to start thinking about themselves as an enabler, not a gatekeeper. That’s the fundamental shift.

Gardner: Here at SAP Ariba LIVE 2017, we’re hearing about new products and services. Are there any of the new products and services that we could point to that say, aha, this is a harbinger of things to come?

In blockchain we trust

Shahane: The conversational interfaces and bots are a fairly easy technology for anyone to adopt nowadays, especially because some of these algorithms are available so easily. But — from my perspective — I think the technologies that will have a huge impact on our lives will be the advent of IoT devices, 3D printing, and blockchain.

To me, blockchain is the most exciting one. It will have a huge impact on the way people look at the business network. Some people think about blockchain as a complementary idea to the network. Other people think that it is contradictory to the network. We believe it is complementary to the network.

Blockchain reaches out to the boundary of your network, to faraway places that we are not even connected to, and brings that into a governance model where all of your processes and all your transactions are captured in the central network.

I believe that a trusted transactional model combined with other innovations like IoT, where a machine could order by itself … My favorite example is when a washing machine starts working when the energy is cheaper … it’s a pretty exciting use-case.

This is a combination of open platforms and IoT combining with blockchain-based energy-rate brokering. These are the kind of use cases that will become possible in the future. I see a platform sitting in the center of all these innovations.
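The trust mechanism behind this is worth sketching. In a blockchain ledger, each block carries a cryptographic hash of its predecessor, so tampering with any earlier transaction invalidates everything after it. Here is a minimal illustrative sketch in Python (a generic hash chain, not SAP Ariba's implementation; the buyer and supplier names are hypothetical):

```python
import hashlib
import json

def block_hash(body: dict) -> str:
    # Hash the block's canonical JSON form, which includes the previous hash
    payload = json.dumps(body, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, transaction: dict) -> None:
    # Link the new block to the tip of the chain via its hash
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = {"prev_hash": prev, "transaction": transaction}
    chain.append({**body, "hash": block_hash(body)})

def chain_is_valid(chain: list) -> bool:
    # Every block must hash correctly and point at its predecessor
    prev = "0" * 64
    for block in chain:
        body = {"prev_hash": block["prev_hash"], "transaction": block["transaction"]}
        if block["prev_hash"] != prev or block["hash"] != block_hash(body):
            return False
        prev = block["hash"]
    return True

ledger = []
append_block(ledger, {"buyer": "ACME", "supplier": "Globex", "amount": 1200})
append_block(ledger, {"buyer": "ACME", "supplier": "Initech", "amount": 450})
assert chain_is_valid(ledger)

ledger[0]["transaction"]["amount"] = 9999  # tampering anywhere is detectable
assert not chain_is_valid(ledger)
```

Because every participant can recompute these hashes independently, no single party has to be trusted as the record-keeper, which is the "distributed trust" property discussed here.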

Gardner: Sanjay, let’s look at blockchain from your perspective. How do you see that ability of a distributed network authority fitting into business processes? Maybe people hadn’t quite put those two together.

Almeida: The core concept of blockchain is distributed trust and transparency. When we look at business networks, we obviously have the largest network in the world. We have more than 2.5 million buyers and suppliers transacting on the SAP Ariba Network — but there are hundreds of millions of others who are not on the network. Obviously we would like to get them.

If you use blockchain technology to bring that trust together, it's a federated trust model. Then our supply chain would be a lot more efficient, a lot more trustworthy. It will improve efficiency, and all the risk associated with managing suppliers will be managed better by using that technology.

Gardner: So this isn’t a “maybe,” or an “if.” It’s “definitely,” blockchain will be a significant technology for advancing productivity in business processes and business platforms?

Almeida: Absolutely. And you have to have the scale of an SAP Ariba, the scale from the number of suppliers and the amount of business that happens on the network. You have to have scale and technology together to make that happen. We want to be at the center of a blockchain, to be a blockchain provider, so that other third-party ecosystem partners can be part of this trusted network and make this process a lot more efficient.

Gardner: Sudhir, for those who are listening and reading this and are interested in taking advantage of ML and better data, of what the IoT will bring, and of AI where it makes sense — what in your estimation should they be doing now to prepare their organizations to best take advantage of these technologies and the services that folks like SAP Ariba provide, so that they can stand out in their industries?

Bhojwani: That's a very good question, and that's one of our central themes. At the core of it, I fundamentally believe the tool cannot solve the problem completely on its own; you have to change as well. If companies want to stick to the old processes — but try to apply the new technology — it doesn't solve the problem. We have seen that movie before. People get our tool and say, hey, we were sold a very good vision, so we bought the SAP Ariba tool. We tried to implement it and it didn't work for us.

When you question that, generally the answer is: we just tried to use the tool — we tried to change the tool to fit our model, to fit our processes. We didn't try to change the processes. As for blockchain, enterprises are not used to opening up for track and trace; they are not exposing that kind of information in any shape or form, or they are very secretive about it.

So for them to suddenly participate in this requires a change on their side. It requires seeing what is the benefit for me, what is the value that it offers me? Slowly but surely that value is starting to become very, very clear. You hear more companies — especially on the payment side — starting to participate in blockchain. A general ledger will be available on blockchain some day. This is one of the big ideas for SAP.

If you think about SAP, they run more general ledgers in the world than any other company. They are probably the biggest general ledger company that connects all of that. Those things are possible, but it’s still a technology only until the companies want to say, “Hey, this is the value … but I have to change myself as well.”

This changing-yourself part, even though it sounds so simple, is what we are seeing in the consumer world, where change happens a little bit faster than in the enterprise world. But even that is changing, because of the demands and expectations that end-users, the Millennials, bring when they come into the workforce. Enterprises, if they continue to resist, won't be sustainable.

They will be forced to change. So I personally believe that in the next three to five years, when there are more and more Millennials in the workforce, you will see people adopting blockchain and new ledgers at a much faster pace.

A change on both sides

Shahane: I think Sudhir put it very nicely. I think enterprises need to be open to change. You can achieve transformation if the value is clearly articulated. One of the big changes for procurement is you need to transition yourself from being a spend controller into a value creator. There is a lot of technology that will benefit you, and some of the technology vendors like us, we cannot just throw a major change at our users. We have to do it gradually. For example, with AI it will start as augmented first, before it starts making algorithmic decisions.

So it is a change on both sides, and once that happens — and once we trust each other on the system — nice things will happen.

Almeida: One thing I would add is that organizations need to think about what they want to achieve in the future, and adopt the tools, technology, and business processes that serve those future business goals. It's not about living in the past, because the past is going to be gone. So how do you differentiate your business from the rest of the competition?

The past business processes, people, and technology may not necessarily get you there. So how do you leverage the technology that companies like SAP and Ariba provide? Think about what your future business processes should be. The people you will have, as Sudhir mentioned, the Millennials, have different expectations, and they won't accept the status quo.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: SAP Ariba.


Why effective IoT adoption is a team sport, and how to become a player

The next BriefingsDirect Voice of the Customer discussion highlights how Internet of Things (IoT) adoption means more than just scaling up networks. The complexity and novel architectural demands of IoT require a rethinking of the edge of nearly any enterprise.

We’ll explore here how implementing IoT strategies is not a one-size-fits-all endeavor — nor can it be bought off the shelf. What’s more, those new to the computational hive and analytical edge attributes of IoT are discovering that it takes a team approach.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To explain how many disparate threads of an effective IoT fabric come together, we’re joined by Tushar Halgali, Senior Manager in the Technology Strategy and Architecture Practice at Deloitte Consulting in San Francisco, and Jeff Carlat, Senior Director of Technology Solutions at Hewlett Packard Enterprise (HPE) Strategic Alliances. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What are the top trends making organizations recognize the importance of IoT?

Carlat: We’re at the cusp of a very large movement of digitizing entire value chains. Organizations have more and more things connected to the Internet. Look at your Nest thermostats and the sensors that are on everything. The connectivity of that back to the data center to do analytics in real-time is critical for businesses to reach the next level of efficiencies — and to maintain their competitiveness in the market.

Gardner: Tushar, this is a different type of network requirement set. We’re dealing with varied data types, speeds, and volumes in places that we haven’t seen before. What are the obstacles that organizations face as they look at their current infrastructure and the need to adapt?

Halgali: One of the really interesting things we've seen is that traditionally, organizations have treated all technology-related problems as information technology (IT) problems. There was this whole concept of machine-to-machine (M2M) a while back. It connected machines to the Internet, but it was a very small segment.

Now, we’re trying to get machines to connect to the Internet and have them communicate with each other. There are a lot of complexities involved. It’s not just the IT pieces, but having the operational technology (OT) connect to the IT world, too. It creates a very complex ecosystem of components.

Gardner: Let's parse out the differences between OT and IT. How do you describe those? Why should we understand and appreciate how different they are?

Jeff Carlat

Carlat: When we think of OT, you think of long-standing companies out there, such as Bosch, National Instruments (NI), and many others that have been instrumenting sensors for operations, shop floors, and oil and gas, with every pump being sensed. The problem is that humans would have to interact a lot around those sensors, to remediate or to understand when something like a bearing on a pump has failed. [Learn more on OT and IoT.]

What's key here is that IT, those core data-center technologies in which HPE is leading the market, has the ability to run analytics and to provide intelligence and insights from all of that sensor data. When you can connect the OT devices with the IT — whether in the data center or delivering that IT to the edge, which we call the Intelligent Edge — you can actually derive your insights, create your feedback, and provide corrective actions even before things fail, rather than waiting.

Gardner: That failed ball bearing on some device isn’t just alerting the shop floor of a failure, it’s additionally automating a process where the inventory is checked. If it’s not there, the supply chain is checked, the order is put in place, it’s delivered and ready to install before any kind of breakdown — or did I oversimplify that?

End of Downtime

Carlat: That’s a fair representation. We’re working closely with a company called Flowserve. We’re building the telemetry within the pumps so that when a cavitation happens or a bearing is starting to wear out, it will predict the mean time for failure and alert them immediately. It’s all truly connected. It will tell you when it’s going to fail. It provides the access to fix it ahead of time, or as part of a scheduled maintenance plan, rather than during downtime, because downtime in an oil production facility or any business can cost millions of dollars.

Gardner: Tushar, are there any other examples you can think of to illustrate the power and importance of OT and IT together?

How to Gain Business Insights From the Intelligent IoT Edge

Halgali: If our readers ever get a chance to check out one of the keynotes on the Intelligent Edge [at HPE Discover London 2016], there's a good demonstration of PTC ThingWorx software, an IoT platform, running with HPE Edgeline servers in a manufacturing facility. You have conveyor belts that need upkeep; they're constantly producing things, and they're part of the production facility. It's all tied to the revenue of the organization, and the minute a belt shuts down, there are problems.

Tushar Halgali

Maintenance needs to be done on those machines, but you don't want to do it too soon, because you're just spending money unnecessarily and it's not efficient. You don't want it too late, because then there's downtime. So, you want to find the equilibrium between the two.

IoT determines the right window for when that maintenance needs to be done. If there’s a valve problem, and something goes down quickly, sensors track the data and we analyze the information. The minute that data goes off a certain baseline, it will tell you about this problem — and then it will say that there’s the potential in the future for a major problem.

It will actually generate a work order, which then feeds from the OT systems into the IT systems, and it’s all automatic. Then, when mechanics come in to try to solve these problems, they can use augmented reality or virtual reality to look at the machine and then fix the problem.

It’s actually a closed-loop ecosystem that would not have happened in the M2M base. It’s the next layer of maturity or advancement that IoT brings up.
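The baseline logic described above can be sketched simply: maintain a rolling baseline for each sensor and raise a work order the moment a reading drifts well outside the normal band. Here is a hypothetical Python sketch (the asset ID, sigma threshold, and work-order fields are illustrative assumptions, not from any PTC or HPE product):

```python
from statistics import mean, stdev

def check_reading(history: list, reading: float, sigmas: float = 3.0):
    """Return a work-order dict if the reading deviates from baseline, else None."""
    if len(history) < 10:          # need enough data to establish a baseline
        history.append(reading)
        return None
    baseline, spread = mean(history), stdev(history)
    history.append(reading)
    if spread and abs(reading - baseline) > sigmas * spread:
        # The OT event crosses into the IT world as a maintenance work order
        return {
            "asset": "conveyor-07",     # hypothetical asset id
            "reading": reading,
            "baseline": round(baseline, 2),
            "action": "schedule maintenance",
        }
    return None

history = []
for r in [10.0, 10.2, 9.9, 10.1, 10.0, 9.8, 10.2, 10.1, 9.9, 10.0]:
    assert check_reading(history, r) is None   # normal readings build the baseline
order = check_reading(history, 14.5)           # sharp deviation from baseline
assert order is not None and order["action"] == "schedule maintenance"
```

In a real deployment this check would run on the edge device itself, with the resulting work order fed automatically into the maintenance system, closing the OT-to-IT loop described above.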

Gardner: We can measure, we can analyze, and we can communicate. That gives us a lot of power. We can move toward minimum viable operations, where we’re not putting parts in place when they’re not needed, but we’re not going down either.

It reminds me of what happened on the financial side of businesses a decade or two ago, where you wanted to have spend management. You couldn’t do it until you knew where all your money was, where all the bills had to be paid, but then doing so, you could really manage things precisely. Those were back office apps, digital ledgers.

So, it’s a maturity coming to devices — analog, digital, what have you, and it’s pretty exciting. What’s the impact here financially, Jeff?

Carlat: Well, huge. Right now, IDC predicts IoT will represent about a $1.3 trillion opportunity by 2020. It's a huge opportunity, not only for incremental revenue for businesses, but for increased efficiencies, reducing cost, reducing downtime, reducing risk; so, a tremendous benefit. Companies need to strongly consider digitizing their value chains to remain competitive in the future.

Bigger and Better Data at the Edge

Gardner: Okay. We understand why it’s important and we have a pretty good idea of what you need to do. Now, how do you get there? Is this big data at the edge? I heard a comment just the other day that there’s no bigger data than edge data and IoT data. We’re going to have to manage scales here that we haven’t seen before.

Carlat: It's an excellent point. Jet engines in use today generate 5 TB of data every time they land or take off. Imagine that for every engine on every plane flying in the sky, every day, every year. The amount of data is huge. This brings me to the unique way that HPE is approaching this; we truly believe we are leaders in the data center and leaders within IT.

We're taking that design, that implementation, and that knowledge, and we're building data-center-quality infrastructure that's put on the edge: ruggedized compute and analytics, providing the ability to do that analysis and machine learning locally, rather than sending all that data to the cloud for analytics. Imagine how expensive that would be.
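The economics of doing the analysis locally can be illustrated with a pre-aggregation step: the edge node collapses thousands of raw samples into a compact summary before anything leaves the site. A hypothetical sketch (field names and window size are illustrative assumptions, not an HPE Edgeline API):

```python
from statistics import mean

def summarize_window(samples: list, asset: str) -> dict:
    """Collapse a window of raw sensor samples into a compact summary
    suitable for uplink to the cloud; the raw data stays at the edge."""
    return {
        "asset": asset,
        "count": len(samples),
        "min": min(samples),
        "max": max(samples),
        "mean": round(mean(samples), 3),
    }

raw = [20.0 + 0.01 * i for i in range(10_000)]   # 10,000 raw readings
summary = summarize_window(raw, "engine-01")
assert summary["count"] == 10_000
# The uplink payload is five fields instead of 10,000 samples
```

Whatever the exact scheme, the design choice is the same one Carlat describes: ship insight, not raw telemetry, and keep the bulk of the data where it was generated.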

That’s one approach we’re taking on within HPE. But, it’s not just about HPE tackling this. Customers are asking where to start. “This is overwhelming, this is complex. How do we do this?” We’re coming together to do advisory services, talking our customers through this, hand-holding, building a journey for them to do that digitization according to their plans and without disrupting their current environment.

Gardner: Tushar, when you have a small data center at the edge, you’re going to eke out some obvious efficiencies, but this portends unforeseen circumstances that could be very positive. What can you do when you have this level of analytics, and you take it to a macro level? You can begin to analyze things on an industry-level, and then have the opportunity to drill down and find new patterns, new associations, perhaps even new ways to design processes, factory floors, retail environments? What are we talking about in terms of the potential for the analytics when we capture and manage this larger amount of data?

Halgali: We've noted there are a lot of IoT use cases, and the value generated so far has been around cost optimization, efficiency, risk management, and those kinds of things. But by doing things at the edge, not only can you do all of those, you can start getting into higher-value areas, such as revenue growth and innovation.

A classic example is remote monitoring. Think of a healthcare provider that could not otherwise get into the business of managing people's health when those people are located remotely. With sensor-equipped devices in homes, you can start tracking behaviors and patterns, such as when people are taking their medicine, and build profiles of those people from all that information. You have now distributed the power of taking care of all the constituents in your base, without having to have them physically be in a hospital.

Gardner: Those feedback loops are not just one way where you can analyze, but you can start to apply the results, the benefits of the analysis, right back into the edge.

Carlat: Health and life sciences are great examples of using IoT as a way of more efficiently managing hospital beds. It costs a lot of money to have people sit in a hospital when they don't need to be there. To be able to provide patient access remotely, to be able to monitor them, and to be able to intervene on an as-needed basis drives much greater efficiencies.

We’ve talked a little bit about industrial IoT, we’ve talked a little bit about health and life sciences, but this extends into retail and smart stores, too. We’re doing a lot with Home Depot to deliver the store of the future, bridging the digital with the brick-and-mortar across 2,200 stores in North America.

It also has to do with the experience around campus and branch networks. At Levi’s Stadium in Santa Clara, California, HPE built that out with indoor Global Positioning System (GPS) and built out a mobile app that allows indoor wayfinding. It allows the patrons visiting the game to have a totally new, immersive experience.

By mapping photo uploads and downloads across the stadium, they found hotspots. The hotspots had a great unobstructed view of the field, so there were a lot of people there taking pictures. They installed a food stand nearby and increased revenues because of strategic placement based on this IoT data. Levi's Stadium recognized $1 million in additional revenue in the first season and 10 times growth in the number of contacts in its repository.

Gardner: So, it’s safe to say that edge computing and intelligence is a gift that will keep giving, at levels organizations haven’t even conceived of yet.

Carlat: I believe it’s a necessity to stay competitive in the world of tomorrow.


Gardner: If your competitor does this, and you don’t, that’s going to be a big question mark for your customers to mull over.

While we're still on the subject of edge technical capabilities: by being able to analyze data, and not just pass it along, it seems to me the edge is also a big help when it comes to compliance and security, which are big concerns.

Security isn't achieved only by hardening or putting up a wall; probably the safest bet is to be able to detect when something is breached or going wrong, and then to control or contain it. Tell me why the HPE Edgeline approach of analyzing data fast, at the edge, can also be a big boost to security, risk containment, and other compliance issues.

Carlat: We do a lot around asset tracking. Typically, you need to send someone out there to remediate. By using Edgeline, using our sensor data, and using asset tagging, you can ensure that the right person can be identified as the service person physically at the pump to replace it, rather than just saying that they did it, writing on paper, and actually being off doing something else. You have security, you have the appropriate compliance levels with the right people fixing the right things in the right manner, and it’s all traceable and trackable.

Halgali: When you begin using edge devices, as well as geolocation services, you have this ability to do fine-grained policy management and access control for not just the people, but also devices. The surface area for IoT is so huge there are many ad-hoc points into the network. By having a security layer, you can control that and edge devices certainly help with that.

A classic example would be a camera in a certain place. The camera is constantly recording feeds of things that are going on, wrong or right. But the algorithms that have been fed into the edge device allow it to learn what is normal, so it can not only alert the authorities at the right time when something abnormal happens, but also store only that footage. Why store days and days' worth of images when you can keep only the ones that truly matter?

As Jeff said, it allows workplace restrictions and compliance, but also in an open area, it allows you to track events that are specific.
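The camera example amounts to a simple edge filter: score each frame against a learned notion of normal, and keep and alert on only the frames that cross a threshold. A hypothetical sketch, with a stand-in scoring function where a real deployment would use a trained model:

```python
def filter_frames(frames, anomaly_score, threshold=0.8):
    """Keep only frames whose anomaly score exceeds the threshold.

    frames: iterable of (frame_id, frame_data) pairs
    anomaly_score: callable returning a score in [0, 1], where
                   higher means further from "normal"
    """
    kept = []
    for frame_id, data in frames:
        score = anomaly_score(data)
        if score > threshold:
            kept.append((frame_id, score))   # store and alert only on anomalies
    return kept

# Stand-in scorer: pretend each frame's data is already its anomaly score
frames = [(i, s) for i, s in enumerate([0.1, 0.2, 0.95, 0.15, 0.9])]
anomalies = filter_frames(frames, anomaly_score=lambda s: s)
assert [fid for fid, _ in anomalies] == [2, 4]
```

The storage win follows directly: out of five frames here, only two are worth keeping, and at the scale of days of continuous footage that ratio is what makes edge-side filtering pay off.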

In other cases, let’s say the mining industry or the oil and gas industry, where you have workers that are going to be in very remote locations and it’s very hard to track each one of them. When you have the ability to track the assets over time, if things go wrong, then it’s easier to intervene and help out.

Carlat: Here is a great personal example. I went to my auto dealership and I pulled into the garage. Immediately, I was greeted at my door by name, “Hello Mr. Carlat. Are you in for your service?”

I thought, “How do you know I came in? Are you tracking me? How are you doing that?” It turns out, they have radio-frequency identification (RFID) tags. When you come in for service, they apply these tags. As soon as you pull in, they provide a personalized experience.

Also, it yields a net benefit of location tracking. They know exactly where my car is at all stages. If I moved to a rental car that they have there, my profile is automatically transferred over there. It starts their cycle time metrics, too, the traceability of how they’re doing on remediating whatever my service level may be. It’s a whole new experience. I’m now a lifetime-loyal customer of this auto dealer because of that personalization; it’s all coming from implementation of IoT.

Gardner: The implications are vast; whether it’s user experience, operational efficiency, risk reduction, or insights and analysis at different levels of an industry and even a company.

It’s very impressive stuff, when you can measure everything and you can gather as much data as you want and then you can triage, and analyze that data and put the metadata out to the cloud; so much is possible.

We’ve established why this is of interest. Now, let’s think a little bit about how you get there for organizations that are thinking more about re-architecting their edge in order to avail themselves of some of these benefits. What is it about the HPE and Deloitte alliance that allows for a pathway to get on board and start doing things in a proper order to make this happen in the best fashion?

Transformation Journey, One Step at a Time

Halgali: Dana, any time you do an IoT initiative, the key thing to realize is that it should be part of a major digital transformation initiative. Like any other transformation story, there are people, process, and technology components to it. Jeff and I can talk about these three at a very high level, beginning with the process and the business model.

Deloitte has a huge practice in the strategy and the process space. What we’re looking at is digital industrial value-chain transformation. Let’s look at something like a smart factory.

What's the value chain for an organization that's making heavy machinery, end-to-end, all the way from R&D and planning, to procurement, development, and shipment, to after-sale repairs? What does that entire value chain look like in the new IoT era? Then, decompose it into processes and use cases, identify the highest-value use cases, and quantify them, because that's important.

Identifying the use cases that will deliver immediate tangible value in the near term provides the map of where to begin the IoT journey. If you can’t quantify concrete ROI, then what’s the point of investing? That addresses the reason of what IoT can do for the organization and why to leverage this capability. And then, it’s about helping our clients build out the business cases, so that they can justify the investments needed from the shareholders and the board — and can start implementing.

At a very high level, what’s the transformation story? What’s the impact on the business model for the organization? Once you have those strategy questions answered, then you get into the tactical aspects, which is how we execute on it.

From an execution standpoint, let's look at enablement via technology. Once you have identified which use cases to implement, you can utilize the pre-integrated, pre-configured IoT offerings that Deloitte and HPE have co-developed. These offerings address use cases such as asset monitoring and maintenance (in oil and gas, manufacturing, and smart cities), intelligent spaces (in public venues such as malls, retail stores, and stadiums), and digital workplaces (in office buildings). One must also factor in organizational change and communication management, because addressing cultural shifts is one of the most challenging aspects of an IoT-enabled digital transformation. Such a holistic approach helps our clients think big, start small, and scale fast.

Gardner: Tushar just outlined a very nice on-ramp process. What about some places to go for information or calls for action? Where should people get started as they learn how to implement on the process that Tushar just described?


Carlat: We’re working as one with Deloitte to deliver these transformations. Customers with interest can come to either Deloitte or HPE. We at HPE have a strong group of technology services consultants who can step in and help in partnership with Deloitte as well.

So, come to either company. Any of our partner representatives can get all of this and our websites are loaded with information. We’re here to help. We’re here to hold the hand and lead our customers to digitize and achieve these promised efficiencies that can be garnered from digital value chains.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


TasmaNet ups its cloud game to deliver a regional digital services provider solution

The next BriefingsDirect Voice of the Customer cloud adoption patterns discussion explores how integration of the latest cloud tools and methods help smooth out the difficult task of creating and maintaining cloud-infrastructure services contracts.

The results are more flexible digital services that both save cloud consumers money and provide the proper service levels and performance characteristics for each unique enterprise and small business.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Stay with us now as we hear from a Tasmanian digital services provider, TasmaNet, about its solution-level approach to cloud services, especially for mid-market enterprises. To share how proper cloud procurement leads to new digital business innovations, we're joined by Joel Harris, Managing Director of TasmaNet in Hobart, Tasmania. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Let's start at a high level, looking at the trends driving how cloud-services procurement will be done in 2017. What has changed, in your opinion, in how enterprises are reacting to and leveraging cloud services nowadays?

Harris: We're seeing a real shift in markets, particularly with small- and medium-sized businesses (SMBs), in their approach to and adoption of cloud services. More and more, there is an acceptance that it's okay to buy products off the Internet. We see it every day in the personal cloud: iPhones, the Apple Store, and Google Play for buying movies. So, there is now the idea in the workplace that it's acceptable to procure business services online through cloud providers.

Because of the success of personal cloud with companies such as Apple, there’s a carry-over in that there is an assumed equivalent success in the commercial sense, and unfortunately, that can cause some problems. What we’re seeing is a willingness to start procuring from public, and also some private cloud as well, which is really good. What we’re finding, though, is a lack of awareness about what it means for businesses to buy from a cloud provider.

Gardner: What is it that the people might have wrong? What is it that they’ve not seen in terms of where the real basis for value comes when you create a proper cloud relationship? 

Solutions for Hybrid and Private Cloud IT Infrastructure

Harris: Look at the way personal cloud is procured, a simple click, a simple install, and you have the application. If you don’t like it, you can delete it. 

When you come into a commercial environment, it's not that simple, although there can be a perception that it is. When you're looking at an application and its glossy picture, it may talk about functionality, business improvement, future savings, and things like that. But when it comes to implementing a cloud product or a cloud service in a business, the business needs to make sure it has met its service levels, from internal business requirements or external business requirements, and from customers and markets.

Harris

But you also need to make sure that it is married up with the skills of your workforce. Cloud services are really just a tool for a business to achieve an outcome. So, you're either arming someone in the workforce with the tool and skills to achieve an outcome, or you're going to use a service from a third party to achieve that outcome.

Because we’re still very early in the days of SMB cloud adoption, the work being put into marrying up the capabilities of a product, or its imagined capabilities, with internal business processes and systems is clearly not as mature as we would like. Certainly, if you look at the marketplace, the availability of partners and skills to help companies with this is also lacking at the moment.

Cloud Costs

Then, comes the last part that we talked about, which is removing or changing the application. At the moment, a lot of SMBs are still using traditional procurement. Maybe they want a white car. Well, in cloud services there’s always the ability to change the color, but it does come at a cost. There’s traditionally a variation fee or similar charge.

SMBs are getting themselves in a bit of trouble when they say they would like a white car with four seats, and then, later on, find that they actually needed five seats and a utility. How do they go about changing that? 

The cost of change is something that sometimes gets forgotten in those scenarios. Our experience over the last two years is that companies overlook the cost of change when they are under a cloud-services contract.

Gardner: I’ve also heard you say, Joel, that cloud isn’t for everyone. What do you mean by that? How would a company know whether cloud is the right fit for it or not?

Harris: It comes down to a real, deep understanding of your business. Coming back to the ability to link up service levels, it’s the ability to have a clear view into the future of what a company needs to achieve its outcomes. If you can’t answer those questions for your customer, or the customer can’t answer them for you as a cloud provider, then I would advise you to take a step back and start a fresh process of understanding what the customer wants out of the cloud product.

Change later on costs money, and small businesses don’t have endless funds to keep paying a third party to change the implementation of what, in most cases, becomes a core piece of software in the organization.

Gardner: For the organizations that you work with that are exploring deeper relationships with private cloud, do you find that they’re also thinking about future direction, planning a strategy to go hybrid and ultimately perhaps more toward public cloud? Is that the common view among those that are ready for cloud now?

Harris: In the enterprise, yes. We’re definitely seeing a huge push by organizations that understand the breakdown of applications between those suitable for private cloud and those suitable for public cloud.

As you come down into the SMB market, that line blurs a little bit. We have some companies that wish to put everything in the cloud because it’s easy and that’s the advice they were given. Or, you have people who think they have everything in the cloud, but it’s really a systems integrator that has now taken their servers, put them in a data center, and is managing them as more of a hosted, managed solution. 

Unfortunately, what we are seeing is that a lot of companies don’t know the difference between moving into the cloud and having a systems integrator manage their hardware for them in a data center where they don’t see it.

There’s definitely a large appetite for moving to the as-a-service model in companies that have a C-suite or some level of senior management with ownership of business process. So, if there is a Chief Information Officer (CIO) or a Chief Technology Officer (CTO) or some sort of very senior Information Technology (IT) person that has a business focus on the use of technology, we’re seeing a very strong review of what the company does and why and how things should be moved to either hybrid or 100 percent in either direction.

Gardner: So, clearly the choices you make around cloud affect the choices you make as a business; there really is a transformational aspect to this. Therefore, the contract, that decision document of how you proceed with your cloud relationship, is not just an IT document; it’s really a business document. Tell us why getting the contract right is so important.

Harris: It’s very, very important to involve all the areas of a business when going into a cloud services contract.

Ecosystems of Scale

Gardner: And it’s no longer really one relationship. That is to say that a contract isn’t often just between one party and another. As we’re finding out, this is an ecosystem, a team sport, if you will. How does the contract incorporate the need for an ecosystem and how does TasmaNet help solve that problem of relationship among multiple parties?

Harris: Traditionally, if we look at the procurement department of a company, the procurement department would draft a tender, negotiate a contract between the supplier and the company, and then services would begin to flow, or whatever product was purchased would be delivered. 

More and more, though, in the cloud services contract, the procurement department has little knowledge of the value of the information or the transaction that’s happening between the company and the supplier, and that can be quite dangerous. Even though cloud can be seen as a commodity item, the value of the services that come over the top is very much not a commodity item. It’s actually a high-value item that, in most cases, is something relevant to keeping the company operating.

What we found at TasmaNet was that a lot of the companies moving to cloud don’t have the tools to manage the contract. They’re familiar with traditional procurement arrangements, but to manage a services contract, or specifically a cloud services contract such as TasmaNet provides, you need to understand a number of different aspects.

We set out to create an ecosystem with all of the tools our customers require. We put in a portal so that the finance manager can look at the financial performance of the services: Does it meet budget expectations? Is it behaving correctly? Are we achieving the business outcomes for the dollars we said it was going to cost?

Then, on the other side, we have a different portal, more for the technology administrator, about ensuring that the system is performing within the service-level agreements (SLAs) that have been documented, either between the company and the service provider or between the IT department and the internal business units.

It’s important to understand there are probably going to be multiple service levels here, not only between the service provider and the customer, but also the customer and their internal customers. So, it’s important to make sure that they’re managed all the way through. 

We provide a platform so that people can monitor end to end, from the customer’s end users all the way through to the financial manager on the other side.
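The layered service levels Harris describes, provider-to-customer on one side and IT-to-internal-business-unit on the other, can be sketched as a simple availability check. This is a hypothetical illustration only; TasmaNet's actual portals and SLA terms are not public, and all names and figures here are invented:

```python
# Hypothetical sketch of layered SLA monitoring: one measured availability
# figure is checked against two service levels, the provider-to-customer
# SLA and a separate IT-to-business-unit SLA. All targets are invented.

from dataclasses import dataclass


@dataclass
class SLA:
    name: str
    target_availability: float  # e.g. 0.999 means "three nines"


def availability(uptime_minutes: float, total_minutes: float) -> float:
    """Measured availability over a reporting period."""
    return uptime_minutes / total_minutes


def check(sla: SLA, measured: float) -> str:
    """Report whether the measured availability meets the SLA target."""
    status = "MET" if measured >= sla.target_availability else "BREACHED"
    return (f"{sla.name}: target {sla.target_availability:.3%}, "
            f"measured {measured:.3%} -> {status}")


# Two layers of service levels, as described in the interview.
slas = [
    SLA("Provider -> Customer", 0.999),
    SLA("IT -> Business unit", 0.995),
]

# Example month: 43,200 minutes total, 60 minutes of downtime.
measured = availability(43_200 - 60, 43_200)
for sla in slas:
    print(check(sla, measured))
```

The point of the sketch is that the same measurement can breach one layer's SLA while satisfying another, which is why Harris stresses managing the service levels all the way through rather than at a single boundary.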

Gardner: We’ve seen the importance of the contract. We understand that this is a complex transaction that can involve multiple players. But I think there is also another shift when we move from a traditional IT environment to a cloud environment and then ultimately to a hybrid cloud environment, and that’s around skills. What are you seeing that might be some dissonance between what was the skill set before and what we can expect the new skill set for cloud computing success to be?

Sea Change

Harris: We are seeing a huge change, and sometimes this change is very difficult for the people involved. With cloud services coming along, the nature of the tool is changing. A lot of people traditionally have been trained in a single skill set, such as storage or virtualization. Once you start to bring in cloud services, you’re bundling a bunch of individual tools and infrastructure together into one, and all of a sudden that worker has a tool that is made up of an ecosystem of tools. Therefore, their understanding of those different tools, how they report on them, and the related elements all change.

We see a change from people doing to controlling. We might see a lot of planning to try to avoid events, rather than responding to them. It really does change the ecosystem in your workforce, and it’s probably one of the biggest areas where we see risk arise when people are moving to a cloud-services contract.

Gardner: Is there something also in the realm of digital services, rather than just technology, that larger category of digital services, business-focused outcomes? Is that another thing that we need to take into consideration as organizations are thinking about the right way to transform to be of, for, and by the cloud?

Harris: It comes back to a business understanding. It’s being able to put a circle around something that’s a process or something we could buy from someone else. We know how important it is to the company, we know what it costs the company, and we know the service levels needed around that particular function. Therefore, we can put it out to the market to evaluate. Should we be looking to buy this as a digital service, should we be looking to outsource the process, or should we be looking to have it internally on our own infrastructure and continue running it?

Those questions, and the fact-finding that goes into them, are among the most important things I encourage a customer looking at cloud services to spend a lot of time on. It’s actually one of the key reasons why we have such a strong partnership with Hewlett Packard Enterprise (HPE). The hardware and infrastructure are strong and good, but the skill set and the programs we can access to work with our customers, pulling out information into things like enterprise nets to understand what the landscape looks like inside a customer, are just as important as the infrastructure itself.

Gardner: So, the customer needs to know themselves and see how they fit into these new patterns of business, but as you are a technologist, you also have to have a great deal of visibility into what’s going on within your systems, whether they’re on your premises, or within a public-private cloud continuum of some kind. Tell me about the TasmaNet approach and how you’re using HPE products and solutions to gain that visibility to know yourself even as you are transforming.

Harris: Sure. A couple of the functions that we use with HPE: they have a very good [cloud workload suitability] capability set called HPE Aura, with which they can sit down with us and work through the total cost of ownership for an organization. That’s not just at an IT level; it’s for almost anything: working with the accounting team to look at the total cost, from the electricity through to resources and third-party contractors in construction teams. That gives us a very good baseline understanding of how much it costs today, which is really important for people to understand.

Then, we also have other capabilities. We work with HPE to model what-if scenarios with that data. It’s very important to have that capability when working with a third party to understand whether or not you should move to cloud.
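The baseline-plus-what-if exercise Harris describes can be sketched in a few lines. This is a simplified illustration, not HPE Aura's actual tooling; every cost category and figure below is invented for the example:

```python
# Hypothetical total-cost-of-ownership (TCO) what-if sketch: compare an
# on-premises baseline against a cloud scenario across cost categories of
# the kind mentioned in the interview (power, staff, contractors).
# All figures are invented for illustration.

def annual_tco(costs: dict) -> float:
    """Total cost of ownership: the sum of all annual cost categories."""
    return sum(costs.values())


# Baseline: what running it yourself costs today.
on_prem = {
    "electricity": 40_000,
    "staff": 180_000,
    "contractors": 25_000,
    "hardware_depreciation": 60_000,
}

# What-if scenario: the same workload bought as a cloud service.
cloud = {
    "service_fees": 150_000,
    "staff": 120_000,       # fewer ops staff, more vendor management
    "change_fees": 15_000,  # the "cost of change" Harris warns about
}

baseline = annual_tco(on_prem)
scenario = annual_tco(cloud)
print(f"On-prem baseline: ${baseline:,.0f}/yr")
print(f"Cloud scenario:   ${scenario:,.0f}/yr")
print(f"Delta:            ${baseline - scenario:,.0f}/yr")
```

Note that the cloud scenario carries an explicit line item for variation fees; leaving that out is exactly the oversight Harris says trips up SMBs under cloud-services contracts.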

Gardner: Your comments, Joel, bring me back to a better understanding of why a static cloud services contract really might be a shackle on your ability to innovate. So how do you recognize that you need to know what you don’t know going into cloud, and therefore put in place the ability to react on a short-term basis and iterate? What kind of contract allows for that dynamic ability to change? How do you begin to think about a contract that is not static?

Harris: We don’t know the answer yet. We’re doing a lot of work with our current customers and with HPE to look at that. One of the early options we’re looking at is that when we create a master services agreement with a company, even for something that may be considered a commodity, we ensure that we put in a good plan around innovation, a risk-management framework, and continuous service improvement. Then there’s a conduit for business information to flow between the two parties, which can then feed into the use of the services we provide.

I think we still have a long way to go, because there’s a certain maturity required. We’re essentially becoming a part of another company, and that’s difficult for people to swallow, even though they accept using a cloud services contract. We’re essentially saying, “Can we have a key to your data center, or the physical front door of your office?”

If that’s disconcerting for someone, well, it should be equally disconcerting that they’re moving to cloud, because we need access to those physical environments, the people face-to-face, the business plan, the innovation plan, and to how they manage risk in order to ensure that there is a successful adoption of cloud not just today, but also going forward.

Gardner: Clearly, your destiny and your clients’ destiny are tied closely together. You need to make them successful, they need to let you show them the tools and the new flexibility, and you then need to rely on HPE to give you the means to create those dashboards and have that visibility. It really is a different kind of relationship; co-dependence, you might say.

Harris: The strength that TasmaNet will have going forward is the fact that we’re operating under a decentralized model. We work with HPE so that we can have a workforce on the ground, closer to the customer. The model of having all of your cloud services in one location, a thousand kilometers away from the customer, while technically workable, is not, we believe, the right mix for client-supplier relationships. We need to make sure that physically there are people on the ground to work hand-in-hand with the business management and others to ensure that we have a successful outcome.

That’s one of the strong key parts to the relationship between HPE and TasmaNet. TasmaNet is now a certified services provider with HPE, which lets us use their workforce anywhere around Australia and work with companies that want to utilize TasmaNet services.

Gardner: Help our readers and listeners understand that your regional reach is primarily in Tasmania, but you’re also in Australia and you have some designs and plans for an even larger expansion. Tell us about your roadmap.

No Net is an Island – Tasmania and Beyond

Harris: Over the last few years, we’ve really been spending time gathering information from a couple of early contracts to understand the relationship between a cloud provider and a customer. In the last six months, we put that into a product that we actually call TasmaNet Core, which is our new system for delivering digital services.

During the next 18 months we are working with some large contracts that we have won down here in Tasmania, having just signed one for the state government. We certainly have a number of opportunities and pathways to start deploying services and working with the state government on how cloud can deliver better business outcomes for them. We need to make sure we really understand and document clearly how we achieve success here in Tasmania.

Then, our plan is, as a company, to push this out to the national level. There are a lot of regional places throughout Australia that require cloud services, and more and more companies like TasmaNet will move into those regional areas. We think it’s important that they aren’t forgotten and we also think that for any business that can be developed in Tasmania and operate successfully, there is no reason why it can’t be replicated to regional areas around Asia-Pacific as required.

Gardner: Joel, let’s step back a moment and look at how to show, rather than tell, what we mean, in the new era of cloud, by a proper cloud adoption. Do you have any examples, either named or generic, where we can look at how this unfolded and what the business benefits have been when it’s done well?

Harris: One of our customers, about three years ago, moved into a cloud services environment, which was very successful for the company. But what we found was that some of the contracts with their software services, while they enabled the move to a cloud provider, added a level of complexity that made the platform very difficult to manage on an ongoing basis.

Over a number of years, we worked with them to remove that key application from the cloud environment. It’s really important that, as a cloud provider, we understand what’s right for the customer. At the end of the day, if there’s something that’s not working for the customer, we must work with them to get results.

It worked out successfully, and we have a very strong relationship with the company, a local firm down here called TT-Line, which operates vessels shipping between Tasmania and mainland Australia. Because of the platform, we had to find the right mix. That’s really important, and I know HPE uses it as a catch phrase.

This is a real-world example of where it’s important to find the right mix between putting your workloads in the appropriate place. It has to work both ways. It’s easy to come in to a cloud provider. We need to make sure it’s also easy to step back out as well, if it doesn’t work.

Now, we’re working with that company to deeply understand the rest of the business, to see which workloads can come out of TasmaNet, and which workloads need to move internally or to an application-specific hosting environment.

Gardner: Before we close out, Joel, I’d like to look a bit to the future. We spoke earlier about how private cloud and adjusting your business appropriately to the hosting models that we’ve described is a huge step, but of course, the continuum is beyond that. It goes to hybrid. There are public cloud options, data placement, and privacy concerns that people are adjusting to in terms of location of data, jurisdictions, and so forth. Tell me about where you see it going and how an organization like yours adjusts to companies as they start to further explore that hybrid-cloud continuum?

Hybrid Offspring

Harris: Going forward, the network will play probably one of the biggest roles in cloud services over the coming 10 years. More and more, we’re seeing software-defined network suppliers come into the marketplace. In Australia, we have a large data center operator, NEXTDC, which started its own network to connect all of its data centers. We have Megaport, which is 100 percent software-defined, where you can buy capacity for as little as one hour or for the long term. As these types of networks become common, they increasingly enable fluid movement of the services that run on top.

When we start to cross over two of the other really big things happening, which are the Internet of Things (IoT) and 5G, you have, all of a sudden, this connectivity that means data services can be delivered anywhere and that means cloud services can be delivered anywhere.

More and more, you’re going to see the collection of data lakes, the collection of information even by small businesses that understand that they want to keep all the information, and analyze it. As they go to cloud service providers, they will demand these data services there, too, and the analysis capabilities will become very, very powerful.

In the short term, the network is going to be the key enabler for things such as IoT, which will then flow on to support a distributed model for cloud providers in the next 10 years, whereas traditionally we have seen them centralized in key larger cities. That will change in the coming years, because there is just too much data to centralize as people start gathering all of this information.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.
