How Imagine Communications leverages edge computing and HPC for live multiscreen IP video

The next BriefingsDirect Voice of the Customer HPC and edge computing strategies interview explores how a video delivery and customization capability has moved to the network edge — and closer to consumers — to support live, multi-screen Internet Protocol (IP) entertainment delivery.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

We’ll learn how hybrid technology and new workflows for IP-delivered digital video are being re-architected — with significant benefits to the end-user experience, as well as with new monetization values to the content providers.

Our guest is Glodina Connan-Lostanlen, Chief Marketing Officer at Imagine Communications in Frisco, Texas. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Your organization has many major media clients. What are the pressures they are facing as they look to the new world of multi-screen video and media?

Connan-Lostanlen: The number-one concern of the media and entertainment industry is the fragmentation of their audience. We live with a model supported by advertising and subscriptions that rely primarily on linear programming, with people watching TV at home.

And guess what? Now they are watching it on the go — on their telephones, on their iPads, on their laptops, anywhere. So they have to find a way to capture that audience, justify the value of that audience to their advertisers, and deliver video content that is relevant to them. And that means meeting consumer demand for several types of content, delivered at the very time that people want to consume it. So it brings a whole range of technology and business challenges that our media and entertainment customers have to overcome. But addressing these challenges with new technology that increases agility and velocity to market also creates opportunities.

For example, they can now try new content. That means they can try new programs, new channels, and they don’t have to keep them forever if they don’t work. The new models create opportunities to be more creative, to focus on what they are good at, which is creating valuable content. At the same time, they have to make sure that they cater to all these different audiences that are either static or on the go.

Gardner: The media industry has faced so much change over the past 20 years, but this is a major, perhaps once-in-a-generation, level of change — when you go to fully digital, IP-delivered content.

As you say, the audience is pulling the providers to multi-screen support, but there is also the capability now — with the new technology on the back-end — to have much more of a relationship with the customer, a one-to-one relationship and even customization, rather than one-to-many. Tell us about the drivers on the personalization level.

Connan-Lostanlen: That’s another big upside of the fragmentation, and the advent of IP technology — all the way from content creation to making a program and distributing it. It gives the content creators access to the unique viewers, and the ability to really engage with them — knowing what they like — and then to potentially target advertising to them. The technology is there. The challenge remains how to justify the business model and how to value the targeted advertising; there are different opinions on this, and there is also the unknown willingness of several generations of viewers to accept good advertising.

That is a great topic right now, and very relevant when we talk about linear advertising and dynamic ad insertion (DAI). Now we are able to — at the very edge of the signal distribution, the video signal distribution — insert an ad that is relevant to each viewer, because you know their preferences, you know who they are, and you know what they are watching, and so you can determine that an ad is going to be relevant to them.
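To make the mechanics concrete, here is a minimal sketch of what an edge-side DAI decision can look like: pick an ad from a viewer profile, then splice it into an HLS media playlist at a segment boundary. This is an illustrative toy, not Imagine Communications’ implementation; the ad catalog, the profile fields, and the splice-point choice are all assumptions.

```python
# Illustrative dynamic ad insertion (DAI) at the edge: choose an ad for a
# known viewer profile and splice it into an HLS media playlist.
# The catalog and profile schema are hypothetical placeholders.

AD_CATALOG = {
    "sports": "ads/sneaker_15s_001.ts",
    "cooking": "ads/grocery_15s_007.ts",
}

def choose_ad(viewer_profile: dict) -> str:
    """Return an ad segment URI matched to the viewer's top interest."""
    interest = viewer_profile.get("top_interest", "sports")
    return AD_CATALOG.get(interest, AD_CATALOG["sports"])

def splice_ad(playlist: str, ad_uri: str, ad_duration: float = 15.0) -> str:
    """Insert an ad break after the next segment of an HLS playlist.

    HLS signals a timeline/format change with EXT-X-DISCONTINUITY tags,
    so the player resets its decoder around the ad segment.
    """
    out, inserted = [], False
    for line in playlist.strip().splitlines():
        out.append(line)
        # Splice after the first media segment URI (simplistic boundary pick).
        if not inserted and line and not line.startswith("#"):
            out += [
                "#EXT-X-DISCONTINUITY",
                f"#EXTINF:{ad_duration:.1f},",
                ad_uri,
                "#EXT-X-DISCONTINUITY",
            ]
            inserted = True
    return "\n".join(out) + "\n"

if __name__ == "__main__":
    playlist = """#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:6
#EXTINF:6.0,
content_000.ts
#EXTINF:6.0,
content_001.ts
"""
    viewer = {"top_interest": "cooking"}
    print(splice_ad(playlist, choose_ad(viewer)))
```

A real DAI system would of course make the ad decision against an ad server, honor SCTE-35 cue points rather than an arbitrary segment boundary, and report impressions back for billing.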

But that means media and entertainment customers have to revisit the whole infrastructure. It doesn’t necessarily mean rebuilding: they can put in add-ons. They don’t have to throw away what they had; they can maintain the legacy infrastructure and add the IP-enabled infrastructure on top of it to take advantage of these capabilities.

Gardner: This change has happened from the web now all the way to multi-screen. With the web there was a model where you would use a content delivery network (CDN) to take the object, the media object, and place it as close to the edge as you could. What’s changed and why doesn’t that model work as well?

Connan-Lostanlen: I don’t know yet if I want to say that model doesn’t work anymore. Let’s let the CDN providers enhance their technology. But for sure, the volume of video that we consume every day is growing exponentially. That definitely creates pressure in the pipe. Our role at the front-end and the back-end is to make sure that videos are being created in different formats, with different ads, and everything else, in the most effective way, so that they don’t put an undue strain on the pipe that is distributing the videos.

We are being pushed to innovate further on the types of workflows that we implement at our customers’ sites today: to make them efficient, to place storage at the edge rather than centrally, and to do transcoding just-in-time. These are the things that are being worked on. It’s a balance between available capacity and the number of programs that you want to send across to your viewers – and how big your target market is.

The task for us on the back-end is to rethink the workflows in a much more efficient way. So, for example, this is what we call the digital-first approach, or unified distribution. Instead of planning a linear channel that goes the traditional way and then adding another infrastructure for multi-screen, on all those different platforms and then cable, and satellite, and IPTV, etc. — why not design the whole workflow digital-first. This frees the content distributor or provider to hold off on committing to specific platforms until the video has reached the edge. And it’s there that the end-user requirements determine how they get the signal.
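As an illustration of that just-in-time, digital-first idea, the sketch below produces a device-specific rendition only at the moment a device class actually requests it, caching the result at the edge. The device profiles, paths, and ffmpeg settings are assumptions made for the example, not a description of Imagine’s workflows.

```python
# Illustrative just-in-time transcoding at the edge: a rendition is built
# only when a device actually requests it, keyed by a device profile.
import subprocess
from pathlib import Path

PROFILES = {
    "phone":  {"size": "640x360",   "v_bitrate": "800k"},
    "tablet": {"size": "1280x720",  "v_bitrate": "2500k"},
    "tv":     {"size": "1920x1080", "v_bitrate": "6000k"},
}

def transcode_on_demand(mezzanine: str, device: str, out_dir: str = "edge_cache") -> Path:
    """Transcode the mezzanine file for one device class, caching the result."""
    profile = PROFILES[device]
    out = Path(out_dir) / f"{Path(mezzanine).stem}_{device}.mp4"
    if out.exists():          # already materialized at this edge node
        return out
    out.parent.mkdir(parents=True, exist_ok=True)
    subprocess.run([
        "ffmpeg", "-y", "-i", mezzanine,
        "-c:v", "libx264", "-b:v", profile["v_bitrate"], "-s", profile["size"],
        "-c:a", "aac", "-b:a", "128k",
        str(out),
    ], check=True)
    return out

# Example: only the phone rendition is built, because only a phone asked.
# transcode_on_demand("programs/evening_news.mxf", "phone")
```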

This is where we are going — to see the efficiencies happen and so remove the pressure on the CDNs and other distribution mechanisms, like over-the-air.

Gardner: It means an intelligent edge capability, whereas we had an intelligent core up until now. We’ll also seek a hybrid capability between them, growing more sophisticated over time.

We have a whole new generation of technology for video delivery. Tell us about Imagine Communications. How do you go to market? How do you help your customers?

Education for future generations

Connan-Lostanlen: Two months ago we were in Las Vegas for our biggest tradeshow of the year, the NAB Show. At the event, our customers first wanted to understand what it takes to move to IP — so the “how.” They understand the need to move to IP, to take advantage of the benefits that it brings. But how do they do this, while they are still navigating the traditional world?

It’s not only the “how,” it’s needing examples of best practices. So we instructed them in a panel discussion, for example, on over-the-top (OTT) technology, which is another way of saying IP-delivered, and on what it takes to create a successful multi-screen service. Part of the panel explained what OTT is, so there’s a lot of education.

There is also another level of education that we have to provide, which is moving from the traditional world of serial digital interface (SDI) connections in the broadcast industry to IP. Video that used to travel over dedicated SDI links now travels as an IP stream. The whole knowledge of how to handle IP is new to our own industry, to our own engineers, and to our own customers. We also have to educate on what it takes to do this properly.
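For readers coming from the SDI world, a minimal example of what “handling IP” means at the packet level may help: joining an IP multicast group and reading the UDP datagrams that carry a video flow. The multicast address and port here are illustrative, and a real system would parse RTP headers and feed a decoder rather than print packet sizes.

```python
# Minimal sketch: receive a multicast UDP stream, the transport that
# typically replaces a point-to-point SDI cable in an IP facility.
import socket
import struct

GROUP, PORT = "239.1.1.1", 5004   # example multicast address for a video flow

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Tell the kernel to join the multicast group on the default interface.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    packet, addr = sock.recvfrom(2048)   # one datagram of the stream
    # A real receiver would parse the RTP header here and hand the
    # payload to a decoder; we just confirm packets are arriving.
    print(f"received {len(packet)} bytes from {addr}")
```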

One of the key things in the media and entertainment industry is that there’s a little bit of fear about IP, because no one really believed that IP could handle live signals. And you know how important live television is in this industry – real-time sports and news — this is where the money comes from. That’s why the most expensive ads are run during the Super Bowl.

It’s essential to be able to do live with IP – it’s critical. That’s why we are sharing with our customers the real-life implementations that we are doing today.

We are also pushing multiple standards forward. We work with our competitors on these standards. We have set up a trade association to accelerate the standards work. We did all of that. And as we do this, it forces us to innovate in partnership with customers and bring them on board. They are part of that trade association, they are part of the proof-of-concept trials, and they are gladly sharing their experiences with others so that the transition can be accelerated.

Gardner: Imagine Communications is then a technology and solutions provider to the media content companies, and you provide the means to do this. You are also doing a lot with ad insertion and billing, and in understanding more about the end user, allowing that data to flow from the edge back to the core, and then back to the edge.

At the heart of it all

Connan-Lostanlen: We do everything that happens behind the camera — from content creation all the way to making a program and distributing it. And also, to your point, monetizing all of that with a management system. We have a long history of powering key customers around the world with their advertising systems. It’s basically an automated system that allows the selling of advertising spots and then bills for them — and this is the engine of how our customers make money. So we are at the heart of this.

We are in the prime position to help them take advantage of the new advertising solutions that exist today, including dynamic ad insertion: in other words, how you target ads to the individual viewer. The challenge for them now is, given a campaign, how do they design it to cater to both the traditional linear advertising system and the multi-screen, web, and mobile applications? That’s what we are working on. We have a whole set of next-generation platforms that allow them to take advantage of both in a more effective manner.

Gardner: The technology is there; you are a solutions provider. You need to find the best ways of storing and crunching data, close to the edge, and optimizing networks. Tell us why you choose certain partners, and what are some of the major concerns you have when you go to the technology marketplace?

Connan-Lostanlen: One fundamental driver here, as we drive the transition to IP in this industry, is being able to rely on commercial off-the-shelf (COTS) platforms. But even so, not all COTS platforms are born equal, right?

For compute, for storage, for networking, you need to rely on top-scale hardware platforms, and that’s why about two years ago we started to work very closely with Hewlett Packard Enterprise (HPE) for both our compute and storage technology.

We develop the software appliances that run on those platforms, and we sell this as a package with HPE. It’s been a key value proposition of ours as we began this journey to move to IP. We can say, by the way, our solutions run on HPE hardware. That’s very important because having high-performance computing (HPC) that scales is critical to the broadcast and media industry. Having storage that is highly reliable is fundamental because going off the air is not acceptable. So it’s 99.9999 percent reliable, and that’s what we want, right?
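For scale, those nines translate into very little allowed downtime. A quick back-of-the-envelope check (illustrative arithmetic only) shows that six-nines availability leaves roughly half a minute per year off the air:

```python
# What an availability figure means as downtime per year.
SECONDS_PER_YEAR = 365.25 * 24 * 3600          # ~31,557,600 s

for availability in (0.999, 0.99999, 0.999999):
    downtime = (1 - availability) * SECONDS_PER_YEAR
    print(f"{availability:.6f} availability -> {downtime:,.0f} s/year of downtime")

# 99.9999% availability allows roughly 32 seconds of downtime per year.
```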

It’s a fundamental part of our message to our customers to say, “In your network, put Imagine solutions, which are powered by one of the top compute and storage technologies.”

Gardner: Another part of the change in the marketplace is this move to the edge. It’s auspicious that just as you need to have more storage and compute efficiency at the edge of the network, close to the consumer, the infrastructure providers are also designing new hardware and solutions to do just that. That’s also for the Internet of Things (IoT) requirements, and there are other drivers. Nonetheless, it’s an industry standard approach.

What is it about HPE Edgeline, for example, and the architecture that HPE is using, that makes that edge more powerful for your requirements? How do you view this architectural shift from core data center to the edge?

Optimize the global edge

Connan-Lostanlen: It’s a big deal because we are going to be in a hybrid world. When most of our customers hear about cloud, we have to explain it to them. We explain that they can have their private cloud where they can run virtualized applications on-premises, or they can take advantage of public clouds.

Being able to have a hybrid model of deployment for their applications is critical, especially for large customers who have operations in several places around the globe. For example, such big names as Disney and Turner – they have operations everywhere. For them, being able to optimize at the edge means that you have to create an architecture that is geographically distributed — but is highly efficient where they have those operations. This type of technology helps us deliver more value to those key customers.

Gardner: The other part of that intelligent edge technology is that it has the ability to be adaptive and customized. Each region has its own networks, its own regulation, and its own compliance, security, and privacy issues. When you can be programmatic as to how you design your edge infrastructure, then a custom-applications-orientation becomes possible.

Is there something about the edge architecture that you would like to see more of? Where do you see this going in terms of the capabilities of customization added-on to your services?

Connan-Lostanlen: One of the typical use-cases that we see for those big customers who have distributed operations is that they like to try and run their disaster recovery (DR) site in a more cost-effective manner. So the flexibility that an edge architecture provides to them is that they don’t have to rely on central operations running DR for everybody. They can do it on their own, and they can do it cost-effectively. They don’t have to recreate the entire infrastructure, and so they do DR at the edge as well.

We especially see this a lot in the process of putting the pieces of the program together, what we call “play out,” before it’s distributed. When you create a TV channel, if you will, it’s important to have end-to-end redundancy — and DR is a key driver for this type of application.

Gardner: Are there some examples of your cutting-edge clients that have adopted these solutions? What are the outcomes? What are they able to do with it?

Pop-up power

Connan-Lostanlen: Well, it’s always sensitive to name those big brand names. They are very protective of their brands. However, one of the top ones in the world of media and entertainment has decided to move all of their operations — from content creation, planning, and distribution — to their own cloud, to their own data center.

They are at the forefront of playing live and recorded material on TV — all from their cloud. They needed strong partners in data centers. So obviously we work with them closely, and the reason why they do this is simply to take advantage of the flexibility. They don’t want to be tied to a restricted channel count; they want to try new things. They want to try pop-up channels. For the Oscars, for example, it’s one night. Are you going to recreate the whole infrastructure when you can just switch it on and off, if you will, out of your data center capacity? So that’s the key application: pop-up channels and the ability to easily try new programs.

Gardner: It sounds like they are thinking of themselves as an IT company, rather than a media and entertainment company that consumes IT. Is that shift happening?

Connan-Lostanlen: Oh yes, that’s an interesting topic, because I think you cannot really do this successfully if you don’t start to think IT a little bit. What we are seeing, interestingly, is that our customers typically used to have the IT department on one side, the broadcast engineers on the other side — these were two groups that didn’t speak the same language. Now they get together, and they have to, because they have to design together the solution that will make them more successful. We are seeing this happening.

I wouldn’t say yet that they are IT companies. The core strength is content, that is their brand, that’s what they are good at — creating amazing content and making it available to as many people as possible.

They have to understand IT, but they can’t lose concentration on their core business. I think the IT providers still have a very strong play there. It’s always happening that way.

In addition to disaster recovery being a key application, multi-screen delivery is taking advantage of that technology, for sure.

Gardner: These companies are making this cultural shift to being much more technically oriented. They think about standard processes across all of what they do, and they have their own core data center that’s dynamic, flexible, agile, and cost-efficient. What does that get them? Is it too soon, or do we have some metrics of success for companies that make this move toward a fully digitally transformed organization?

Connan-Lostanlen: They are very protective about the math. It is fair to say that the up-front investments may be higher, but when you do the math over time — the total cost of ownership for the next 5 to 10 years, because that’s typically the life cycle of those infrastructures — then they definitely do save money. On the operational expenditure (OPEX) side [of private cloud economics] it’s much more efficient, but they also have upside in additional revenue. So net-net, the return on investment (ROI) is much better. It’s hard to say precisely because we are still in the early days, but it’s bound to be a much greater ROI.

Another specific DR example is in the Middle East. We have a customer there who decided to operate their DR over IP in the cloud, instead of having a replicated system with satellite links in between. They were able to save $2 million worth of satellite links, and that data center investment, trust me, was not that high. So it shows that the ROI is there.

My satellite customers might say, “Well, what are you trying to do?” The good news is that they are looking at us to help them transform their businesses, too. So big satellite providers are thinking broadly about how this world of IP is changing their game. They are examining what they need to do differently. I think it’s going to create even more opportunities to reduce costs for all of our customers.

IT enters a hybrid world

Gardner: That’s one of the intrinsic values of a hybrid IT approach — you can use many different ways to do something, and then optimize which of those methods works best, and also alternate between them for best economics. That’s a very powerful concept.

Connan-Lostanlen: The world will be a hybrid IT world, and we will take advantage of that. But, of course, that will come with some challenges. Where I think this goes next shows up in the number-one question that I get asked.

Three years ago customers would tell us, “Hey, IP is not going to work for live TV.” We convinced them otherwise, and now they know it’s working; it’s happening for real.

Secondly, they are thinking, “Okay, now I get it, so how do I do this?” We showed them, this is how you do it, the education piece.

Now, this year, the number-one question is security. “Okay, this is my content, the most valuable asset I have in my company. I am not putting this in the cloud,” they say. And this is where another piece of education has to start, which is: Actually, as you put stuff on your cloud, it’s more secure.

And we are working with our technology providers. As I said earlier, the COTS providers are not all equal. We take this seriously. Cyber attacks on content and media are a critical threat, and they are bound to happen more often.

Initially there was a lack of understanding that you need to separate your corporate network, such as email and VPNs, from your broadcast operations network. Okay, that’s easy to explain and can be implemented, and that’s where most of the attacks over the last five years have happened. This is solved.

However, the cyber attackers are becoming more clever, so they will overcome these initial defenses. They are going to get right into the servers, into the storage, and try to mess with it over there. So I think it’s super important to be able to say, “Not only at the software level, but at the hardware firmware level, we are adding protection against your number-one issue, security, which everybody can see is so important.”

Gardner: Sure, the next domino to fall after you have the data center concept, the implementation, the execution, even the optimization, is then to remove risk, whether it’s disaster recovery, security, right down to the silicon and so forth. So that’s the next thing we will look for, and I hope I can get a chance to talk to you about how you are all lowering risk for your clients the next time we speak.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


How The Open Group Healthcare Forum and Health Enterprise Reference Architecture cure process and IT ills

The next BriefingsDirect healthcare thought leadership panel discussion examines how a global standards body, The Open Group, is working to improve how the healthcare industry functions.

We’ll now learn how The Open Group Healthcare Forum (HCF) is advancing best practices and methods for better leveraging IT in healthcare ecosystems. And we’ll examine the forum’s Health Enterprise Reference Architecture (HERA) initiative and its role in standardizing IT architectures. The goal is to foster better boundaryless interoperability within and between healthcare public and private sector organizations.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn more about improving the processes and IT that better supports healthcare, please welcome our panel of experts: Oliver Kipf, The Open Group Healthcare Forum Chairman and Business Process and Solution Architect at Philips, based in Germany; Dr. Jason Lee, Director of the Healthcare Forum at The Open Group, in Boston, and Gail Kalbfleisch, Director of the Federal Health Architecture at the US Department of Health and Human Services in Washington, D.C. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: For those who might not be that familiar with the Healthcare Forum and The Open Group in general, tell us about why the Healthcare Forum exists, what its mission is, and what you hope to achieve through your work.

Lee: The Healthcare Forum exists because there is a huge need to architect the healthcare enterprise, which is approaching 20 percent of gross domestic product (GDP) in the US, and approaching that level in developed countries in Europe.

There is a general feeling that enterprise architecture is somewhat behind in this industry, relative to other industries. There are important gaps to fill that will help those stakeholders in healthcare — whether they are in hospitals or healthcare delivery systems or innovation hubs in organizations of different sorts, such as consulting firms. They can better leverage IT to achieve business goals, through the use of best practices, lessons learned, and the accumulated wisdom of the various Forum members over many years of work. We want them to understand the value of our work so they can use it to address their needs.

Our mission, simply, is to help make healthcare information available when and where it’s needed and to accomplish that goal through architecting the healthcare enterprise. That’s what we hope to achieve.

Gardner: As the chairman of the HCF, could you explain what a forum is, Oliver? What does it consist of, how many organizations are involved?

Kipf: The HCF is made up of its members and I am really proud of this team. We are very passionate about healthcare. We are in the technology business, so we are more than just the governing bodies; we also have participation from the provider community. That makes the Forum true to the nature of The Open Group, in that we are global in nature, we are vendor-neutral, and we are business-oriented. We go from strategy to execution, and we want to bridge from business to technology. We take the foundation of The Open Group, and then we apply this to the HCF.

As we have many health standards out there, we really want to leverage [experience] from our 30 members to make standards work by providing the right type of tools, frameworks, and approaches. We partner a lot in the industry.

The healthcare industry is really a crowded place and there are many standard development organizations. There are many players. It’s quite vital as a forum that we reach out, collaborate, and engage with others to reach where we want to be.

Gardner: Gail, why is the role of the enterprise architecture function an important ingredient to help bring this together? What’s important about EA when we think about the healthcare industry?

Kalbfleisch: From an EA perspective, I don’t really think it matters whether you are talking about the healthcare industry, the finance industry, the personnel industry, or the gas and electric industry. In any of those, the organizations and companies that tend to be highly functioning don’t merely have architecture — everyone has an architecture for what they do. Their architecture is documented, and it’s available for use by decision-makers and by developers across the system, so that each part can work well together.

We know that within the healthcare industry it is exceedingly complicated, and it’s a mixture of a lot of different things. It’s not just your body and your doctor, it’s also your insurance, your payers, research, academia — and putting all of those together.

If we don’t have EA, people new to the system — or people who were deeply embedded into their parts of the system — can’t see how that system all works together usefully. For example, there are a lot of different standards organizations. If we don’t see how all of that works together — where everybody else is working, and how to make it fit together – then we’re going to have a hard time getting to interoperability quickly and efficiently.

Kipf: If you think of the healthcare industry, we’ve been very good at developing individual solutions to specific problems. There’s a lot of innovation and a lot of technology that we use. But there is an inherent risk of producing silos among the many stakeholders who, ultimately, work for the good of the patient. It’s important that we get to individual solution building blocks to attain a more integrated approach based on architecture building blocks, and based on common frameworks, tools and approaches.

Gardner: Healthcare is a very complex environment and IT is very fast-paced. Can you give us an update on what the Healthcare Forum has been doing, given the difficulty of managing such complexity?

Bird’s-eye view mapping

Lee: The Healthcare Forum began with a series of white papers, initially focusing on an information model that has a long history in the federal government. We used enterprise architecture to evaluate the Federal Health Information Model (FHIM).  People began listening and we started to talk to people outside of The Open Group, and outside of the normal channels of The Open Group. We talked to different types of architects, such as information architects, solution architects, engineers, and initially settled on the problem that is essential to The Open Group — and that is the problem of boundaryless information flow.

We need to get beyond the silos that Oliver mentioned and that Gail alluded to. As I mentioned in my opening comments, this is a huge industry, and Gail illustrated it by naming some of the stakeholders within the health, healthcare and wellness enterprises. If you think of your hospital, it can be difficult to achieve boundaryless information flow to enable your information to travel digitally, securely, quickly, and in a way that’s valid, reliable and understandable by those who send it and by those who receive it.  But if that is possible, it’s all to the betterment of the patient.

Initially, in our focus on what healthcare folks call interoperability — what we refer to as boundaryless information flow — we came to realize through discussions with stakeholders in the public sector, as well as the private sector and globally, that understanding how the different pieces are linked together is critical. Anybody who works in an organization or belongs to a church, school or family understands that sometimes getting the right message communicated from point A to point B can be difficult.

To address that issue, the HCF members have decided to create a Health Enterprise Reference Architecture (HERA) that is essentially a framework and a map at the highest level. It helps people see that what they do relates to what others do, regardless of their position in their company. You want to deliver value to those people, to help them understand how their work is interconnected, and how IT can help them achieve their goals.

Gardner: Oliver, who should be aware of and explore engaging with the HCF?

Kipf: The members of The Open Group themselves, many of them are players in the field of healthcare, and so they are the natural candidates to really engage with. In that healthcare ecosystem we have providers, payers, governing bodies, pharmaceuticals, and IT companies.

Those who deeply need planning, management and architecting — to make big thinking a reality out there — those decision-makers are the prime candidates for engagement in the Healthcare Forum. They can benefit from the kinds of products we produce, the reference architecture, and the white papers that we offer. In a nutshell, it’s the members, and it’s the healthcare industry, and the healthcare ecosystem that we are targeting.

Gardner: Gail, perhaps you could address the reference architecture initiative? Why do you see that as important? Who do you think should be aware of it and contribute to it?

Shared reference points

Kalbfleisch: A reference architecture is one of those building-block pieces that should be used. You can call it a template. It gives you words that other people can relate to, perhaps more easily than architecture-speak.

If you take that template, you can make it available to other people so that we can all be designing our processes and systems with a common understanding of our information exchange — so that it crosses boundaries easily and securely. If we are all running on the same template, that’s going to enable us to identify how to start, what has to be included, and what standards we are going to use.

A reference architecture is one of those very important pieces that not only forms a list of how we want to do things, and what we agreed to, but it also makes it so that every organization doesn’t have to start from scratch. It can be reused and improved upon as we go through the work. If someone improves the architecture, that can come back into the reference architecture.

Who should know about it? Decision makers, developers, medical device innovators, people who are looking to improve the way information flows within any health sector — whether it’s Oliver in Europe, whether it’s someone over in California, Australia, it really doesn’t matter. Anyone who wants to make interoperability better should know about it.

My focus is on decision-makers, policymakers, process developers, and other people who look at it from a device-design perspective. One of the things that has been discussed within the HCF’s reference architecture work is the need to make sure that it’s all at a high-enough level, where we can agree on what it looks like. Yet it also must go down deeply enough so that people can apply it to what they are doing — whether it’s designing a piece of software or designing a medical device.

Gardner: Jason, The Open Group has been involved with standards and reference architectures for decades, with such recent initiatives as the IT4IT approach, as well as the longstanding TOGAF framework. How does the HERA relate to some of these other architectural initiatives?

Building on a strong foundation

Lee: The HERA starts by using the essential components and insights that are built into the TOGAF Architecture Development Method (ADM) and builds from there. It also uses the ArchiMate language, but we have never felt restricted to using only those existing Open Group models that have been around for some time and are currently being developed further.

We are a big organization in terms of our approach, our forum, and so we want to draw from the best there is in order to fill in the gaps. Over the last few decades, an incredible amount of talent has joined The Open Group to develop architectural models and standards that apply across multiple industries, including healthcare. We reuse and build from this important work.

In addition, as we have dug deeper into the healthcare industry, we have found other issues – gaps – that need filling, and related topics that would benefit from attention. To do that, we have been working hard to establish relationships with other organizations in the healthcare space, to bring them in, and to collaborate. We have done this with Health Level Seven International (HL7), which is one of the best-known standards organizations in the world.

We are also doing this now with an organization called Healthcare Services Platform Consortium (HSPC), which involves academic, government and hospital organizations, as well as people who are focused on developing standards around terminology.

IT’s getting better all the time

Kipf: If you think about reference architecture in a specific domain, such as in the healthcare industry, you look at your customers and the enterprises — those really concerned with the delivery of health services. You need to ask yourself the question: What are their needs?

And the need in this industry is a focus on the person and on the service. It’s also highly regulatory, so being compliant is a big thing. Quality is a big thing. The idea of lifetime evolution — that you become better and better all the time — that is very important, very intrinsic to the healthcare industry.

When we look at the customers for whom we believe the HERA could be of value, you have to think of small, mid-sized, and large enterprises, really across the globe. That’s why we believe that the HERA is tuned to the needs of our industry.

And as Jason mentioned, we build on open standards and we leverage them where we can. ArchiMate is one of the big ones — not only the business language, but also a lot of the concepts are based on ArchiMate. But we need to include other standards as well, obviously those from the healthcare industry, and we need to deviate from specific standards where this is of value to our industry.

Gardner: Oliver, in order to get this standard to be something that’s used, that’s very practical, people look to results. So if you were to take advantage of such reference architectures as HERA, what should you expect to get back? If you do it right, what are the payoffs?

Capacity for change and collaboration

Kipf: It should enable you to do a better job, to become more efficient, and to make better use of technology. Those are the kinds of benefits that you see realized. It’s not only that you have a place where you can model all the elements of your enterprise, where you can put and manage your processes and your services, but it’s also in the way you are architecting your enterprise.

It gives you the ability to change. From a transformation management perspective, we know that many healthcare systems have great challenges and there is this need to change. The HERA gives you the tools to get where you want to be, to define where you want to be — and also how to get there. This is where we believe it provides a lot of benefits.

Gardner: Gail, similar question, for those organizations, both public and private sector, that do this well, that embrace HERA, what should they hope to get in return?

Kalbfleisch: I completely agree with what Oliver said. To add, one of the benefits that you get from using EA is a chance to have a perspective from outside your own narrow silos. The HERA should be able to help a person see other areas that they have to take into consideration, that maybe they wouldn’t have before.

Another value is to engage with other people who are doing similar work, who may have either learned lessons or are doing similar things at the same time. That’s one of the ways I see it helping us do our jobs better, quicker, and faster.

Also, it can help us identify where we have gaps and where we need to focus our efforts. We can focus our limited resources in much better ways on specific issues — where we can accomplish what we are looking to — and to gain that boundaryless information flow.

Reaching your goals

Lee: Essentially, the HERA will provide a framework that enables companies to leverage IT to achieve their goals. The wonderful thing about it is that we are not telling organizations what their goals should be. We show them how they can follow a roadmap to accomplish their self-defined goals more effectively. Often this involves communicating the big picture, as Gail said, to those who are in siloed positions within their organizations.

There is an old saying: “What you see depends on where you sit.” The HERA helps stakeholders gain this perspective by helping key players understand the relationships, for example, between business processes and engineering. So whether a stakeholder’s interest is increasing patient satisfaction, reducing error, improving quality, achieving better patient outcomes, or gaining more reimbursement where reimbursement is tied to outcomes — using the product and the architecture that we are developing helps with all of these goals.

Gardner: Jason, for those who are intrigued by what you are doing with the HERA, tell us about its trajectory, its evolution, and how that journey unfolds. Where can they learn more or get involved?

Lee: We have only been working on the HERA per se for the last year, although its underpinnings go back 20 years or more. Its trajectory is not to a single point, but an evolutionary process. We will be producing white papers, as well as products that others can use in a modular fashion to leverage what they already use within their legacy systems.

We encourage anyone out there, particularly in the health system delivery space, to join us. That can be done by contacting me at j.lee@opengroup.org and at www.opengroup.org/healthcare.

It’s an incredible time, a very opportune time, for key players to be involved because we are making very important decisions that lay the foundation for the HERA. We collaborate with key players, and we lay down the tracks from which we will build increasing levels of complexity.

But we start at the top, using non-architectural language to be able to talk to decision-makers, whether they are in the public sector or private sector. So we invite any of these organizations to join us.

Learn from others’ mistakes

Kalbfleisch: My first foray into working with The Open Group was long before I was in the health IT sector. I was with the US Air Force and we were doing very non-health architectural work in conjunction with The Open Group.

The interesting part to me is in ensuring boundaryless information flow in a manner that is consistent with where the information needs to go and who has access to it. How does it get from place to place across distinct mission areas, or distinct business areas, where the information is not used the same way or stored in the same way? Such dissonance between business areas is not a problem isolated to healthcare; it exists across all business areas.

That was exciting. I was able to take awareness of The Open Group from a previous life, so to speak, and engage with them to get involved in the Healthcare Forum from my current position.

A lot of the technical problems that we have in exchanging information, regardless of what industry you are in, have been addressed by other people, and have already been worked on. By leveraging the way organizations have already worked on it for 20 years, we can leverage that work within the healthcare industry. We don’t have to make the same mistakes that were made before. We can take what people have learned and extend it much further. We can do that best by working together in areas like The Open Group HCF.

Kipf: On that evolutionary approach, I also see this as a long-term journey. Yes, there will be releases when we have a specification, and there will be guidelines. But it’s important that this remains an engagement, with ongoing collaboration with customers even after release. The coming together of a team is what really makes a great reference architecture — a team that keeps the architecture at a high level.

We can also develop distinct flavors of the specification. We should expect much more detail. Those implementation architectures then become spin-offs of reference architectures such as the HERA.

Lee: I can give some concrete examples, to bookend the kinds of problems that can be addressed using the HERA. At the micro end, a hospital can use the HERA structure to implement self-service check-in for patients who would like to bypass the usual process and check themselves in. This has a number of positive value outcomes for the hospital in terms of staffing, patient satisfaction, and cost savings.

At the other extreme, a large hospital system in Philadelphia or Stuttgart or Oslo or in India finds itself with patients appearing in the emergency room or in ambulatory settings unaffiliated with that particular hospital. Rather than have that patient arrive as a blank sheet of paper, and redo all the tests that had been done prior, the HERA will help these healthcare organizations figure out how to exchange data in a meaningful way. So the information can flow digitally and securely, it means the same thing to those who send it as it does to those who receive it, and everything is patient-focused, patient-centric.
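As one hedged illustration of what such meaningful exchange can look like in practice: HL7, mentioned earlier as a Forum collaborator, publishes the FHIR standard, in which a patient record travels as a self-describing resource that the receiving system can interpret unambiguously. The transcript does not name FHIR specifically, and the endpoint and identifiers below are hypothetical.

```python
# Illustrative sketch of standards-based exchange: posting a patient record
# as an HL7 FHIR "Patient" resource to another hospital's FHIR server.
import requests

patient = {
    "resourceType": "Patient",
    "identifier": [{"system": "urn:example:hospital-a:mrn", "value": "12345"}],
    "name": [{"family": "Doe", "given": ["Jane"]}],
    "birthDate": "1980-04-02",
}

resp = requests.post(
    "https://fhir.example-hospital-b.org/Patient",   # hypothetical endpoint
    json=patient,
    headers={"Content-Type": "application/fhir+json"},
    timeout=10,
)
resp.raise_for_status()
# FHIR servers typically return the new resource's URL in the Location header.
print("Created:", resp.headers.get("Location"))
```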

Gardner: Oliver, we have seen with other Open Group standards and reference architectures, a certification process often comes to bear that helps people be recognized for being adept and properly trained. Do you expect to have a certification process with HERA at some point?

Certifiable enterprise expertise

Kipf: Yes, as we mature with the HERA, along with the defined guidelines, the specifications, and the HERA model, there will be a growing need and demand in the marketplace for health enterprise-focused employees, and for consulting services that use the HERA.

And that’s a perfect fit for certification. It helps make sure that the quality of the workforce is strong, whether it’s internal or in the form of a professional services role, and that the work complies with the HERA.

Gardner: Clearly, this has applicability to healthcare payer organizations, provider organizations, government agencies, and the vendors who supply pharmaceuticals or medical instruments. There are a great deal of process benefits when done properly, so that enterprise architects could become certified eventually.

My question then is how do we take the HERA, with such a potential for being beneficial across the board, and make it well-known? Jason, how do we get the word out? How can people who are listening to this or reading this, help with that?

Spread the word, around the world

Lee: It’s a question that has to be considered every time we meet. I think the answer is straightforward. First, we build a product [the HERA] that has clear value for stakeholders in the healthcare system. That’s the internal part.

Second—and often, simultaneously—we develop a very important marketing/collaboration/socialization capability. That’s the external part. I’ve worked in healthcare for more than 30 years, and whether it’s public or private sector decision-making, there are many stakeholders, and everybody’s focused on the same few things: improving value, enhancing quality, expanding access, and providing security.

We will continue developing relationships with key players to ensure them that what they’re doing is key to the HERA. At the broadest level, all companies must plan, build, operate and improve.

There are immense opportunities for business development. There are innumerable ways to use the HERA to help health enterprise systems operate efficiently and effectively. There are opportunities to demonstrate to key movers and shakers in healthcare system how what we’re doing integrates with what they’re doing. This will maximize the uptake of the HERA and minimize the chances it sits on a shelf after it’s been developed.

Gardner: Oliver, there are also a variety of regional conferences and events around the world. Some of them are from The Open Group. How important is it for people to be aware of these events, maybe by taking part virtually online or in person? Tell us about the face-time opportunities, if you will, of these events, and how that can foster awareness and improvement of HERA uptake.

Kipf: We began with the last Open Group event. I was in Berlin, presenting the HERA. As we see more development, more maturity, we can then show more. The uptake will be there and we also need to include things like cyber security, things like risk compliance. So we can bring in a lot of what we have been doing in various other initiatives within The Open Group. We can show how it can be a fusion, and make this something that is really of value.

I am confident that through face-to-face events, such as The Open Group events, we can further spread the message.

Lee: And a real shout-out to Gail and Oliver who have been critical in making introductions and helping to share The Open Group Healthcare Forum’s work broadly. The most recent example is the 2016 HIMSS conference, a meeting that brings together more than 40,000 people every year. There is a federal interoperability showcase there, and we have been able to introduce and discuss our HERA work there.

We’ve collaborated with the Office of the National Coordinator, where the Federal Health Architecture sits, with the US Veterans Administration, with the US Department of Defense, and with the Centers for Medicare and Medicaid Services (CMS). This is all US-centered, but there are lots of opportunities globally, not just to spread the word in public forums and venues, but also to go to those key players who are moving the industry forward, and in some cases convince them that enterprise architecture does provide that structure, that template, that can help them achieve their goals.

Future forecast

Gardner: I’m afraid we are almost out of time. Gail, perhaps a look into the crystal ball. What do you expect and hope to see in the next couple of years in terms of the improvements that initiatives like the HERA at The Open Group Healthcare Forum can provide?

Kalbfleisch: What I would like to see happen in the next couple of years as it relates to the HERA, is the ability to have a place where we can go from anywhere and get a glimpse of the landscape. Right now, it’s hard to find anywhere where someone in the US can see the great work that Oliver is doing, or the people in Norway, or the people in Australia are doing.

It’s really important that we have opportunities to communicate as large groups, but also the one-on-one. Yet when we are not able to communicate personally, I would like to see a resource or a tool where people can go and get the information they need on the HERA on their own time, or as they have a question. Reference architecture is great to have, but it has no power until it’s used.

My hope for the future is for the HERA to be used by decision-makers, developers, and even patients. So when an organization such as a hospital wants to develop a new electronic health record (EHR) system, it has a place to go and get started, without having to contact Jason or wait for a vendor to come along and tell it how to solve a problem. That would be my hope for the future.

Lee: You can think of the HERA as a soup with three key ingredients. First is the involvement and commitment of very bright people and top-notch organizations. Second, we leverage the deep experience and products of other forums of The Open Group. Third, we build on external relationships. Together, these three things will help make the HERA successful as a certifiable product that people can use to get their work done and do better.

Gardner: Jason, perhaps you could also tee-up the next Open Group event in Amsterdam. Can you tell us more about that and how to get involved?

Lee: We are very excited about our next event in Amsterdam in October. You can go to www.opengroup.org and look under Events, read about the agendas, and sign up there. We will have involvement from experts from the US, UK, Germany, Australia, Norway, and this is just in the Healthcare Forum!

The Open Group membership will be giving papers, having discussions, moving the ball forward. It will be a very productive and fun time and we are looking forward to it. Again, anyone who has a question or is interested in joining the Healthcare Forum can please send me, Jason Lee, an email at j.lee@opengroup.org.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: The Open Group.


Hybrid cloud ecosystem readies for impact from arrival of Microsoft Azure Stack

The next BriefingsDirect cloud deployment strategies interview explores how hybrid cloud ecosystem players such as PwC and Hewlett Packard Enterprise (HPE) are gearing up to support the Microsoft Azure Stack private-public cloud continuum.

We’ll now learn what enterprises can do to make the most of hybrid cloud models and be ready specifically for Microsoft’s solutions for balancing the boundaries between public and private cloud deployments.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to explore the latest approaches for successful hybrid IT, we’re joined by Rohit “Ro” Antao, a Partner at PwC, and Ken Won, Director of Cloud Solutions Marketing at HPE. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Ro, what are the trends driving adoption of hybrid cloud models, specifically Microsoft Azure Stack? Why are people interested in doing this?

Antao: What we have observed in the last 18 months is that a lot of our clients are now aggressively pushing toward the public cloud. In that journey there are a couple of things that are becoming really loud and clear to them.

Journey to the cloud

Number one is that there will always be some sort of a private data center footprint. There are certain workloads that are not appropriate for the public cloud; there are certain workloads that perform better in the private data center. And so the first acknowledgment is that there is going to be that private, as well as public, side of how they deliver IT services.

Now, that being said, they have to begin building the capabilities and the mechanisms to be able to manage these different environments seamlessly. As they go down this path, that’s where we are seeing a lot of traction and focus.

The other trend in conjunction with that is in the public cloud space where we see a lot of traction around Azure. They have come on strong. They have been aggressively going after the public cloud market. Being able to have that seamless environment between private and public with Azure Stack is what’s driving a lot of the demand.

Won: We at HPE are seeing that very similarly, as well. We call that “hybrid IT,” and we talk about how customers need to find the right mix of private and public — and managed services — to fit their businesses. They may put some services in a public cloud, some services in a private cloud, and some in a managed cloud. Depending on their company strategy, they need to figure out which workloads go where.

We have these conversations with many of our customers about how do you determine the right placement for these different workloads — taking into account things like security, performance, compliance, and cost — and helping them evaluate this hybrid IT environment that they now need to manage.
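A toy sketch of that placement exercise: score each venue against a workload’s weighting of the four criteria Won names (security, performance, compliance, cost), then pick the best fit. All weights and venue scores are invented for illustration; neither PwC nor HPE prescribes these numbers.

```python
# Illustrative workload-placement scoring across hybrid IT venues.
CRITERIA = ("security", "performance", "compliance", "cost")

# How well each venue serves each criterion, on a 0..1 scale (assumed values).
VENUES = {
    "private cloud": {"security": 0.9, "performance": 0.8, "compliance": 0.9, "cost": 0.5},
    "public cloud":  {"security": 0.6, "performance": 0.7, "compliance": 0.6, "cost": 0.9},
    "managed cloud": {"security": 0.8, "performance": 0.7, "compliance": 0.8, "cost": 0.7},
}

def best_venue(workload_weights: dict) -> str:
    """Pick the venue with the highest weighted score for this workload."""
    def score(venue: dict) -> float:
        return sum(workload_weights[c] * venue[c] for c in CRITERIA)
    return max(VENUES, key=lambda name: score(VENUES[name]))

# A regulated patient-data workload weights compliance and security heavily:
print(best_venue({"security": 0.4, "compliance": 0.4, "performance": 0.1, "cost": 0.1}))
# -> "private cloud" with these example numbers
```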

Gardner: Ro, a lot of what people have used public cloud for is greenfield apps — beginning in the cloud, developing in the cloud, deploying in the cloud — but there’s also an interest in many enterprises about legacy applications and datasets. Is Azure Stack and hybrid cloud an opportunity for them to rethink where their older apps and data should reside?

Antao: Absolutely. When you look at the broader market, a lot of these businesses are competing today in very dynamic markets. When companies today think about strategy, it’s no longer the 5- and 10-year strategy. They are thinking about how to be relevant in the market this year, today, this quarter. That requires a lot of flexibility in their business model; that requires a lot of variability in their cost structure.

When you look at it from that viewpoint, a lot of our clients look at the public cloud as more than, “Is the app suitable for the public cloud?” They are also seeking cost advantages in the form of variability in that cost structure. And that’s where we are seeing them look at the public cloud beyond just which applications are suitable for it.

Public and/or private power

Won: We help a lot of companies think about where the best place is for their traditional apps. Often they don’t want to restructure or rewrite them; they represent an existing investment, and nobody wants to spend a lot of time refactoring them.

If you look at these traditional applications, a lot of times when they are dealing with data – especially if they are dealing with sensitive data — those are better placed in a private cloud.

Antao: One of the great things about Microsoft Azure Stack is that it gives the data center that public cloud experience — developers get essentially the same experience they would have in a public cloud. The only difference is that you are now controlling the costs as well. So that’s another big advantage we see.


Won: Yeah, absolutely, it’s giving the developers the experience of a public cloud, but from the IT standpoint it’s also providing the compliance, the control, and the security of a private cloud. Allowing applications to be deployed in either a public or private cloud — depending on their requirements — is incredibly powerful. There’s no other environment out there that provides that API-compatibility between private and public cloud deployments like Azure Stack does.
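As a concrete illustration of that API compatibility, here is a minimal sketch (an editorial example, not something discussed on the podcast) that submits the same Azure Resource Manager (ARM) template through the same REST call against either cloud; only the management endpoint changes. The endpoint URLs, IDs, and token handling are placeholders, and the api-version shown may need to be pinned to one that a given Azure Stack profile supports.

```python
# Minimal sketch: one deployment function, two ARM endpoints.
# All endpoint URLs, IDs, and the bearer token are placeholders.
import json
import requests

ARM_ENDPOINTS = {
    "public": "https://management.azure.com",                 # Azure public cloud
    "stack": "https://management.local.azurestack.external",  # example Azure Stack endpoint
}

def deploy_template(target, subscription_id, resource_group, name, template, token):
    """PUT the same ARM template deployment against either cloud."""
    url = (f"{ARM_ENDPOINTS[target]}/subscriptions/{subscription_id}"
           f"/resourcegroups/{resource_group}/providers/Microsoft.Resources"
           f"/deployments/{name}?api-version=2020-06-01")  # pin to a Stack-supported version
    body = {"properties": {"mode": "Incremental", "template": template}}
    resp = requests.put(
        url,
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        data=json.dumps(body),
    )
    resp.raise_for_status()
    return resp.json()

# The same template and the same call work for both targets;
# only the management endpoint differs.
```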

Gardner: Clearly Microsoft is interested in recognizing that skill sets, platform affinity, and processes are all really important. If they are able to provide a private cloud and public cloud experience that’s common to the IT operators that are used to using Microsoft platforms and frameworks — that’s a boon. It’s also important for enterprises to be able to continue with the skills they have.

Ro, is such a commonality of skills and processes not top of mind for many organizations? 

Antao: Absolutely! When you have different environments, there is always the risk of the “swivel chair” approach: one set of skills and processes for your private data center, and another set of skills and processes to manage your public cloud footprint.

One of the big problems and challenges that this solves is being able to drive more of that commonality across consistent sets of processes. You can have a similar talent pool, and you have similar kinds of training and awareness that you are trying to drive within the organization — because you now can have similar stacks on both ends.

The biggest challenge to adopting new concepts is not the technology; it’s really the people and process issues.

Won: That’s a great point. We know that the biggest challenge to adopting new concepts is not the technology; it’s really the people and process issues. So if you can address that, which is what Azure Stack does, it makes it so much easier for enterprises to bring on new capabilities, because they are leveraging the experience that they already have using the Azure public cloud.

Gardner: Many IT organizations are familiar with Microsoft Azure Stack. It’s been in technical preview for quite some time. As it hits the market in September 2017, in seeking that total-solution, people-and-process approach, what is PwC bringing to the table to help organizations get the best value and advantage out of Azure Stack?

Hybrid: a tectonic IT shift

Antao: Ken made the point earlier in this discussion about hybrid IT. When you look at IT pivoting to more of the hybrid delivery mode, it’s a tectonic shift in IT’s operating model, in their architecture, their culture, in their roles and responsibilities – in the fundamental value proposition of IT to the enterprise.

When we partner with HPE in helping organizations drive through this transformation, we work together on rethinking the operating model, on understanding the new kinds of roles and skills, and on applying these changes in the context of the business drivers behind them. That’s one of the typical ways that we work with HPE in this space.

Won: It’s a great complement. HPE understands the technology and the infrastructure; PwC brings the business-process knowledge and the higher-level strategic thinking. It’s a great partnership.

Gardner: Attaining hybrid IT efficiency and doing it with security and control is not something you buy off the shelf. It’s not a license. It seems to me that an ecosystem is essential. But how do IT organizations manage that ecosystem? Are there ways that you all are working together, HPE in this case with PwC, and with Microsoft to make that consumption of an ecosystem solution much more attainable?

Won: One of the things that we are doing is working with Microsoft on their partnerships so that we can look at all these companies that have their offerings running on Azure public cloud and ensuring that those are all available and supported in Azure Stack, as well as running in the data center.

We are spending a lot of time with Microsoft on their ecosystem to make sure those services, those companies, and those products are available on Azure Stack — as well as fully supported on Azure Stack running on HPE gear.

Gardner: Customers might not be concerned about the hardware, but they are concerned about the total value — and the total solution. If the hardware players aren’t collaborating well with the service providers and with the cloud providers, then that’s not going to work.

Quick collaboration is key

Won: Exactly! I think of it like a washing machine. No one wants to own a washing machine, but everyone wants clean clothes. So it’s the necessary evil: it’s super important, but you’d just as soon not have to deal with it.

Gardner: I just don’t know what to take to the dry cleaner or not, right?

Won: Yeah, there you go!


Antao: From a consulting standpoint, clients no longer have the appetite for five- to six-year transformations. Their businesses are changing at a much faster pace. One of the ways that we are working at the ecosystem level — much like the deep and longstanding relationship we have had with HPE — is that we have also been working with Microsoft in the same context.

And in a three-way fashion, we have focused on defining accelerators for deploying these solutions: codifying our experiences, the lessons learned, and a deep understanding of both the public and the private stack to accelerate value for our customers — because that’s what they expect today.

Won: One of the things, Ro, that you brought up, and I think is very relevant here, is these three-way relationships. Customers don’t want to have to deal with all of these different vendors, these different pieces of stack or different aspects of the value chain. They instead expect us as vendors to be working together. So HPE, PwC, Microsoft are all working together to make it easier for the customers to ultimately deliver the services they need to drive their business.

Low risk, all reward

Gardner: So speed-to-value, super important; cooperation and synergy among the partners, super important. But another part of this is doing it at low risk, because no one wants to move across the public-to-private hybrid spectrum and then suffer performance issues, lost data, and unhappy end customers.

PwC has been focused on governance, risk management and compliance (GRC) in trying to bring about better end-to-end hybrid IT control. What is it that you bring to this particular problem that is unique? It seems that each enterprise is doing this anew, but you have done it for a lot of others and experience can be very powerful that way.

Antao: Absolutely! The move to hybrid IT is a fundamental shift in governance models — in how you address certain risks, in the emergence of new risks, and in new security challenges. A lot of what we have been doing in this space has been helping IT organizations accelerate that paradigm shift.

In that context, we have been working very closely with HPE to understand what the requirements of that new world are going to look like. We can build and bring to the table solutions that support those needs.

Won: It’s absolutely critical — this experience that PwC has is huge. We always come up with new technologies; every few years you have something new. But it’s that experience that PwC has to bring to the table that’s incredibly helpful to our customer base.

There’s this whole journey getting to that hybrid IT state and having the governing mechanisms around it.

Antao: So often when we think of governance, it’s more in terms of the steady state and the runtime. But there’s this whole journey between where we are today and that hybrid IT state — and having the governing mechanisms around it — so that they can do it in a way that doesn’t expose their business to too much risk. There is always risk involved in these large-scale transformations, but how do you manage and govern that process of getting to that hybrid IT state? That’s where we also spend a lot of time as we help clients through this transformation.

Gardner: For IT shops that are heavily Microsoft-focused, is there a way for them to master Azure Stack, the people, process and technology that will then be an accelerant for them to go to a broader hybrid IT capability? I’m thinking of multi-cloud, and even being able to develop with DevOps and SecOps across a multiple cloud continuum as a core competency.

Is Azure Stack for many companies a stepping-stone to a wider hybrid capability, Ro?

Managed multi-cloud continuum

Antao: Yes. And I think in many cases that’s inevitable. Generally speaking, most organizations today have at least two public cloud providers that they use. They consume several Software as a Service (SaaS) applications. They have multiple data center locations. The role of IT now is to become the broker and integrator of multi-cloud environments, among and between on-premises and public clouds. That’s where we see them evolving their management practices, their processes, and their talent — to abstract these different pools and focus on the business.


Won: We see that as well at HPE as this whole multi-cloud strategy is being implemented. More and more, the challenge that organizations are having is that they have these multiple clouds, each of which is managed by a different team or via different technologies with different processes.

So there is huge value to the customer in bringing these together — for example, Azure Stack and Azure [public cloud]. They may have multiple Azure Stack environments, perhaps in different data centers, in different countries, in different locales. We need to help them align their processes to run much more efficiently and more effectively. And we need to engage with them not only from an IT standpoint, but also from the developer standpoint, so they can use those common services to develop an application and deploy it in multiple places in the same way.

Antao: What’s making this whole environment even more complex these days is that a couple of years ago, when we talked about multi-cloud, it was really the capability to deploy in one public cloud or another.

Within a given business workflow, how do you leverage different clouds, given their unique strengths and weaknesses?

A few years later, it evolved into being able to port workloads seamlessly from one cloud to another. Today, the multi-cloud strategy that a lot of our clients are exploring is this: Within a given business workflow, depending on the unique characteristics of different parts of that business process, how do you leverage different clouds, given their unique strengths and weaknesses?

There might be portions of a business process that, to your point earlier, Ken, are highly confidential — you are dealing with a lot of compliance requirements, so you may want to consume from an internal private cloud. Other parts may demand immense scale, to deal with the peaks that hit that particular business process, which is where the public cloud has a strong track record. In a third case, it might be enterprise-grade workloads.

So that’s where we are seeing multi-cloud evolve: one business process can draw on multiple sources, and so how does an IT organization manage that in a seamless way?
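To make that per-step routing idea concrete, here is a small hypothetical sketch of the kind of placement rule being described; the attribute names and cloud targets are invented for illustration and are not a PwC or HPE tool.

```python
# Illustrative sketch: route each step of a business workflow to a cloud
# based on its characteristics. Attributes and targets are invented.
from dataclasses import dataclass

@dataclass
class WorkflowStep:
    name: str
    handles_sensitive_data: bool   # compliance-bound data?
    needs_burst_scale: bool        # spiky, elastic demand?

def place(step: WorkflowStep) -> str:
    if step.handles_sensitive_data:
        return "private-cloud"     # e.g., Azure Stack in the data center
    if step.needs_burst_scale:
        return "public-cloud"      # e.g., Azure, for elastic peaks
    return "private-cloud"         # steady-state, enterprise-grade default

workflow = [
    WorkflowStep("ingest-customer-records", True, False),
    WorkflowStep("holiday-pricing-engine", False, True),
    WorkflowStep("nightly-reporting", False, False),
]
for step in workflow:
    print(f"{step.name} -> {place(step)}")
```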

Gardner: It certainly seems inevitable that the choice of such a cloud continuum configuration model will vary and change. It could be one definition in one country or region, another definition in another country and region. It could even be contextual, such as by the type of end user who’s banging on the app. As the Internet of Things (IoT) kicks in, we might be thinking about not just individuals, but machine-to-machine (M2M), app-to-app types of interactions.

So quite a bit of complexity, but dealt with in such a way that the payoff could be monumental. If you do hybrid cloud and hybrid IT well, what could that mean for your business in three to five years, Ro?

Nimble, quick and cost-efficient

Antao: Clearly there is the agility aspect, of being able to seamlessly leverage these different clouds to allow IT organizations to be much more nimble in how they respond to the business.

From a cost standpoint, here is a great example from a large-scale migration to the public cloud that we are currently doing. The IT organization found that it had consumed close to 70 percent of its migration budget for only 30 percent of the progress made.

And a large part of that was because the minute your workloads are sitting in a public cloud — even a development workload you are still working through, one that is technically not yet providing value — the clock is ticking. Allowing for a hybrid environment, where you do a lot of that development and get it almost production-ready, and then move to the public cloud only when the time is right to drive value from that application — those are huge cost savings right there.

Clients that have managed to balance those two paradigms are the ones who are also seeing a lot of economic efficiencies.

Won: The most important thing that people see value in is that agility. The ability to respond much faster to competitive actions or to new changes in the market, the ability to bring applications out faster, to be able to update applications in months — or sometimes even weeks — rather than the two years that it used to take.

It’s that agility to allow people to move faster and to shift their capabilities so much quicker than they have ever been able to do – that is the top reason why we’re seeing people moving to this hybrid model. The cost factor is also really critical as they look at whether they are doing CAPEX or OPEX and private cloud or public cloud.

One of the things that we have been doing at HPE through our Flexible Capacity program is enabling customers who acquire hardware to run these private clouds to pay for it on a pay-as-you-go basis. This allows them to better align cost to usage — taking that whole pay-as-you-go concept we see in the public cloud and bringing it into a private cloud environment.
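The economics behind that model can be sketched with some purely illustrative arithmetic; none of the figures below are actual Flexible Capacity prices.

```python
# Illustrative only: compare a fixed up-front capacity purchase with a
# pay-per-use model as utilization ramps. All figures are invented.
fixed_monthly_cost = 10_000.0   # amortized cost of fully purchased capacity
unit_cost_per_node = 110.0      # pay-per-use price per active node-month

for used_nodes in (20, 50, 80, 100):
    pay_per_use = used_nodes * unit_cost_per_node
    print(f"{used_nodes:3d} nodes in use: fixed ${fixed_monthly_cost:,.0f} "
          f"vs pay-per-use ${pay_per_use:,.0f}")

# Pay-per-use wins while utilization is low; near full utilization the
# fixed purchase becomes cheaper. That crossover is what "aligning cost
# to usage" buys you.
```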


Antao: That’s a great point. From a cost standpoint, there is an efficiency discussion. But in today’s world we also depend on edge computing a lot more. I was talking to the CIO of a large park the other day, and his comment was: yes, they would love to use the public cloud, but they cannot afford any kind of latency or disruption of services. He has thousands of visitors and guests in his park, and given how much the park depends on technology, he cannot afford that kind of latency.

And so part of it is also a revenue-impact discussion: using the public cloud in a way that lets you manage those risks, while keeping the analytical power and the computing power you need closer to the edge — closer to your internal systems.

Gardner: Microsoft Azure Stack is reinforcing the power and capability of hybrid cloud models, but Azure Stack is not going to be the same for each individual enterprise. How they differentiate — how they use and take advantage of a hybrid continuum — will give them competitive advantages and a leg up in terms of skills.

It seems to me that the continuum of Azure Stack, of a hybrid cloud, is super-important. But how your organization specifically takes advantage of that is going to be the key differentiator. And that’s where an ecosystem solutions approach can be a huge benefit.

Let’s look at what comes next. What might we be talking about a year from now when we think about Microsoft Azure Stack in the market and the impact of hybrid cloud on businesses, Ken?

Look at clouds from both sides now

You will see that as a break in the boundary of private cloud versus public cloud, so think of it as a continuum.

Won: You will see organizations shifting from a world of using multiple clouds, with different applications or services on each cloud, to an environment where services are based on multiple clouds. With the new cloud-native applications, you’ll be running different aspects of those services in different locations, based on the requirements of each particular microservice.

So a service may be partially running in Azure, part of it may be running in Azure Stack. You will certainly see that as a kind of break in the boundary of private cloud versus public cloud, and so think of it as a continuum, if you will, of different environments able to support whatever applications they need.

Gardner: Ro, as people get more into the weeds with hybrid cloud, maybe using Azure Stack, how will the market adjust?

Antao: I completely agree with Ken in terms of how organizations are going to evolve their architecture. At PwC we have this term called the Configurable Enterprise, which essentially focuses on how the IT organization consumes services from all of these different sources to be able to ultimately solve business problems.

To that point, the market trend we see is in the hybrid IT space — the adoption of that continuum. One of the big pressures IT organizations face is how to evolve their operating model to be successful in this new world. CIOs, especially the forward-thinking ones, are starting to ask that question. We are going to see a lot more pressure in that space over the next 12 months.

Gardner: These are, after all, still early days of hybrid cloud and hybrid IT. Before we sign off, how should organizations that are not yet deep into this prepare themselves? Are there operations, culture, and skills considerations? How might you put yourself in a good position to take advantage of this when you do take the plunge?

Plan to succeed with IT on board

Won: One of the things we recommend is a workshop where we sit down with the customer and think through their company strategy. What is their IT strategy? How does that relate or map to the infrastructure that they need in order to be successful?

This makes the connection between the value they want to offer as a company, as a business, to the infrastructure. It puts a plan in place so that they can see that direct linkage. That workshop is one of the things that we help a lot of customers with.

We also have innovation centers that we’ve built with Microsoft where customers can come in and experience Azure Stack firsthand. They can see the latest versions of Azure Stack, they can see the hardware, and they can meet with experts. We bring in partners such as PwC to have a conversation in these innovation centers with experts.

Gardner: Ro, how to get ready when you want to take the plunge and make the best and most of it?


Antao: We are at a stage right now where these transformations can no longer be done to the IT organization; the IT organization has to come along on the journey. What we have seen work, especially in the early stages, is running pilot projects — involving the developers, the infrastructure architects, and the operations folks in pilot workloads — and learning how to manage them going forward in this new model.

You want to drive that from a top-down perspective, tying it to where it adds the most value to the business. From a grassroots perspective, you also need to create champions in the trenches who will manage this new environment. Combining those two efforts has been very successful for organizations as they embark on this journey.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


Advanced IoT systems provide analysis catalyst for the petrochemical refinery of the future

The next BriefingsDirect Voice of the Customer Internet-of-Things (IoT) technology trends interview explores how IT combines with IoT to help create the refinery of the future.

We’ll now learn how a leading-edge petrochemical company in Texas is rethinking data gathering and analysis to foster safer environments and greater overall efficiency.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. 

To help us define the best of the refinery of the future vision is Doug Smith, CEO of Texmark Chemicals in Galena Park, Texas, and JR Fuller, Worldwide Business Development Manager for Edgeline IoT at Hewlett Packard Enterprise (HPE). The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What are the top trends driving this need for a new refinery of the future? Doug, why aren’t the refinery practices of the past good enough?

Smith: First of all, I want to talk about people. People are the catalysts who make this refinery of the future possible. At Texmark Chemicals, we spent the last 20 years making capital investments in our infrastructure, in our physical plant, and in the last four years we have put together a roadmap for our IT needs.

Through our introduction to HPE, we have entered into a partnership that is not just a client-customer relationship. It’s more than that, and it allows us to work together to discover IoT solutions that we can bring to bear on our IT challenges at Texmark. So, we are on the voyage of discovery together — and we are sailing out to sea. It’s going great.

Gardner: JR, it’s always impressive when a new technology trend aids and abets a traditional business, and then that business can show through innovation what should come next in the technology. How is that back and forth working? Where should we expect IoT to go in terms of business benefits in the not-too-distant future?

Fuller: One of the powerful things about the partnership and relationship we have is that we each respect and understand each other’s “swim lanes.” I’m not trying to be a chemical company. I’m trying to understand what they do and how I can help them.

JR Fuller

Fuller

And they’re not trying to become an IT or IoT company. Their job is to make chemicals; our job is to figure out the IT. We’re seeing in Texmark the transformation from an Old World economy-type business to a New World economy-type business.

This is huge, this is transformational. As Doug said, they’ve made huge investments in their physical assets and what we call Operational Technology (OT). They have done that for the past 20 years. The people they have at Texmark who are using these assets are phenomenal. They possess decades of experience.


Yet IoT is really new for them. How to leverage that? They have said, “You know what? We squeezed as much as we can out of OT technology, out of our people, and our processes. Now, let’s see what else is out there.”

And through introductions to us and our ecosystem partners, we’ve been able to show them how we can help squeeze even more out of those OT assets using this new technology. So, it’s really exciting.

Gardner: Doug, let’s level-set this a little bit for our audience. They might not all be familiar with the refinery business, or even the petrochemical industry. You’re in the process of processing. You’re making one material into another and you’re doing that in bulk, and you need to do it on a just-in-time basis, given the demands of supply chains these days.

You need to make your business processes and your IT network mesh, to reach every corner. How does a wireless network become an enabler for your requirements?

The heart of IT 

Smith: In a large plant facility, we have different pieces of equipment. One piece of equipment is a pump — the analogy would be that the pump is the heart of the process facility of the plant.

To your question regarding the wireless network: if we can put sensors on a pump and tie it into a mesh network, there are incredible cost savings for us. Physically wiring a pump runs anywhere from $3,000 to $5,000 per pump, so we see savings there.

Doug Smith

Smith

Being able to have the information wirelessly right away gives us knowledge immediately that we wouldn’t have otherwise. We have workers and millwrights at the plant who physically go out and inspect every single pump, and we have 133 pumps. If we can utilize our sensors through the wireless network, our millwrights can concentrate on the pumps that they know are having problems.

To have the information wirelessly right away — that gives us knowledge immediately that we wouldn’t have otherwise.
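Taking Smith’s figures at face value, the avoided wiring cost alone is easy to bound; this back-of-the-envelope sketch simply multiplies the numbers quoted above.

```python
# Back-of-the-envelope arithmetic from the figures quoted above.
pumps = 133
wiring_cost_low, wiring_cost_high = 3_000, 5_000  # USD per pump, as quoted

print(f"Avoided wiring cost: ${pumps * wiring_cost_low:,} "
      f"to ${pumps * wiring_cost_high:,}")
# -> Avoided wiring cost: $399,000 to $665,000
```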

Gardner: You’re also able to track those individuals, those workers, so if there’s a need to communicate, to locate, to make sure that they’re hearing the policy — that’s another big part of IoT and people coming together.

Safety is good business

Smith: The tracking of workers is more of a safety issue — and safety is critical, absolutely critical in a petrochemical facility. We must account for all our people and know where they are in the event of any type of emergency situation.

Gardner: We have the sensors, we can link things up, we can begin to analyze devices and bring that data analytics to the edge, perhaps within a mini data center facility, something that’s ruggedized and tough and able to handle a plant environment.

Given this scenario, JR, what sorts of efficiencies are organizations like Texmark seeing? I know in some businesses, they talk about double digit increases, but in a mature industry, how does this all translate into dollars?

Fuller: We talk about the power of one percent. A one percent improvement in one of the major companies is multi-billions of dollars saved. A one percent change is huge, and, yes, at Texmark we’re able to see some larger percentage-wise efficiency, because they’re actually very nimble.

It’s hard to turn a big titanic ship, but the smaller boat is actually much better at it. We’re able to do things at Texmark that we are not able to do at other places, but we’re then able to create that blueprint of how they do it.

You’re absolutely right, doing edge computing, with our HPE Edgeline products, and gathering the micro-data from the extra compute power we have installed, provides a lot of opportunities for us to go into the predictive part of this. It’s really where you see the new efficiencies.

Recently I was with the engineers out there, and we’re walking through the facility, and they’re showing us all the equipment that we’re looking at sensoring up, and adding all these analytics. I noticed something on one of the pumps. I’ve been around pumps, I know pumps very well.

I saw this thing, and I said, “What is that?”

“So that’s a filter,” they said.

I said, “What happens if the filter gets clogged?”

“It shuts down the whole pump,” they said.

“What happens if you lose this pump?” I asked.

“We lose the whole chemical process,” they explained.

“Okay, are there sensors on this filter?”

“No, there are only sensors on the pump,” they said.

There weren’t any sensors on the filter. Now, that’s just something that we haven’t thought of, right? But again, I’m not a chemical guy. So I can ask questions that maybe they didn’t ask before.

So I said, “How do you solve this problem today?”

“Well, we have a scheduled maintenance plan,” they said.

They don’t have a problem, but based on the scheduled maintenance plan that filter gets changed whether it needs to or not. It just gets changed on a regular basis. Using IoT technology, we can tell them exactly when to change that filter. Therefore IoT saves on the cost of the filter and the cost of the manpower — and those types of potential efficiencies and savings are just one small example of the things that we’re trying to accomplish.
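The shift Fuller describes, from calendar-based to condition-based replacement, can be sketched as a simple rule. The sensor, threshold, and readings below are hypothetical; a clogging filter typically shows a rising pressure drop across it.

```python
# Hypothetical sketch of condition-based filter replacement: act on
# evidence of clogging rather than on a fixed schedule. Values invented.
PRESSURE_DROP_LIMIT_PSI = 8.0  # assumed "clogged" threshold

def filter_action(inlet_psi: float, outlet_psi: float) -> str:
    drop = inlet_psi - outlet_psi
    if drop >= PRESSURE_DROP_LIMIT_PSI:
        return f"REPLACE filter now (drop = {drop:.1f} psi)"
    return f"OK, keep running (drop = {drop:.1f} psi)"

# Simulated readings over a few days as the filter slowly clogs:
for inlet, outlet in [(42.0, 39.5), (42.0, 37.0), (42.0, 33.2)]:
    print(filter_action(inlet, outlet))
```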

Continuous functionality

Smith: It points to the uniqueness of the people-level relationship between the HPE team, our partners, and the Texmark team. We are able to have these conversations to identify things that we haven’t even thought of before. I could give you 25 examples of things just like this, where we say, “Oh, wow, I hadn’t thought about that.” And yet it makes people safer and it all becomes more efficient.


Gardner: You don’t know what the potential use cases can be until you have that network in place and the data analytics to utilize it. The name of the game is utilization efficiency, but also continuous operations.

How do you increase your likelihood or reduce the risk of disruption and enhance your continuous operations using these analytics?

Smith: To answer, I’m going to use the example of toll processing. Toll processing is when we would have a customer come to us and ask us to run a process on the equipment that we have at Texmark.

Normally, they would give us a recipe, and we would process a material. We take samples throughout the process, the production, and deliver a finished product to them. With this new level of analytics, with the sensoring of all these components in the refinery of the future vision, we can provide a value-add to the customers by giving them more data than they could ever want. We can document and verify the manufacture and production of the particular chemical that we’re toll processing for them.

Fuller: To add to that, as part of the process, sometimes you may have to do multiple runs when you’re tolling, because of your feed stock and the way it works.

By using advanced analytics and the predictive benefits of having all that data, we’re looking to gain efficiencies.

By using advanced analytics, and some of the predictive benefits of having all of that data available, we’re looking to gain efficiencies that cut down the number of additional runs needed. If you take a process that would have taken three runs and knock that down to two runs, that’s roughly a one-third decrease in total cost and expense. It also allows them to produce more product, and to get it out to people a lot faster.

Smith: Exactly. Exactly!

Gardner: Of course, the more insight that you can obtain from a pump, and the more resulting data analysis, that gives you insight into the larger processes. You can extend that data and information back into your supply chain. So there’s no guesswork. There’s no gap. You have complete visibility — and that’s a big plus when it comes to reducing risk in any large, complex, multi-supplier undertaking.

Beyond data gathering, data sharing

Smith: It goes back to relationships at Texmark. We have relationships with our neighbors that are unique in the industry, and so we would be able to share the data that we have.

Fuller: With suppliers.

Smith: Exactly, with suppliers and vendors. It’s transformational.

Gardner: So you’re extending a common, standard, industry-accepted platform approach locally into an extended process benefit. And you can share that because you are using common, IT-industry-wide infrastructure from HPE.

Fuller: And that’s very important. We have a three-phase project, and we’ve just finished the first two phases. Phase 1 was to put ubiquitous WiFi infrastructure in place, with the location-based services and everything needed to enable that. The second phase was to upgrade the compute infrastructure with our Edgeline compute and put in our HPE Micro Datacenter. So now they have some very robust compute.


With that infrastructure in place, it now allows us to do that third phase, where we’re bringing in additional IoT projects. We will create a data infrastructure with data storage, and application programming interfaces (APIs), and things like that. That will allow us to bring in a specialty video analytic capability that will overlay on top of the physical and logical infrastructure. And it makes it so much easier to integrate all that.

Gardner: You get a chance to customize the apps much better when you have a standard IT architecture underneath that, right?

Trailblazing standards for a new workforce

Smith: Well, exactly. What you are saying, Dana — and it gives me chills when I start thinking about what we’re doing at Texmark within our industry — is the setting of standards, the blazing of a new trail. When we talk to our customers and our suppliers and we tell them about this refinery of the future project that we’re initiating, all other business goes out the window. They want to know more about what we’re doing with the IoT — and that’s incredibly encouraging.

Gardner: I imagine that there are competitive advantages when you can get out in front and you’re blazing that trail. If you have the experience, the skills of understanding how to leverage an IoT environment, and an edge computing capability, then you’re going to continue to be a step ahead of the competition on many levels: efficiency, safety, ability to customize, and supply chain visibility.

Smith: It surely allows our Texmark team to do their jobs better. I use the example of the millwrights going out and inspecting pumps; they do that every day, and they do it very well. If we can give them the tools to focus on what they do best over a lifetime of working with pumps, and to work only on the pumps that need it, that’s a great example.

I am extremely excited about the opportunities at the refinery of the future to bring new workers into the petrochemical industry. We have a large number of people within our industry who are retiring; they’re taking intellectual capital with them. So to be able to show young people that we are using advanced technology in new and exciting ways is a real draw and it would bring more young people into our industry.

Gardner: By empowering that facilities edge and standardizing IT around it, that also gives us an opportunity to think about the other part of this spectrum — and that’s the cloud. There are cloud services and larger data sets that could be brought to bear.

How does the linking of the edge to the cloud have a benefit?

Cloud watching

Fuller: Texmark Chemicals has one location, and they service the world from that location as a global leader in dicyclopentadiene (DCPD) production. So the cloud doesn’t have the same impact as it would for maybe one of the other big oil or big petrochemical companies. But there are ways that we’re going to use the cloud at Texmark and rally around it for safety and security.

Utilizing our location-based services and our compute, if there is an emergency — whether it’s at Texmark or at a neighbor — we can combine cloud-based information like weather, humidity, and wind direction, all of which are constantly changing, to provide better-directed responses. That’s one way we would use the cloud at Texmark.

When we start talking about the larger industry — and connecting multiple refineries together or upstream, downstream and midstream kinds of assets together with a petrochemical company — cloud becomes critical. And you have to have hybrid infrastructure support.

You don’t want to send all your video to the cloud to get analyzed. You want to do that at the edge. You don’t want to send all of your vibration data to the cloud, you want to do that at the edge. But, yes, you do want to know when a pump fails, or when something happens so you can educate and train and learn and share that information and institutional knowledge throughout the rest of the organization.
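A minimal sketch of that edge-versus-cloud split: analyze the raw stream locally and forward only significant events upstream. The RMS threshold and sample data are invented for illustration.

```python
# Illustrative edge-side filter: analyze raw vibration windows locally,
# send only small event records to the cloud. Values are invented.
import math

RMS_ALARM = 0.8  # assumed vibration RMS alarm level (arbitrary units)

def rms(samples):
    return math.sqrt(sum(x * x for x in samples) / len(samples))

def process_window(pump_id, samples):
    value = rms(samples)
    if value > RMS_ALARM:
        # Only this compact event record leaves the site.
        return {"pump": pump_id, "event": "vibration_high", "rms": round(value, 2)}
    return None  # raw data stays, and is analyzed, at the edge

for window in ([0.1, -0.2, 0.15, -0.1], [1.1, -0.9, 1.3, -1.2]):
    event = process_window("P-117", window)
    if event:
        print("to cloud:", event)
    else:
        print("handled at edge")
```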

Gardner: Before we sign off, let’s take a quick look into the crystal ball. Refinery of the future, five years from now, Doug, where do you see this going?


Smith: The crystal ball is often kind of foggy, but it’s fun to look into it. I had mentioned earlier opportunities for education of a new workforce. Certainly, I am focused on the solutions that IoT brings to efficiencies, safety, and profitability of Texmark as a company. But I am definitely interested in giving people opportunities to find a job to work in a good industry that can be a career.

Gardner: JR, I know HPE has a lot going on with edge computing, making these data centers more efficient, more capable, and more rugged. Where do you see the potential here for IoT capability in refineries of the future?

Future forecast: safe, efficient edge

Fuller: You’re going to see the pace pick up. I have to give kudos to Doug. He is a visionary. Whether he admits that or not, he is actually showing an industry that has been around for many years how to do this and be successful at it. So that’s incredible. In that crystal ball look, that five-year look, he’s going to be recognized as someone who helped really transform this industry from old to new economy.

As far as edge computing goes, our converged Edgeline systems are the first generation; we created the market space for hardened, converged edge systems. Now we’re working on generation 2. We’re going to get faster, smaller, and cheaper, and become more ubiquitous. I see our IoT infrastructure having a dramatic impact on what we can accomplish, and on the workforce, in five years. It will be more virtual and augmented, with all of these capabilities. It’s going to be a lot safer for people, and it’s going to be a lot more efficient.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


Get ready for the post-cloud world

Just when cloud computing seems inevitable as the dominant force in IT, it’s time to move on because we’re not quite at the end-state of digital transformation. Far from it.

Now’s the time to prepare for the post-cloud world.

It’s not that cloud computing is going away. It’s that we need to be ready to make the best of IT productivity once cloud in its many forms becomes so pervasive as to be mundane, the place where all great IT innovations must go.


India Smart Cities Mission shows IoT potential for improving quality of life at vast scale

The next BriefingsDirect Voice of the Customer Internet-of-Things (IoT) transformation discussion examines how low-power edge computing can improve rapidly modernizing cities.

These so-called smart city initiatives are exploiting open, wide area networking (WAN) technologies to make urban life richer in services, safer, and far more responsive to residents’ needs. We will now learn how such pervasively connected and data-driven IoT architectures are helping cities in India vastly improve the quality of life there.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

Here to share how communication service providers have become agents of digital urban transformation are VS Shridhar, Senior Vice President and Head of the Internet-of-Things Business Unit at Tata Communications in the Chennai area of India, and Nigel Upton, General Manager of the Universal IoT Platform and Global Connectivity Platform and Communications Solutions Business at Hewlett Packard Enterprise (HPE). The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tell us about India’s Smart Cities mission. What are you up to and how are these new technologies coming to bear on improving urban quality of life?

Shridhar: The government is clearly focusing on Smart Cities as part of its urbanization plan, as it believes Smart Cities will not only improve the quality of living, but also generate employment and take the whole country forward in embracing technology and improving the quality of life.

So with that in mind, the Government of India has launched 100 Smart Cities initiatives. It’s quite interesting because each of the cities that aspired to belong had to make a plan and its own strategy around how it would evolve, and how it would execute it, present it, and get selected. There was a proper selection process.

Many of the cities made it, and of course some of them didn’t make it. Interestingly, some of the cities that didn’t make it are developing their own plans.


There is a lot of excitement and curiosity, as well as action, in the Smart Cities project. Admittedly, it’s a slow process; it’s not something you can do in the blink of an eye, and Rome wasn’t built in a day. But I definitely see a lot of progress.

Gardner: Nigel, it seems that the timing for this is auspicious, given that there are some foundational technologies that are now available at very low cost compared to the past, and that have much more of a pervasive opportunity to gather information and make a two-way street, if you will, between the edge and central administration. How is the technology evolution synching up with these Smart Cities initiatives in India?

Upton: I am not sure whether it’s timing or luck, or whatever it happens to be, but adoption of the digitization of city infrastructure and services is to some extent driven by economics. While I like to tease my colleagues in India about their sensitivity to price, the truth of the matter is that the economics of digitization — and therefore IoT in smart cities — needs to be at the right price, depending on where it is in the world, and India has some very specific price points to hit. That will drive the rate of adoption.

And so, we’re very encouraged that innovation is continuing to drive price points down to the point that mass adoption can take off, and the benefits can be realized by a much broader spectrum of the population. Working with Tata Communications has really helped HPE understand this, continue to evolve the technology, and be part of the partner ecosystem, because it does take a village to raise an IoT smart city. You need a lot of partners to make this happen, and that combination of partnership, willingness to work together, and driving the economics to the price points of adoption has been absolutely critical in getting us to where we are today.

Balanced Bandwidth

Gardner: Shridhar, we have some very important optimization opportunities around things like street lighting, waste removal, public safety, and water quality; and, of course, the pervasive need for traffic and parking monitoring and improvement.

How do things like low-power Internet specifications, network gateways, and low-power WANs (LPWANs) create a new technical foundation to improve these services? How do we connect the services and the technology for an improved outcome?

Shridhar: If you look at human interaction with the Internet, we have a lot of technology coming our way. We used to have 2G; that moved to 3G and then 4G, and that is a lot of bandwidth coming our way. We would like to have a tremendous amount of access and bandwidth speed, right?

VS Shridhar

Shridhar

So the human interaction and experience is improving vastly, given the networks that are growing. On the machine-to-machine (M2M) side, it’s going to be different. They don’t need oodles of bandwidth. About 80 to 90 percent of all machine interactions are going to be very, very low bandwidth – and, of course, low power. I will come to the low power in a moment, but it’s going to be very low bandwidth requirement.

In order to switch off a streetlight, how much bandwidth do you actually require? Or, in order to sense temperature or air quality or water and water quality, how much bandwidth do you actually require?

When you ask these questions, you get an answer that the machines don’t require that much bandwidth. More importantly, when there are millions — or possibly billions — of devices to be deployed in the years to come, how are you going to service a piece of equipment that is telling a streetlight to switch on and switch off if the battery runs out?

Machines are different from humans in terms of interactions. When we deploy machines that require low bandwidth and low power consumption, a battery can enable such a machine to communicate for years.

Aside from heavy video streaming applications or constant security monitoring, where low-bandwidth, low-power technology doesn’t work, the majority of the cases are all about low bandwidth and low power. And these machines can communicate with the quality of service that is required.

When it communicates, the network has to be available. You need to establish a network that is highly available, consumes very little power, and provides the right amount of bandwidth. Studies show that less than 50 kbps of connectivity should suffice for the majority of these requirements.
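The low-power half of that argument can be made concrete with standard duty-cycle arithmetic. The currents, message rate, and battery capacity below are typical-looking assumptions for illustration, not figures from Tata’s deployment.

```python
# Duty-cycle battery estimate for a low-power sensor node.
# All electrical values are illustrative assumptions.
battery_mah = 2400.0      # e.g., a pair of lithium AA cells
sleep_ma = 0.01           # 10 microamps while sleeping
tx_ma = 40.0              # current while transmitting
tx_seconds = 1.0          # airtime per uplink message
messages_per_hour = 1

active_s = messages_per_hour * tx_seconds
avg_ma = (tx_ma * active_s + sleep_ma * (3600 - active_s)) / 3600
hours = battery_mah / avg_ma
print(f"average draw {avg_ma:.4f} mA -> about {hours / 8760:.0f} years")

# With these assumptions the node runs for roughly a dozen years on one
# battery, which is why low bandwidth plus low power changes the math
# for deploying millions of devices.
```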

Now the machine interaction also means that you collect all of them into a platform and basically act on them. It’s not about just sensing it, it’s measuring it, analyzing it, and acting on it.

Low-power to the people

So the whole stack consists of more than connectivity alone. LPWAN technology is emerging now and becoming a de facto standard as more and more countries embrace it.

At Tata Communications we have embraced the LPWAN technology from the LoRa Alliance, a consortium of more than 400 partners who have come together to drive standards. We are creating this network across India over the next 18 to 24 months. We have made these networks available right now in four cities, and many more are coming — almost 60 cities across India by March 2018.

Gardner: Nigel, how do you see the opportunity, the market, for a standard architecture around this sort of low-power, low-bandwidth network? This is a proof of concept in India, but what’s the potential here for taking this even further? Is this something that has global potential?


Upton: The global potential is undoubtedly there, and there is an additional element that we didn’t talk about, which is that not all devices require the same amount of bandwidth. We have talked about video surveillance requiring higher bandwidth, and we have talked about low-power, low-bandwidth devices that will essentially be deployed once and forgotten, yet are expected to last 5 or 10 years.

Nigel Upton

Upton

We also need to add in the aspect of security, and that really gave HPE and Tata the common ground of understanding that the world is made up of a variety of network requirements, some of which will be met by LPWAN, some of which will require more bandwidth, maybe as high as 5G.

The real advantage of using a common architecture to take the data from these devices is having things like common management, common security, and a common data model, so that you really have the power to take data from all of these different types of devices and pull it into a common platform that is based on a standard.

In our case, we selected the oneM2M standard; it’s the best standard available for building that common data model. That’s why we deployed the oneM2M model within the Universal IoT Platform: to get that consistency no matter what type of device, over no matter what type of network.
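The value of a common data model is easiest to see in code. The sketch below is a simplified illustration of the idea behind that kind of normalization, not the oneM2M resource model or API itself: two very different device payloads are mapped into one record shape that the platform above can treat uniformly. All field names are invented.

```python
# Simplified illustration of a common data model: normalize payloads
# from unlike devices into one shape. Field names are invented; this is
# the idea behind oneM2M-style normalization, not its actual API.
from dataclasses import dataclass

@dataclass
class Measurement:
    device_id: str
    kind: str     # e.g., "air_quality", "streetlight_level"
    value: float
    unit: str

def from_lpwan_air_sensor(raw: dict) -> Measurement:
    # A terse, vendor-specific LPWAN payload.
    return Measurement(raw["dev"], "air_quality", float(raw["pm25"]), "ug/m3")

def from_streetlight_controller(raw: dict) -> Measurement:
    # A different vendor, a different shape; the same normalized output.
    return Measurement(raw["lampId"], "streetlight_level", float(raw["dim"]), "percent")

records = [
    from_lpwan_air_sensor({"dev": "aq-017", "pm25": "41.5"}),
    from_streetlight_controller({"lampId": "sl-2041", "dim": 40}),
]
for record in records:
    print(record)  # one shape, regardless of device or network type
```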

Gardner: It certainly sounds like this is an unprecedented opportunity to gather insight and analysis into areas that you just really couldn’t have measured before. So going back to the economics of this, Shridhar, have you had any opportunity through these pilot projects in such cities as Jamshedpur to demonstrate a return on investment, perhaps on street lighting, perhaps on quality of utilization and efficiency? Is there a strong financial incentive to do this once the initial hurdle of upfront costs is met?

Data-driven cost reduction lights up India

Unless the customer sees that there is a scope for either reducing the cost or increasing the customer experience, they are not going to buy these kinds of solutions.

Shridhar: Unless the customer sees scope for either reducing cost or improving the customer experience, they are not going to buy these kinds of solutions. So if you look at how things have been progressing, I will give you a few examples of how the costs have started playing out. One, of course, is having devices that meet a certain price point. Nigel was remarking how cost-conscious the Indian market is, and that’s important: once we deliver to a certain cost here, we believe we can deliver globally at scale. If we build something in India, it can be delivered to the global market as well.

The streetlight example: let’s take that specifically and see what kind of benefits it gives. When a streetlight operates for about 12 hours a day, it costs about Rs.12, which is about $0.15. When you start optimizing, say by moving a streetlight currently running on halogen to LED, it brings cost savings, in some cases significant ones. India is going through an LED revolution, as you may have read in the newspapers; those streetlights are being converted, and that’s one distinct cost advantage.

Now they are looking at driving the usage and the electricity bills even lower by optimizing further. Let’s say you sync the light with the astronomical clock, so that it comes on at 6:30 in the evening and shuts down at 6:30 in the morning, because now you are connecting this controller to the Internet.

The second thing you would do is keep it at its brightest during busy hours, say between 7:00 and 10:00, and after that start dimming it. You can step it down in 10 percent increments.

The point I am making is that you deliver light intensity to match the actual requirement: whether the street is busy, whether there is nobody on it, or whether there is a safety requirement where a sensor triggers a series of lights, and so on.

So your ability to tailor the light delivered to the requirement is so great that it brings down total cost. The $0.15 per streetlight that I mentioned could be brought down to $0.05. That’s the kind of advantage you get by better controlling the streetlights. The business case builds up: a customer can save 60 to 70 percent just by doing this. Obviously, then, the business case stands out.
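Those numbers can be roughly reproduced with simple schedule arithmetic. The LED conversion factor and dimming levels below are illustrative assumptions chosen to land in the ballpark Shridhar quotes, not measured values.

```python
# Illustrative reconstruction of the streetlight saving. Assumptions:
# LED conversion cuts energy roughly in half, and cost scales with
# brightness under a dimming schedule.
halogen_cost_per_night = 0.15      # USD for a 12-hour night, as quoted
led_factor = 0.5                   # assumed LED vs. halogen energy ratio
peak_hours, offpeak_hours = 3, 9   # bright 7-10 pm, dimmed the rest
offpeak_level = 0.5                # dimmed to 50% brightness off-peak

schedule_factor = (peak_hours * 1.0 + offpeak_hours * offpeak_level) / 12
cost = halogen_cost_per_night * led_factor * schedule_factor
print(f"managed LED cost per night: ${cost:.3f}")                               # ~$0.05
print(f"saving vs. halogen baseline: {1 - cost / halogen_cost_per_night:.0%}")  # ~69%
```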

The question you are asking is an interesting one, because each of the applications has its own way of returning the investment while resources are being optimized. There is also a collateral benefit in protecting the environment. So not only do I gain business savings and optimization, I also pass on a bigger message of a green environment. Environment and safety are the two biggest benefits of implementing this, and they really appeal to our customers.

Gardner: It’s always great to put hard economic metrics on these things, but Shridhar just mentioned safety. Even when you can’t measure in direct economics, it’s invaluable when you can bring a higher degree of safety to an urban environment.

It opens up for more foot traffic, which can lead to greater economic development, which can then provide more tax revenue. It seems to me that there is a multiplier effect when you have this sort of intelligent urban landscape that creates a cascading set of benefits: the more data, the more efficiency; the more efficiency, the more economic development; the more revenue, the more data and so on. So tell us a little bit about this ongoing multiplier and virtuous adoption benefit when you go to intelligent urban environments?

Quality of life, under control

Upton: Yes, and it’s important to note that it differs from country to country, and even region to region within countries. The interesting challenge with smart cities is that you are often dealing with elected officials rather than hard-nosed businesspeople who are only interested in the financial return. Because you are dealing with politicians, who represent the citizens of their city, town, or region, their priorities are not always the same.

There is quite a variation in the particular challenges, both social and quality-of-life, in each of the areas they work in. Things like personal safety are a very big deal in some regions. I am currently in Tokyo, and here there is much more concern around quality of life and mobility with a rapidly aging population; their challenges are somewhat different.


But in India, the set of opportunities and challenges is a combination of economic and social. If you solve them, you essentially give citizens more peace of mind, more ability to move freely, and more ability to take part in the economic life of the area, and that undoubtedly leads to greater growth. It is worth bearing in mind, though, that it varies almost city by city and region by region.

Gardner: Shridhar, do you have any other input on the cascading set of benefits that come with more data and more network opportunity? I am trying to understand the longer-term payoff of being intelligent and data-driven. How can this become a long-term data and analytics treasure trove for providing better urban experiences?

Home/work help

Shridhar: From our perspective, when we look at customer benefits, there is a huge amount of focus on smart cities and how they benefit from a network. But enterprise customers are also looking at safety, which is an application that overlaps with what a smart city would have.

So the enterprise wants to provide safety to its workers, for example in mines or in difficult terrain. Or consider women’s safety, which as you know is a big concern in India as well — how do you provide a device that is not very obvious and gives women all the safety they need?

All of this, in some form, is producing data. One thing that comes to mind when you ask how data-driven these services can be, and what kind of quality they can deliver, is consumer-service devices. A householder could have a multi-button device for ordering a service.

Depending on which service button is pressed, aggregated across households in India, you would know the trends and direction of demand for a given service. Mind you, it could be as simple as a three-button device that says Service A, Service B, Service C, extended to a household as a consumer service that we sell.

You could derive lots of emerging trends and patterns from that, and we believe the customer experience is going to change, because the customer no longer has to remember phone numbers or navigate apps to place an order; you give them the convenience of a single button press. That immediately comes to my mind.
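As a rough, purely illustrative sketch of the kind of aggregation Shridhar describes, here is how a back end might reduce button-press events into per-region demand trends. The device IDs, regions, and service names are hypothetical, not details from the Tata Communications deployment:

```python
from collections import Counter
from dataclasses import dataclass
import time

# Hypothetical event emitted by a three-button service device.
@dataclass
class ButtonPress:
    device_id: str
    region: str
    service: str      # "A", "B", or "C" on a three-button device
    timestamp: float

def service_trends(events):
    """Aggregate raw button presses into per-region demand counts."""
    trends = {}
    for e in events:
        trends.setdefault(e.region, Counter())[e.service] += 1
    return trends

# Example: three presses across two regions.
events = [
    ButtonPress("dev-001", "Jamshedpur", "A", time.time()),
    ButtonPress("dev-002", "Jamshedpur", "A", time.time()),
    ButtonPress("dev-003", "Mumbai", "C", time.time()),
]
print(service_trends(events))
# {'Jamshedpur': Counter({'A': 2}), 'Mumbai': Counter({'C': 1})}
```

The same event stream could feed the feedback use case Shridhar describes next, with the buttons mapped to satisfaction ratings instead of service orders.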

Feedback fosters change

The second one is feedback. You can use the same three-button device to rate the quality of service of the multiple utilities you use. There is a toilet revolution underway in India, for example; put these buttons out there and they will tell you, at any given point in time, what the user satisfaction is, and so on.

All of this data is being gathered. It is early days for us to publish analytics and point to distinct benefits, but some of the things customers are already looking at are which geographies, which segments, and which customer profiles use these services most. That kind of information is going to come out very distinctly.

Smart cities are all about experience. Enterprises are now looking at the data coming out and seeing how they can use it to segment better and provide a better customer experience, which obviously means both adding to their top line and helping them manage their bottom line. So it goes beyond safety; it gets into the realm of managing customer experience.

Gardner: From a go-to-market perspective, or a go-to-cities perspective, these are very complex undertakings: lots of moving parts, lots of different technologies and standards. How are Tata and HPE coming together, along with other service providers, Pointnext for example? How do you put this into a package that can actually be managed and put in place? How do we make this appealing not only in terms of its potential but also actionable across different cities and regions?

Upton: The concept of Smart Cities has been around for a while and various governments around the world have pumped money into their cities over an extended period of time.

We now have the infrastructure in place, we have the price points and we have IoT becoming mainstream.

As usual, these things always take more time than you think, but I do not believe we have a technology challenge on our hands today; we have much more of a business-model challenge. Deploying technology to bring benefits to citizens is finally much better understood. There has been very rapid innovation at the device level, whether it's streetlights or the ability to measure water quality, sound quality, humidity, all of the metrics now available to us, and in the economics of producing those devices at a price that will enable widespread deployment.

All of that has been happening rapidly over the last few years, getting us to the point where we now have the infrastructure in place, we have the price points in place, and IoT has become mainstream enough that it is entering the manufacturing process of all sorts of devices, as I said, ranging from streetlights to personal security devices through to track-and-trace devices built into goods during manufacture.

That is reaching the mainstream, and we are able to take advantage of the massive data being produced to build even more efficient, smarter cities and make them safer places for our citizens.

Gardner: Last word to you, Shridhar. If people want to learn more about the pilot proofs of concept (PoCs) you are running at Jamshedpur and in other cities through the Smart Cities Mission, where might they go? Are there any resources? How would you provide more information to those interested in pursuing these technologies?

Pilot projects take flight

Shridhar: I would be very happy to help them look at the PoCs we are doing. I would classify them as follows: safety; energy management, which is one big bucket; the customer service I spoke about; and a fourth that is more on the utility side. Gas and water are two big applications where customers are looking at these PoCs very seriously.

And there is one very interesting application that a customer wanted for pest control: he wanted his mouse traps to have sensors so that, at any point in time, he would know whether anything had been caught, which I thought was very interesting.


We have multiple streams and have done multiple PoCs. As the Tata Communications team, we would be very happy [to provide more information], and the HPE folks are in touch with us.

You could write to us, and to me in particular for the time being. We are also putting information on our website, and we have marketing collateral that describes this. We will do joint workshops with HPE as well.

So there are multiple ways to reach us, and one of the best, obviously, is through our website. We are always there to help, and we believe we can't do it all alone; it's about the ecosystem getting to know the technology and getting to work on it.

While we have partners like HPE at the platform level, we also have partners such as Semtech, which established a Center of Excellence in Mumbai along with us. So access to the ecosystem, from HPE as well as our other partners, is available, and we are happy to work together and co-create solutions going forward.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


How confluence of cloud, UC and data-driven insights newly empowers contact center agents

The next BriefingsDirect customer experience insights discussion explores how contact center-as-a-service (CCaaS) capabilities are becoming more powerful as a result of leveraging cloud computing, multi-mode communications channels, and the ability to provide optimized and contextual user experiences.

More than ever, businesses have to make difficult and complex decisions about how to best source their customer-facing services. Which apps and services, what data and resources should be in the cloud or on-premises — or in some combination — are among the most consequential choices business leaders now face. As the confluence of cloud and unified communications (UC) — along with data-driven analytics — gain traction, the contact center function stands out.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or  download a copy. 

We’ll now hear why traditional contact center technology has become outdated, inflexible and cumbersome, and why CCaaS is becoming more popular in meeting the heightened user experience requirements of today.

Here to share more on the next chapter of contact center and customer service enhancements, is Vasili Triant, CEO of Serenova in Austin, Texas. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What are the new trends reshaping the contact center function?

Triant: What’s changed in the world of contact center and customer service is that we’re seeing a generational spread — everything from baby boomers all the way now to Gen Z.

With the proliferation of smartphones through the early 2000s, and new technologies and new channels — things like WeChat and Viber — all these customers are now potential inbound discussions with brands. And they all have different mediums that they want to communicate on. It’s no longer just phone or e-mail: It’s phone, e-mail, web chat, SMS, WeChat, Facebook, Twitter, LinkedIn, and there are other channels coming around the corner that we don’t even know about yet.

Vasili Triant

When you take all of these folks, customers and brands, and all of these technologies that consumers want to engage with across all of these different channels, it's simple: consumers want to be heard. It's now the responsibility of brands to determine the best way to respond, and it's not always one-to-one.

So it's not a phone call for a phone call; it's maybe an SMS to a phone call, or a phone call to a web chat, whatever those [multi-channels] may be. The complexity of how we communicate with customers has increased, and the needs have changed dramatically. The legacy technologies out there can't keep up, and that's what has really driven the paradigm shift within the contact center space.

Gardner: It’s interesting that the new business channels for marketing and capturing business are growing more complex. They still have to then match on the back end how they support those users, interact with them, and carry them through any sort of process — whether it’s on-boarding and engaging, or it’s supporting and servicing them.

What we’re requiring then is a different architecture to support all of that. It seems very auspicious that we have architectural improvements right along with these new requirements.

Triant: We have two things that have collided at the same time – cloud technologies and the growth of truly global companies.

Most of the new channels that have rolled out are in the cloud. I mean, think about it — Facebook is a cloud technology, Twitter is a cloud technology. WeChat, Viber, all these things, they are all cloud technologies. It’s becoming a Software-as-a-Service (SaaS)-based world. The easiest and best way to integrate with these other cloud technologies is via the cloud — versus on-premises. So what began as the shift of on-premises technology to cloud contact center — and that really began in 2011-2012 – has rapidly picked up speed with the adoption of multi-channels as a primary method of communication.

The only way to keep up with the pace of development of all these channels is through cloud technologies because you need to develop an agile world, you need to be able to get the upgrades out to customers in a quick fashion, in an easy fashion, and in an inexpensive fashion. That’s the core difference between the on-premises world and the cloud world.

At the same time, we are no longer talking about a United States company, an Australia company, or a UK company — we are talking about everything as global brands, or global businesses. Customer service is global now, and no one cares about borders or countries when it comes to communication with a brand.

Customer service is global now, and no one cares about borders or countries when it comes to communications with a brand.

Gardner: We have been speaking about this in the context of the end user, the consumer. But this architecture and its ability to leverage cloud also benefit the agent, the person responsible for keeping that end user happy and providing them with the utmost in intelligent services. So how does the new architecture also aid and abet the agent?

Triant: The agent is frankly one of the most important pieces to this entire puzzle. We talk a lot about channels and how to engage with the customer, but that’s really what we call listening. But even in just simple day-to-day human interactions, one of the most important things is how you communicate back. There has been a series of time-and-motion studies done within contact centers, within brands — and you can even look at your personal experiences. You don’t have to read reports to understand this.

The baseline for how an interaction will begin and end, and whether it will be a happy or a poor interaction with the brand, depends on the agent's state of mind. If I call up and speak to "Joe," and he is in a great mood and having a great day, then my conversation will most likely end as a positive interaction because it started that way.

But if someone is frustrated, they had a rough day, they can’t find their information, their computers have been crashing or rebooting, then the interaction is guaranteed to end up poor. You hear this all the time, “Oh, can you wait a moment, my systems are loading. Oh, I can’t get you an answer, that screen is not coming up. I can’t see your account information.” The agents are frustrated because they can’t do their job, and that frustration then blends into your conversation.

So using the technology to make it easy for the agent to do their job is essential. If they have to go from one screen to another screen to conduct one interaction with the customer — they are going to be frustrated, and that will lead to a poor experience with the customer.

Cloud technologies like Serenova's, which are web-based, can bring all of those technologies into one screen. The agent can have all the information brought to them easily, in one click, and then be able to answer all of the customer's needs. The agent is happy, and that adds to the customer's satisfaction. The conclusion of the call is a happy customer, which is what we all want. That's a great scenario, and you need cloud technology to do it because the on-premises world does not deliver a great agent experience.

One-stop service

Gardner: Another thing that the older technologies don’t provide is the ability to have a flexible spectrum to move across these channels. Many times when I engage with an organization I might start with an SMS or a text chat, but then if that can’t satisfy my needs, I want to get a deeper level of satisfaction. So it might end up going to a phone call or an interaction on the web, or even a shared desktop, if I’m in IT support, for example.

The newer cloud technology allows you to interact via different types of channels, but also to escalate and move between and among them seamlessly. Why is that flexibility a benefit both to the end user and to the agent?

Triant: I always tell companies and customers of ours that you don't have to over-think this; all you have to do is look to your personal life. Take the most common companies we as consumers deal with: cell phone companies, cable companies, airlines. You can get onto any of their websites and begin chatting, only to find that your interaction isn't going well. Before I started at Serenova, I had these experiences dealing with the cable company: chat, chat, chat, trying to solve my problem. We couldn't get there, so we needed to get on the phone. But they said, "Here is our 800 number, call in." I'd call in, but I'd have to start a whole new interaction.

Basically, I'd have to re-explain my entire situation. Then, while I am talking with one person, they have to turn around and send me an email, but I am not going to get that email for 30 to 45 minutes because they have to get off the phone, get into another system, and send it off. In the meantime, I am frustrated, I am ticked off, and guess what I have done? I have left that brand. This happens across the board. I can even have two totally different types of interactions with the same company.

Take a major airline brand as an example. One of our employees called on the phone trying to resolve an issue that was caused by the airline. They basically said, "No, no, no." It made her very frustrated, and she decided she would fly with a different airline from then on. She then sent a social post [to that effect], and the airline's VP of Customer Service answered it; within minutes they had resolved her issue. But they had already spent three hours on the phone trying to push her off to yet another channel, because it was a totally different group and a totally different experience.

By leveraging technologies where you can pivot from one channel to another, everyone gets answers quicker. I can be chatting with you, Dana, and realize that we need to escalate to a voice conversation; as the agent, I can turn that conversation into a voice call. You don't have to re-explain yourself, and you think, "Wow, that's cool! Now I'm on the phone," and we are able to handle our business.

As the agent, I can also pivot simultaneously to an email channel to send you something as simple as a user guide or a few knowledge-base articles I may have at my fingertips, while you and I are still on the phone call. Better yet, after the fact, as a business I have all the analytics and business intelligence to say that I had one interaction with Dana that started as a web chat, pivoted to a phone call, and simultaneously sent a knowledge-base article about the issue, and I can report on it all at once. Not three separate interactions, not three separate events, and I have made you a happy customer.
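As a minimal sketch of that "one interaction, many channels" idea, consider the following data model. The types, field names, and methods here are hypothetical illustrations, not Serenova's actual API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical record of a single customer interaction spanning channels.
@dataclass
class ChannelEvent:
    channel: str   # "chat", "voice", "email", "sms", ...
    action: str    # "started", "pivoted", "sent_article", ...
    at: datetime

@dataclass
class Interaction:
    customer_id: str
    agent_id: str
    events: list = field(default_factory=list)

    def pivot(self, channel, action="pivoted"):
        """Record a channel change without opening a new interaction."""
        self.events.append(ChannelEvent(channel, action, datetime.now(timezone.utc)))

ix = Interaction(customer_id="dana", agent_id="agent-42")
ix.pivot("chat", "started")          # interaction begins as a web chat
ix.pivot("voice")                    # escalate the chat to a phone call
ix.pivot("email", "sent_article")    # send a knowledge-base article mid-call
print([(e.channel, e.action) for e in ix.events])
# [('chat', 'started'), ('voice', 'pivoted'), ('email', 'sent_article')]
```

Because every pivot lands in the same event list, reporting can treat the chat, the call, and the email as one interaction rather than three separate events.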

Gardner: We are clearly talking about enabling the agent to be a super-agent, and they can, of course, be anywhere. I think this is really important now because the function of an agent, and we are already seeing the beginnings of this, is certainly going to include more artificial intelligence (AI), machine learning, and associated data analytics. The agent might then be a combination of human and AI functions and services.

So we need to be able to integrate at a core communications basis. Without going too far down this futuristic route, isn’t it important for that agent to be an assimilation of more assets and more services over time?

Artificial Intelligence plus human support

Triant: I'm glad you brought up AI and these other technologies. The reality is that we've been through a number of cycles around what this technology will do and how it will interact with an agent. In my view, and I have been in this world for a while, the agent is the most important piece of customer service and brand engagement. But you have to be able to bring information to the agent, and to give information to your customers: if there is something simple, get it to them as quickly as possible, but also bring all the relevant information to the agent.

AI has taken multiple forms; it has existed for a long time. Sometimes people get confused by marketing schemes and sales tactics [and view AI] as a means of cost avoidance: reducing agents and eliminating staff by implementing these technologies. Really, the focus is on how to create a better customer experience and a better agent experience.

We have had AI in our product for the last three years, and we are re-releasing components that will bring business intelligence to the forefront around the end of the year. What it essentially does is allow us to see what you are doing as a user out on the Internet and within these technologies. I can see that you have been looking for knowledge-base articles on, for example, "why my refrigerator keeps freezing up and how can I defrost it." You can see such things on Twitter, and you can see them on Facebook. The amount of information out there is phenomenal, and it's in real time. I can now gather that information … and I can proactively, as a business, make decisions about what I want to do with you as a potential consumer.

I can even identify you as a consumer within my business, know how many products you have acquired from me, and whether you’re a “platinum” customer or even a basic customer, and then make a decision.

For example, I have TVs, refrigerators, washer-dryers, and other appliances all from the same manufacturer. So I am a large customer of that one manufacturer, because all of my components are there. But I may be searching a knowledge-base article on why the refrigerator continues to freeze up.

Now, I may call in about just the refrigerator, but wouldn't it be great for that agent to know that I own 22 other products from that same company? I'm not just calling about the refrigerator; I am effectively calling about the entire brand. My experience around the refrigerator freezing up may change my entire brand decision going forward. That information may prompt the business to route the customer to a different pool of agents, based on their total lifetime value as a brand-level consumer.

Through AI, by leveraging all this information, I can be a better steward to my customer and to the agent, because I will tell you, an agent will act differently if they understand the importance of that customer, or if they know that I, Vasili, have spent the last two hours searching online for information and posting about it on Facebook and Twitter.

Through AI, by leveraging all this information, I can be a better steward to the customer and to the agent.

At that point, my frustration has already reached a certain height on the scale. As an agent, if you knew that, you might treat me differently, because you already know I am frustrated. The agent can recognize that you have been looking for information on this, and that you have been on Facebook and Twitter, and can then say: "I am really sorry you haven't been able to get answers. Let me see how I can help you; it seems you have been looking online for how to keep the refrigerator from freezing up."

If I start the conversation that way, I have defused a lot of the customer's frustration, and the agent has started the interaction better. Bringing that information to that person is powerful; that's business intelligence, and that's creating action from all of that information.
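A toy sketch of that kind of context-aware routing might look like the following. The fields, weights, and thresholds are invented purely for illustration; nothing here reflects Serenova's actual scoring model:

```python
from dataclasses import dataclass

# Hypothetical pre-call signals an AI layer might assemble before routing.
@dataclass
class CustomerContext:
    products_owned: int
    kb_searches_last_24h: int      # knowledge-base lookups observed
    negative_social_posts: int     # e.g., Facebook/Twitter mentions

def frustration_score(ctx):
    """Crude 0-1 'temperature' derived from observed pre-call behavior."""
    raw = 0.1 * ctx.kb_searches_last_24h + 0.3 * ctx.negative_social_posts
    return min(raw, 1.0)

def route(ctx):
    """Send high-value or visibly frustrated customers to senior agents."""
    if ctx.products_owned >= 10 or frustration_score(ctx) >= 0.6:
        return "senior-agent-pool"
    return "standard-agent-pool"

# The refrigerator owner with 22 products and two angry posts:
ctx = CustomerContext(products_owned=22, kb_searches_last_24h=4, negative_social_posts=2)
print(route(ctx), round(frustration_score(ctx), 2))  # senior-agent-pool 1.0
```

The point is simply that pre-call signals (searches, social posts, products owned) become routing inputs before the agent ever picks up.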

Keep your cool

Gardner: It’s fascinating that that level of sentiment analysis brings together the best of what AI and machine learning can do, which is to analyze all of these threads of data and information and determine a temperature, if you will, of a person’s mood and pass that on to a human agent who can then have the emotional capacity to be ready to help that person get to a lower temperature, be more able to help them overall.

It's becoming clear to me, Vasili, that the contact center function and the CCaaS architecture are far more strategic to an organization than we may have thought; this is about more than just customer service. It really is the principal interface between a company, with all the resources and assets it has across customer service, marketing, and sales, and its customers. Do you agree that this has become far more strategic because of these new capabilities?

Triant: Absolutely, and as brands begin to realize the power of what the technology can do for their overall business, it will continue to evolve, and gain pace around global adoption.

As brands begin to realize the power of what the technology can do for their overall businesses, it will continue to evolve and gain global adoption.

We have only scratched the surface on adoption of these cloud technologies within organizations. A majority of brands still look at these interactions as a cost of doing business, and they seek to reduce that cost rather than weigh it against the lifetime value of the consumer and the agent experience. This will shift, it is shifting, and there are companies thriving by recognizing that entire equation and how to leverage the technologies.

Technology is nothing without action and results. There have been some really cool things around for a while, but if they never produce a result that is meaningful to the customer, they never get adopted and deployed, and they never reach any kind of mass proliferation of results.

Gardner: You mentioned cost. Let's dig into that. For organizations attracted to the capabilities and strategic implications of CCaaS, how do we evaluate it in terms of cost? The old CapEx approach often had a high upfront cost, and then high operating costs if you ran an inefficient call center. Other costs involve losing your customers, losing brand affinity, losing your perception in the market. So when you talk to a prospect or customer, how do you help them understand a pay-as-you-go service as highly efficient? Does the highly empowered agent approach save money, or even make money, so that CCaaS becomes not a cost center but a revenue generator?

Cost consciousness

Triant: Interesting point, Dana. When I started at Serenova about five years ago, customers would say all the time, "What's the cost of owning the technology?" And, "Oh, my on-premises stuff has already depreciated and I already own it, so it's cheaper for me to keep it." That was the conversation pretty much every day. Beginning in 2013, it rapidly started shifting, mainly because organizations realized that consumers want to engage on different channels, and the on-premises guys couldn't keep up with that demand.

The cost of ownership no longer matters. What matters is that the on-premises guys literally could not deliver the functionality. And so, whether it was Cisco, Avaya, or ShoreTel, they quickly started falling out of consideration among companies looking to deploy applications to meet these needs.

The cost of ownership quickly disappeared as the main discussion point. Instead it came around to, “What is the solution that you’re going to deliver?” Customers that are looking for contact center technologies are beginning to take a cloud-first approach. And once they see the power of CCaaS through demonstration and through some trials of what an agent can do – and it’s all browser-based, there is no client install, there is no equipment on-premises – then it takes on a life of its own. It’s about, “What is the experience going to be? Are these channels all integrated? Can I get it all from one manufacturer?”

Following that, organizations focus on other intricacies: Can it scale? Can it be redundant? Is it global? But those become architectural concerns for the brands themselves. There is a chunk of the industry that is not looking at these technologies; they are stuck in brand euphoria, or tied to on-premises infrastructure, or staying with a certain vendor because of its name or a belief that it will get there someday.

As we have seen, Avaya has declared bankruptcy. Avaya does not have cloud technologies, despite its marketing message. So the customers on those technologies now realize they have to find a path forward to keep up with basic customer service at global scale, and unfortunately they don't have one right now.

It’s less about cost of ownership and it’s more about the high cost of not doing anything. If I don’t do anything, what’s going to be the cost? That cost ultimately becomes – I’m not going to be able to have engagement with my customers because the consumers are changing.

It’s less about cost of ownership and it’s more about the high cost of not doing anything.

Gardner: What about this idea of considering your contact center function not just as a cost center, but also as a business development function? Am I being too optimistic?

It seems to me that as AI and the best of what human interactions can do combine across multichannels, that this becomes no longer just a cost center for support, a check-off box, but a strategic must-do for any business.

Multi-channel customer interaction

Triant: When an organization realizes the full potential of what these technologies can do, it will see that you no longer need a delineation between a marketing department that answers social media posts, an inside sales department that only takes calls for upgrades and renewals, and a customer service department that deals with complaints and inbound questions. You can leverage all of the applications across one pool of agents with different skills.

An agent may have a higher skill level in social media than in voice, or in sales and renewal activity than in customer service problems, yet still be able to handle any interaction. Potentially, one day it will just be a customer interaction department, and the channels will merely be the inbound and outbound medium of choice for a brand.

But you can now take information from whatever you see the customer doing. Each of their actions is a leading indicator; everything has a predictive signal prior to the inbound touch. Once a brand can see that, it can run a "consumer interaction department" in which each contact is routed to the right person based on that information, and you can bring the agent the context that allows them to answer the customer's questions.
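To illustrate that unified-pool idea in miniature, a skills-based router might score agents on channel and activity proficiency together. The agents, skill values, and scoring rule below are entirely hypothetical:

```python
# Hypothetical skills-based routing across one unified agent pool.
# Skill levels (0-1) per channel/activity are invented for illustration.
AGENTS = {
    "maria": {"social": 0.9, "voice": 0.6, "sales": 0.4},
    "ken":   {"social": 0.3, "voice": 0.8, "sales": 0.9},
}

def best_agent(channel, activity):
    """Pick the agent whose combined channel + activity skill is highest."""
    def score(skills):
        return skills.get(channel, 0.0) + skills.get(activity, 0.0)
    return max(AGENTS, key=lambda name: score(AGENTS[name]))

# A renewal call routes to ken; a social-media complaint routes to maria.
print(best_agent("voice", "sales"))     # ken
print(best_agent("social", "service"))  # maria
```

In a real deployment the score would presumably also weigh the predictive signals discussed above, but even this toy version shows why one pool of variably skilled agents can replace siloed departments.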

Gardner: I can see how that agent’s job would be very satisfying and fulfilling when you are that important, when you have that sort of a key role in your organization that empowers people. That’s good news for people that are trying to find those skills and fill those positions.

Vasili, we only have a few minutes left, but I’d love to hear about a couple of examples. It’s one thing to tell, it’s another thing to show. Do we have some examples of organizations that have embraced this concept of a strategic contact center, taken advantage of those multi-channels, added perhaps some intelligence and improved the status and capability of the agents — all to some business benefit? Walk us through a couple of actual use cases where this has all come together.

Cloud communication culture shift

Triant: No one has reached that level of euphoria per se, but there are definitely companies that are moving in that direction.

It is a culture change, so it takes time. I know as well as anybody what it takes to shift a culture, and it doesn't happen overnight. As an example, there is a ride-hailing company that engages with its consumers in a different way, and its consumers might be different from what you would expect from my description. They use voice systems and SMS and often want to pivot between the two. Our technology allows the agent to make that decision even when the agents aren't physically in the same country; they are dynamically spread across multiple countries to answer whatever questions come in, based on time and day.

They can pivot from what is predominantly inbound and outbound SMS communication into a voice interaction, and then follow up with an e-mail; that is already happening. It initially started with SMS inbound and outbound, and then they added voice, an interesting move, since most people assume voice is what everyone is moving away from. What everyone has begun to realize is that live communication is ultimately what everybody looks for in the end to solve the more complex problems.

What everyone has begun to realize is that live communication ultimately is what everybody looks for in the end to solve the more complex problems.

That's one example. Another company, which provides the latest technology in food ordering and delivery, initially started with voice only for ordering and delivering food. Now they've added automatic SMS confirmations, and e-mail as well, for confirmation or for more information following the inbound voice call. And now, once someone is an existing customer, they can even start an order from an SMS and pivot back to a voice call for confirmation, all within one interaction. They are literally one of the fastest-growing alternative food delivery companies, growing at global scale.

They are deploying agents globally on one technology, which they could not do with legacy technologies because of the expense. In these kinds of high-volume, low-margin businesses, cost matters. With an OpEx model that scales, you add better customer service to the applications and allow them to build a profitable model, because you are not burdening them with high CapEx.

Gardner: Before we sign off, you mentioned your pipeline of products and services, such as engaging more with AI capabilities toward the end of the year. Could you give us a level-set on your roadmap? Where are your products and services now, and where do you go next?

A customer journey begins with insight

Triant: We have been building cloud technologies in the contact center space for 16 years. We released our latest CCaaS platform, called CxEngage, in March 2016, and then made a major upgrade to the platform in March of this year that takes the agent experience to the next level. It is really our leapfrog in the agent interface, making it easier and bringing more information to agents.

Where we are going next is the customer journey: predictive interactions. Some people call it AI, but I call it "customer journey mapping with predictive action insights." That is going to be a big cornerstone of our product, including business analytics. It focuses on looking at a combination of speech, data, and text, all simultaneously creating predictive actions. This is another core area we are going into as we continue to expand the reach of our platform on a global scale.

At this point, we are a global company. We have the only global cloud platform built on a single software stack with one data pipeline, and we now have more users on a pure cloud platform than any of our competitors globally. I know that's a big statement, but when you look at pure cloud infrastructure, you are talking about a whole different realm of services you can offer customers. In our ability to provide broad reach, including Europe, South Africa, Australia, India, and Singapore, while still delivering good cloud quality at a reasonable cost and in a redundant fashion, we are second to none in that space.

Gardner: I’m afraid we will have to leave it there. We have been listening to a sponsored BriefingsDirect discussion on how CCaaS capabilities are becoming more powerful as a result of cloud computing, multimode communications channels, and the ability to provide optimized and contextual user experiences.

And we’ve learned how new levels of insight and intelligence are now making CCaaS approaches able to meet the highest user experience requirements of today and tomorrow. So please join me now in thanking our guest, Vasili Triant, CEO of Serenova in Austin, Texas.

Triant: Thank you very much, Dana. I appreciate you having me today.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing series of BriefingsDirect discussions. A big thank you to our sponsor, Serenova, as well as to you, our audience. Do come back next time and thanks for listening.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or  download a copy. Sponsor: Serenova.

Transcript of a discussion on how contact center-as-a-service capabilities are becoming more powerful to provide optimized and contextual user experiences for agents and customers. Copyright Interarbor Solutions, LLC, 2005-2017. All rights reserved.
