Happy employees equal happy customers — and fans. Can IT deliver for them all?

The next BriefingsDirect workplace productivity discussion explores how businesses are using the latest digital technologies to re-imagine the employee experience — and to transform their operations and results.

Employee experience isn’t just a buzz term. Research shows that engaged employees are happier, more productive, and deliver a superior customer experience, all of which translates into bottom line results.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To learn more, our panel explores how IT helps deliver a compelling experience that enables employees to work when, where, and how they want — and to perform at their best. Joining us are Adam Jones, Chief Revenue Officer, who oversees IT for the Miami Marlins Major League Baseball team and organization, and Tim Minahan, Executive Vice President of Strategy and Chief Marketing Officer at Citrix. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tim, when it comes to employee experience, Citrix has been at the forefront of the conversation and of the technology shaping it. In fact, I remember covering one of the first press conferences that Citrix had, and this is going back about 30 years, and the solutions were there for people to work remotely. It seemed crazy at the time, delivering apps over the wire, over the Internet.

But you are still innovating. You’re at it again. About a year ago, you laid out an aggressive plan to help companies power their way to even better ways to work. So, it raises the question: Tim, what’s wrong with the way people are working and experiencing work today?

From daily grind to digital growth 

Minahan: That topic is top of mind for both C-level executives and board members around the globe. We are entering an era of a new talent crisis. What’s driving it is, number one, there are just too few workers. Demographically, McKinsey estimates that in the next few years we will be short by 95 million medium- to high-skilled workers around the globe.

Minahan

And that’s exacerbated by our traditional work models, which tend to organize around physical hubs. I build an office building, call center, or manufacturing facility, and I do my best to hire the best talent around that hub. But the talent isn’t always located there.

The second thing is, as more companies become digital businesses — trying to develop new digital business lines, engage customers through new digital business channels, develop new digital business revenue streams — oftentimes they lack the right skills. They lack skills to help drive to this next level of transformation. If companies are fortunate enough to identify employees with those skills, there is a huge likelihood that they will be disengaged at work. 

In fact, the latest Gallup study finds that globally 85 percent of workers are disengaged at work. A key component of that frustration has to do with their work environment.

We spend a lot of time talking about vision alignment and career development — and all of that is important. But a key gap that many companies are overlooking is that they have a frustrating work environment. They are not giving their employees the tools or resources they need to do their jobs effectively.

In fact, all the choice we have around our applications and our devices has actually begun to create noise that distracts us from doing our core jobs in the best way possible.

Gardner: Is this a case of people being distracted by the interfaces? Is there too much information and overload? Are we not adding enough intelligence to provide a contextual approach? All of the above? 

Minahan: It is certainly “all of the above,” Dana. First off, there are just too many applications. The typical enterprise IT manager is responsible for more than 500 applications. At the individual employee level, a typical worker uses more than a dozen applications through the course of their day, and oftentimes needs to traverse four or five different applications just to do a single business process. That could be something as simple as finding the change in a deal status, or even executing one particular transaction.

Work Isn’t Working for Your Employees. Find Out How Technology Can Help

And that would be bad enough, except consider that oftentimes we are distracted by apps that aren’t even core to our jobs. Last time I checked, Dana, neither you, nor I, nor Adam was hired to spend our days approving expense reports in something like SAP Concur, which is a great application. But it’s not core to my job. Or we are approving performance reviews in Workday, or a purchase request in SAP Ariba. Certainly, these distract from our day. In doing so, we need to constantly navigate new application interfaces. We need to learn new applications that aren’t even core to our jobs.

To your point around disruption and context switching, today — because we have all of these different channels, and not just e-mail, but Slack and Microsoft Teams and all of these applications – just finding information consumes a large part of our day. We can’t remember where we stored something, or we can’t remember the change in that deal status. So we have to spend about 20 percent of our day switching between all of these different contexts, just to get the information or insight we need to do our jobs.

Gardner: Clearly too much of a good thing. And to a large degree, IT has brought about that good thing. Has IT created this problem?

Minahan: In part. But I think employees share a bit of responsibility themselves. As an employee, I know I’m always pushing IT by saying, “Hey, absolutely, this is the one tool we need to do a more effective job at marketing, strategy, or what-have-you.”

We keep adding to the top of what we already have. And IT is in a tough position of either saying, “No,” or finding a way to layer on more and more choices. And that has the unintended side effect of what we have just mentioned — which is the complexity that frustrates today’s employee experience.

Workspace unity and security 

Gardner: Now, IT people have faced complexity issues before, and many times they have come up with solutions to mitigate the complexity. But we also have to remember that you can’t just give employees absolute freedom. There have to be guardrails and rules, and compliance and regulatory issues must be addressed.

So, security and digital freedom need to be in balance. How do we get to the point, Tim, where we can create that balance and give employees freedom — but not so much that the business is at risk?

Minahan: You’re absolutely right. At Citrix, we firmly believe this problem needs to be solved. We are making the investments, working with our customers and our partners, to go out and solve it. We believe the right way to solve it is through a digital workspace that unifies everything your employees need to be productive in one unified experience that wraps those applications and content and makes them available across any device or platform, no matter where you are.

A workspace that’s just unified but not secure doesn’t fully address the needs of the enterprise. We believe the workspace should wrap it all in a layer of contextual security policies that know who you are.

If you are in the office, on the corporate network using your laptop, perfect. You also need to have access to the same content and applications to do your job on the train ride home, on your smartphone, and maybe while visiting a friend. You need to be able to log on through a web interface. You want your work to travel with you, so you can work anytime, anywhere. 

But such a workspace that’s just unified — but not secure — doesn’t fully address the needs of the enterprise. The second attribute required for a true digital workspace is that it needs to be secure. When you have those applications and content within the workspace, we believe the workspace should wrap them in a layer of contextual security policies that know who you are, what you typically have access to, and how you typically access it. The security must know whether you do your work through one device or another, and then apply the right policies when there are anomalies outside of that pattern.

For example, maybe you are logging in from a different place. If so, we are going to turn off certain capabilities within your applications, such as the capability to download, print, or screen-capture. Maybe we need to require a second layer of authentication, if you are logging on from a new device. 
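To make that concrete, here is a minimal sketch of how a contextual policy check like the one Minahan describes might be expressed. It is an illustration only, not Citrix's implementation; the signal names, rules, and thresholds are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class SessionContext:
    user: str
    device_known: bool            # device previously enrolled by this user
    on_corporate_network: bool
    location: str                 # coarse location derived from the connection
    usual_locations: set = field(default_factory=set)

def evaluate_session(ctx: SessionContext) -> dict:
    """Return capability toggles and extra checks for this session (illustrative rules)."""
    policy = {
        "allow_download": True,
        "allow_print": True,
        "allow_screen_capture": True,
        "require_second_factor": False,
    }

    # New or unenrolled device: ask for a second layer of authentication.
    if not ctx.device_known:
        policy["require_second_factor"] = True

    # Logging in from an unusual place: restrict the ways data can leave the workspace.
    if ctx.location not in ctx.usual_locations:
        policy["allow_download"] = False
        policy["allow_print"] = False
        policy["allow_screen_capture"] = False

    # Off the corporate network: keep screen capture disabled even in usual locations.
    if not ctx.on_corporate_network:
        policy["allow_screen_capture"] = False

    return policy

# Example: a known laptop on the corporate network keeps full capabilities;
# the same user on an unfamiliar device abroad gets a restricted session.
print(evaluate_session(SessionContext("dana", True, True, "office", {"office", "home"})))
print(evaluate_session(SessionContext("dana", False, False, "abroad", {"office", "home"})))
```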

And so, this approach brings together the idea of employee experience and balances it with the security that the enterprise needs. 

Gardner: We are also seeing more intelligence brought into this process. We are seeing more integration end-to-end, and we are anticipating the best worker experience. But companies, of course, are looking for productivity improvements to help their bottom line and their top line.

Want Employees to Perform at Their Best? Let Them Thrive Using an Intelligent Workspace

Is there a way to help businesses understand the economic benefits of the best digital workspace? How do we prove that this is the right way to go?

Minahan: Dana, you hit the nail on the head. I mentioned there are three attributes required for an effective digital workspace. We talked about the first two: unifying everything an employee needs to be productive within one unified experience, and securing it so that applications and content are more secure in the workspace than when accessed natively. So that organizes your workday, and that’s a phenomenal head start.

Work smart, with intelligence 

But, to your point, we can do better by building on that foundation and injecting intelligence into the workspace. You can then begin to help employees work better. You can help employees remove that noise from their day by using things such as machine learning (ML), artificial intelligence (AI), simplified workflows, and what we call micro apps to guide an employee through their workday. The workspace is not just someplace they go to launch an application, but someplace they go to get work done.

We have begun providing capabilities that literally reach into your enterprise applications and extract out the key insights and tasks that are personal to each employee. So when you log into the workspace, Dana, it would say, “Hey, Dana, it’s time for you to approve that expense report.”

You don’t need to log in to the app again. You just quickly open and review it. If you want, you can click “approve” and move on, saving yourself minutes. And you multiply that throughout the course of the day. We estimate you can give an employee back 10 to 20 percent of their workweek, which is an added day each week of improved productivity.
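As a rough sketch of that micro-app pattern, the idea is to pull only the items awaiting a user's action, surface them as small cards in the workspace feed, and write the approval back without the user opening the source application. The endpoint, field names, and card format below are hypothetical placeholders, not the actual Citrix Workspace or SAP Concur APIs.

```python
import requests  # assumes the requests package is available

EXPENSE_API = "https://example.internal/api/expense-reports"  # hypothetical endpoint

def fetch_pending_approvals(approver_id: str) -> list:
    """Pull only the records waiting on this user from the system of record."""
    resp = requests.get(EXPENSE_API, params={"approver": approver_id, "status": "pending"})
    resp.raise_for_status()
    return resp.json()

def build_feed_cards(reports: list) -> list:
    """Turn raw records into small, actionable cards for the workspace feed."""
    return [
        {
            "title": f"Expense report {r['id']} from {r['submitter']} awaits your approval",
            "summary": f"Total: {r['total']} {r['currency']}",
            "actions": ["approve", "open_full_report"],
            "source_id": r["id"],
        }
        for r in reports
    ]

def approve(report_id: str) -> None:
    """Write the decision back so the user never has to log in to the source app."""
    requests.post(f"{EXPENSE_API}/{report_id}/approve").raise_for_status()
```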

But it’s not just about streamlined tasks. It’s also about improved insights, about making sure that you have the information you need. Maybe it’s that change in a deal status being presented to you so you don’t need to log in to Salesforce and check that dashboard. It’s presented to you because the workspace knows it’s of interest to you.

To your point, this could dramatically improve the overall productivity for an employee, improve their overall experience at work, and by extension allow them to serve their customers in a much better way. They have the resources, tools, and the information at their fingertips to deliver a superior customer experience. 

The Miami Marlins have a very sophisticated approach to user experience. They look at their employees and their fan base across multiple ways of making the experience exceptional.

Gardner: We are entering an age, Tim, where we let the machines do what they do best and know the difference, so that then allows people to do what they can do best, creatively, and most productively. It’s an exciting time. 

Let’s look at a compelling use case. The Miami Marlins have a very sophisticated approach to user experience. And they are not just looking at their employees, they are looking at the end users, their fan base, across many different forms of entertainment and ways of encountering the baseball experience.

Baseball, in a sense, was hibernating over the winter, and now the new season has played out well in 2019. And your fans in Miami are getting treated to a world-class experience. 

Tell me, Adam, what went on behind-the-scenes that allows you in IT to make this happen? What is the secret sauce for providing such great experiences?

Marlins’ Major League IT advantage 

Jones: The Marlins are a 25-year-old franchise. We find ourselves in build mode coming into the mid-2019 season, following a change in ownership and leadership. We have elevated the standards and vision for the organization.

We are becoming a world-class sports and entertainment enterprise, and so are building a next-generation IT infrastructure to enable the 300-plus employees who operate across our lines of business and the various assets of the organization. We are very pleased to have our longtime partner, Citrix, deploy their digital workspace solutions to enable our employees to deliver against the higher standards that we have set. 

Gardner: Is it difficult to create a common technological approach for different types of user experience requirements? You have fans, scouts, and employees. There are a lot of different endpoints. How does a common technological approach work under those circumstances?

Jones: The diversity within our enterprise necessitates having tools and solutions that have a lot of agility and can be flexible across the various requirements of an organization such as ours. We are operating a very robust baseball operation — as well as a sophisticated business. We are looking to scale and engage a very diverse audience. We need to have the resources available to invest and develop talent on the baseball side. So, what we have within the Citrix environment is the capability to enable that very diverse set of activities within one environment.

Gardner: And we have become used to, in our consumer lives, a sort of seamless segue between the different things we are doing. Are you approaching that same seamless integration when it comes to how people encounter your content across multiple channels? Is there a way for you to present yourselves so that the technology takes over and people feel like they are having the same Miami Marlins experience regardless of how they actually encounter your organization and your sport?

Want Employees to Perform at Their Best? Let Them Thrive Using an Intelligent Workspace

Jones: Like many of our peers, we are looking to establish more robust, rounded relationships with our fans and community. And that means going beyond our home schedule to more of a 365-day relationship, with a number of touch points and a variety of content.

The ability of our workforce to get out into the community — without losing productivity — is incredibly important as we evolve into a more sophisticated and complex set of activities and requirements.

Gardner: Controlling your content, making sure you can make choices about who gets to see what, to protect your franchise, is important. Are you reaching a balance between offering a full experience of interesting content and technology, but at the same time protecting and securing your assets and your franchise?

Safe! at digital content distribution 

Jones: Security is our highest priority, particularly as we continue to develop more content and more intellectual property. What we have within the Citrix environment are very robust controls, with the capability to facilitate fairly broad collaboration among our workforce. So again, we are able to disseminate that content in near real-time, creating impactful and timely moments with our fans.

Gardner: Tim, this sounds like a world-class use case for advanced technology. We have scale, security, omni-channel distribution, and a dynamic group of people who want to interact as much as they can. Why is the Miami Marlins such a powerful and interesting use-case from your perspective?

Minahan: The Marlins are a fantastic example of a world champion organization now moving into the digital edge. They are rethinking the fan experience, not just at the stadium but in how they engage across their digital properties and in the community. Adam and the other leadership there are looking across the board to figure out how the sport of baseball and fan experience evolve. They are exploring the linkage between the fan experience, or customer experience, and the employee experience, and they are learning about the role that technology plays in connecting the two.

Work Isn’t Working for Your Employees. Find Out How Technology Can Help

They are a great example of a customer at the forefront of looking at these new digital channels and how they impact customer relationships — and how they impact values for employees as well.

Gardner: Tim, we have heard over the past decade about how data and information are so integral to making a baseball team successful. It’s a data-driven enterprise as much as any. How will the intelligence you are baking into more of the Citrix products help make the Miami Marlins baseball team a more intelligent organization? What are the tools behind the intelligent baseball future?

Minahan: A lot of the same intelligence capabilities we are incorporating into the workspace for our customers — around ML, AI, and micro apps — will ensure that the Marlins organization — everyone from the front office to the field manager — has the right insights and tasks presented to them at the right time. As a result, they can deliver the best experience, whether that is recruiting the best talent for the team or delivering the best experience for the fans. 

We are going to learn a lot, as we always have from our customers, from the Miami Marlins about how we can continue to adapt that to help them deliver that superior employee experience and, hence, the superior fan experience.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Citrix.


How enterprises like McKesson digitize procurement and automate spend management to slash waste

The next BriefingsDirect intelligent enterprise innovations discussion explores new ways that leading enterprises like McKesson Corp. are digitizing procurement and automating spend management.

We’ll now examine how new intelligence technologies and automation methods like robotic process automation (RPA) help global companies reduce inefficiencies, make employees happier, cut manual tasks, and streamline the entire source-to-pay process.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To learn more about the role and impact of automation in business-to-business (B2B) finance, please welcome Michael Tokarz, Senior Director of Source to Pay Processes and Systems at McKesson, in Alpharetta, Georgia. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: There’s never been a better time to bring efficiency and intelligence to end-to-end, source-to-pay processes. What is it about the latest technologies and processes that provides a step-change improvement?

Tokarz: Our internal customers are asking us to move faster and engage deeper in our supplier conversations. By procuring intelligently, we are able to shift where resources are allocated so that we can better support our internal customers.

Gardner: Is there a sense of urgency here? If you don’t do this, and others do, is there a competitive disadvantage? 

Tokarz: There’s a strategic advantage to first-movers. It allows you to set the standard within an industry and provide greater feedback and value to your internal customers.

Gardner: There are some major trends driving this. As far as new automation and the use of artificial intelligence (AI), why are they so important?

The AI advantage 

Tokarz

Tokarz: AI is important for a couple of reasons. Number one, we want to process transactions as cost-effectively as we possibly can. Leveraging a “bot” to do that, versus a human, is strategically advantageous to us. It allows us to write protocols that process automatically without any human touch, which, in turn, is extremely valuable to the organization.

AI also allows workers to change their value quotient within the organization. They can go from doing manual processes to working at a much higher level. They now work on things that are change-driven and that bring much more value, which is really important to the organization.

Gardner: What do you mean by bots? Is that the same as robotic process automation (RPA), or are they overlapping? What’s the relationship?

Tokarz: I consider them the same technology, RPA and bots. It’s essentially a computer algorithm that’s written to help process transactions that meet a certain set of circumstances.
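To picture what such an algorithm looks like in practice, here is a minimal sketch of a rules-based bot pass over an invoice queue. The rules, thresholds, and field names are invented for illustration; they are not McKesson's actual criteria or any RPA vendor's API.

```python
def process_invoice(invoice: dict) -> str:
    """Apply simple, auditable rules; anything that fails goes to a person."""
    po = invoice.get("purchase_order")

    # Rule 1: the invoice must reference an approved purchase order.
    if po is None or po.get("status") != "approved":
        return "route_to_human:no_approved_po"

    # Rule 2: the invoice total must match the PO within a small tolerance.
    if abs(invoice["total"] - po["total"]) > 0.02 * po["total"]:
        return "route_to_human:amount_mismatch"

    # Rule 3: the supplier must already be enabled on the network.
    if not invoice.get("supplier_enabled", False):
        return "route_to_human:unknown_supplier"

    return "auto_approved"

def run_queue(queue: list) -> dict:
    """One 'bot' pass over the queue: process everything, count what was touchless."""
    results = [process_invoice(inv) for inv in queue]
    touchless = results.count("auto_approved")
    return {"touchless": touchless, "exceptions": len(results) - touchless}
```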

Gardner: E-sourcing technology is also a big trend and an enabler these days. Why is it important to you, particularly across your supplier base?

Tokarz: E-sourcing helps us drive conversations internally in the organization. It forces the businesses to pause. Everyone’s always in a hurry, and when they’re in a hurry they want to get something published for the organization and out on the street. Having the e-sourcing tool forces people to think about what they really need from the marketplace and to structure it in a format so that they can actually go faster.

With e-sourcing, while you have to do a little bit of work on the front end, you speed up the transaction on the back end because you have everything from all of the suppliers aligned in one central place, so you can easily compare and make solid business decisions.

Gardner: Another important thing for large organizations like McKesson is the ability to extend and scale globally. Rather than region-by-region there is standardization. Why is that important?

Tokarz: First and foremost, getting to one technology across the board allows us to have a global standard. And what does a global standard mean? It doesn’t mean that we’re going to do the same things the same way in every country. But it gives us a common platform to build our processes on.

It gives us a way to unify our organization so that we can have more informed conversations internally. It becomes really important when you begin managing global relationships with large suppliers.

Gardner: Tell us about McKesson and your role within vendor operations and management.

Tokarz: McKesson is a global provider of healthcare solutions — from pharmaceuticals to medical supplies to services. We’re mainly in the United States, Canada, and Europe.

I’m responsible for indirect sourcing here in the United States, but I also oversee the global implementations of solutions in Ireland, Europe, and Canada in the near future. Currently in the United States, we process about $1.6 billion in direct transactions. That’s more than 60,000 transactions on our SAP Ariba system. We also leverage other vendor management solutions to help us process our services transactions.

Gardner: A lot of people like you are interested in becoming touchless – of leveraging automation, streamlining processes, and using data to apply analytics and create virtuous adoption cycles. How might others benefit from your example of using bots and why that works well for you?

Bots increase business 

Tokarz: The first thing we did was leverage SAP Ariba Guided Buying. We then reformatted our internal website to put Guided Buying at the forefront for all of our end users. We actually tag it for novice users because Guided Buying works much like a tablet interface. It gives you smart icons that you can tap to get started and make decisions for your organization. It now drives purchasing behavior.

The next thing we did was push as much indirect buying through catalogs as we possibly could. We’ve implemented enough catalogs in the United States that we now have 80 percent of our transactions fully automated through catalogs. It provides people really nice visual cues and point-and-click accessibility. Some of my end users tell me they can find what they need within three minutes, and then they can go about their day, which is really powerful. Instead of focusing on buying or purchasing, it allows them to do their jobs, their specialty, which brings more value to the organization.

We use the RPA and bot technology to take the entire organization to the next level. We’re always striving to get to 90 percent touchless transactions. If we are at 80 percent, that means an additional 50 percent reduction in the touch transactions that we’re currently processing, which is very significant.

The last thing we’ve done is use the RPA and bot technology to take the entire organization to the next level. We’re always striving to get to 90 percent touchless transactions. If we are at 80 percent, that means an additional 50 percent reduction in the touch transactions that we’re currently processing (the share of transactions touched by hand drops from 20 percent to 10 percent), which is very significant.

That has allowed me to refocus some of my efforts with my business process outsourcing (BPO) providers where they’re not having to touch the transactions. I can have them instead focus on acquisitions, integrations, and doing different work that might have been at a cost increase. This all saves me money from an operations standpoint.

Gardner: And we all know how important user experience is — and also adoption. Sometimes you can bring a horse to water and they don’t necessarily drink.

So it seems to me that there is a double-benefit here. If you have a good interface like Guided Buying, using that as a front end, that can improve user satisfaction and therefore adoption. But by also using bots and automation, you are taking away the rote, manual processes and thereby making life more exciting. Tell us about any cultural and human capital management benefits.

Smarts, speed, and singular focus 

Tokarz: It allows my procurement team to focus differently. Before, they were focused on the transactions in the queue and how fast to get them processed, all to keep the internal customers happy. Now I have a bot that looks at the queue and processes it three times a day, so we don’t have to worry about those anymore. The team only watches the bot to make sure it isn’t kicking out any errors.

From an acquisition integration standpoint, when I need to add suppliers to the network I don’t have to go to my management team with a change request and ask for more money. I can operate within the original budget with my BPO providers. If there are another 300 suppliers that I need added to the network, for example, I can process them more effectively and efficiently.

Gardner: What have been some challenges with establishing the e-sourcing technology? What have you had to overcome to make e-sourcing more prevalent and to get as digital as possible?

Tokarz: Anytime I begin working on a project, I focus not only on the technology component, but also the process, organization, and policy components. I try to focus on all of them.

So first, we hired someone to manage e-sourcing through an e-sourcing administrator role. That role becomes really important: we have a single point of contact, and everyone knows where to go within the organization to make things happen as people learn the technology and what it is actually capable of. Instead of having to train 50 people, I have one expert who can help guide them through the process.

From a policy standpoint, we’ve also dictated in policy that people are supposed to be leveraging the technology. We all know that not all policies are adhered to, but it sets the right framework for discussion internally. We can now go to a category manager and point them to the right technology to do their jobs better, faster, and cheaper.

As a result, they have a more intriguing job versus doing administrative work, which ultimately brings more value to the organization. They’re acting more as business consultants to our internal customers to drive value — not just on price but on how to create value using innovations, new technology, and new solutions in the marketplace.

To me, it’s not just about the technology — it’s about developing the ecosystem of the organization.

Gardner: Is there anything about Guided Buying and the added intelligence that helps with e-sourcing – of getting the right information to the right person in the right format at the right time?

Seamless satisfaction for employees

Tokarz: The beautiful thing about Guided Buying is that it’s seamless. People don’t know how the application works, or even that they are using SAP Ariba. It’s interesting. They see Guided Buying and don’t realize it’s basically a looking glass into the SAP Ariba architecture behind the scenes.

That helps with transparency for them to understand what they are buying and get to it as quickly as possible. It allows them to process a transaction via a really nice, simple checkout screen. Everyone knows what it costs, and it just routes seamlessly across the organization.

Gardner: So what do you get when you do e-sourcing right? Are there any metrics or impacts that you can point to such as savings, efficiencies, employee satisfaction?

The biggest impact is employee satisfaction. Instead of having a category manager working in Microsoft Outlook, sending e-mails to 30 different suppliers on a particular event, they have a simple dashboard where they can combine all of the answers and push all of that information out seamlessly across all the participants.

Tokarz: The biggest impact is employee satisfaction. Instead of having a category manager working in Microsoft Outlook, sending e-mails to 30 different suppliers on a particular event, they have a simple dashboard where they can compile all of the questions, develop all of the answers, and push all of that information out seamlessly to all the participants. Instead of working administratively, they’re working strategically with internal customers. They are asking the hard questions about how to solve the business problems at hand and creating value for the organization.

Gardner: Let’s dig deeper into the need for extensibility for globalization. To me this includes seeking a balance between the best of centralized and the best of distributed. You can take advantage of regional particulars, but also leverage and exploit the repeatability and standard methods of centralization.

What have you been doing in procurement using SAP Ariba that helps get to that balance?

Global insights grow success 

Tokarz: We’re in the process of rolling out SAP Ariba globally. We have different regions, and they all have different requirements. What we’ve learned is that our EMEA region wants to do some things differently than we were doing them. It forces us to answer the question, “Why were we doing things the way we were doing them, and should we be changing? Are their insights valuable?”

We learned that their insights are valuable, whether it be the partners that they are working with, from an integration standpoint, or the people on the ground. They have valuable insights. We’re beginning to work with our Canadian colleagues as well, and they’ve done a tremendous amount of work around change management. We want to capitalize on that, and we want to leverage it. We want to learn so that we can be better here in the United States at how we implement our systems.

Gardner: Let’s look to the future. What would you like to see improved, not only in terms of the technology but the way the procurement is going? Do you see more AI, ML, and bots progressing in terms of their contribution to your success?

Tokarz: The bot technology is really interesting, and I think it’s going to change the way we work pretty dramatically. It’s going to take a lot of the manual work that we do in processing transactions and alleviate it.

And it’s not just about the transactions. You can leverage the bot technology or RPA technology to do manual work and then just have people do the audit. You’re eliminating three to five hours’ worth of work so that the workers can go focus their time on higher value-add.

For my organization, I’d like us to extend the utilization of the solutions that we currently own. I think we can do a better job of rolling out the technology broadly across the organization and leverage key features to make our business more powerful.

Gardner: We have been hearing quite a bit from SAP Ariba and SAP at-large about integrating more business applications and data sets to find process efficiencies across different types of spend and getting a better view of total spend. Does that fit into your future vision? 

Tokarz: Yes, it does. Data is really important. It’s a huge initiative at McKesson. We have teams that are specifically focused on data and on integrating the data so that we can have meaningful information to make broader decisions. Those decisions can be made not on, “Hey, I think I have the right knowledge,” but on insights based on the concrete details that guide you to making smart business decisions.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: SAP Ariba.


CEO Henshall on Citrix’s 30-year journey to make workers productive, IT stronger, and partners more capable

The next BriefingsDirect intelligent workspaces discussion explores how for 30 years Citrix has pioneered ways to make workers more productive, IT operators stronger, and a vast partner ecosystem more capable.

We will now hear how Citrix is by no means resting on its laurels by charting a new future of work that abstracts productivity above apps, platforms, data, and even clouds. The goal: To empower, energize, and enlighten disaffected workers while simplifying and securing anywhere work across any deployment model.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To hear more about Citrix’s evolution and ambitious next innovations, please welcome David Henshall, President and CEO of Citrix. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: To me Citrix is unique in that for 30 years it has been consistently disruptive, driven by vision, and willing to take on both technology and culture — which are truly difficult things to do. And you have done it over and over again.

As Citrix was enabling multiuser remote access — or cloud before there was even a word for it — you knew that changing technology for delivering apps necessitated change in how users do their jobs. What’s different now, 30 years later? How has your vision of work further changed from delivery of apps?

Do your best work

Henshall: I think you said it well. For 30 years, we have focused on connecting people and information on-demand. That has allowed us to let people be productive on their terms. The fundamental challenge of people is to have access to the tools and resources necessary to get their jobs done — or as we describe it, to do their best work.

We look at that as an absolute necessity. It’s one of the things that makes people feel empowered, feel accomplished, and it allows them to drive better productivity and output. It allows engagement at the highest levels possible. All of these have been great contributing factors.

What’s changed? The technology landscape continues to evolve as applications have evolved over the years – and so have we. You referred to the fact that we’ve reinvented ourselves many times in the last three decades. All great companies go through the same regeneration against a common idea, over and over again. We are now in what I would describe as the cloud-mobile era, which has created unprecedented flexibility compared with the way people used to manage IT. It spans everything from new software-as-a-service (SaaS) offerings that are consumed with much less effort, all the way to distributed edge services that allow us to compute in ways we’ve never imagined.

And then, of course, on the device side, the choices are frankly nearly infinite. Being able to support the device of your choice is a critical part of what we do — and we believe that matters.

Gardner: I was fortunate enough to attend a press conference back in 1995 when Citrix WinFrame, as it was called at that time, was delivered. The late Citrix cofounder Ed Iacobucci was leading the press conference. And to me, looking back, that set the stage for things like desktop as a service (DaaS), virtual desktop infrastructure (VDI), multi-tenancy, and later SaaS. We all think of these as major mainstream technologies.

Do you feel that what you’re announcing about the future of work, and of inserting intelligence in context to what people do at work, will similarly set off a new era in technology? Are we repeating the past in terms of the scale and magnitude of what you are biting off?

Future productivity goes beyond products 

Henshall: The interesting thing about the future is that it keeps changing. Against that backdrop we are rethinking the way people work. It’s the same general idea about just giving people the tools to be productive on their terms.

Henshall

A few years back that was about location, of being able to work outside of a traditional office. Today more than half the people do not work in a typical corporate headquarters environment. People are more distributed than ever before.

The challenge we are now trying to solve takes it another step forward. We think about it from a productivity standpoint and an engagement standpoint. The downside of technology is that it makes everything possible. Therefore the level of complexity has gone up dramatically. The level of interruptions — and what we call context shifting — has gone up dramatically. And so, we are looking for ways to help simplify, automate common workflows, and modernize the way people engage with applications. All of these point toward the same common outcome: “How do we make people more productive on their terms?”

Gardner: To solve that problem of location flexibility years ago, Citrix had to deal with the network, servers, performance and capacity, and latency — all of which were invisible. End users didn’t know that it was Citrix behind-the-scenes.

Will people know the Citrix name and associate it with workspaces now that you are elevating your value above IT?

Henshall: We are solving broader challenges. We have moved gradually over the years from being a behind-the-scenes infrastructure technology. People have actually used the company’s name as a verb. “I have Citrixed into my environment,” for example. That will slowly evolve into still leveraging Citrix as a verb, but meaning something like, “I Citrixed to get my job done.” That takes on an even broader definition around productivity and simplification, and it allows us more degrees of freedom.

We are working with ecosystem partners across the infrastructure landscape and with all types of application vendors. We therefore are a bridge between all of those. It doesn’t mean we necessarily have to have our name front and center, but Citrix is still a verb for most people in the way they think about getting their jobs done.

Gardner: I commend you for that because a lot of companies can’t resist making their name part-and-parcel of a solution. Perhaps that’s why you’ve been such a good partner over the years. You’ve been supplying a lot of the workhorses to get jobs done, but without necessarily having to strut your stuff.

Let’s get back to the issues around worker talent, productivity, and the worker user experience. It seems to me we have a lot of the bits and parts for this. We have great apps, great technology, and cloud distribution. We are seeing interactivity via chatbots and robotic process automation (RPA).

Why do you think being at the middle is the right place to pull this all together? How can Citrix uniquely help, whereas none of the other individual parts can?

Empower the people, manage the tech

Henshall: It’s a problem they are all focused on solving. So take a SaaS application, for example. You have applications that are incredibly powerful, best of breed, and they allow for infinite flexibility. Therein lies part of the challenge. The vast majority of people are not power users. They are not looking for every single bell and whistle across a workflow. They are looking for the opportunity to get something done, and it’s usually something fairly simple.

We are designing an interface to help abstract away a lot of complexity from the end user so they can focus on the task more than the technology itself. It’s an interesting challenge because so much technology is focused on the tech and how great and powerful and flexible it is, and its makers lose sight of what people are trying to accomplish.

We start by working backward. We start with the end user, understand what they need to be productive, empowered, and engaged. We let that be a guiding principle behind our roadmap. That gives us flexibility to empathize, to understand more about customers.

We start by working backward. We start with the end user, understand what they need to be productive, empowered, and engaged. We let that be a guiding principle behind our roadmap. That gives us flexibility to empathize, to understand more about customers and end users more effectively than if we were building something purely for technology’s sake.

Gardner: For younger workers who have grown up all-digital all the time, they are more culturally attuned to being proactive. They want to go out and do things with choice. So culturally, time is on your side.

On the other hand, getting people to change their behaviors can be very challenging. They don’t know that it could be any better, so they can be resistant. This is more than working with an IT department on infrastructure. We are talking about changing people’s thinking and how they relate to technology.

How do you propose to do that? Do you see yourself working in an ecosystem in such a way that this is not just an “if we build it, they will come” affair, but evangelizing to the point where cognitive patterns can be changed?

Henshall: A lot of our relationships and conversations have been evolving over the last few years. We’ve been moving further up what I would call “the IT hierarchy.” We’re having conversations with CIOs now about broad infrastructure, about ways that we can help address the use cases of all their employees, not just those who historically needed all the power of virtualization.

But as we move forward, there is a large transformation going on. Whether we use terms like digital transformation or others, those are less technology conversations and more about business outcomes – more than at any time in my 30-year career.

Because of that, you’re not only engaging the CIO; you may have the same conversation with a line-of-business executive, a chief people officer, the chief financial officer (CFO), or someone in another functional organization. And this is because they’re all trying to accomplish a specific outcome more than focusing on the technology itself.

And that allows us to elevate the discussion in a way that is much more interesting. It allows us to think about the human and business outcomes more so than ever before. And again, it’s just one more extension of how we are getting out of the “technology for technology’s sake” view and much more into the, “What is it that we are actually trying to accomplish” view.

Gardner: David, as we tackle these issues, elevate the experience, and let people work the way they want, it seems we are also opening up the floodgates for addition of more intelligence.

Whether you call it artificial intelligence (AI), machine learning (ML), or augmented intelligence, the fact is that we are able to deal with more data, derive analytics from it, learn patterns, reapply those learnings, and repeat. So injecting that into work, and into how people get their jobs done, is the big question these days. People are trying to tackle it from a variety of different directions.

You have said that an advantage Citrix has is its access to data. What kind of data are we talking about, and why is that going to put Citrix in a leadership position?

Soup to nuts supervision of workflow 

Henshall: We have a portfolio that spans everything from the client device through the application, files, and the network. We are able to instrument many different parts of the entire workflow. We can capture information about how people are using technologies, what their usage patterns look like, where they are coming in from, and how the files are being used.

In most cases, we take that and apply it to contextual outcomes. For example, in the case of security, we have an analytics platform and we use those security analytics. We can create a risk score for an individual user’s behavior, very similar to a credit score, that flags when something anomalous happens. For example, you’re here with me and you’re in front of your computer, but you also tried to log on from another part of the globe at the same time.

Things like that can be flagged almost instantaneously, allowing the organization to identify and — in many cases — automatically address those types of scenarios. In that case, it may immediately ask for two-factor authentication.
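As an illustration of that kind of scoring, the sketch below folds a few behavioral signals into a single risk number and maps it to an automatic response. The signals, weights, and thresholds are made-up assumptions, not the model Citrix Analytics actually uses.

```python
def risk_score(signals: dict) -> int:
    """Combine behavioral signals into a 0-100 score (illustrative weights)."""
    score = 0
    if signals.get("impossible_travel"):        # two logins, two continents, same hour
        score += 50
    if signals.get("new_device"):
        score += 20
    if signals.get("unusual_download_volume"):  # far above the user's normal baseline
        score += 25
    if signals.get("off_hours_access"):
        score += 10
    return min(score, 100)

def respond(score: int) -> str:
    """Map the score to an automatic action, as a policy engine might."""
    if score >= 70:
        return "block_session_and_alert_security"
    if score >= 40:
        return "require_second_factor"
    return "allow"

# Example: simultaneous logins from two parts of the globe on a known device.
print(respond(risk_score({"impossible_travel": True})))  # require_second_factor
```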

We are not capturing personally identifiable information (PII) and other types of broader data that fall under a privacy umbrella. We access a lot of anonymized things that provide the insights.

Citrix operates in about 100 countries around the world. We are already very familiar with local compliance and data privacy regulations. We are making sure that we can operate within those and give our customers in those markets the tools to make sure they are operating within those constraints as well.

Every company has [had privacy discussions] and will continue to evolve over time as technology evolves because the underlying platforms are becoming very powerful. Citrix operates in about 100 countries around the world. We are already very familiar with local compliance and data privacy regulations. We are making sure that we can operate within those and certainly give our customers in those markets the tools to make sure that they are operating effectively within the constraints as well.

Gardner: The many resources people rely on to do their jobs come from different places — public clouds, private clouds, a hybrid between them, different SaaS providers, and different legacy systems of record.

You are in a unique position in the middle of that. You can learn from it and begin to suggest how people can improve. Those patterns can be really powerful. It’s not something we’ve been able to do before.

What do we call that? Is it AI? Or a valet or digital assistant that helps in your work while protecting privacy and adhering to all the laws along the way? And where do you see that going in terms of having an impact on the economy and on companies?

AI, ML to assist and automate tasks

Henshall: Two very broad questions. From the future standpoint, AI and ML capabilities are helping turn all the data we have into more useful or actionable information. And in our case, you mentioned virtual assistants. We will be using intelligent assistants to help you automate simple tasks.

And many of those could be tasks that span applications. For example, you could ask your assistant to move a meeting to next Thursday, or to any time your other meeting participants happen to be available. The bots will go out, search for that optimal time, and take those actions. Those are the types of things we envision for the virtual assistants going forward, and I think those will be interesting.

Beyond that, it becomes a learning mechanism whereby we can identify that your bot came back and told you you’ve had the same conflict two meetings in a row. Do you want to change all future meetings so that this doesn’t happen again? It can become much more predictive.
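Under the hood, the scheduling piece of that assistant reduces to a small algorithm: intersect the participants' free time and propose the earliest slot long enough for the meeting. Below is a minimal sketch, assuming each calendar is just a list of busy intervals expressed in hours; a real assistant would pull these from each calendar service's API.

```python
def find_common_slot(busy_calendars, day_start=9, day_end=17, duration=1):
    """Return the earliest start hour at which every participant is free, or None.

    busy_calendars: one list of (start, end) busy intervals per participant, in hours.
    """
    def is_free(busy, start, end):
        return all(end <= b_start or start >= b_end for b_start, b_end in busy)

    start = day_start
    while start + duration <= day_end:
        if all(is_free(busy, start, start + duration) for busy in busy_calendars):
            return start
        start += 0.5  # check in half-hour steps

    return None

# Example: three participants' busy blocks; the earliest common free hour is noon.
calendars = [[(9, 11), (14, 15)], [(10, 12)], [(9, 10), (15, 17)]]
print(find_common_slot(calendars))  # -> 12.0
```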

And so, this journey that Citrix has been on for many years started with helping to simplify IT so that it became easier to deliver the infrastructure. The second part of that journey was making it easier for people to consume those resources across the complexities we have talked about.

Now, the products we announced at our May 2019 Citrix Synergy Conference are more about guiding work to help simplify the workflows. We will be doing more in this last space on how to anticipate what you will need so that we can automate it ahead of time. And that’s an interesting journey. It will take a few years to get there, but it’s going to be pretty powerful when we do.

Gardner: As you’re conducting product development, I assume you’re reflecting these capabilities back to your own workforce, the Citrix global talent pool. Do you drink your own champagne? What are you finding? Does it give you a sense as the CEO that your workforce has an advantage by employing these technologies? Do we have any proof points that the productivity is in fact enhanced?

Henshall: It’s still early days. A lot of these are brand-new technologies that don’t have enough of a base of learning yet.

But some of the early learnings can identify areas where you’re multitasking too much or are stuck in an inefficient process. In my case, I tend to look at how much I am multitasking inside of a meeting as an opportunity to automate. That helps me understand whether I should be in that meeting in the first place, whether I am 100 percent focused and committed — or whether I have been distracted by other elements.

Those are interesting learnings that are more about personal productivity and how we can optimize from that respect.

More broadly speaking, our workforce is globally distributed. We absolutely drink our own champagne when it comes to engaging a global team. We have teams now in about 40 countries around the world and we are very, very virtual. In fact, among my leadership team, I am the only member that lives full-time in [Citrix’s headquarters] in South Florida. We make that work because we embrace all of our own technology, stay on top of common projects, communicate across all the various mediums, and collaborate where need be.

That allows us to tap into nontraditional workforce populations, to differentiate, and enable folks who need different types of flexibility for their own lifestyles. You miss great talent if you are far too rigid. Personally, I believe the days are gone when everybody is expected to work inside a corporate headquarters. It’s just not practical anymore.

Gardner: For those businesses that recognize there is tremendous change afoot, are using new models like cloud, and don’t want complexity to outstrip productivity – what advice do you have for them as they start digital transformation efforts? What should they be putting in place now to take advantage of what companies like Citrix will be providing them in a few years?

Business-first supports global collaboration 

Henshall: The number one thing on any digital transformation project is to be crystal clear about the outcome you are trying to achieve. Start with the outcome and work backward. You can leverage platforms like Citrix, for example, to look across multiple technologies, focus on those business outcomes, and leave the technology decision, in many cases, until last. It shouldn’t be the other way around, because if it is, you will self-limit what those outcomes should be.

Make sure you have buy-in across all stakeholders. As I talked about earlier, have a conversation with the CFO, head of marketing, head of human resources, and many others. Look for breadth of outcomes, because you don’t want to solve problems for one small team, you want to solve problems across the enterprise. That’s where you get the best leverage. It allows you the best opportunity to simplify the complexity that has built up over the last 30 to 40 years. This will help people get out from under that problem.

Gardner: Lastly, for IT departments specifically, the people who have been most aware of Citrix as a brand, how should IT be thinking about entering this new era of focusing on work and productivity? What should IT be thinking about to transform themselves to be in the best position to attain these better business outcomes?

Henshall: I have already seen the transformation happening. Most IT administrators want to focus on larger business problems, more than just maintaining the existing infrastructure. Unfortunately, the budgets have been relatively limited for innovation because of all the complexity we have talked about.

But my advice for everyone is, take a step back, understand how to be the champion of the business, to be the hero by providing great outcomes, great experiences, and higher productivity. That’s not a technology conversation first and foremost. Obviously it has a technology element but understand and be empathetic of the needs of the business. Then work backward, and Citrix will help you get there.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Citrix.


How real-time data streaming and integration set the stage for AI-driven DataOps

The next BriefingsDirect business intelligence (BI) trends discussion explores the growing role of data integration in a multi-cloud world.

Just as enterprises seek to gain more insights and value from their copious data, they’re also finding their applications, services, and raw data spread across a continuum of hybrid and public clouds. Raw data is also piling up closer to the edge — on factory floors, in hospital rooms, and anywhere digital business and consumer activities exist.

Stay with us now as we examine the latest strategies for uniting and governing data wherever it resides. By doing so, businesses are enabling rapid and actionable analysis — as well as entirely new levels of human-to-augmented-intelligence collaboration.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To learn more about the foundational capabilities that lead to total data access and exploitation, we’re now joined by Dan Potter, Vice President of Product Marketing at Attunity, a Division of Qlik. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Dan, what are the business trends forcing a new approach to data integration?

Potter: It’s all being driven by analytics. The analytics world has gone through some very interesting phases of late: Internet of Things (IoT), streaming data from operational systems, artificial intelligence (AI) and machine learning (ML), predictive and preventative kinds of analytics, and real-time streaming analytics.

So, it’s analytics driving data integration requirements. Analytics has changed the way in which data is being stored and managed for analytics. Things like cloud data warehouses, data lakes, streaming infrastructure like Kafka — these are all a response to the business demand for a new style of analytics.

Potter

As analytics drives data management changes, the way in which the data is being integrated and moved needs to change as well. Traditional approaches to data integration – such as batch processes, ETL, and script-oriented integration – are no longer good enough. All of that is changing. It’s all moving to a much more agile, real-time style of integration that’s being driven by things like the movement to the cloud, the need to move more data in greater volume and variety into data lakes, and the need to shape that data and make it analytics-ready.

With all of these movements, there have been new challenges and new technologies. The pace of innovation is accelerating, and the challenges are growing. The demand for digital transformation and the move to the cloud has changed the landscape dramatically. With that came great opportunities for us as a modern data integration vendor, but also great challenges for companies that are going through this transition.

Gardner: Companies have been doing data integration since the original relational database (RDB) was kicked around. But it seems the core competency of managing the integration of data is more important than ever.

Innovation transforms data integration

Potter: I totally agree, and if done right, in the future, you won’t have to focus on data integration. The goal is to automate as much as possible because the data sources are changing. You have a proliferation of NoSQL databases and graph databases; it’s no longer just an Oracle database or RDB. You have all kinds of different data. You have different technologies being used to transform that data. Things like Spark have emerged along with other transformation technologies that are real-time-oriented. And there are different targets that this data is being transformed and moved to.

It’s difficult for organizations to maintain those skill sets — and you don’t want them to. We want to move to an automated process of data integration. The more we can achieve that, the more valuable all of this becomes. You don’t spend time with mundane data integration; you spend time on the analytics — and that’s where the value comes from.

Gardner: Now that Attunity is part of Qlik, you are an essential component of a larger undertaking, of moving toward DataOps. Tell me why automated data migration and integration translates into a larger strategic value when you combine it with Qlik?

Potter: DataOps resonates well for the pain we’re setting out to address. DataOps is about bringing the same discipline that DevOps has brought to software development. Only now we’re bringing that to data and data integration for analytics.

How do we accelerate and remove the gap between IT, which is charged with providing analytics-ready data to the business, and all of the various business and analytics requirements? That’s where DataOps comes in. DataOps is technology, but that’s just a part of it. It’s as much or more about people and process — along with enabling technology and modern integration technology like Attunity.

We’re trying to solve a problem that’s been persistent since the first bit of data hit a hard drive. Data integration challenges will always be there, but we’re getting smarter about the technology that you apply and gaining the discipline to not boil the ocean with every initiative.

The new goal is to get more collaboration around what business users need and to automate the delivery of analytics-ready data, knowing full well that the requirements are going to change often. You can be much more responsive to those business changes, bring in additional datasets, and prepare that data in different ways and in different formats so it can be consumed with different analytics technologies.

That’s the big problem we’re trying to solve. And now, being part of Qlik gives us a much broader perspective on these pains as they relate to the analytics world. It gives us a much broader portfolio of data integration technologies. The Qlik Data Catalyst product is a perfect complement to what Attunity does.

Our role in data integration has been to help organizations move data in real-time as that data changes on source systems. We capture those changes and move that data to where it’s needed — like a cloud, data lake, or data warehouse. We prepare and shape that data for analytics.

Qlik Data Catalyst then comes in to catalog all of this data and make it available to business users so they can discover and govern that data. And it easily allows for that data to be further prepared, enriched, or to create derivative datasets.

So, it’s a perfect marriage in that the data integration world brings together the strength of Attunity with Qlik Data Catalyst. We have the most purpose-fit, modern data integration technology to solve these analytics challenges. And we’re doing it in a way that fits well with a DataOps discipline.

Gardner: We not only have the different data types, we have another level of heterogeneity to contend with and that’s cloud, hybrid cloud, multi-cloud, and edge. We don’t even know what more is going to be coming in two or three years. How does an organization stay agile given that level of dynamic complexity?

Real-time analytics deliver agility 

Potter: You need a different approach for a different style of integration technology to support these topologies that are themselves very different. And what the ecosystem looks like today is going to be radically different two years from now.

The pace of innovation just within the cloud platform technologies is very rapid. New databases, transformation engines, and orchestration engines just keep proliferating. And now you have multiple cloud vendors. There are great reasons for organizations to use multiple clouds, to use the best of the technologies or approaches that work for your organization, your workgroup, your division. You need to prepare yourself for that, and modern integration approaches definitely help.

One of the interesting technologies to help organizations provide ongoing agility is Apache Kafka. Kafka is a way to move data in real-time and make the data easy to consume even as it’s flowing. We see that as an important piece of the evolving data infrastructure fabric.

At Attunity we create data streams from systems like mainframes, SAP applications, and RDBs. These systems weren’t built to stream data, but we stream-enable that data. We publish it into a Kafka stream and that provides great flexibility for organizations to, for example, process that data in real time for real-time analytics such as fraud detection. It’s an efficient way to publish that data to multiple systems. But it also provides the agility to be able to deliver that data widely and have people find and consume that data easily.
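As a rough sketch of that pattern (using the kafka-python client; the broker address, topic name, and toy fraud rule are assumptions, not Attunity's or any customer's actual configuration), publishing captured changes to Kafka and consuming them in real time could look like this:

```python
import json
from kafka import KafkaProducer, KafkaConsumer  # pip install kafka-python

BROKERS = ["localhost:9092"]      # assumed broker address
TOPIC = "orders.changes"          # assumed topic carrying change events

# Producer side: publish each captured change as a JSON message.
producer = KafkaProducer(
    bootstrap_servers=BROKERS,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send(TOPIC, {"op": "INSERT", "account": "A-17", "amount": 9_500.0})
producer.flush()

# Consumer side: any number of downstream systems can read the same stream.
consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BROKERS,
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
    consumer_timeout_ms=5_000,    # stop iterating when no new messages arrive
)
for message in consumer:
    event = message.value
    if event["amount"] > 5_000:   # toy stand-in for a real fraud-detection rule
        print("flag for review:", event)
```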

Such new, evolving approaches enable a mentality that says, “I need to make sure that whatever decision I make today is going to future-proof me.” So, setting yourself up right and thinking about that agility and building for agility on day one is absolutely essential.

Gardner: What are the top challenges companies have for becoming masterful at this ongoing challenge — of getting control of data so that they can then always analyze it properly and get the big business outcomes payoff?

Potter: The most important competency is at the enterprise architecture (EA) level, more than with the people who traditionally build ETL scripts and integration routines. I think those are the pieces you want to automate.

The real core competency is to define a modern data architecture and build it for agility so you can embrace the changing technologies and requirements landscape. It may be that you have all of your eggs in one cloud vendor today. But you certainly want to set yourself up so you can evolve and push processing to the most efficient place, and to attain the best technology for the kinds of analytics or operational workloads you want.

That’s the top competency that organizations should be focused on. As an integration vendor, we are trying to reduce the reliance on technical people to do all of this integration work in a manual way. It’s time-consuming, error-prone, and costly. Let’s automate as much as we can and help companies build the right data architecture for the future.

Gardner: What’s fascinating to me, Dan, in this era of AI, ML, and augmented intelligence is that we’re not just creating systems that will get you to that analytic opportunity for intelligence. We are employing that intelligence to get there. It’s tactical and strategic. It’s a process, and it’s a result.

How do AI tools help automate and streamline the process of getting your data lined up properly?

Automated analytics advance automation 

Potter: This is an emerging area for integration technology. Our focus initially has been on preparing data to make it available for ML initiatives. We work with vendors such as Databricks at the forefront of processing, using a high performance Spark engine and processing data for data science, ML, and AI initiatives.

We need to ask, “How do we bring cognitive engines, things like Qlik, to the fore within our own technology and get smarter about the patterns of integration that organizations are deploying, so we can further automate?” That’s really the next wave for us.

Gardner: You’re not just the president, you’re a client.

Potter: Yeah, that’s a great way to put it.

Gardner: How should people prepare for such use of intelligence?

Potter: If it’s done right — and we plan on doing it right — it should be transparent to the users. This is all about automation done right. It should just be intuitive. Going back 15 years when we first brought out replication technology at Attunity, the idea was to automate and abstract away all of the complexity. You could literally drag your source, your target, and make it happen. The technology does the mapping, the routing, and handles all the errors for me. It’s that same elegance. That’s where the intelligence comes in, to make it so intuitive that you are not seeing all the magic that’s happening under the covers.

We follow that same design principle in our product. As the technologies get more complex, it’s harder for us to do that, so applying ML and AI becomes even more important to us. That’s really the future for us: as we automate more of these processes, we take on more of what is happening under the covers so users don’t have to.

Gardner: Dan, are there any examples of organizations on the bleeding edge? They understand the data integration requirements and core competencies. They see this through the lens of architecture.

Automation ensures insights into data 

Potter: Zurich Insurance is one of the early innovators in applying automation to their data warehouse initiatives. Zurich had been moving to a modern data warehouse to better meet the analytics requirements, but they realized they needed a better way to do it than in the past.

Traditional enterprise data warehousing employs a lot of people, building a lot of ETL scripts. It tends to be very brittle. When source systems change you don’t know about it until the scripts break or until the business users complain about holes in their graphs. Zurich turned to Attunity to automate the process of integrating, moving it to real-time, and automatically structuring their data warehouse.

Their time to respond to business users is now a fraction of what it was: they reduced 45-day cycles to two-day cycles for updating and building out new data marts for users. Their agility is off the charts compared to the traditional way of doing it. They can now better meet the needs of the business users through automation.

As organizations move to the cloud to automate processes, a lot of customers are embracing data lakes. It’s easy to put data into a data lake, but it’s really hard to derive value from the data lake and reconstruct the data to make it analytics-ready.

For example, you can take transactions from a mainframe and dump all of those things into a data lake, which is wonderful. But how do I create any analytic insights? How do I ensure all those frequently updated files I’m dumping into the lake can be reconstructed into a queryable dataset? The way people have done it in the past is manually, writing scripts in Pig and other languages to try to reconstruct it. We fully automate that process. For companies using Attunity technology, our big investments in data lakes have had a tremendous impact on demonstrating value.
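Conceptually, that reconstruction step resembles the minimal PySpark sketch below, which collapses a folder of change records into the latest row per key. The path, column names, and delete flag are illustrative assumptions; this is not Attunity's automated implementation.

```python
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("cdc-reconstruct").getOrCreate()

# Assumed layout: each change record carries a key, a change timestamp,
# an operation code, and the row's column values.
changes = spark.read.json("s3://lake/raw/orders_changes/")  # hypothetical path

latest = Window.partitionBy("order_id").orderBy(F.col("change_ts").desc())

current_state = (
    changes
    .withColumn("rn", F.row_number().over(latest))
    .where(F.col("rn") == 1)              # keep only the newest change per key
    .where(F.col("op") != "DELETE")       # drop rows whose last change deleted them
    .drop("rn", "op")
)

# Persist a queryable, analytics-ready table instead of raw change files.
current_state.write.mode("overwrite").parquet("s3://lake/curated/orders/")
```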

Gardner: Attunity recently became part of Qlik. Are there any clients that demonstrate the two-plus-two-equals-five effect when it comes to Attunity and the Qlik Data Catalyst catalog?

DataOps delivers the magic 

Potter: It’s still early days for us. As we look at our installed base — and there is a lot of overlap between who we sell to — the BI teams and the data integration teams in many cases are separate and distinct. DataOps brings them together. 

In the future, as we take the Qlik Data Catalyst and make that the nexus of where the business side and the IT side come together, the DataOps approach leverages that catalog and extends it with collaboration. That’s where the magic happens.

So business users can more easily find the data. They can send requirements back to the data engineering team as they need them. And, again, applying AI and ML to the patterns we see on the analytics side will help us better match the data that’s required and automate the delivery and preparation of that data for different business users.

That’s the future, and it’s going to be very interesting. A year from now, after being part of the Qlik family, we’ll bring together the BI and data integration side from our joint customers. We are going to see some really interesting results.

Gardner: As this next, third generation of BI kicks in, what should organizations be doing to get prepared? What should the data architect, who is starting to think about DataOps, do to put them in an advantageous position to exploit this when the market matures?

Potter: First, they should be talking to Attunity. We get engaged early and often in many of these organizations. The hardest job in IT right now is [to be an] enterprise architect, because there are so many moving parts. But we have wonderful conversations because at Attunity we’ve been doing this for a long time, we speak the same language, and we bring a lot of knowledge and experience from other organizations to bear. It’s one of the reasons we have deep strategic relationships with many of these enterprise architects and others on the IT side of the house.

They should be thinking about what the next wave is and how best to prepare for it. Foundationally, moving to more real-time, streaming integration is an absolute requirement. You can take our word for it, or go talk to analysts and other peers about the need for real-time data and streaming architectures, and how important they are going to be in the next wave.

So, prepare for that, and think about the agility and automation that will get you the desired results. Organizations that aren’t preparing now are going to be left behind, and if IT is left behind, the business is left behind. It is a very competitive world, and organizations are competing on data and analytics. The faster you can deliver the right data and make it analytics-ready, the faster and better decisions you can make, and the more successful you’ll be.

So it really is a do-or-die proposition. That’s why data integration is strategic: it unlocks the value of this data, and if you do it right, you’re going to set yourself up for long-term success.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Qlik.


How HCI forms a simple foundation for hybrid cloud, edge, and composable infrastructure

The next BriefingsDirect Voice of the Innovator podcast discussion explores the latest insights into hybrid cloud and hyperconverged infrastructure (HCI) strategies.

Speed to business value and simplicity in deployments have been top drivers of the steady growth around HCI solutions. IT operators are now looking to increased automation, built-in intelligence, and robust security as they seek such turnkey appliance approaches for both cloud and traditional workloads.

Stay with us now as we examine the rapidly evolving HCI innovation landscape, which is being shaped just as much by composability, partnerships, and economics, as it is new technology.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to help us learn more about the next chapter of automated and integrated IT infrastructure solutions is Thomas Goepel, Chief Technologist for Hyperconverged Infrastructure at Hewlett Packard Enterprise (HPE). The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Thomas, what are the top drivers now for HCI as a business tool? What’s driving the market now, and how has that changed from a few years ago?

Goepel

Goepel: HCI has gone through a really big transformation in the last few years. When I look at how it originally started, it was literally people looking for a better way of building virtual desktop infrastructure (VDI) solutions. They wanted to combine servers and storage in a single device and make it easier to operate.

What I am seeing now is HCI spreading throughout datacenters and becoming one of the core elements of a lot of the datacenters around the world. The use cases have significantly been expanded. It started out with VDI, but now people are running all kinds of business applications on HCI — all the way to critical databases  like SAP HANA.

Gardner: People are using HCI in new ways. They are innovating in the market, and that often means they do things with HCI that were not necessarily anticipated. Do you see that happening with HCI?

Ease of use encourages HCI expansion 

Goepel: Yes, it’s happened with HCI quite a bit. The original use cases were very much focused on VDI and end-user computing. It was just a convenient way of having a platform for all of your virtual desktops and an easy way of managing them.

But people saw that ease of management could be extended to other use cases. They began to bring core business applications, such as Microsoft Exchange or SharePoint, onto the platform, saw there were more and more things they could put there, and gained the full simplicity that hyperconverged brings to operating the environment.

How Hyperconverged Infrastructure Delivers Unexpected Results for VDI Users

You no longer had to build a separate server farm, separate storage farm, or even manage your network independently. You could now do all of that from a single interface, a single-entry point, and gain a single point of management. Then people said, “Well, this ease makes it so beneficial for me, why don’t we bring the other things in here?” And then we saw it spread out in the data centers.

What we now have is people saying, “Hey, let me take this a step further. If I have remote offices, branch offices, or edge use-cases where I also need compute resources, why not try to take HCI there? Because typically on the edge I don’t even have system administrators, so I can take this entire simplicity down to this point, too.”

And the nice thing with hyperconvergence is that — at least in the HPE version of hyperconvergence, which is HPE SimpliVity — it’s not only simple to manage, it has also built in all of the enterprise features such as high availability and data efficiency, so it makes it really a robust solution. It has come a very long way on this journey.

Gardner: Thomas, you mentioned the role of HCI at the edge gaining traction and innovation. What’s a typical use case for this sort of micro datacenter at the edge? How does that work?

Losing weight with HCI wins the race

Goepel: Let me give you a really good example of a super-fast-paced industry: Formula One car racing. It really illustrates how edge is having an impact — and also how this has a business impact.

One of our customers, Aston Martin Red Bull Racing, has been very successful in Formula One racing. The rules of the International Automobile Federation (FIA), the governing body of Formula One racing, say that each race team can only bring a certain amount of weight to a racetrack during the races.

This is obviously a high-tech race. They are adjusting the car during the race, lap by lap, making adjustments based on the real-time performance of the car to get the last inch possible out of the car to win that race. All of these cars are very close to each other from a performance perspective.

Traditionally, they shipped racks and racks of IT gear to the racetrack to calculate the performance of the car and make adjustments during the race. They have now replaced all of those racks with HPE SimpliVity HCI and significantly reduced the amount of equipment, which means significantly less weight to bring to the racetrack.

How Hyperconvergence Plays Pivotal Role at Red Bull

There are two benefits. First, reducing the weight of the IT gear allows them to bring additional things to the racetrack because what counts is the total weight – and that includes the car, spare parts, people, equipment — everything. There is a certain mandated limit.

By taking that weight out, having less IT equipment on the racetrack, the HCI allows them to bring extra personnel and spare parts. They can perform better in the races.

The other benefit is that HCI performs significantly better than traditional IT infrastructure. They can now make adjustments within one lap of the race, versus the three laps it took them before.

This is a huge competitive advantage. When you look at the results, they are doing great when it comes to Formula One racing, especially for being a smaller team compared to the big teams out there.

From that perspective, at the edge, HCI is making some big improvements, not only in a high-end industry like Formula One racing, but in all kinds of other industries, including manufacturing and retail. They are seeing similar benefits.

Gardner: I wrote a research paper about four years ago, Thomas, that laid out the case that HCI will become a popular on-ramp to private clouds and ultimately hybrid cloud. Was I ahead of my time?

HCI on-ramp to the clouds

Goepel: Yes, I think you were a little bit ahead of your time. But you were also a visionary to lay out that groundwork. When you look at the industry, hyperconvergence is a fast-growing industry segment. When it comes to server and data center infrastructure, HCI has the highest growth rate across the entire IT industry.

What you were foreseeing four years ago is exactly what we now have, and I don’t see an end anytime soon. HCI continues to grow as people discover new use cases. The edge is one new element, but we are just scratching the surface.

Edge use cases are a fascinating new world in general — from such distributed environments as smart cities and smart manufacturing. We are just starting to get into this world. There’s a huge opportunity for innovation and this will become an attractive area for hyperconvergence. 

Gardner: How does HCI innovation align with other innovations at HPE around automation, composability, and intelligence derived to make IT behave as total solutions? Is there a sense that the whole is greater than the sum of the parts?

HCI innovations prevent problems

Goepel: Absolutely there is. We have leveraged a lot of innovation in the broader HPE ecosystem, including the latest generation of the ProLiant DL380 Server, the most secure server in the industry. All of these elements feed into the HPE SimpliVity HCI platform, too.

But we are not stopping there. A lot of other innovations in the HPE ecosystem are being brought into hyperconvergence. A perfect example is HPE InfoSight, a management platform that allows you to operate your infrastructure better by understanding what’s going on in a very efficient way. It uses artificial intelligence (AI) to detect when something is going wrong in your IT environment so you can proactively take action and don’t end up with a disaster.

How to Tell if Your Network Is Really Aware of Your Infrastructure

HPE InfoSight originally started out in storage, but we are now taking it into the full HPE SimpliVity HCI ecosystem. It’s not just a support portal, it gives you intelligence to understand what’s going on before you run into problems. Those problems can be solved so your environment keeps running at top performance. You’ll have what you need to run any mission-critical business on HCI. 

More and more of these innovations in our ecosystem will be brought into the hyperconverged world. Another example is around composability. We have been developing a lot of platform capabilities around composability and we are now bringing HPE SimpliVity and composability together. This allows customers to actually change the infrastructure’s personality depending on the workload, including bringing on HPE SimpliVity. You can get the best of these two worlds.

This leads to building a private cloud environment that can be easily connected to a public cloud or clouds. You will ultimately build out a hybrid IT environment in such a way that your private cloud, or on-premises, environment runs in the most optimized way for your business and for your specific needs as a company.

Gardner: You are also opening up that HCI ecosystem with new partners. Tell us how innovation around hyperconverged is broadening and making it more ecumenical for the IT operations consumer.

Welcome to the hybrid world

Goepel: HPE has always been an open player. We never believed in locking down an environment or making it proprietary and basically locking out everyone else. We have always been a company that listens to what our customers want, what our customers need, and then give them the best solution.

Now, customers are looking to run their HCI environment on HPE equipment and infrastructure because they know that this is reliable infrastructure. It is working, and they feel comfortable with it, and they trust it. But we also have customers who say, “Hey, you know, I want to run this piece of software or that solution on this HPE environment. Can you make sure this runs and works perfectly?”

We are in a hybrid world. And in a hybrid world there is not a single vendor that can cover the entire hybrid market. We need to innovate in such a way that we allow an ecosystem of partners to all come together and work collaboratively and jointly to provide new solutions.

We have recently announced new partnerships with other software vendors, and that includes HPE GreenLake Flex Capacity. With that, instead of making big, upfront investments in equipment, you can consume the solution in a more innovative way financially. It brings about the solution that solves the customer’s real problems, rather than locking the customer into a particular infrastructure.

Flexibility improves performance 

Gardner: You are broadening the idea of making something consumable when you innovate, not only around the technology and the partnerships, but also the economic model, the consumption model. Tell us more about how HPE GreenLake Flex Capacity and acquiring a turnkey HPE SimpliVity HCI solution can accelerate value when you consume it, not as a capital expense, but as an operating cost affair.

Goepel: No industry is 100 percent predictable, at least I haven’t seen it, and I haven’t found it. Not even the most conservative government institution that has a five-year plan is predictable. There are always factors that will disrupt that predictability plan, and you have to react to that.

How Hyperconverged Infrastructure Solves Unique Challenges for Datacenters at the Edge

Traditionally, what we have done in the industry is oversize our environments to account for anticipated growth over five years, then add another 25 percent on top of it, and another 10 percent cushion on top of that, hoping we did not undersize the environment by the time the equipment reached the end of its life.

That is a lot of capital invested in something that just sits there with no value and no use, basically standing around, and that you eventually write off your books from a financial perspective.

Now, HPE GreenLake gives you a flexible-capacity model. You only pay literally for what you consume. If you grow faster than you anticipated, you just use more. If you grow slower, you use less. If you have an extremely successful business — but then something in the economic model changes and your business doesn’t perform as you have anticipated — then you can reduce your spending. That flexibility better supports your business.
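A quick back-of-the-envelope illustration of the oversizing problem (all numbers invented for the example, not HPE pricing or sizing guidance):

```python
# Hypothetical numbers, purely to illustrate the oversizing problem.
projected_peak = 150                      # capacity planners expect in year 5
purchased = projected_peak * 1.25 * 1.10  # +25% growth buffer, +10% cushion
actual_need = [100, 115, 125, 130, 140]   # capacity actually used, years 1-5

for year, used in enumerate(actual_need, start=1):
    idle = purchased - used
    print(f"Year {year}: using {used} of {purchased:.0f} units "
          f"({used / purchased:.0%}); {idle:.0f} units sit idle")

# In a consumption model such as HPE GreenLake, the billed capacity would
# track the `actual_need` line instead of the fixed `purchased` line.
```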

We are ultimately doing IT to help our businesses to perform better. IT shouldn’t be a burden that slows you down, it should be an accelerator. By having a flexible financial model, you get exactly that. HPE GreenLake allows you to scale up and scale down your environment based on your business needs with the right financial benefits behind it.

Gardner: There is such a thing as too much of a good thing. And I suppose that also applies to innovation. If you are doing so many new and interesting things — allowing for hybrid models to accelerate and employing new economic models — sometimes things can spin out of control.

But you can also innovate around management to prevent that from happening. How does management innovation fit into these other aspects of a solution, to keep it from getting out of control?

Checks and balances extend manageability

Goepel: You bring up a really good point. One of the things we have learned as an industry is that things can spin out of control very quickly. And for me, the best example is when I go back two years when people said, “I need to go to the cloud because that is going to save my world. It’s going to reduce my costs, and it’s going to be the perfect solution for me.”

What happened is people went all-in for the cloud, and every developer and IT person heard, “Hey, if you need a virtual machine, just get it from whatever your favorite cloud provider is. Go for it.” People very quickly learned that this exploded their costs. There was no control, no checks and balances.

On both the HCI and general IT side, we have learned from that initial mistake in the public cloud and have put the right checks and balances in place. HPE OneView is our infrastructure management platform that allows the system administrator to operate the infrastructure from a single-entry point or single point of view.

How Hyperconverged Infrastructure Helps Trim IT Complexity Without Sacrificing Quality

That gives you a very simple way of managing and plays along with the way HCI is operated — from a single point of view. You don’t have five consoles or five screens, you literally have one screen you operate from. 

You need to have a common way of managing checks and balances in any environment. You don’t want the end user or every developer to go in there and just randomly create virtual machines, because then your HCI environment quickly runs out of resources, too. You need to have the right access controls so that only people that have the right justification can do that, but it still needs to happen quickly. We are in a world where a developer doesn’t want to wait three days to get a virtual machine. If he is working on something, he needs the virtual machine now — not in a week or in two days.
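A minimal sketch of that kind of guardrail, assuming invented roles, quotas, and thresholds (this is not an HPE OneView feature), might look like this: approve instantly when a request fits policy, and route it for review when it does not.

```python
# Hypothetical self-service policy: fast by default, controlled at the edges.
QUOTA_BY_ROLE = {"developer": 8, "qa": 4, "analyst": 2}   # max VMs per person
LARGE_VM_VCPUS = 16                                        # anything bigger needs review

def review_vm_request(role: str, current_vms: int, requested_vcpus: int) -> str:
    """Return 'approve', 'needs-review', or 'deny' for a VM request."""
    quota = QUOTA_BY_ROLE.get(role)
    if quota is None:
        return "deny"                      # unknown role: no self-service
    if current_vms >= quota:
        return "needs-review"              # over quota: a human decides
    if requested_vcpus > LARGE_VM_VCPUS:
        return "needs-review"              # unusually large: a human decides
    return "approve"                       # within policy: provision immediately

print(review_vm_request("developer", current_vms=3, requested_vcpus=4))   # approve
print(review_vm_request("developer", current_vms=9, requested_vcpus=4))   # needs-review
```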

Similarly, when it comes to a hybrid environment — when we bring together the private cloud and the public cloud — we want a consistent view across both worlds. So this is where HPE OneSphere comes in. HPE OneSphere is a cloud management platform that manages hybrid clouds, so private and public clouds. 

It allows you to gain a holistic view of what resources you are consuming, what’s the cost of these resources, and how you can best distribute workloads between the public and private clouds in the most efficient way. It is about managing performance, availability, and cost. You can put in place the right control mechanisms to curb rogue spending, and control how much is being consumed and where.

Gardner: From all of these advancements, Thomas, have you made any personal observations about the nature of innovation? What is it about innovation that works? What do you need to put in place to prevent it from becoming a negative? What is it about innovation that is a force-multiplier from your vantage point?

Faster is better 

Goepel: The biggest observation I have is that innovation is happening faster and faster. In the past, it took quite a while to get innovation out there. Now it is happening so fast that one innovation comes, then the next one just basically runs over it, and we are taking advantage of it, too. This is just the nature of the world we are living in; everything is moving much faster. 

There are obviously some really great benefits from the innovation we are seeing. We have talked about a few of them, like AI and how HCI is being used in edge use-cases. In manufacturing, hospitals, and these kinds of environments, you can now do things in better and more efficient ways. That’s also helping on the business side.

How One Business Took Control of their Hybrid Cloud

But there’s also the human factor, because innovation makes things easier for us or makes it better for us to operate. A perfect example is in hospitals, where we can provide the right compute power and intelligence to make sure patients get the right medication. It is controlled in a good way, rather than just somebody writing on a piece of paper and hoping the next person can read it. You can now do all of these things electronically, with the right digital intelligence to ensure that you are actually curing the patient.

I think we will see more and more of these types of examples happening and bringing compute power to the edge. That is a huge opportunity, and there is a lot of innovation in the next two to three years, specifically in this segment, and that will impact everyone’s life in a positive way. 

Gardner: Speaking of impacting people’s lives, I have observed that the IT operator is being greatly impacted by innovation. The very nature of their job is changing. For example, I recently spoke with Gary Thome, CTO for Composable Cloud at HPE, and he said that composability allows for the actual consumers of applications to compose their own supporting infrastructure.

Because of ease, automation, and intelligence, we don’t necessarily need to go to IT to say, “Set up XYZ infrastructure with these requirements.” Using composablity, we can move innovation to the very people who are in the most advantageous position to define what it is they need.

Thomas, how do you see innovation impacting the very definition of what IT people do?

No more mundane tasks 

Goepel: This is a very positive impact, and I will give you a really good example. I spend a lot of time talking to customers and to a lot of IT people out there. And I have never encountered a single systems administrator in this industry who comes to work in the morning and says, “You know, I am so happy that I am here this morning so I can do a backup of my environment. It’s going to take me four hours, and I am going to be the happiest person in the world if the backup goes through.” Nobody wants to do this. 

Nobody goes to work in the morning and says, “You know, I really hope I get a hard problem to solve today, like my network crashing so I can be the hero who fixes it, or making a configuration change in my virtual environment.”

These are boring tasks that nobody looks forward to, but we have to do them because we don’t have the right automation or the right management tools in our environments. We hand a lot of these mundane tasks to our administrators and make them do them.

How Hyperconverged Infrastructure Gives You 54 Minutes Back Every Hour

Innovation takes these burdens away from the systems administrator and frees up their time to do things that are not only more interesting, but also add to the bottom line of the company. They can better help drive the businesses and spend IT resources on something that makes the difference for the company’s bottom line.

Ultimately, you don’t want to be the one watching backups going through or restoring files. You want this to be automatic, with a couple of clicks, and then you spend your time on something more interesting.

Every systems administrator I talk to really likes the new ways. I haven’t seen anyone coming back to me and saying, “Hey, can you take this automation away and all this hyperconvergence away? I want to go back to the old way and do things manually so I know how to spend my eight hours of the day.” People have much more to do with the hours they have. This is just freeing them up to focus on the things that add value.

HCI makes IT life easier and easier 

Gardner: Before we close out, Thomas, how about some forward-looking thoughts about what innovation is going to bring next to HCI? We talked about the edge and intelligence, but is there more? What are we going to be talking about when it comes to innovation in two years in the HCI space?

Goepel: I touched on the edge. I think there will be a lot of things happening across the entire edge space, where HCI will clearly be able to make a difference. We will take advantage of the capabilities that HCI brings in all these segments — and it will actually drive innovation outside of the hyperconverged world, but by being enabled by HCI.

But there are a couple of other things to look at. Self-healing using AI in IT troubleshooting, I think, will become a big innovation point in the HCI industry. What we are doing with HPE InfoSight is a start, but there is much more to come. This will continue to make the life of the systems administrator easier.

Ideally, we want HCI as a platform to be almost invisible to the end user because they shouldn’t care about the infrastructure. It will behave like a cloud, but just be on-premises and private, and in a better, more controlled way.

The next element of innovation you will see is HCI acting very similar to a cloud environment. And some of the first steps with that are what we are doing around composability. This will drive forward to where you change the personality of the infrastructure depending on the workload needed. It becomes a huge pool of resources. And if you need to look like a bare-metal server, or a virtual server — a big one or a small one — you can just change it and this will be all software controlled. I think that innovation element will then enable a lot of other innovations on top of it.
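To give a feel for what changing an infrastructure "personality" could look like, here is a small, hypothetical declarative sketch. The profile fields and the compose step are assumptions for illustration; they are not the HPE OneView or SimpliVity APIs.

```python
# Hypothetical "personality" profiles for a composable resource pool.
PROFILES = {
    "virtualization-host": {"boot": "esxi-image", "vcpus": 64, "memory_gb": 512,
                            "storage": "simplivity-datastore"},
    "bare-metal-db":       {"boot": "rhel-image", "vcpus": 96, "memory_gb": 1024,
                            "storage": "local-nvme"},
}

def compose(node_id: str, personality: str) -> dict:
    """Bind a profile to a physical node; in a real system this would be an API call."""
    profile = PROFILES[personality]
    return {"node": node_id, **profile, "state": "applying"}

# Repurpose the same hardware for a different workload by swapping profiles.
print(compose("frame1-bay3", "virtualization-host"))
print(compose("frame1-bay3", "bare-metal-db"))
```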

How to Achieve Composability Across Your Datacenter

If you take these three elements — AI, composability of the infrastructure, and driving that into the edge use cases — that will enable a lot of business innovation. It’s like the three legs of a stool. And that will help us drive even further innovation.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


How Ferrara Candy depends on automated IT intelligence to support rapid business growth

The next BriefingsDirect Voice of the Customer IT modernization journey interview explores how a global candy maker depends on increased insight for deploying and optimizing servers and storage.

We’ll now learn how Ferrara Candy Company boosts its agility as a manufacturer by expanding the use of analysis and proactive refinement in its data center operations by bringing more intelligence to IT infrastructure.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

Stay with us to hear about unlocking the potential for end-to-end process and economic efficiency with Stefan Floyhar, Senior Manager of IT Infrastructure at Ferrara Candy Co. in Oakbrook Terrace, Illinois. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What are the major reasons Ferrara Candy took a new approach in bringing added intelligence to your servers and storage operations?

Floyhar: The driving force behind utilizing intelligence at the infrastructure level specifically was to alleviate the firefighting operations that we were constantly undergoing with the old infrastructure.

Gardner: And what sort of issues did that entail? What was the nature of the firefighting?

Floyhar: We were constantly addressing infrastructure-related hardware failures and firmware issues, and we lacked visibility into true growth factors. That included not knowing what was happening on the back end during an outage or a performance problem. We lacked visibility into true real-time, fully scalable performance data.

Gardner: There’s nothing worse than being caught up in reactive firefighting mode when you’re also trying to be innovative, re-architect, and adjust to things like mergers and growth. What were some of the business pressures that you were facing even as you were trying to keep up with that old-fashioned mode of operations?

IT meets expanded candy demands

Floyhar: We have undergone a significant amount of growth in the last seven years — going from 125 virtual machines to 452, as of this morning. Those 452 virtual machines are all application-driven and application-specific. As we continued to grow, as we continued to merge and acquire other candy companies, that growth exploded exponentially.

The merger of Ferrara Pan Candy with Farley’s and Sathers in 2012, for example, saw an initial growth explosion. More recently, in 2017 and 2018, we were acquired by Ferrero. We also acquired Nestlé Confections USA, which essentially doubled the business overnight. The growth is continuing at an exponential rate.

Gardner: The old mode of IT operations just couldn’t keep up with that dynamic environment?

Floyhar: That is correct, yes.

Gardner: Ferrara Candy might not roll off the tongue for many people, but I bet they have heard a lot of your major candy brands. Could you help people understand how big and global you are as a confectionery manufacturer by letting us know some of your major brands?

Floyhar: We are the producers of Now and Later, Lemonheads, Boston Baked Beans, Atomic Fireballs, Bob’s Candy Canes, and Trolli Gummies, which is one of our major brands. We also recently acquired the Crunch Bar, Butterfinger, 100 Grand, Laffy Taffy, and Willy Wonka brands, among others.

We produce a little over 1 million pounds of gummies per week, and we are currently utilizing 2.5 million square feet of warehousing.

Learn More About Intelligent, Self-Managing Flash Storage in the Data Center and Cloud

Gardner: Wow! Some of those brands bring me way back. I mean, I was eating those when I was a kid, so those are some age-old and favorite brands.

Let’s get back to the IT that supports that volume and diversity of favorite confections. What were some of the major drivers that brought you to a higher level of automation, intelligence, and therefore being able to get on top of operations rather than trying to play catch up?

Floyhar: We have a very lean staff of engineers. That forced us to seek the next generation of product, specifically around artificial intelligence (AI) and machine learning (ML). We absolutely needed that because we’re growing at this exponential rate. We needed to take the focus off of infrastructure-related tasks and leverage technology to manage and operate the application stack and get it up to snuff. And so that was the major driving force for seeking AI [in our operations and management].

Gardner: And when you refer to AI you are not talking about helping your marketers better factor which candy to bring into a region. You are talking about intelligence inside of your IT operations, so AIOps, right?

Floyhar: Yes, absolutely. So things like Hewlett Packard Enterprise (HPE) InfoSight and some of the other providers with cloud-type operations for failure metrics and growth perspectives. We needed somebody with proven metrics. Proven technology was a huge factor in product determination.

Gardner: How about storage specifically? Was that something you targeted? It seems a lot of people need to reinvent and modernize their storage and server infrastructure in tandem and coordination.

Floyhar: Storage was actually the driving factor for us. It’s what started the whole renovation of IT within Ferrara. With our older storage, we were constantly suffering bottlenecks with administrative tasks and in not having visibility into what was going on.

Storage drove that need for change. We looked at a lot of different storage area networks (SANs) and providers, everything from HPE Nimble to Pure, VNX, Unity, Hitachi, … insert major SAN provider here. We probably did six or so months’ worth of research working with those vendors, doing proofs of concept (POCs) and looking at different products to truly determine the best storage solution for Ferrara.

During that discovery process and research, HPE InfoSight really jumped off the page at us: that level of AI, the proven track record, and the ability to produce data around my actual workloads. I needed real-life examples, not a sales and marketing pitch.

Having a demo and seeing that data delivered on the fly, on request, was absolutely paramount in making our decision.

Gardner: And, of course, InfoSight was part of Nimble Storage, and Nimble was acquired by HPE. Now we are even seeing InfoSight technology being distributed and integrated across HPE’s broad infrastructure offerings. Is InfoSight something that you are happy to see extended to other areas of IT infrastructure?

Floyhar: Yes, ever since we adopted the Nimble Storage solution I have been waiting for InfoSight to be adopted elsewhere. Finally it’s been added across the ProLiant series of servers. We are an HPE ProLiant DL560 shop.

I am ultra-excited to see what that level of AI brings for predictive failure monitoring, which is essentially going to alleviate downtime. Any time we can predict a failure, it’s obviously better than a reactive approach where something fails and then we have to replace it.

Gardner: Stefan, how do you consume that proactive insight? What does InfoSight bring in terms of an operations interface? Or have you crafted a new process in your operations? How have you changed your culture to accommodate such a proactive stance? As you point out, being proactive is a fairly new way of avoiding failures and degraded performance.

Proactivity improves productivity

Floyhar: A lot of things have changed with that proactivity. First, the support model, with the automatic opening and closure of tickets with HPE support. The Nimble support is absolutely fantastic. I don’t have to wait for something reactive at 2 am, and then call HPE support. The SAN does it for me; InfoSight does it for me. It automatically opens the ticket and an engineer calls me at the beginning of my workday.

No longer are we getting interrupted with those 2, 3, 4 am emergency calls because our monitoring platform has notified us that, “Hey, a disk failed or looks like it’s going to fail.” That, in turn, has led to a complete culture change within my team. It takes us away from that firefighting, the constant, reactive methodologies of maintaining traditional three-tier infrastructure and truly into leveraging AI and the support behind it.
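The plumbing behind that sort of automatic case creation can be pictured with the small, hypothetical webhook handler below. The payload fields and ticketing endpoint are invented for illustration; this is not the actual InfoSight or HPE support integration.

```python
# pip install flask requests
from flask import Flask, request, jsonify
import requests

app = Flask(__name__)
TICKET_API = "https://support.example.com/api/cases"   # hypothetical endpoint

@app.post("/alerts/predictive-failure")
def handle_alert():
    alert = request.get_json(force=True)
    # Only raise a case for predictive or critical events; ignore the rest.
    if alert.get("severity") not in ("predictive", "critical"):
        return jsonify(status="ignored"), 200
    case = {
        "summary": f"Predicted failure on {alert.get('device')}",
        "component": alert.get("component"),
        "details": alert.get("message"),
    }
    resp = requests.post(TICKET_API, json=case, timeout=10)
    return jsonify(status="case-opened", case_id=resp.json().get("id")), 201

if __name__ == "__main__":
    app.run(port=8080)
```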

Learn More About Intelligent, Self-Managing Flash Storage in the Data Center and Cloud

We are now able to turn the corner from reactive to proactive, including on applications redesign or rework, or on tweaking performance improvements. We are taking that proactive approach with the applications themselves, and that has flowed downstream to our end users and improved their productivity.

In the last six months, we have received significant praise for application performance, based on where it was three years ago compared with today. Yes, part of that is because of the back-end upgrades in the infrastructure platform, but it’s also because we’ve been able to focus more on application administration tasks and truly make it a more pleasant experience for our end users — less pain, less latency, fewer issues.

Gardner: You are a big SAP shop, so that improvement extends across all of your operations, to your logistics and supply chain, for example. How does having a stronger sense of confidence in your IT operations give you benefits on business-level innovation?

Floyhar: As you mentioned, we are a large SAP shop. We run any number of SAP-insert-acronym-here systems. Being proactive in addressing some of the application issues has honestly meant less downtime for the applications. We have seen into the four- and five-nines (99.99 to 99.999 percent) uptime range from an application availability perspective. 
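For context, those availability figures translate into allowed downtime roughly as follows (standard arithmetic; the only assumption is a 365-day year):

```python
MINUTES_PER_YEAR = 365 * 24 * 60

for availability in (0.9999, 0.99999):            # four nines, five nines
    downtime_min = (1 - availability) * MINUTES_PER_YEAR
    print(f"{availability:.5%} uptime -> about {downtime_min:.1f} minutes of downtime per year")
```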

We have been able to proactively catch a number of issues, whether using HPE InfoSight or standard notifications, that would have caused downtime, even as little as 30 minutes. But when you are talking about an operation that runs 24×7, 360 days a year, and truly depends on SAP as its backbone, that system is the lifeblood of what we do from a business operations standpoint. 

So 30 minutes makes all the difference on the production floor. Being able to turn that support corner has absolutely been critical in our success.

Gardner: Let’s go back to data. When it comes to having storage confidence, you can extend that confidence across your data lifecycle. It’s not just storage and accommodating key mission-critical apps. You can start to modernize and gain efficiencies through backup and recovery, and to making the right cache and de-dupe decisions.

What’s it been like to extend your InfoSight-based intelligence culture into the full data lifecycle?

Sweet, simplified data backup and recovery

Floyhar: Our backup and recovery has gotten significantly less complex — and significantly faster — using Veeam with the storage API and Nimble snapshots. Our backup window went from about 22.5 hours a day, which was less than ideal, obviously, down to less than 30 minutes for a lot of our mission-critical systems.

We are talking about 8-10 terabytes of Microsoft Exchange data, 8-10 terabytes of SAP data — all being backed up, full backups, in less than 60 minutes, using Veeam with the storage API. Again, it’s transformed how much time and how much effort we put into managing our backups.

We have also turned the corner to managing our backups on an exception basis; now we step in only upon failure. We have gained that much trust in the product and the back-end infrastructure.

We specifically watch for failure, and any time something comes up, that’s what we address, as opposed to watching everything 100 percent of the time to make sure it’s all working. Outside of the backups, virtually every application has seen significant performance increases.
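That exception-based approach can be pictured with the tiny sketch below: check the previous night's job results and alert only on failures. The job-status records and the notify step are hypothetical placeholders, not the Veeam or HPE APIs.

```python
from typing import Iterable

def failed_jobs(job_results: Iterable[dict]) -> list[dict]:
    """Return only the jobs that need human attention."""
    return [job for job in job_results if job["status"] != "Success"]

def notify(job: dict) -> None:
    # Placeholder: in practice this would page the on-call engineer or open a ticket.
    print(f"ALERT: backup job '{job['name']}' ended with status {job['status']}")

# Example run against last night's (made-up) results.
last_night = [
    {"name": "SAP-full", "status": "Success"},
    {"name": "Exchange-full", "status": "Success"},
    {"name": "Fileshares", "status": "Failed"},
]
for job in failed_jobs(last_night):
    notify(job)
```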

Gardner: Thinking about the future, a lot of organizations are experimenting more with hybrid cloud models and hybrid IT models. One of the things that holds them up from adoption is not feeling confident about having insight, clarity, and transparency across these different types of systems and architectures.

Does what HPE InfoSight and similar technologies bring to the table give you more confidence to start moving toward a hybrid model, or at least experimenting in that direction for better performance in price and economic payback?

Headed to hybrid, invested in IoT

Floyhar: Yes, absolutely, it does. We started to dabble in the cloud and a mixed, hybrid infrastructure a few years before Nimble came into play. We now have a significantly larger cloud presence, and we were able to scale that cloud presence easily specifically because of the data. With our growth trending and all of the pieces involved with InfoSight, we were able to use that data to scale out and know what it would look like from a storage perspective on Amazon Web Services (AWS).

We started with SAP HANA out in the cloud, and now we’re utilizing some of that data on the back end. We are able to size and scale significantly better than we ever could have in the past, so it has actually opened up the door to adopting a bit more cloud architecture for our infrastructure.

Gardner: And looking to the other end from cloud, core, and data center, increasingly manufacturers like yourselves — and in large warehouse environments like you have described — the Internet of Things (IoT) is becoming much more in demand. You can place sensors and measure things in ways we didn’t dream of before.

Even though IoT generates massive amounts of data — and it’s even processing at the edge – have you gained confidence to take these platform technologies in that direction, out to the edge, and hope that you can gain end-to-end insights, from edge to core?

Floyhar: The executives at our company have deemed that data is a necessity. We are a very data-driven company. Manufacturers of our size are truly benefiting from IoT and that data. For us, people say “big data” or insert-common-acronym-here. People process big data, but nobody truly understands what that term means.

Learn More About Intelligent, Self-Managing Flash Storage in the Data Center and Cloud

With our executives, we have gone through the entire process and said, “Hey, you know what? We have actually defined what big data means to Ferrara. We are going to utilize this data to help drive leaner manufacturing processes, to help drive higher-quality products out the door every single time to achieve an industry standard of quality that quite frankly has never been met before.”

We have very lofty goals for utilizing this data to drive the manufacturing process. We are working with a very large industrial automation company to assist us in utilizing IoT, not quite edge computing yet, but we might get there in the next couple of years. Right now we are truly adopting the IoT mentality around manufacturing.

And that is, as you mentioned, a huge amount of data. But it is also a very exciting opportunity for Ferrara. We make candy, right? We are not making cars, or tanks, or very expansive computer systems. We are not doing that level of intricacy. We are just making candy.

But to be able to leverage machine data from almost every inch of the factory floor? If we could capture that and use it to drive end-to-end process and manufacturing efficiencies, it would not only help us produce a better-quality product faster, it would also be environmentally conscious, because there would be less waste, if any waste at all.
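As a simple illustration of the kind of analysis that factory-floor machine data enables, the sketch below rolls made-up sensor readings from a gummy line into a waste percentage per shift. The message shape, line names, and the 3 percent threshold are assumptions for the example only.

```python
from collections import defaultdict

# Made-up sensor messages: one per batch, reported by line controllers.
readings = [
    {"line": "gummy-1", "shift": "A", "produced_lbs": 12_000, "scrapped_lbs": 240},
    {"line": "gummy-1", "shift": "B", "produced_lbs": 11_500, "scrapped_lbs": 460},
    {"line": "gummy-2", "shift": "A", "produced_lbs": 12_300, "scrapped_lbs": 120},
]

totals = defaultdict(lambda: {"produced": 0, "scrapped": 0})
for r in readings:
    key = (r["line"], r["shift"])
    totals[key]["produced"] += r["produced_lbs"]
    totals[key]["scrapped"] += r["scrapped_lbs"]

for (line, shift), t in sorted(totals.items()):
    waste_pct = t["scrapped"] / t["produced"] * 100
    flag = "  <-- investigate" if waste_pct > 3.0 else ""
    print(f"{line} shift {shift}: {waste_pct:.1f}% waste{flag}")
```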

The list of wonderful things that comes out of this goes on and on. It really is an exciting opportunity. We are trying to leverage that. The intelligent back-end storage and computer systems are ultra-imperative to us for meeting those objectives.

Gardner: Any words of advice for other organizations that are not as far ahead as you are when it comes to going to all-flash and highly intelligent storage — and then extending that intelligence into an AIOps culture? With 20/20 hindsight, for those organizations that would like to use more AIOps, who would like to get more intelligence through something like HPE InfoSight, what advice can you give them?

Floyhar: First things first — use it. For even small organizations, all the way up to the largest of organizations, it may almost seem like, “Well, what is that data really going to be used for?” I promise, if you use it, it is greatly beneficial to your IT operations.

Historically we would constantly be fighting infrastructure-related issues — outages, performance bottlenecks, and so on. The AI behind HPE InfoSight makes all the difference. You don’t have to fight that fight after it becomes a problem because you nip it in the bud.

If you don’t have it — get it. It’s very important. This is the future of technology. Using AI to predictively analyze all of the data — not just from your own environment, but from a conglomerate view across customer data — truly allows IT organizations to turn the corner from reactive to proactive.
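To make the reactive-to-proactive point concrete, here is a minimal sketch of the kind of baseline check a predictive pipeline might run against storage telemetry, flagging a volume whose latency drifts away from its own history before it becomes an outage. The thresholds and sample readings are invented for illustration and are not how HPE InfoSight actually works.

    # Minimal sketch: flag telemetry that drifts beyond its historical baseline.
    # The readings and the three-sigma threshold are illustrative assumptions,
    # not InfoSight's actual models.

    from statistics import mean, stdev

    def drifting(history, recent, sigmas=3.0):
        """True if the recent average sits more than `sigmas` standard
        deviations above the historical baseline."""
        return mean(recent) > mean(history) + sigmas * stdev(history)

    # Hypothetical read-latency samples (milliseconds) for one volume
    last_30_days = [1.1, 1.0, 1.2, 1.1, 0.9, 1.3, 1.0, 1.1, 1.2, 1.0]
    last_hour = [2.9, 3.1, 3.4, 3.0]

    if drifting(last_30_days, last_hour):
        print("Open a proactive ticket: latency is drifting from its baseline.")
    else:
        print("Within normal operating range.")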


Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


As price transparency grows inevitable, healthcare providers need better tools to close the gap on patient trust

The next BriefingsDirect healthcare finance insights discussion explores ways that healthcare providers can become more proactive in financial and cost transparency from the patient perspective.

By anticipating rather than reacting to mandates on healthcare economics and process efficiencies, providers are becoming more competitive and building more trust and satisfaction with their patients — and caregivers.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To learn more about the benefits of a more proactive and data-driven approach to healthcare cost estimation, we are joined by expert Kate Pepoon, Manager of Revenue Cycle Operations at Baystate Health in Springfield, Mass., and Julie Gerdeman, President of HealthPay24 in Mechanicsburg, Penn. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: We are at the point with healthcare and medical cost transparency that the finger, so to speak, has been pulled out of the dike. We have had mandates and regulations, but it’s still a new endeavor. People are feeling their way through providing cost transparency and the need for more accurate estimations about what things will actually cost when you have a medical procedure.

Kate, why does it remain difficult and complex to provide accurate medical cost estimates?

Pepoon

Pepoon: It has to do with educating our patients. Patients don’t understand what a chargemaster is, which, of course, is the technical term for the data we are now required to post on our websites. For them to see a spreadsheet that lists 21,000 different codes and costs can be overwhelming. 

What Baystate Health does, as I’m sure most other hospitals in Massachusetts do, is give patients an option to call us if they have any questions. You’re right, this is in its infancy. We are just getting our feet wet. Patients may not even know what questions to ask. So we have to try and educate as much as possible.

Gardner: Julie, it seems like the intention is good, the idea of getting more information in peoples’ hands so they can make rational decisions, particularly about something as important as healthcare. The intent sounds great, but the implementation and the details are not quite there yet.

Given that providers need to become more proactive, look at the different parts of transparency, and make it user-friendly, where are we in terms of this capability? 

Gerdeman: We are still in the infancy stage. We had a race to the Centers for Medicare and Medicaid Services (CMS) [part of the U.S. Department of Health and Human Services] deadline of Jan. 1, 2019. That’s when all the providers rushed to at least meet the bare minimum of compliance. A lot of what we have seen is just the publishing of the chargemaster with some context available.

But where there is competition, we have seen it taken a bit further. Where I live in Pennsylvania, for example, I could drive to a number of different healthcare providers. Because of that competition, we are seeing providers that don’t just provide context, they are leveraging the chargemaster and price transparency as competitive differentiation.

Gardner: Perhaps we should make clear that there are many areas where you don’t really have a choice and there isn’t much competition. There is one major facility that handles most medical procedures, and that’s where you go.

But that’s changing. There are places where it’s more of a marketplace, but that’s not necessarily the case at Baystate Health. Tell us why your patients don’t necessarily do a lot of shopping around yet.

Clearing up charge confusion 

Pepoon: They don’t. That question you just asked Julie, it’s kind of the opposite for us because we have multiple hospitals. When we posted our chargemaster, we also posted it for our other three hospitals, not just for the main one, which is Baystate Medical Center (BMC). And that can create confusion for our patients as well.

We are not yet at the drive to be competitive with other area hospitals because BMC is the only level-1 trauma center in its geographical area. But when we had patients ask why costs are so different at our other hospitals, which are just 40 miles away, we had to step up and educate our staff. And that was largely guiding patients as to the difference between a chargemaster price and what they are actually going to pay. And that is more an estimate of charges from their particular insurance.

We have not yet had a lot of questions from patients, but we anticipate it will definitely increase. We are ready to answer the questions and guide our patients.


Gardner: The chargemaster is just a starting point, and not necessarily an accurate one from the perspective of an outsider looking in.

But it began the process to more accurate price transparency. And even while there is initially a regulatory impetus, one of the biggest drivers is gaining trust, loyalty, and a better customer experience, a sense of confidence about the healthcare payments process.

Julie, what does it take to get past this point of eroding trust due to complexity? How do we reverse that erosion and build a better process so people feel comfortable about how they pay for their healthcare?

Gerdeman

Gerdeman: There is an opportunity for providers to create a trusted, unique, and personalized experience, even with this transparency regulation. In any experience when you are procuring goods and services, there is a need for information. People want to get information and do research. This has become an expectation now with consumerization — a superior consumer experience.

And what Kate described for her staff, that’s one way of providing a great experience. You train the staff. You have them readily available to answer questions to the highest level of detail. That’s necessary and expected by patients.

There is also a larger opportunity for providers, even just from a marketing perspective. We are starting to see healthcare providers define their brand uniquely and differently.  And patients will start to look for that brand experience. Healthcare is so personal, and it should be part of a personalized experience.

Gardner: Kate, I think it’s fair to say that things are going to get even more challenging.  Increasingly, insurance companies are implementing more co-pays, more and different deductibles, and offering healthcare plans that are more complex overall. 

What would you like to see happen in terms of the technologies and solutions that come to the market to help make this process better for you and your patients?

Accounting for transparency 

Pepoon: Dana, transparency is going to be the future. It’s only going to get more … transparent.

This infancy stage of the government attempting to help educate consumers — I think it was a great idea. The problem is that it did not come with a disclaimer. Now, each hospital is required to provide that disclaimer to help guide patients. The intent was fantastic, but there are so many different ways to look at the information provided. If you look at it at face value, it can be quite shocking.

I heard a great anecdote recently, that a patient can go online and look at the chargemaster and see that aspirin is going to cost them $100 at a hospital. Obviously, you are taken aback. But that’s not the actual cost to a patient.

Learn How to Meet Patient Demands for Convenient Payment Options for Healthcare Services

There needs to be much more robust education regarding what patients are looking at. Technology companies can help bring hospitals to the next level and assist with the education piece. Patients have to understand that there is a whole other layer, which is their actual insurance.

In Massachusetts we are pretty lucky because 12 years ago, then-Governor Mitt Romney [led a drive to bring health insurance to almost everyone]. Because of that, the number of self-pay patients has been reduced to the lowest level in the entire United States. Only around two to three percent of our patients don’t have insurance.

Some of the benefits that other states see from the published chargemaster list are better engagement with patients and the chance to have conversations. Patients can say, “Well, I don’t have insurance and I would like to shop around. Thank you to Hospital A, because Hospital A is $2,000 for the procedure and Hospital B is only $1,500.”

But Massachusetts, as part of its healthcare laws, further dedicates itself to educating patients about their benefits. MassHealth, the Medicaid program of Massachusetts, requires hospitals to have certified financial counselors.

Those counselors are there to help with patient benefits and answer questions like, “Is this going to cost me $20,000?” No, because if you sign up for benefits or based on the benefits you have, it’s not going to cost you that much. That chargemaster is more of a definition of what is charged to insurance companies.

The fear is that this is not so easily explained to patients. Patients don’t always even get to the point where they ask questions. If they think that something is going to cost $20,000, they may just move on.

Gardner: The sticker shock is something you have to work with them on and bring them back to reality by looking at the particulars of their insurance as well as their location, treatment requirements, and the specific medical issues. That’s a lot of data, a lot of information to process.

Not only are the patients shopping for healthcare services, they will also be shopping for their next insurance policy. The more information, transparency, and understanding they have about their health payments, the better shopper they become the next time they pick an insurance company and plan. These are all choices. This is all data-driven. This is all information-dependent. 

So Julie, why is it so hard in the medical setting for that data to become actionable? We know in other businesses that it’s there. We know that we can even use machine learning (ML) and artificial intelligence (AI) to predict the weather, for example. And the way we predict the weather is we look at what happened the last 500 times a storm came up the East Coast as an example that sets a pattern.

Where do we go next? How can the same technologies we use to predict the weather be brought to the medical data processing problem?

Gerdeman: Kate said it well that transparency is here, and transparency is the future. But, honestly, transparency is table stakes at this point.

CMS has already indicated that they expect to expand the pricing transparency ruling to require even more. This was just the first step. They know that more has to be done to address complexity for patients as consumers.

Technology is going to play a critical role in all of this, because when you reference things like predicting the weather and other aspects of our lives, they all leverage technology. They look back in order to look forward. The same is true for healthcare, and it will be used there. It’s already starting to happen.

So [patient support] teams like Kate’s use estimation tools to provide the most accurate costs possible to patients in advance of services and procedures. HealthPay24 has been involved as part of our mission, from pre-service to post-service, in that patient financial engagement.

But arming providers and their staffs with that [predictive] technology is what’s most important for making a difference in the future. There will always be complexities in healthcare. There will always be things happening during procedures that physicians and surgeons can’t anticipate, and that’s where modifications will be made later.

But given what we know of the costs around the 5,000 knee replacements some healthcare provider might already have done, I think we can begin to provide forward-looking data to patients so that they can make informed decisions like they never have before by comparing all of that.
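To illustrate the “look back in order to look forward” idea with a worked example, the sketch below builds a pre-service estimate from historical allowed amounts for the same procedure and payer. The dollar figures, the median-based estimate, and the benefit math are assumptions for illustration only; they are not HealthPay24’s or any payer’s actual methodology.

    # Minimal sketch: a pre-service patient estimate from historical allowed amounts.
    # All figures and the benefit math are illustrative assumptions, not an
    # actual estimation product or payer calculation.

    from statistics import median

    # Hypothetical allowed amounts for one procedure code and payer
    historical_allowed = [23800, 24950, 22100, 25600, 24200, 23500, 26100]

    def estimate_patient_responsibility(allowed_amounts, deductible_remaining,
                                        coinsurance_rate, out_of_pocket_remaining):
        """Rough estimate of what the patient will owe, based on a typical
        allowed amount and the patient's remaining benefits."""
        typical = median(allowed_amounts)
        deductible_portion = min(deductible_remaining, typical)
        coinsurance_portion = (typical - deductible_portion) * coinsurance_rate
        return round(min(deductible_portion + coinsurance_portion,
                         out_of_pocket_remaining), 2)

    print(estimate_patient_responsibility(
        historical_allowed,
        deductible_remaining=3000,
        coinsurance_rate=0.20,
        out_of_pocket_remaining=6500,
    ))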

See the New Best Practice of Driving Patient Loyalty Through Estimation

Gardner: We know from other industries that bringing knowledge and usability works to combat complexity. And one of the places that can be most powerful is for a helpdesk. Those people are on the other end of a telephone or a chatbot from consumers — whether you are in consumer electronics or information technology.

It seems to me that those people at Baystate Health, mandated by the Commonwealth of Massachusetts, who help patients are your helpdesk. So what tools would you like to see optimally in the hands of those people who are explaining away this complexity for your patients?

How to ask before you pay 

Pepoon: That’s a great question. Step one, I would love to see some type of education, perhaps a video from some hospitals if they partnered together, that helps patients understand what it is they are about to look at when they look at a chargemaster and the dollar amounts associated with certain procedures.

That’s going to set the stage for questions to come back through to the staff that you mentioned, the helpdesk people, who are there ready and willing to respond to patients.

But there is another problem with that. The problem is that these are moving targets. People like black-and-white. People like, “This is definitely what I’m going to pay,” before they get a procedure done.

We have heard the comparison to buying a car. This is very similar: educating yourself in advance, looking for a specific model you may like, going to different dealers, looking it up online, seeing what you’re going to pay, and then negotiating that before you buy the car.

That’s the piece that’s missing from this healthcare process. You can’t yet negotiate on it. But in the future – with the whole transparency thing, you never know. But it’s that moving target that’s going to make this hard to swallow for a lot of patients because, obviously, this is not like buying a car. It’s your life, it’s your health.

The future is going to have more price transparency. And the future is also going to bring higher costs to patients regardless of who they are and what plan they have. Plans 10 years ago didn’t have deductibles. The plans we had 10 years ago had a $5 co-pay; now those plans have a $60 co-pay and a $5,000 deductible.

That’s the direction our healthcare climate is moving in. We are only going to see more cost burdens on patients. As people realize they are going to need to pay out more money for their own healthcare services, it’s going to bring a greater sense of concern.

So, when they do call and talk to that helpdesk, it’s really important for all of us in all of our hospitals to make sure that we are answering patients properly. It was an amazing idea to have this new transparency, but we need to explain what it means. We need to be able to reach out personally to patients and explain what it is they are about to look at. That’s our future.

Gerdeman: I would just like to add that at HealthPay24 we work with healthcare providers all across the country. There are areas that have already had to do this. They have had to be proactive and jump into a competitive landscape with personalized marketing materials.

We are starting to see educational videos in places like Pennsylvania using the human touch, and the approach of, “Yes, we recognize that you’re a consumer, and we recognize that you have a choice.” They have even gone to the extent of creating online price-checkers and charge-checkers that give people the flexibility, from their homes, of conveniently selecting from a chargemaster the procedure or service they are to receive. They can furthermore check those charges across multiple hospitals that are competing and that are making those calculators available to consumers proactively.
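The online price-checkers Gerdeman describes amount to a lookup over each hospital’s published chargemaster. The sketch below compares one procedure code across two hypothetical chargemaster files; the layout, column names, and prices are assumptions for illustration, and the listed charge is still not what a patient would actually pay.

    # Minimal sketch: compare one procedure's listed charge across published
    # chargemasters. The CSV layout, columns, and prices are hypothetical.

    import csv
    import io

    # Stand-ins for each hospital's published chargemaster file
    chargemasters = {
        "Hospital A": "code,description,gross_charge\n29881,Knee arthroscopy,21500\n",
        "Hospital B": "code,description,gross_charge\n29881,Knee arthroscopy,18900\n",
    }

    def listed_charge(csv_text, code):
        """Return the gross charge listed for a procedure code, or None."""
        for row in csv.DictReader(io.StringIO(csv_text)):
            if row["code"] == code:
                return float(row["gross_charge"])
        return None

    def compare(code):
        results = {name: listed_charge(text, code) for name, text in chargemasters.items()}
        for name, charge in sorted(results.items(), key=lambda kv: kv[1] or float("inf")):
            if charge is not None:
                print(f"{name}: ${charge:,.0f} list charge (not the patient's actual cost)")

    compare("29881")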


Gardner: I’m sensing a building urgency around this need for transparency from large organizations like Baystate Health. And they are large, with service providers in both Western Massachusetts as well as the “Knowledge Corridor” of Massachusetts and Connecticut. They have four hospitals, 80 medical practices, 25 reference laboratories, 12,000 employees, and 1,600 physicians.

They have a sense of urgency but aren’t yet fully aware of what is available and how to solve this problem. It’s a big opportunity. I think we can all agree it’s time now to be proactive and recognize what’s required to make transparency happen and be accurate.

What do you recommend, Kate, for organizations to be more proactive, to get out in front of this issue? How can vendors in the marketplace such as Julie and HealthPay24 help?

Use IT to explain healthcare costs

Pepoon: There needs to be a better level of education at the place where patients go to look at what medical charge prices are. That forms a disclaimer, in a way, of, “Listen, this is what you are about to look at. It’s a little bit like jargon, and that’s okay. You are going to feel that way because this is raw data coming from a hospital, and a lot of people have to go to school for a very long time to read and understand what it is that they are looking at.”

And I think there has to be a way that we can keep patients focused and able to call and ask questions. That’s going to help.

For the technology side going forward, I am very interested to see what it’s going to look like in about a year. I want to see the feedback from other hospitals and providers in Massachusetts as to how this has gone. Today, quite frankly, when I was doing research for us at Baystate, I reached out to find out what questions patients are asking. Patients are not really calling that much to talk about this subject yet. I don’t know if that’s a good thing or a bad thing. I think that’s a sentiment most hospitals in Massachusetts are feeling right now.

I don’t think there is one hospital system that’s ahead of the curve or running toward the goal of plastering all of this data out there. I don’t think everybody knows what to do with it yet. IT companies and partners that we have — our technical partners like HealthPay24 – can help take jargon and put it into some version that is easily digestible.

That is going to be the future. It ties back to the question: Is transparency going to be the wave of the future? And the answer is absolutely, “Yes.” But it’s all about who can read the language. If Julie and I are the only two people in a room who can read the language, we are letting our patients down.

Gardner: Well, engineering complexity out is one of the things the technology does very well. Software has been instrumental in that for the past 15 or 20 years.

There is a huge opportunity to look at technology and emerging technology today to provide new levels of clarity, reduce complexity, and to become more proactive.

Julie, as we end our discussion, for organizations like Baystate Health that want to be more proactive, to be able to answer those patient phone calls in the best way, what do you recommend? What can healthcare provider organizations start doing to be in front of this issue when it comes to accurate and transparent healthcare cost information? 

 Gerdeman: There is a huge opportunity to look at technology available today, as well as emerging technology and where it’s headed. If history proves anything, Dana, to your point, it’s that technology can provide new levels of clarity and reduce complexity. You can digitize processes that were completely manual and where everything needed to be done on the phone, via fax, and on paper.

In healthcare, there’s a big opportunity to embrace technology to become more proactive. We talk about being proactive, and it really means to stop reacting and take a strategic approach, just like in IT architectures of the past. When you take that strategic approach you can look at processes and workflows and see what can be completely digitized and automated in new ways. I think that’s a huge opportunity.

I also don’t want to lose sight of the humane aspect because this is healthcare and we are all human, and so it’s personal. But again, technology can help personalize experiences. People may not be calling because they want access online via their phone, or they want everything to be mobile, simple, beautiful, and digital because that’s what we increasingly experience in all of our lives.

View a Webinar on How Accurate Financial Data Helps Providers Make Informed Decisions

Providers have a great opportunity to leverage technology to make things even more personal and humane and to differentiate themselves as brands, in Massachusetts and all across the country as they become leading brands in healthcare.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HealthPay24.


How Texmark Chemicals pursues analysis-rich, IoT-pervasive path to the ‘refinery of the future’


The next BriefingsDirect Voice of the Customer discussion revisits the drive to define the “refinery of the future” at Texmark Chemicals.

Texmark has been combining the best of operational technology (OT) with IT and now Internet of Things (IoT) to deliver data-driven insights that promote safety, efficiency, and unparalleled sustained operations.

Stay with us now as we hear how a team approach — including the plant operators, consulting experts, and the latest in hybrid IT systems — joins forces for rapid process and productivity optimization results.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn how, we are joined by our panel: Linda Salinas, Vice President of Operations at Texmark Chemicals, Inc. in Galena Park, Texas; Stan Galanski, Senior Vice President of Customer Success at CB Technologies (CBT) in Houston, and Peter Moser, IoT and Artificial Intelligence (AI) Strategist at Hewlett Packard Enterprise (HPE). The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Stan, what are the trends, technologies, and operational methods that have now come together to make implementing a refinery of the future approach possible? What’s driving you to be able to do things in ways that you hadn’t been able to do before?

Galanski

Galanski: I’m going to take that in parts, starting with the technologies. We have seen an availability of affordable sensing devices, which are proliferating in the market these days. In addition, the ability to collect large amounts of data cheaply — especially in the cloud — along with ubiquitous Wi-Fi, Bluetooth, and other communications, has presented an opportunity to take advantage of.

On top of this, the advancement of AI and machine learning (ML) software — often referred to as analytics — has accelerated this opportunity.

Gardner: Linda, has this combination of events dramatically changed your perspective as VP of operations? How has this coalescing set of trends changed your life?

Salinas: They have really come at a good time for us. Our business, and specifically with Texmark, has morphed over the years to where our operators are more broadly skilled. We ask them to do more with less. They have to have a bigger picture as far as operating the plant.

Today’s operator is not just sitting at a control board running one unit. Neither is an operator just out in a unit, keeping an eye on one tower or one reactor. Our operators are now all over the plant operating the entire utilities and wastewater systems, for example, and they are doing their own lab analysis.

Learn More About Transforming the Oil and Gas Industry

This technology has come at a time that provides information that’s plant-wide so that they can make more informed decisions on the board, in the lab, whenever they need.

Gardner: Peter, as somebody who is supplying some of these technologies, how do you see things changing? We used to have OT and IT as separate, not necessarily related. How have we been able to make those into a whole greater than the sum of their parts?

OT plus IT equals success 

Moser

Moser: That’s a great question, Dana, because one of the things that has been a challenge with automation of chemical plants is these two separate towers. You had OT very much separate from IT.

The key contributor to the success of this digitization project is the capability to bring those two domains together successfully.

Gardner: Stan, as part of that partnership, tell us about CBT and how you fit.

Galanski: CBT is a 17-year-old, privately owned company. We cut our teeth early on by fulfilling high-tech procurement orders for the aerospace industry. During that period, we developed a strength for designing, testing, and installing compute and storage systems for those industries and vendors.

It evolved into developing an expertise in high-performance computing (HPC), software design platforms, and so forth.

About three years ago, we recognized that the onset of faster computational platforms and massive amounts of data — and the capability for software to control that dataflow — was changing the landscape. Now, somebody needed to analyze that data faster over multiple mediums. Hence, we developed a practice around comprehensive data management and combined that with our field experience. That led us to become a systems integrator (SI), which is the role we have taken on for this refinery of the future project.

Gardner: Linda, before we explore more on what you’ve done and how it improves things, let’s learn about Texmark. With a large refinery operation, any downtime can be a big problem. Tell us about the company and what you are doing to improve your operations and physical infrastructure.

Salinas

Salinas: Texmark is a family-owned company, founded in 1970 by David Smith. And we do have a unique set of challenges. We sit on eight acres in Galena Park, and we are surrounded by a bulk liquid terminal facility.

So, as you can imagine, a plant that was built in the 1940s has older infrastructure. The layout is probably not as efficient as it could be. In the 1940s, we didn’t have a need for wastewater treatment. Things may not have been laid out in the most efficient ways, and so we have added these things over the years. So, one, we are landlocked, and, two, things may not be sited in the most optimal way.

For example, we have several control rooms sprinkled throughout the facility. But we have learned that siting is an important issue. So we’ve had to move our control room to the outskirt of the process areas.

As a result, we’ve had to reroute our control systems. We have to work with what we have, and that presents some infrastructure challenges.

Also, like other chemical plants and refineries, the things we handle are hazardous. They are flammable, toxic, and they are not things people want to have in the air that they breathe in neighborhoods just a quarter-mile downwind of us.

So we have to be mindful of safe handling of those chemicals. We also have to be mindful that we don’t disrupt our processes. Finding the time to shut down to install and deploy new technology is a challenge. Chemical plants and refineries need to find the right time to shut down and perform maintenance with a very defined scope, and on a budget.

And that capability to come up and down effectively is a strength for Texmark. Because we are a smaller facility, we are able to come up and down, and to deploy, test, and prove out some of these technologies.

Gardner: Stan, in working with Linda, you are not just trying to gain incremental improvement. You are trying to define the next definition, if you will, of a safe, efficient, and operationally intelligent refinery.

How are you able to leapfrog to that next state, rather than take baby steps, to attain an optimized refinery?

Challenges of change 

Galanski: First we sat down with the customer and asked what key functions and challenges they had in their operations. Once they gave us that list, we then looked at the landscape of technologies and the available categories of information that we had at our disposal and said, “How can we combine this to have a significant improvement and impact on your business?”

We came up with five solutions that we targeted and started working on in parallel. They have presented a handful of challenges — especially working in a plant that’s continuously operational.


Based on the feedback we’ve received from their personnel, we feel we are on the right track. As part of that, we are attacking predictive maintenance and analytics by sensoring some of their assets, their pumps. We are putting video analytics in place by capturing video of portions of the plant that have restricted access but still need careful monitoring. We are looking at worker safety and security by capturing biometrics and geo-referencing the location of workers so we know whether they are safe or might be in danger.

The connected worker solution is garnering a lot of attention in the marketplace. With it, we are able to bring real-time data from the core repositories of the company to the hands of the workers in the field. Oftentimes it comes to them in a hands-free condition where the worker has wearables on his body that project and display the information without them having to hold a device.

Lastly, we are tying this all together with an asset management system that tracks every asset and ties them to every unstructured data file that has been recorded or captured. In doing so, we are able to put the plant together and combine it with a 3D model to keep track of every asset and make that useful for workers at any level of responsibility.
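As one concrete slice of the worker safety piece just described, the sketch below geo-references a worker’s position against hazard zones laid over a plant grid. The coordinates, the rectangular zones, and the alert handling are illustrative assumptions, not the solution actually deployed at Texmark.

    # Minimal sketch: geo-reference workers against restricted or hazardous zones.
    # Zone shapes, coordinates, and alerting are illustrative assumptions only.

    from dataclasses import dataclass

    @dataclass
    class Zone:
        name: str
        x_min: float
        x_max: float
        y_min: float
        y_max: float

        def contains(self, x, y):
            return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

    hazard_zones = [
        Zone("Reactor unit", 10.0, 25.0, 40.0, 60.0),
        Zone("Tank farm", 70.0, 95.0, 5.0, 30.0),
    ]

    def check_worker(worker_id, x, y):
        """Alert if the worker's plant-grid position falls inside a hazard zone."""
        for zone in hazard_zones:
            if zone.contains(x, y):
                print(f"ALERT: {worker_id} is inside '{zone.name}'. Notify operations.")
                return zone.name
        print(f"{worker_id} is clear of hazard zones.")
        return None

    check_worker("operator-07", 18.0, 45.0)  # inside the reactor unit zone
    check_worker("operator-12", 50.0, 50.0)  # clear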

Gardner: It’s impressive, how this touches just about every aspect of what you’re doing.

Peter, tell us about the foundational technologies that accommodate what Stan has just described and also help overcome the challenges Linda described.

Foundation of the future refinery

Moser: Before I describe what the foundation consists of, it’s important to explain what led to the foundation in the first place. At Texmark, we wanted to sit down and think big. You go through the art of the possible, because most of us don’t know what we don’t know, right?

You bring in a cross-section of people from the plant and ask, “If you could do anything what would you do? And why would you do it?” You have that conversation first and it gives you a spectrum of possibilities, and then you prioritize that. Those prioritizations help you shape what the foundation should look like to satisfy all those needs.

That’s what led to the foundational technology platform that we have at Texmark. We look at the spectrum of use cases that Stan described and say, “Okay, now what’s necessary to support that spectrum of use cases?”

But we didn’t start by looking at use cases. We started first by looking at what we wanted to achieve as an overall business outcome. That led us to say, “First thing we do is build out pervasive connectivity.” That has to come first because if things can’t give you data, and you can’t capture that data, then you’re already at a deficit.

Then, once you can capture that data using pervasive Wi-Fi with HPE Aruba, you need a data center-class compute platform that’s able to deliver satisfactory computational capabilities and support accelerators and the other things necessary to deliver the outcomes you are looking for.

The third thing you have to ask is, “Okay, where am I going to put all of this compute and storage?” So you need a localized environment that’s controlled and secure. That’s where we came up with the edge data center. It was those drivers that led to the foundation from which we are building out support for all of those use cases.

Gardner: Linda, what are you seeing from this marriage of modernized OT and IT and taking advantage of edge computing? Do you have an ability yet to measure and describe the business outcome benefits?

Hands-free data at your fingertips 

Salinas: This has been the perfect project for us to embark on our IT-OT journey with HPE and CBT, and all of our ecosystem partners. Number one, we’ve been having fun.

Two, we have been learning about what is possible and what this technology can do for us. When we visited the HPE Innovation Lab, we saw very quickly the application of IT and OT across other industries. But when we saw the sensored pump, that was our “aha moment.” That’s when we learned what IoT and its impact meant to Texmark.

Learn More About Transforming the Oil and Gas Industry

As for key performance indicators (KPIs), we gather data and we learn more about how we can employ IoT across our business. What does that mean? That means moving away from the clipboard and spreadsheet toward having the data wherever we need it — having it available at our fingertips, having the data do analytics for us, and telling us, “Okay, this is where you need to focus during your next precious turnaround time.”

The other thing is, this IoT project is helping us attract and retain talent. Right now it’s a very competitive market. We just hired a couple of new operators, and I truly believe that the tipping point for them was that they had seen and heard about our IoT project and the “refinery of the future” goal. They found out about it when they Googled us prior to their interview.

We just hired a new maintenance manager who has a lot of IoT experience from other plants, and that new hire was intrigued by our “refinery of the future” project.

Finally, our modernization work is bringing in new business for Texmark. It’s putting us on the map with other pioneers in the industry who are dipping their toe into the water of IoT. We are getting national and international recognition from other chemical plants and refineries that are looking to also do toll processing.

They are now seeking us out because of the competitive edge we can offer them, and for the additional data and automated processes that that brings to us. They want the capability to see real-time data, and have it do analytics for them. They want to be able to experiment in the IoT arena, too, but without having to do it necessarily inside their own perimeter.

Gardner: Linda, please explain what toll processing is and why it’s a key opportunity for improvement?

Collaboration creates confidence 

Salinas: Texmark produces dicyclopentadiene, butyl alcohol, propyl alcohol, and some aromatic solvents. But alongside the usual products we produce and sell, we also provide “toll processing services.” The analogy I like to tell my friends is, “We have the blender, and our customers bring the lime and tequila. Then we make their margaritas for them.”

So our customers will bring to us their raw materials. They bring the process conditions, such as the temperatures, pressures, flows, and throughput. Then they say, “This is my material, this is my process. Will you run it in your equipment on our behalf?”

When we are able to add the IoT component to toll processing, when we are able to provide them data that they didn’t have whenever they ran their own processes, that provides us a competitive edge over other toll processors.


Gardner: And, of course, your optimization benefits can go right to the bottom line, so a very big business benefit when you learn quickly as you go.

Stan, tell us about the cultural collaboration element, both from the ecosystem provider team support side as well as getting people inside of a customer like Texmark to perhaps think differently and behave differently than they had in the past.

Galanski: It’s all about human behavior. If you are going to make progress in anything of this nature, you are going to have to understand the guy sitting across the table from you, or the person out in the plant who is working in some fairly challenging environments. Also, the folks sitting at the control room table with a lot of responsibility for managing the processes with lots of chemicals for many hours at a time. 

So we sat down with them. We got introduced to them. We explained to them our credentials. We asked them to tell us about their job. We got to know them as people; they got to know us as people.

We established trust, and then we started saying, “We are here to help.” They started telling us their problems, asking, “Can you help me do this?” And we took some time, came up with some ideas, and came back and socialized those ideas with them. Then we started attacking the problem in little chunks of accomplishments.

We would say, “Well, what if we do this in the next two weeks and show you how this can be an asset for you?” And they said, “Great.” They liked the fact that there was quick turnaround time, that they could see responsiveness. We got some feedback from them. We developed a little more confidence and trust in each other, and then more things started pouring out a little at a time. We went from one department to another, and pretty soon we began understanding and learning about all aspects of this chemical plant.

It didn’t happen overnight. It meant we had to be patient, because it’s an ongoing operation. We couldn’t inject ourselves unnaturally. We had to be patient and take it in increments so we could actually demonstrate success.

And over time you sometimes can’t tell the difference between us and some of their workers because we all come to meetings together. We talk, we collaborate, and we are one team — and that’s how it worked.

Gardner: On the level of digital transformation — when you look at the bigger picture, the strategic picture — how far along are they at Texmark? What would be some of the next steps? 

All systems go digital 

Galanski: They are now very far along in digital transformation. As I outlined earlier, they are utilizing quite a few technologies that are available — and not leaving too many on the table. 

So we have edge computing. We have very strong ubiquitous communication networks. We have software analytics able to analyze the data. They are using very advanced asset integrity applications to be able to determine where every piece, part, and element of the plant is located and how it’s functioning.

I have seen other companies where they have tried to take this only one chapter at a time, and they sometimes have multiple departments working on these independently. They are not necessarily ready to integrate or to scale it across the company.

But Texmark has taken a corporate approach, looking at holistic operations. All of their departments understand what’s going on in a systematic way. I believe they are ready to scale more easily than other companies once we get past this first phase.

Gardner: Linda, any thoughts about where you are and what that has set you up to go to next in terms of that holistic approach?

Salinas: I agree with Stan. From an operational standpoint, now that we have some sensored pumps for predictive analytics, we might sensor all of the pumps associated with any process, rather than just a single pump within that process.

That would mean in our next phase that we sensor another six or seven pumps, either for toll processing or our production processes. We won’t just do analytics on the single pump and its health, lifecycle, and when it needs to be repaired. Instead we look at the entire process and think, “Okay, not only will I need to take this one pump down for repair, but instead there are two or three that might need some service or maintenance in the next nine months. But the fuller analytics can tell me that if I can wait 12 months, then I can do them all at the same time and bring down the process and have a more efficient use of our downtime.”

I could see something like that happening.
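A hedged sketch of the scheduling decision Salinas describes: given predicted service dates for several pumps on one process, check whether deferring the earlier jobs, within a tolerance, lets everything be done in a single downtime window. The dates and the tolerance are invented for illustration and are not Texmark’s actual analytics.

    # Minimal sketch: decide whether predicted pump maintenance can be combined
    # into one downtime window. The dates and deferral tolerance are illustrative.

    from datetime import date

    predicted_service = {
        "pump-101": date(2020, 3, 15),
        "pump-102": date(2020, 6, 1),
        "pump-103": date(2020, 5, 20),
    }

    def consolidation_plan(predictions, max_deferral_days=120):
        """If the spread between the earliest and latest predicted dates fits
        inside the allowed deferral, recommend a single combined window."""
        earliest, latest = min(predictions.values()), max(predictions.values())
        if (latest - earliest).days <= max_deferral_days:
            return f"Combine all {len(predictions)} pumps in one window around {latest}."
        return "Spread is too large: schedule separate maintenance windows."

    print(consolidation_plan(predicted_service))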

Galanski: We have already seen growth in this area where the workers have seen us provide real-time data to them on hands-free mobile and wearable devices. They say, “Well, could you give me historical data over the past hour, week, or month? That would help me determine whether I have an immediate problem, not just one spike piece of information?”

So they have given us immediate feedback on that and that’s progressing.

Gardner: Peter, we are hearing about a more granular approach to sensors at Texmark, with the IoT edge getting richer. That means more data being created, and more historical analysis of that data.

Are you therefore setting yourself up to be able to take advantage of things such as AI, ML, and the advanced automation and analytics that go hand in hand? Where can it go next in terms of applying intelligence in new ways?

Deep learning from abundant data 

Moser: That’s a great question because the data growth is exponential. As more sensors are added, video is incorporated into their workflows, and they connect more of the workers and employees at Texmark, their data and data-traffic needs are going to grow exponentially.

But with that comes an opportunity. One is to better manage the data so they get value from it, because the data is not all the same or it’s not all created equal. So the opportunity there is around better data management, to get value from the data at its peak, and then manage that data cost effectively.

That massive amount of data is also going to allow us to better train the current models and create new ones. The more data you have, the better you can do ML and potentially deep learning.

Learn More About Transforming the Oil and Gas Industry

Lastly, we need to think about new insights that we can’t create today. That’s going to give us the greatest opportunity, when we take the data we have today and use it in new and creative ways to give us better insights, to make better decisions, and to increase health and safety. Now we can take all of the data from the sensors and videos and cross-correlate that with weather data, for example, and other types of data, such as supply chain data, and incorporate that into enabling and empowering the salespeople, to negotiate better contracts, et cetera.

So, again, the art of the possible starts to manifest itself as we get more and more data from more and more sources. I’m very excited about it.

Gardner: What advice do you have for those just beginning similar IoT projects? 

Galanski: I recommend that they have somebody lead the group. You can try and flip through the catalogs and find the best vendors who have the best widgets and start talking to them and bring them on board. But that’s not necessarily going to get you to an end game. You are going to have to step back, understand your customer, and come up with a holistic approach of how to assign responsibilities and specific tasks, and get that organized and scheduled. 

There are a lot of parties and a lot of pieces on this chess table. Keeping them all moving in the right direction and at a cadence that people can handle is important. And I think having one contractor, or a department head in charge, is quite valuable.

Salinas: You should rent a party bus. And what I mean by that is when we first began our journey, actually our first lecture, our first step onto the learning curve about IoT, was when Texmark rented a party bus and put about 13 employees on it and we took a field trip to the HPE Innovation Lab.

When Doug Smith, our CEO, and I were invited to visit that lab we decided to bring a handful of employees to go see what this IoT thing was all about. That was the best thing we ever could have done, because the excitement was built from the beginning.


They saw, as we saw, the art of the possible at the HPE IoT Lab, and the ride home on that bus was exciting. They had ideas. They didn’t even know where to begin, but they had ideas just from what they had seen and learned in a two-hour tour about what we could do at Texmark right away. So the engagement, the buy-in, was there from the beginning, and I have to say that was probably one of the best moves we have made to ensure the success of this project.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise. 


How the composable approach to IT aligns automation and intelligence to overcome mounting complexity

The next edition of the BriefingsDirect Voice of the Innovator podcast series explores the latest developments in hybrid IT and datacenter composability.

Bringing higher levels of automation to data center infrastructure has long been a priority for IT operators, but it’s only been in the past few years that they have actually enjoyed truly workable solutions for composability.

The growing complexities from hybrid cloud, the pressing need to conserve IT spend, and the need to find high-level IT skills mean there is no going back. Indeed, there is little time for even a plateau in innovation around composability.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

Stay with us now as we explore how pervasive increasingly intelligent IT automation and composability can be with Gary Thome, Vice President and Chief Technology Officer for Composable Cloud at Hewlett Packard Enterprise (HPE). The interview is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Gary, what are the top drivers making composability top-of-mind and something we’re going to see more of?

Thome: It’s the same drivers for businesses as a whole, and certainly for IT. First, almost every business is going through some sort of digital transformation. And that digital transformation is really about transforming to leverage IT to connect with their customers and make IT the primary way they interact with customers and make revenue.

Digital transformation drives composability 

Thome

With that, there’s a desire to go very fast: to get connections to customers much faster and to add features for those customers faster via software.

The whole idea of digital transformation and becoming a digital business is driving a whole new set of behaviors in the way enterprises run – and as a result – in the way that IT needs to support them.

From the IT standpoint, there is this huge driver to say, “Okay, I need to be able to go faster to keep up with the speed of the business.” That is a huge motivator. 

But at the same time, there’s the constant desire to keep IT cost in line, which requires higher levels of automation. That automation — along with a desire to flexibly align with the needs of the business — drives what we call composability. It combines the flexibility of being able to configure and choose what you need to meet the business needs — and ultimately customer needs — and do it in a highly automated manner.

Gardner: Has the adoption of cloud computing models changed the understanding of how innovation takes place in an IT organization? There used to be long periods between upgrades or a new revision. Cloud has given us constant iterative improvements. Does composability help support that in more ways?

Thome: Yes, it does. There has been a general change in the way of thinking, of shifting from occasional, large changes to frequent, smaller changes. This came out of an Agile mindset and a DevOps environment. Interestingly enough, it’s permeated to lots of other places outside of IT. More companies are looking at how to behave that way in general.

How to Achieve Composability Across Your Datacenter

On the technology side, the desire for rapid, smaller changes means a need for higher levels of automation. It means automating the changes to the next desired state as quickly as possible. All of those things lend themselves toward composability.

Gardner: At the same time, businesses are seeking economic benefits via reducing unutilized IT capacity. It’s become about “fit-for-purpose” and “minimum viable” infrastructure. Does composability fit into that, making an economic efficiency play?

Thome: Absolutely. Along with the small, iterative changes – of changing just what you need when you need it – comes a new mindset with how you approach capacity. Rather than buying massive amounts of capacity in bulk and then consuming it over time, you use capacity as you need it. No longer are there large amounts of stranded capacity.

Composability is key to this because it allows you through technical means to gain an environment that gets the desired economic result. You are simply using what you need when you need it, and then releasing it when it’s not needed — versus pre-purchasing large amounts of capacity upfront.

Innovation building blocks 

Gardner: As an innovator yourself, Gary, you must have had to rethink a lot of foundational premises when it comes to designing these systems. How did you change your thinking as an innovator to create new systems that accommodate these new and difficult requirements?

Thome: Anyone in an innovation role has to always challenge their own thinking, and say, “Okay, how do I want to think differently about this?” You can’t necessarily look to the normal sources for inspiration because that’s exactly where you don’t want to be. You want to be somewhere else.

For myself, it may mean looking to any other walk of life, to what I do, read, and learn, as possible sources of inspiration for rethinking the problem.

Interestingly enough, there is a parallel in the IT world of taking applications and decomposing them into smaller chunks. We talk about microservices that can be quickly assembled into larger applications — or composed, if you want to think of it that way. And now we’re able to disaggregate the infrastructure into elements, too, and then rapidly compose them into what’s needed. 

Those are really parallel ideas, going after the same goal. How do I just use what I need when I need it — not more, not less? And then automate the connections between all of those services.

That, in turn, requires an interface that makes it very easy to assemble and disassemble things together — and therefore very easy to produce the results you want. 

When you look at things outside of the IT world, you can see patterns of it being easy to assemble and disassemble things, like children’s building blocks. Before, IT tended to be too complex. How do you make the IT building blocks easier to assemble and disassemble such that it can be done more rapidly and more reliably?

Gardner: It sounds as if innovations from 30 years ago are finding their way into IT. Things such as simultaneous engineering, fit-for-purpose design and manufacturing, even sustainability issues of using no more than you need. Were any of those inspirations to you?

Cultivate the Agile mindset

Thome: There are a variety of sources, everything from engineering practices, to art, to business practices. They all start swiveling around in your head. How do I look at the patterns in other places and say, “Is that the right kind of pattern that we need to apply to an IT problem or not?”

The historical IT perspective of elongated steps and long development cycles led to the end-place of very complex integrations to get all the piece-parts put together. Now, the different, Agile mindset says, “Why don’t you create what you need iteratively but make sure it integrates together rapidly?”

Can you imagine trying to write a symphony and have 20 different people develop their own parts? There’s separate trombone, or timpani, or violin. And then you just say, “Okay, play it together once, and we will start debugging when it doesn’t sound right.” Well, of course that would be a disaster. If you don’t think about it upfront, do you want to develop it as-you-go?

The same thing needs to go into how we develop IT — with both the infrastructure and applications. That’s where the Agile and the DevOps mindsets have evolved to. It’s also very much the mindset we have in how we develop composability within HPE.

Gardner: At HPE, you began bringing composability to servers and the data center stack, trying to make hardware behave more like software, essentially. But it’s grown past that. Could you give us a level-set of where we are right now when it comes to the capability to compose the support for doing digital business?

Intelligent, rapid, template-driven assembly 

Thome: Within the general category of composability, we have this new thing called Composable Infrastructure, and we have a product called HPE Synergy. Rather than treat the physical resources in the data center as discrete servers, storage arrays, and switches, it looks at them as pools of compute capacity, storage capacity, fabric capacity, and even software capacity or images of what you want to use.

Each of those things can be assembled rapidly through what we call software-defined intelligence. It knows how to assemble the building blocks — compute, storage, and networking — into something interesting. And that is template-driven. You have a template, which is a description of what you want the end-state to look like, what you want your infrastructure to look like when you are done.

And the templates say, “Well, I need a compute block of this size, this much storage, and this kind of network.” Whatever you want. “And then, by the way, I want this software loaded on it.” And so forth. You describe the whole thing as a template, and then we can assemble it based on that description.

That approach is one we’ve innovated on in the lab from the infrastructure standpoint. But what’s very interesting is that modern cloud applications use a very similar philosophical approach to assembly. In fact, just like with modern applications, you say, “Well, I’m assembling a group of services or elements, and I am going to create it all via APIs.” Well, guess what? Our hardware is driven by APIs also. It’s an API-level assembly of the hardware, to compose it into whatever you want. It’s the same idea of composing, and it applies everywhere.
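
To make the template-and-API idea concrete, here is a minimal Python sketch of what a template-driven composition request might look like. The endpoint, token, payload fields, and response shape are hypothetical illustrations only; they are not the actual HPE Synergy or HPE OneView interfaces.

```python
# Minimal sketch of template-driven, API-level composition.
# The endpoint, token, and payload fields below are hypothetical, for illustration only;
# they are not the actual HPE Synergy or HPE OneView interfaces.
import requests

COMPOSER_URL = "https://composer.example.com/rest/templates"   # hypothetical endpoint
API_TOKEN = "example-token"                                     # hypothetical credential

# Describe the desired end state: how much compute, storage, and fabric,
# and which software image to lay down on top of it.
template = {
    "name": "web-tier-profile",
    "compute": {"cores": 16, "memoryGiB": 128},
    "storage": {"capacityGiB": 500, "tier": "ssd"},
    "network": {"fabric": "prod-fabric", "bandwidthGbps": 10},
    "softwareImage": "ubuntu-22.04-web",
}

def compose_from_template(tpl: dict) -> str:
    """Submit the template; the composer assembles matching resources from its pools."""
    resp = requests.post(
        COMPOSER_URL,
        json=tpl,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["id"]   # hypothetical response shape

if __name__ == "__main__":
    print("Composed instance:", compose_from_template(template))
```

The point is the shape of the interaction: the template is a declarative description of the end state, and the composer, not the operator, works out which physical resources to bind to it.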

Millennials lead the way

Gardner: The timing for this is auspicious on many levels. Just as you’re making the crafting of hardware solutions possible, we’re dealing with an IT labor shortage. If, like many Millennials, you have a cloud-first mentality, you will find kinship with composability — even though you’re not necessarily composing a cloud. Is that right?

Thome: Absolutely. That cloud mindset — or services mindset, or as-a-service mindset, whatever you want to call it — is one where this is a natural way of thinking. Younger people may have grown up with this mindset; it wouldn’t occur to them to think any differently. Others may have to shift to a new way of thinking.

This is one of the challenges for organizations. How do they shift — not just the technologies or the tools — but the mindset within the culture in a different direction?


You have to start with changing the way you think. It’s a mindset change to ask, “How do I think about this problem differently?” That’s the key first thing that needs to happen, and then everything falls behind that mindset.

It’s a challenge for any company doing transformation, but it’s also true for innovation — shifting the mindset.

Gardner: The wide applicability of composability is impressive. You could take this composable mindset, use these methods and tools, and you could compose a bare-metal, traditional, on-premises data center. You could compose a highly virtualized on-premises data center. You could compose a hybrid cloud, where you take advantage of private cloud and public cloud resources. You can compose across multiple types of private and public clouds. 

Cross-cloud composability

Thome: We think composability is a very broad, useful idea. When we talk to customers, they say, “Okay, well, I have my own kind of legacy estate, my legacy applications. Then I have my new applications, and a new way of thinking, that are being developed. How do I apply principles and technologies that are universal across them?”

The idea of being able to say, “Well, I can compose the infrastructure for my legacy apps and also compose my new cloud-native apps, and I get the right infrastructure underneath.” That is a very appealing idea.

But we also take the same ideas of composability and say, “Well, I would ultimately even want to compose across multiple clouds.” More and more enterprises are leveraging clouds in various shapes and forms. They are increasing the number of clouds they use. We are trending to hybrid cloud, where people are using different clouds for different reasons. They may actually have a single application that spans multiple clouds, including on-premises clouds.

When you get to that level, you start thinking, “Well, how do I compose my environment or my applications across all of those areas?” Not everybody is necessarily thinking about it that way yet, but we certainly are. It’s definitely something that’s coming.


Gardner: Providers are telling people that they can find automation and simplicity but the quid pro quo is that you have to do it all within a single stack, or you have to line up behind one particular technology or framework. Or, you have to put it all into one particular public cloud.

It seems to me that you may want to keep all of your options open and be future-proof in terms of what might be coming in a couple of years. What is it about composability that helps keep one’s options open?

Thome: With automation, there are two extremes that people wind up with. One is a great automation framework that promises you can automate anything. The most important word there is that you can; meaning, we don’t do it, but you can, if you are willing to invest all of the hard work into it. That’s one approach. The good news is that there are multiple vendors offering parts of the total automation technology. But it can be a very large amount of work to develop and maintain systems across that kind of environment.

On the other hand, there are automation environments where, “Hey, it works great. It’s really simple. Oh, by the way, you have to completely stay within our environment.” And so you are stuck within the confines of their rules for doing things.

Both of these approaches, obviously, have a very significant downside because any one particular environment is not going to be the sum of everything that you do as a business. We see both of them as wrong.

Real composability shines when it spans the best of both of those extremes. On the one hand, composability makes it very easy to automate the composable infrastructure, and it also automates everything within it. 

In the case of HPE Synergy, composable management (HPE OneView) makes it easy to automate the compute, storage, and networking — and even the software stacks that run on it — through a trivial interface. And at the same time, you want to integrate into the broader, multivendor automation environments so you can automate across all things.

You need that because, guaranteed, no one vendor is going to provide everything you want, which is the failing of the second approach I mentioned. Instead, what you want is to have a very easy way to integrate into all of those automation environments and automation frameworks without throwing a whole lot of work to the customer to do.

We see composability’s strength as being API-driven. It makes it easy to integrate into automation frameworks, and, secondly, it completely automates the things that are underneath that composable environment. You don’t have to do a lot of work to get things operating.

So we see that as the best of those two extremes that have historically been pushed on the market by various vendors.

Gardner: Gary, you have innovated and created broad composability. In a market full of other innovators, have there been surprises in what people have done with composability? Has there been follow-on innovation in how people use composability that is worth mentioning and was impressive to you? 

Success stories 

Thome: One of my goals for composability was that, in the end, people would use it in ways I never imagined. I figured, “If you do it right, if you create a great idea and a great toolset, then people can do things with it you can’t imagine.” That was the exciting thing for me.

One customer created an environment where they used the HPE composable API in the Terraform environment. They were able to rapidly spin up a variety of different environments based on self-service mechanisms. Their scientist users actually created the IT environments they needed nearly instantly.

It was cool because it was not something that we set out specifically to do. Yet they were saying it solves business needs and their researchers’ needs in a very rapid manner.

Another customer recently said, “Well, we just need to roll out really large virtualization clusters.” In their case, it’s a 36-node cluster. It used to take them 21 days. But when they shifted to HPE composability, they got it down to just six hours.
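
As a rough illustration of why that kind of time compression is possible, the sketch below loops one declarative profile across a pool of 36 nodes. The node names, profile fields, and the apply step are hypothetical stand-ins, not the Terraform integration or HPE tooling these customers actually used.

```python
# Sketch: rolling one declarative profile across a 36-node pool.
# Node names and the apply step are hypothetical; a real rollout would call
# the composable infrastructure API (or a Terraform provider) instead.
from concurrent.futures import ThreadPoolExecutor
import time

CLUSTER_PROFILE = {
    "softwareImage": "esxi-cluster-image",   # hypothetical image name
    "compute": {"cores": 32, "memoryGiB": 256},
    "network": {"fabric": "cluster-fabric"},
}

NODES = [f"node-{i:02d}" for i in range(1, 37)]   # a 36-node cluster, as in the example

def apply_profile(node: str, profile: dict) -> str:
    """Stand-in for one API call that binds the profile to a node."""
    time.sleep(0.1)   # simulate the provisioning call
    return f"{node}: applied {profile['softwareImage']}"

def roll_out(nodes, profile):
    # Because every node gets the same declarative description, the rollout is
    # a parallel loop rather than 36 hand-built configurations.
    with ThreadPoolExecutor(max_workers=8) as pool:
        for result in pool.map(lambda n: apply_profile(n, profile), nodes):
            print(result)

if __name__ == "__main__":
    roll_out(NODES, CLUSTER_PROFILE)
```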


Obviously it’s very exciting to see such real benefits to customers: getting IT resources into use faster and minimizing the burden on the people associated with getting things done.

When I hear those kinds of stories come back from customers — directly or through other people — it’s really exciting. It says that we are bringing real value to people to help them solve both their IT needs and their business needs.

Gardner: You know you’re doing composable right when you have non-IT people able to create the environments they need to support their requirements, their apps, and their data. That’s really impressive.

Gary, what else did you learn in the field from how people are employing composability? Any insights that you could share?

Thome: It’s in varying degrees. Some people get very creative in doing things that we never dreamed of. For others, the mindset shift can be challenging, and they are just not ready to shift to a different way of thinking, for whatever reasons.

Gardner: Is it possible to consume composability in different ways? Can you buy into this at a tactical level and a strategic level?

Thome: That’s one of the beautiful things about the HPE composability approach. The answer is absolutely, “Yes.” You can start by saying, “I’m going to use composability to do what I always did before.” And the great news is it’s easier than what you had done before. We built it with the idea of assembling things together very easily. That’s exactly what you needed.

Then, maybe later, some of the more creative things that you may want to do with composability come to mind. The great news is it’s a way to get started, even if you haven’t yet shifted your thinking. It still gives you a platform to grow from should you need to in the future.

Gardner: We have often seen that those proof-points tactically can start the process to change peoples’ mindsets, which allows for larger, strategic value to come about.

Thome: Absolutely. Exactly right. Yes.

Gardner: There’s also now at HPE, and with others, a shift in thinking about how to buy and pay for IT. The older ways of IT, with long revision cycles and forklift upgrades, meant paying was capital-intensive.

What is it about the new IT economics, such as HPE GreenLake Flex Capacity purchasing, that align well with composability in terms of making it predictable and able to spread out costs as operating expenses?

Thome: These two approaches are perfect together; they really are. They are hand-in-glove and best buddies. You can move to the new mindset of, “Let me just use what I need and then stop using it when I don’t need it.”

That mindset — being able to make rapid, small changes in capacity or code or whatever you are doing — also allows a new economic perspective. And that is, “I only pay for what I need, when I need it; and I don’t pay for the things I am not using.”

Our HPE GreenLake Flex Capacity service brings that mindset to the economic side as well. We see many customers choose composability technology and then marry it with GreenLake Flex Capacity as the economic model. They can bring together that mindset of making minor changes when needed, and only consuming what is needed, to both the technical and the economic side.

We see this as a very compelling and complementary set of capabilities — and our customers do as well.

Gardner: We are also mindful nowadays, Gary, about the edge computing and the Internet of Things (IoT), with more data points and more sensors. We also are thinking about how to make better architectural decisions about edge-to-core relationships. How do we position the right amount of workload in the right place for the right requirements?

How does composability fit into the edge? Can there also be an intelligent fabric network impact here? Unpack for us how the edge and the intelligent network foster more composability.

Composability on the fly, give it a try 

Thome: I will start with the fabric. So the fabric wants to be composable. From a technology side, you want a fabric that allows you to say, “Okay, I want to very dynamically and easily assemble the network connections I want and the bandwidth I want between two endpoints — when I want them. And then I want to reconfigure or compose, if you will, on the fly.”

We have put this technology together, and we call it Composable Fabric. I find this super exciting because you can create a mesh simply by connecting the endpoints together. After that, you can reconfigure it on the fly, and the network meets the needs of the applications the instant you need them.


This is the ultimate of composability, brought to the network. It also simplifies the management operation of the network because it is completely driven by the need from the application. That is what directly drives and controls the behavior of the network, rather than having a long list of complex changes that need to be implemented in the network. That tends to be cumbersome and winds up being unresponsive to the real needs of the business. Those changes take too long. This is completely driven from the needs of application down into the needs of the fabric. It’s a super exciting idea, and we are really big on it, obviously.
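
To show the application-driven idea in miniature, here is a small Python toy model of a fabric whose connections are composed and recomposed on demand. The class and method names are invented for illustration and are not the HPE Composable Fabric interface.

```python
# Toy model of an application-driven fabric: connections are composed and
# recomposed on demand. All names here are illustrative, not a real fabric API.
from dataclasses import dataclass, field

@dataclass
class Fabric:
    # (endpoint_a, endpoint_b) -> provisioned bandwidth in Gbps
    links: dict = field(default_factory=dict)

    def compose(self, a: str, b: str, gbps: int) -> None:
        """Create or resize a logical connection between two endpoints."""
        self.links[frozenset((a, b))] = gbps
        print(f"link {a} <-> {b} set to {gbps} Gbps")

    def release(self, a: str, b: str) -> None:
        """Tear the connection down when the application no longer needs it."""
        self.links.pop(frozenset((a, b)), None)
        print(f"link {a} <-> {b} released")

if __name__ == "__main__":
    fabric = Fabric()
    # The application asks for what it needs, when it needs it...
    fabric.compose("web-tier", "db-tier", gbps=10)
    # ...and recomposes on the fly when its needs change.
    fabric.compose("web-tier", "db-tier", gbps=25)
    fabric.release("web-tier", "db-tier")
```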

Now, the edge is also interesting because we have been talking about conserving resources. There are even fewer resources at the edge, so conservation can be even more important. You only want to use what you need, when you need it. Being able to make those changes incrementally, when you need them, is the same idea as the composability we have been talking about. It applies to the edge as well. We see the edge as ultimately an important part of what we do from a composable standpoint.

Gardner: For those folks interested in exploring more about composability — methodologies, technologies, and getting some APIs to experiment with — what advice do you have for them? What are some good ways to unpack this and move into a proof-of-concept project?

Thome: We have a lot of information on our website, obviously, about composability. There is a lot you can read up on, and we encourage anybody to learn about composability through those materials.

They can also try composability because it is completely software-defined and API-driven. You can go in and play with the composable concepts through software. We suggest people try it directly. But they can also connect it to their automation tools and see how they can compose things via the automation tools they might already be using for other purposes. It can then extend into all things composable as well.

I definitely encourage people to learn more, but especially to move into the “doing phase.” Just try it out and see how easy it is to get things done.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.



SAP Ariba COO James Lee on the best path to an intelligent and talented enterprise


The next BriefingsDirect enterprise management innovations discussion explores the role of the modern chief operating officer (COO) and how they are tasked with creating new people-first strategies in an age of increased automation and data-driven intelligence.

We will now examine how new approaches to spend management, process automation, and integrated procurement align with developing talent, diversity, and sustainability.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To learn more about the leadership trends behind making globally dispersed and complex organizations behave in harmony, please welcome James Lee, Chief Operating Officer at SAP Ariba and SAP Fieldglass. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: James, why has there never been a better time to bring efficiency and intelligence to business operations? Why are we in an auspicious era for bridging organizational and cultural gaps that have plagued businesses in the past?

Lee: If you look at the role of the modern COO, or anyone who is the head of operations, you are increasingly asked to be the jack-of-all-trades. If you think about the COO, they are responsible for budgeting and planning, for investment decisions, organizational and people topics, and generally orchestrating across all aspects of the business. To do this at scale, you really need to drive standardization and best practices, and this is why efficiency is so critical. 

Lee

Now, in terms of the second part of your question, which has to do with intelligence, the business is increasingly asking us not just to report the news, but to make the news. What does that mean? It means you have to offer insights to different parts of the business and help them make the right decisions; things that they wouldn’t know otherwise. That requires leveraging all the data available to do thorough analysis and provide the data that all the functional leaders can use to make the best possible decisions.

Gardner: It seems that the COO is a major consumer of such intelligence. Do you feel like you are getting better tools?

Make sense of data

Lee: Yes, absolutely. We talk about being in the era of big data, so the information you can get from systems — even from a lot of devices, be it mobile devices or sensors – amounts to an abundance and explosion of data. But how to make sense of this data is very tricky.

As a COO, a big part of what I do is not only collect the data from different areas, but then to make sense of it, to help the business understand the insights behind this data. So I absolutely believe that we are in the age where we have the tools and the processes to exploit data to the fullest.

Gardner: You mentioned the COO needs to be a jack-of-all-trades. What in your background allows you to bring that level of Renaissance man, if you will, to the job?

Lee: As COO of SAP Ariba and now SAP Fieldglass, too, I have operational responsibilities across our entire, end-to-end business. I’m responsible for helping with our portfolio strategy and investments, sales excellence, our commercial model, data analytics, reporting, and then also our learning and talent development. So that is quite a broad purview, if you will. 

I feel like the things I have done before at SAP have equipped me with the tools and the mindset to be successful in this position. Before I took this on, I was COO and general manager of sales for the SAP Greater China business. In that position, I doubled the size of SAP’s business in China, and we were also involved in some of the largest product launches in China, including SAP S/4HANA.

Before that, having been with SAP for 11 years, I had the opportunity to work across North America, Europe, and Asia in product and operating roles, in investment roles, and also sales roles.

Before joining SAP, I was a management consultant by training. I had worked at Hewlett Packard and then McKinsey and Company.

Gardner: Clearly most COOs of large companies nowadays are tasked with helping extend efficiency into a global environment, and your global background certainly suits you for that. But there’s another element of your background that you didn’t mention – which is having studied and been a concert pianist. What do you think it is about your discipline and work toward a high level of musical accomplishment that also has a role in your being a COO?

The COO as conductor 

Lee: That’s a really interesting question. You have obviously done your research and know my background. I grew up studying classical music seriously, as a concert pianist, and it was always something that was very, very important to me. I feel even to this day — I obviously have pursued a different profession — that it is still a very key and critical part of who I am.

If I think about the two roles — as a COO and as a musician — there are actually quite a few parallels. To start, as a musician, you have to really be in tune with your surroundings and listen very carefully to the voices around you. And I see the COO team ultimately as a service provider, it’s a shared services team, and so it’s really critical for me to listen to and understand the requirements of my internal and external constituents. So that’s one area where I see similarities.

Secondly, the COO role in my mind is to orchestrate across the various parts of the business, to produce a strong and coherent whole. And again, this is similar to my experiences as a musician, in playing in ensembles, and especially in large symphonies, where the conductor must always know how to bring out and balance various musical voices and instruments to create a magical performance. And again, that’s very similar to what a COO must do.

Gardner: I think it’s even more appropriate now — given that digital transformation is a stated goal for so many enterprises – to pursue orchestration and harmony and organize across multiple silos.

Does digital transformation require companies to think differently to attain that better orchestrated whole?

Lee: Yes, absolutely. From the customers I have spoken to, digital transformation, to be successful, has to be a top-down movement. It has to be an end-to-end movement. It’s no longer a case where management just says, “Hey, we want to do this,” without the full support and empowerment of people at the working level. Conversely, you can have people at the project-team level who are very well-intentioned, but without senior executive support, it doesn’t work.


In cases where I have seen a lot of success, companies have been able to break down those silos, paint an overarching vision and mission for the company, bring everyone onto the same bandwagon, empower and equip them with the tools to succeed, and then drive with ruthless execution. And that requires a lot of collaboration, a lot of synergy across the full organization.

Gardner: Another lens through which to view this all is a people-centric view, with talent cultivation. Why do you think that that might even be more germane now, particularly with younger people? Many observers say Millennials have a different view of things in many ways. What is it about cultivating a people-first approach, particularly to the younger workers today, that is top of mind for you?

People-first innovation 

Lee: We just talked about digital transformation. If we think about technology, no matter how much technology is advancing, you always need people to be driving the innovation. This is a constant, no matter what industry you are in or what you are trying to do.

And it’s because of that, I believe, that the top priority is to build a sustainable team and to nurture talent. There are a couple of principles I really adhere to as I think about building a “people-first team.”

First and foremost, it’s very important to go beyond just seeking work-life balance. In this day and age, you have to look beyond that and think about how you help the people on your team derive meaning from what they do.

This goes beyond just work-life balance; it has to do with social responsibility, personal enrichment, personal aspiration, and finding commonality and community among your peers. And I find that now — especially with the younger generation — a lot of what they do is virtual. We are not necessarily in the office all together at the same time. So it becomes even more important to build a sense of connectivity, especially when people are not all present in the same room. And this is something that Millennials really care about.

Also, for Millennials, it’s important at the beginning of their careers to have a strong true north. Meaning that they need great mentors who can coach them through the process, work with them, develop them, and give them a good sense of belonging. That’s something I always try to do on my team: ensure that young people get mentorship early in their careers and have dedicated one-on-one time. There should always be a sounding board for them to air their concerns or questions.

Gardner: Being a COO, in your case, means orchestrating a team of other operations professionals. What do you look for in them, in their background, that gives you a sense of them being able to fulfill the jack-of-all-trades approach?

Growth mindset drives success

Lee: I tend to think about successful individuals, or teams, along two metrics. One is domain expertise. Obviously if you are in charge of, say, data analytics then your background as a data scientist is very important. Likewise, if you are running a sales operation, a strong acumen in sales tools and processes is very important. So there is obviously a domain expertise aspect of it.

But equally, if not more important, is another mentality. I tend to believe in people who are of a growth-mindset as opposed to a closed-mindset. They tend to achieve more. What I mean by that are people who tend to want to explore more, want to learn more, who are open to new suggestions and new ways of doing things. The world is constantly changing. Technology is changing. The only way to keep up with it is if you have a growth mindset.

It’s also important for a COO team to have a service mentality, of understanding who your ultimate customer is — be it internal or external. You must listen to them, understand what the requirements are, and then work backward and look at what you can create or what insights you can bring to them. That is very critical to me.


Gardner: I would like to take advantage of the fact that you travel quite a bit, because SAP Ariba and SAP Fieldglass are global in nature. What are you seeing in the field? What are your customers telling you?

Lee: As I travel the globe, I have the privilege of supporting our business across the Americas, Europe, the Middle East, and Asia, and it’s fascinating to see that there are a lot of differences and nuances — but there are a lot of commonalities. At the end of the day, what people expect from procurement or digital transformation are more or less very similar.

There are a couple of trends I would like to share with you and your listeners. One is, when we talk about procurement, end users are increasingly looking for a marketplace-like experience. Even though they are in a business-to-business (B2B) environment, they are used to the business-to-consumer (B2C) user experience. It’s like what they get on Amazon where they can do shopping, they have a choice, it’s easy to compare value, and features — but at the same time you have all of the policies and compliance that comes with B2B. And that’s something that is beginning to be the lowest common denominator.

Secondly, when we talk about Millennials, I think the Millennial experience is pushing everyone to think differently about the user experience. And not just for SAP Ariba and SAP Fieldglass, but for any software. How do we ensure that there is easy data access across different platforms — be it your laptop, your desktop, your iPad, your mobile devices? They expect easy, seamless access across all their different platforms. So that is something I call the Millennial experience.

Contingent, consistent labor

Thirdly, I have learned about the rise of contingent labor in a lot of regions. We, obviously, are very honored to now have Fieldglass as part of the SAP Ariba family. And I have spent more and more time with the Fieldglass team.

In the future, there may even be a situation where there are few permanent, contracted employees. Instead, you may have a lot of project-based, or function-based, contingent laborers. We hear a lot about that, and we are focused on how to provide them with the tools and systems to manage the entire process with contingent labor.

Gardner: It strikes me as an interesting challenge for COOs — how do you best optimize and organize workers who work with you, but not for you?

Lee: Right! It’s very different because when you look at the difference between indirect and direct procurement, you are talking about goods and materials. But when you are talking about contingent labor, you are talking about people. And when you talk about people, there is a lot more complexity than if you are buying a paper cup, pen, or pencil.

You have to think about what the end-to-end cycle looks like to the [contingent workers]. It extends from how you recruit them, to on-boarding, enabling, and measuring their success. Then, you have to ensure that they have a good transition out of the project they are working on.

SAP Fieldglass is one of the few solutions in the market that really understands that process and can adapt to the needs of contingent laborers. 

Gardner: One more area from your observations around the globe: the definition and concept of the intelligent enterprise. That must vary somewhat; certain cultures or business environments might embrace information, data, and analytics differently than others. Do you see that? Does it mean different things to different people?

Intelligent enterprise on the rise

Lee: At its core, if you look at the evolution of enterprise software and solutions, we have gone from very transactional systems — where we are the system of booking and record, just tracking what is being done — to automation, and now to what we call the intelligent enterprise. That means making sense of all the information and data to create insight.

A lot of companies are looking to transform into an intelligent enterprise. That means you need to access an abundance of data around you. We talked about the different sources — through sensors, equipment, customers, suppliers, sometimes even from the market and your competitors — a 360-degree view of data. 

Then how do you have a seamless system that analyzes all of this data and actually makes sense of it? The intelligent enterprise takes it to the next level, which is leveraging artificial intelligence (AI). There is no longer a person or a team sitting in front of a computer and doing Excel modeling. This is the birth of the age of AI.

Now we are looking at predictive analytics, where, for example, at SAP Ariba, we look for patterns and trends on how you conduct procurement, how you contract, and how you do sourcing. We then suggest actions for the business to take. And that, to me, is an intelligent enterprise.
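
As a minimal illustration of the kind of pattern-spotting described here, the sketch below flags suppliers whose recent prices drift well above their historical average and suggests a review. The data, thresholds, and field names are hypothetical and are not SAP Ariba functionality.

```python
# Sketch: flag suppliers whose recent prices drift above their historical norm.
# Data and thresholds are made up for illustration; not an SAP Ariba feature.
from statistics import mean

purchase_history = {                      # supplier -> unit prices, oldest first
    "Acme Office": [4.10, 4.05, 4.20, 4.15, 5.10, 5.25],
    "Globex Logistics": [102.0, 101.5, 103.0, 102.2, 102.8, 101.9],
}

def suggest_actions(history: dict, recent: int = 2, threshold: float = 0.10):
    """Yield a suggested action when recent prices exceed the baseline by `threshold`."""
    for supplier, prices in history.items():
        baseline = mean(prices[:-recent])
        latest = mean(prices[-recent:])
        drift = (latest - baseline) / baseline
        if drift > threshold:
            yield f"{supplier}: prices up {drift:.0%} vs. baseline -- consider re-sourcing"

if __name__ == "__main__":
    for action in suggest_actions(purchase_history):
        print(action)
```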

Gardner: How do you view the maturity of AI, in a few years, as an accelerant to the COO’s job? How important will AI be for COOs specifically?

Lee: AI is absolutely a critical, critical topic as it relates to — not just procurement transformation — but any transformation. There are four main areas addressed with AI, especially the advanced AI that we are seeing today.

Number one, it allows you to drive deeper engagement and adoption of your solution and what you are doing. If you think about how we interact with systems through conversations, sometimes even through gestures, that’s a different level of engagement than we had before. You are involving the end user in a way that was never done before. It’s interactive, it’s intuitive, and it avoids a lot of cost when it comes to training.

Secondly, we talk a lot about decision-making. AI gives you access to a broad array of data and you can uncover hidden insights and patterns while leveraging it.

Thirdly, we talked about talent, and I believe that having AI helps you attract and retain talent with state-of-the-art technology. We have self-learning systems that help you institutionalize a lot of knowledge.

And last, but not least, it’s all about improving business outcomes. Think about how you increase efficiencies with personalized, context-specific information. In the context of procurement, you can improve approvals and accuracy, especially when you are dealing with contracts. An AI robot is a lot less prone to error than a human working on a contract. We have the statistics to prove it.

At the end of the day, we look at procurement and we see an opportunity to transform it from a very tactical, transactional function into a very strategic function. And what that means is AI can help you automate a lot of the repetitive tasks, so that procurement professionals can focus on what is truly value-additive to the organization.

Gardner: We seem to be on the cusp of an age where we are going to determine what it is that the people do best, and then also determine what the machines do best — and let them do it.

This whole topic of bots and robotic process automation (RPA) is prevalent now across the globe. Do you have any observations about what bots and RPA are doing to your customers of SAP Fieldglass and SAP Ariba?

Sophisticated bot benefits

Lee: When we talk about bots, there are two types that come to mind. One is on the shop floor, in a manufacturing setting, where you have physical bots replacing humans in what they do.

Secondly, you have virtual bots, if you will. For example, at SAP Ariba, we have bots that analyze data, make sense of the patterns, and provide insights and decision-making support to our end users.

In the first case, I absolutely believe that the bots are getting more sophisticated. The kinds of tasks that they can take on, on the shop floors, are a lot more than what they were before — and it drives a lot of efficiency, cuts costs, and allows employees to be redeployed to more strategic, higher value-added roles. So I absolutely see that as a positive trend going forward.

When it comes to the artificial, virtual bots, we see a lot of advancement now, not just in procurement, but in the way they are being used across sales and human resources systems. I was talking to a company just last week and they are utilizing virtual bots to do the recruiting and interviewing process. Can you imagine that?


The next time that you are submitting your résumé to a company, on the other end of the line might not be a human that you are talking to, but actually a robot that’s screening you. And it’s now to the level of sophistication where it’s hard for you to tell the difference.

Gardner: I might feel better that there is less subjectivity. If the person interviewing me didn’t get a good night’s sleep, for example, I might be okay with that. So it’s like the Turing test, right? Do you know whether it’s a real person or a virtual bot?

Before we close out, James, do you have any advice for other COOs who are seeking to take advantage of all the ways that digital transformation is manifesting itself? What advice do you have for COOs who are seeking to up their game?

It’s up to you to up your COO game

Lee: Fundamentally, the COO role is what you make of it. A lot of companies don’t even have a COO. It’s a unique role. There is no predefined job scope or job definition.

For me, a successful COO — at least in the way I measure myself — is about the business impact you have on the profit and loss (P&L). Everything that you do should have a direct impact on your top line, as well as your bottom line. And if you feel like the things you are doing are not directly impacting the P&L, then it’s probably time to reconsider some of those things.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: SAP Ariba.



How healthcare providers are taking cues from consumer expectations to improve patient experiences

The next BriefingsDirect healthcare insights discussion explores the shift medical services providers are making to improve the overall patient experience.

Taking a page from modern, data-driven industries that emphasize consumer satisfaction and ease, a major hospital in the New York metro area has embarked on a journey to transform healthcare-as-a-service.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To learn more about the surging importance and relevance for improving patient experiences in the healthcare sector using the many tools available to other types of businesses, we are joined by Laura Semlies, Vice President of Digital Patient Experience, at Northwell Health in metro New York, and Julie Gerdeman, President at HealthPay24 in Mechanicsburg, Penn. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What’s driving a makeover in the overall medical patient experience?

Semlies: The trend we’re watching is recognizing the patient as a consumer. Now, healthcare systems are even calling patients “consumers” — and that is truly critical.

Semlies

In our organization we look at [Amazon founder and CEO] Jeff Bezos’ very popular comment about having “customer obsession” — and not “competition obsession.” In doing so, you better understand what the patient needs and what the patient wants as a consumer. Then you can begin to deliver a new experience. 

Gardner: This is a departure. It wasn’t that long ago when a patient was typically on the receiving end of information and care and was almost expected to be passive. They were just off on their way after receiving treatment. Now, there’s more information and transparency up-front. What is it about the emphasis on information sharing that’s changed, and why?

Power to the patients

Semlies: A lot of it has to do with what patients experience in other industries, and they are bringing those expectations to healthcare. Almost every industry has fundamentally changed over the course of the last decade, and patients are bringing those changes and expectations into healthcare.

In a digital world, patients expect their data is out there, and they expect us to be using it to be more transparent, more personalized, and to deliver more curated experiences. But in healthcare we haven’t figured it out just yet — and that’s what digital transformation in healthcare means.

How do you take information and translate it into more meaningful and personalized services to get to the point where patients have experiences that drive better clinical outcomes?

Gardner: Healthcare then becomes more of a marketplace. Do you feel like you’re in competition? Could other providers of healthcare come in with a better patient experience and draw the patients away?

Semlies: For sure. I don’t know if that’s true in every market, but it certainly is in the market that I operate in. We live in a very competitive market in New York. The reality is if the patient is not getting the experience they want, they have choices, and they will opt for those choices. 

A recent study concluded that 2019 will be the year that patients choose who renders their care based on things that they do or do not get. Those things can range from the capability to book appointments online, to having virtual visits, to access to a patient portal with medical record information — or all of the above. 

And those patients are going to be making those choices tomorrow. If you don’t have those capabilities to treat the patient and meet their needs — you won’t get that patient after tomorrow.

Gardner: Julie, we’re seeing a transition to the healthcare patient experience similar to what we have seen in retail, where the emphasis is on an awesome experience. Where do you see the patient experience expanding next? What needs to happen to make it a more complete experience?

Gerdeman: Laura is exactly right. Patients are doing research upfront before providers interact with them, before they even call and book an appointment. Some 70 percent of patients spend that time to look at something online or make a phone call.

Competitive, clinical customer services

Gerdeman

We’re now talking about addressing a complete experience. That means everything from up-front research, through the clinical experience, and including the financial and billing experiences. It means end-to-end, from pre-service through post-service.

And that financial experience needs to be at or better than the level of experience they had clinically. Patients are judging their experience end-to-end, and it is competitive. We hear from healthcare providers who want to keep patients out of their competitors’ waiting rooms. Part of that is driving an improved experience, where the patient-as-consumer is informed and engaged throughout the process. 

Financially speaking, what does that mean? It means digital engagement — something simple, beautiful, and mobile that’s delivered via email or text. We have to meet the consumer whenever and wherever they are. That could be in the evening or early in the morning on their devices.

That’s how people live today. Those personalized and curated experiences with Google or Alexa, they want that same experience in healthcare.

Gardner: You don’t want to walk into a time machine and go back 30 to 40 years just because you go to the hospital. The same experience you can get in your living room should be there when you go to see your doctor. 

Laura, patient-centric care is complicated enough in just trying to understand the medical issues. But now we have a growing level of complexity about the finances. There are co-pays, deductibles, different kinds of insurance, and supplemental insurance. There are upfront cost estimates versus who knows what the bill is going to be in six months.

How do we fulfill the need for complete patient-centric services when we now need to include these complex financial issues, too?


Semlies: One way is to segment patients based on who they are at any moment. Patients can move very quickly from a healthy state to a state of chronic disease management. Or they can go from an episode where they need very intense care to quickly being at home. 

First, you need to understand where the patients’ pain points are across those different patient journeys.

Second is studying your data, looking back and analyzing it to understand what those ranges of responsibility look like. Then you can start to articulate and package those things, and you have norms with which to do early and targeted financial counseling.

The final part is being able to communicate, even as things change in a person’s course of treatment that have an impact on their financial responsibility. That kind of dialogue in our industry is almost non-existent right now.

Sharing data and dialogue

Among the first things patients do is search based on their insurance carrier. Well, insurance isn’t enough. It’s not enough to know you are going to see doctor so-and-so for x with insurance plan B. You need to know far more than that to really get an accurate sense of what’s going on. Our job is to figure out how to do that for patients.

We have to get really good at figuring out how to deliver the right level of detail about you and what you are seeking. We need to know enough about our healthcare system, what the costs are and what the options are, so that we can engage in dialogue.

It could be a digital dialogue, but we have to engage in a dialogue. The reality is, we know even in a digital situation that patients only want to share a certain amount of information. But they also want accurate information. So what’s that balance? How do you achieve it? I think the next 12 to 18 months are going to be about figuring that out. 

Transparency isn’t only posting a set of hospital charges; it’s just not. That’s a step in the right direction, and there is now a mandate saying that transparency is important, which we all agree with. Yet we still need meaningful transparency, which includes the ability to start to control your options and make decisions in line with a patient’s financial health goals, too.

Gardner: So, the right information, to the right person, in the right context, at the right time. To me, that means a conversation based on shared data, because without data all along the way you can’t get the context.

What is the data sharing and access story behind the patient-centric experience story?


Semlies: If we look at the back-end of the journey, one of the biggest problems right now is the difference between an explanation of benefits and a statement. They don’t say the same thing, and they are coming from two different places. It’s very difficult to explain everything to a patient when you don’t have that explanation of benefits (EOB) in front of you. 

What we’re going to see in the next months and years — as more collaboration is needed between payers and health systems and providers – is a new standard around how to communicate. Then we can perhaps have an independent dialogue with a patient about their responsibilities. 

But we don’t own the benefits structure. There are a lot of moving parts in there. To independently try to control that conversation across health systems, we couldn’t possibly get it right.

So one of the strategies we are pursuing is how do we work with each and every one of our health systems to try and drive innovation around data sharing and collaboration so that we can get the right answer for a shared patient. 

That “consumer” is shared between us as providers as well as the payer plan that hosts the patient. Then you need to add another layer of extra complexity around the employer population. Those three players need to be working carefully together to be able to solve this problem. It’s not going to be a single conversation.

Gardner: This need to share collaborative data across multiple organizations is a big problem. Julie, how do you see this drive for a customer-centric shared data equation playing out?

Healthy interoperability 

Gerdeman: Technology and innovation are going to drive the future of this. It’s an opportunity for companies to come together. That means interoperability, whether you’re a payments provider like HealthPay24, or you’re providing statement information, you’re providing estimates information. Across those fronts, all of that data relates to one patient. Technology and innovation can help solve these problems.

We view interoperability as the key, and we hear it all the time. Northwell and our other provider customers are asking for that transparency and interoperability. We, as part of that community, need to be interoperable and integrate in order to present data in a simple way that a consumer can understand. 

When you’re a consumer you want the information that you need at that moment to make a decision. If you can get it proactively — all the better. Underlying all this, though, is trust. It’s something I like to talk about. Transparency is needed because there is lack of trust.

Transparency is just part of the trust equation. If you present transparency and you do it consistently, then the consumer — the patient — has trust. They have immediate trust when they walk into a provider or doctor’s office as a patient. Technology has an opportunity to help solve that.

Gardner: Laura, you’re often at the intercept point with patients. They are going to be asking you – the healthcare provider — their questions. They will look to you to be the funnel into this large ecosystem behind the scenes.

What would you like to see more of from those other players in that ecosystem to make your job easier, so that you can provide that right level of trusted care upfront to the patient?

Simplify change and choice

Semlies: Collaboration and interoperability in this space are essential. We need to see more of that.

The other thing that we need — and it’s not necessarily from those players, but from the collective whole — is a sense of modeling “if-then” situations. If this happens, what will then happen?

By leveraging such process components, we can model things really well and in a very sophisticated fashion. And that can work in many areas with so many choices and paths that you could take. So far, we don’t do any of that in price transparency with our patients. And we need to, because the boundaries are not tight.

What you charge – from copay to coinsurance – can change as you’re moving from observation to inpatient, or inpatient back to observation. It changes the whole balance card for a patient. We need the capability to model that out and articulate the why, how, and when — and then explain what the impact is. It’s a very complicated conversation.

But we need to figure out all of those options along with the drivers of costs. It has to be made simple so that patients can engage, understand, and anticipate it. Then, ultimately, we can explain to them their responsibility.
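
A back-of-the-envelope sketch of the “if-then” modeling Semlies describes might look like the following, where the plan and benefit figures are invented for illustration and real benefit structures are far more involved.

```python
# Sketch of "if-then" modeling: how a patient's responsibility shifts when a stay
# is billed as observation vs. inpatient. Benefit numbers are invented examples.
from dataclasses import dataclass

@dataclass
class Plan:
    copay_observation: float      # flat copay if billed as observation
    inpatient_deductible: float   # deductible applied to an inpatient admission
    coinsurance: float            # patient share after the deductible

def patient_responsibility(billed: float, status: str, plan: Plan) -> float:
    if status == "observation":
        return min(plan.copay_observation, billed)
    if status == "inpatient":
        remaining = max(billed - plan.inpatient_deductible, 0.0)
        return plan.inpatient_deductible + remaining * plan.coinsurance
    raise ValueError(f"unknown status: {status}")

if __name__ == "__main__":
    plan = Plan(copay_observation=250.0, inpatient_deductible=1500.0, coinsurance=0.20)
    billed = 12_000.0
    for status in ("observation", "inpatient"):
        owed = patient_responsibility(billed, status, plan)
        print(f"If the stay is billed as {status}, the patient owes about ${owed:,.2f}")
```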

I often hear that patients are slow to pay, or struggle to pay. Part of what makes them slow to pay is the confusion and complexity around all of this information. I think patients want to pay their share.

Earn patients’ trust

It’s just the complexity around this makes it difficult, and it creates a friction point that shouldn’t be there. We do have a trust situation from an administrative perspective. I don’t think our patients trust us in regard to the cost of their care, and to what their share of the care is. 

I don’t think they trust their insurers and payers tremendously. So we have to earn trust. And it’s going to mean that we need to be way more accurate and upfront. It’s about the basics. Did you give me a bill that I can understand? Did I have options when I went to pay it? We don’t even do that easy stuff well today.


I used to joke that we should be paying patients to pay us because we made it so difficult. We are now in a better place. We are putting in the foundation so that we can earn trust and credibility. We are beginning the dialogue of, “What do you need as a patient?” With that information, we can go back and create the tools to engage with these patients. 

We have done more than 1,000 hours of patient focus group studies on financial health issues, along with user testing to understand what they need to feel better about their financial health. There is clinical health, there are clinical outcomes — but there is also financial health. Those are new words to the industry.

If I had a crystal ball, I’d say we’re going to be having new conversations around what a patient needs to feel secure, that they understood what they were getting into, and that they knew about their ability to pay it or had other options, too. 

Meet needs, offer options

Gerdeman: Laura made two points that I think are really important. The first is around learning, testing, and modeling — so we can look at the space differently. That means using predictive analytics upfront in specific use cases to anticipate patient needs. What do they need, and what works? 

We can use isolated, specific use-cases to test using technology — and learn. For example, we have offered up-front discounts for patients. If they pay in full, they get a discount. We learned that there are certain cases where you can collect more by offering a discount. That’s just one use-case, but predictive analytics, testing, and learning are the key. 

The second thing that is dead-on is around options. Patients want options. Patients want to know, “Okay, what are my choices?” If that’s an emergency situation, we don’t have the option to research it, but then soon after, what are the choices?

Most American consumers have less than $500 set aside for medical expenses. Do they have the option of a self-service and flexible payment plan? Can they get a loan? What are their choices to make an informed choice? Perhaps at home at their convenience.

Those are two examples where technology can really help play a role in the future. 
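
To put rough numbers on the discount example Gerdeman mentions, the sketch below compares expected collections with and without a prompt-pay discount. The payment rates and balances are hypothetical; the point is only the shape of the test-and-learn comparison.

```python
# Sketch: compare expected collections with and without a prompt-pay discount.
# Payment rates and balances are hypothetical, for illustration only.

def expected_collection(balance: float, pay_rate: float, discount: float = 0.0) -> float:
    """Expected dollars collected for one account under a given payment rate."""
    return balance * (1 - discount) * pay_rate

if __name__ == "__main__":
    balance = 800.0
    baseline = expected_collection(balance, pay_rate=0.45)                 # no discount
    discounted = expected_collection(balance, pay_rate=0.70, discount=0.10)
    print(f"No discount:  ${baseline:,.2f} expected per account")
    print(f"10% discount: ${discounted:,.2f} expected per account")
    # If the discount lifts the payment rate enough, total collections go up,
    # which is the effect this kind of testing is designed to detect.
```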

Gardner: You really can’t separate the economics from healthcare. We’re in a new era where economics and healthcare blend together, the decision-making for both of them comes together.

We talked about the need for data and how it can help collaboration and process efficiency. It also allows for looking at that data and applying analytics, learning from it, then applying those lessons back. So, it’s a really exciting time.

But I want to pause for a moment. Laura, your title of “Vice President of Digital Patient Experience” is unique. What does it take to become a Vice President of Digital Patient Experience?

Journey to self-service 

Semlies: That is a great question. The Digital Patient Experience Office at Northwell is a new organization inside of the health system. It’s not an initiative- or a program-focused office where it’s one and done, where you go in and you deliver something and then you’re done. 

We are rallying around the notion that the patient expects to be able to interact with us digitally. To do so, we need to transform our entire organization — culturally, operationally, and technically — to be able to accommodate that transformation. 

Before, I was responsible for revenue cycle transformation of the Northwell Health system. So I do have a financial background. However, what set me up for pursuing this digital transformation was the recognition that self-service was going to disrupt the traditional revenue cycle. We need to have a new capability around self-service that inherently allows the consumer to do what they want and need to manage their administrative interactions differently with the health system. 


I was a constant voice for the last decade in our health system, saying, “We need to do this to our infrastructure so that we can be able to rationalize and standardize our core applications that serve the patient, including the revenue cycle systems, so that we can interoperate in a different way and create a platform by which patients can self-serve.”

And we’re still in that journey, but we’re at a point where we can begin to engage very differently. I’m working to solve three fundamental questions at the heart of the primary pain-points, or friction points, that patients have.

Patients tell us these three things: “You never remember who I am. I have been coming here for the last 10 years and you still ask me for my name, my date of birth, my insurance, my clinical history. You should know that by now.”

Two, they say, “I can’t figure out how to get in to see the right doctor at the right time at the right location for me. Maybe it’s a great location for you, or a great appointment time for you. But what if it doesn’t work for me? How do I fix that?”

And, third, they say, “My bills are confusing. The whole process of trying to pay a bill or get a question answered about one is infuriating.”

Whenever you talk to anyone in our health system — whether it’s our chief patient experience officer, CEO, chief administrative officer, or COO — those are the three things that were also coming out of customer service, Press Ganey [patient satisfaction] results, and complaints. When you have direct conversations with patients, such as through family advisory councils, the complaints weren’t about the clinical stuff. 

Digital tools to ease the pain

It was all on the administrative burden that we were putting on patients, and this anonymity that patients were walking through our halls with. Those are what we needed to focus on first. And so that’s what we’re doing.

We will be bringing out a set of tools so our patients will be able to manage their appointments in a very systematic way. They will be able to view and manage their appointments online, with the ability to book, change, and cancel anything that they need to. They will be able to see those appointments, get directions to them, and communicate with the administrative offices.

The second piece of the improvement is around the “You never remember who I am” problem, where patients go to a doctor and get the blank clipboard to fill out. Then, regardless of whether they were there yesterday or are seeing a new doctor, they get the same blank clipboard.

We’re focused on getting away from the clipboard to remembering information and not seeking the same information twice — only if there is the potential that information has changed. Instead of a blank form, we present them the opportunity to revise. And they do it remotely on their time. So we are respecting them by being truly prepared when they come to the office.

The other side of “never remembering who I am” is proper authentication of digital identity. It’s not just attaching a name with the face virtually. You have to be able to authenticate so that information can be shared with the patient at home. It means being able to have digital interactions that are not superficial. 

The third piece [of our patient experience improvement drive] is the online payment portal, for which we use HealthPay24. The vision is not only for patients to be able to pay one bill, but for any party that has a responsibility within the healthcare system — whether it’s a lab, ambulance, hospital, or physician — to be paid in a single transaction using our digital tools. We take it one step further by giving it a retail experience, with such features as saving the card on file, so if you paid a bill last week you shouldn’t have to rekey those digits into the system. 

We plan to take it even further. For example, providing more options to pay — whether by a loan, a payment plan, or such services as Apple Pay and Google Pay. We believe these should be table stakes, but we’re behind and are putting in those pieces now just to catch up. 

Our longer-term vision goes far deeper. We expect to go all the way back to the point of when patients are just beginning to seek care. How do I help them understand what their financial responsibility and options are at that point, before they even have a bill in our system? This is the early version of digital transformation.

Champion patient loyalty

Gerdeman: Everything Laura just talked about comes down to one word — loyalty. What they are putting in place will drive patient loyalty, just like consumer loyalty. In the retail space we have seen loyalty to certain brands because of how consumers interact with them, as an emotional experience. It comes down to a combination of human elements and technology to create the raving fans, in this case, of Northwell Health.

Gardner: We have seen the digital user experience approach be very powerful in other industries. For example, when I go to my bank digitally I can see all my transactions. I know what my balances are. I can set payment schedules. If I go to my investment organization, I can see the same thing with my retirement funds. If I go to my mortgage holder, same thing. I can see what I owe on my house, and maybe I want a second property and so I can immediately initiate a new loan. It’s all there. We know that this can be done.

Julie, what needs to happen to get that same level of digital transparency and give the power to the consumer to make good choices across the healthcare sector?

Rx: Tech for improved healthcare

Gerdeman: It requires a forward-looking view into what’s possible. And we’re seeing disruption. At the recent HIMSS 2019 conference [in February in Orlando], 45,000 people gathered, thinking like champions of healthcare — about what can be done and what’s possible. To me, that’s where you start. 

Like Laura said, many are playing catch-up. But we also need to be leapfrogging into the future. What emerging technologies can change the dynamic? Artificial intelligence (AI) and what’s happening there, for example. How can we better leverage predictive analytics? We’re also examining blockchain: what can a distributed ledger do, and what role can it play?

I’m really excited about what’s possible with marrying emerging technology, while still solving the nuts and bolts of interoperability and integration. There is hard work in integration and interoperability to get systems talking to one another. You can’t get away from that ugly part of the job, but then there is an exciting future part of job that I think is fundamental. 

Laura also talked about culture and cultural shift. None of it can happen without an embrace of change management. That’s also hard because there are always people and personalities. But if you can embrace change management along with the technology disruption, new things can happen.

Semlies: Julie mentioned the hard, dirty work behind the scenes. That data work is really fundamental, and that has prevented healthcare from becoming more digital. People are represented by their data in the digital space. You only know people when you understand their data.

In healthcare — at least from a provider perspective — we have been pretty good about collecting information about a patient’s clinical record. We understand them clinically.

We also do a pretty decent job at understanding the patient from a reimbursement and charges perspective. We can get a bill out the door and get the bill paid. If we don’t get the bill paid at first and it comes down to a secondary responsibility, we collect that information and get those bills out, too. The interaction is there. 

What we don’t do well is managing processes across hundreds of systems. There are hundreds of systems in any big healthcare system today. The bridges and connections between those data systems are just not there. So a patient often finds themselves interacting with each and every one of them.

For example, I am a patient as the mom of three kids. I am a patient as the daughter of two aging parents. I am wife to a husband who I am interacting with. And I am myself my own patient. The data that I need to deal with — and the systems I need to interact with — when I am booking an appointment, versus paying a bill, versus looking for lab results, versus trying to look for a growth chart on a child — I am left to self-navigate across this world. It’s very complex and I don’t understand it as a patient. 

Our job is to figure out how to manage tomorrow and the patient of tomorrow who wants to interact digitally. We have to be able to integrate all of these different data points and make that universally accessible.

Electronic medical record (EMR) portals deal more with the clinical interactions. Some have gotten good at doing some of the administrative components, but certainly not all of them. We need to create something that is far broader and has the capability to connect the data points that live in silos today — both operationally as well as technically. This has to be the mandate. 

Open the digital front door

Gardner: You don’t necessarily build trust when you are asking the patient to be the middleware, to be the sneaker-ware, walking between the PC and the mainframe. 

Let’s talk about some examples. In order to get cultural change, one of the tried-and-true methods is to show initial progress, have a success story that you can champion. That then leads to wider adoption, and so forth. What is Northwell Health’s Digital Front Door Team? That seems an example of something that works and could be a harbinger of a larger cultural shift.

Semlies: Our Digital Front Door Team is responsible for creating tools and technology to provide a single access point for our patients. They won’t have to have multiple passwords or multiple journeys in order to interact with us.

Over the course of the last year, we’ve established a digital platform that all of our digital technologies and personnel connect to. That last point is really important because when a patient interacts with you digitally, there is a core expectation today that if they have told you something digitally, as soon as they show up in person, you are going to know it, use it, and remember it. The technology needs to extend the conversation or journey of experience as opposed to starting over. That was really critical for our platform to provide.

Such a platform should consist of a single sign-on (SSO) capability, an API management tool, and a customer relationship management (CRM) database, from which we can learn all of the information about a patient. The CRM data drives different kinds of experiences that can be personalized and curated, and that data lives in the middle of the two data topics we discussed earlier. We collect that data today, and the CRM tool brokers all of this so it can be in the hands of every employee in the health system. 
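
To make that architecture concrete, here is a minimal, hedged sketch of the pattern being described: one authenticated identity, an API layer over siloed systems, and a CRM-style record that brokers the combined view. Every system name, field, and value below is hypothetical; in-memory dicts stand in for real scheduling, billing, and forms back ends.

```python
# Hypothetical stand-ins for siloed back-end systems.
SCHEDULING = {"pt-001": [{"provider": "Dr. A", "when": "2019-04-02T09:00"}]}
BILLING    = {"pt-001": [{"facility": "Lab", "balance": 42.50}]}
FORMS      = {"pt-001": ["pre-visit health history"]}

def build_patient_context(patient_id):
    """Aggregate what the health system already knows about one patient."""
    return {
        "patient_id": patient_id,
        "appointments": SCHEDULING.get(patient_id, []),
        "open_balances": BILLING.get(patient_id, []),
        "forms_due": FORMS.get(patient_id, []),
    }

# After single sign-on, the patient portal and front-desk staff read the same
# record, so the patient is never asked to repeat information the system
# already holds -- the patient stops being the "middleware" between systems.
print(build_patient_context("pt-001"))
```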

The last piece was to put meaningful tools around the friction points we talked about, such as for appointment management. We can see availability of a provider and book directly into it with no middleman. This is direct booking, just like when I book an appointment on OpenTable. No one has to call me back. They may just send a digital reminder.

Gardner: And how has the Digital Front Door Team worked out? Do you have any metrics of success?

Good for patients, good for providers

Semlies: We took an agile approach to implementing it. Our first component was putting in the online payment capability with HealthPay24 in July 2018. Since then, we have collected approximately $25 million. In just the last six months, there have been more than 46,000 transactions. In December, we began a sign-in benefit so patients can log in and see all of their balances across Northwell. 

We had 3,000 people sign in to that process in the first several weeks, and we’re seeing evidence that our collections are starting to go up.

We implemented our digital forms tool in September 2018. We collected more than 14,000 digital forms in the first few months. Patients are loving that capability. The next version will be an at-home version so you will get text messages saying, “We see you have booked an appointment. Here are your forms to prepare for your visit.” They can get them all online. 

We are also piloting biometrics so that when you first show up at your appointment you will have the opportunity to have your picture taken. It’s iris-scanning and deep facial recognition technology so that will be the method of authentication. That will also be used more over time for self check-ins and eventually to access the ultimate portal. 

The intent was to deploy as early as there was value to the patient. Then over time all of those services will be connected as a single experience. Next to come is improved appointment management, with the capability to book appointments online, as well as to change, manage, and see all appointments via a connection to the patient portal.

All of those connection points will be rendered through the same single sign-in by the end of this quarter, both on our website, https://www.northwell.edu/, and via a proprietary mobile app that will come out in the app stores.

Gardner: Those metrics and rapid adoption show that a good patient experience isn’t just good for the patient — it’s good for the provider and across the entire process. Julie, is Northwell Health unique in providing the digital front door approach?

Gerdeman: We are seeing more healthcare providers adopt this approach, with one point of access into their systems, whether you are finding a doctor or paying a bill. We have seen in our studies that seven out of 10 patients only go to a provider’s website to pay a bill. 

From a financial perspective, we are working hard with leaders like Laura whose new roles support the digital patient experience. Creating that experience drives adoption, and that adoption drives improved collections. 

Ease-of-use entertains and retains clients

Semlies: This channel is extremely important to us from a patient loyalty and retention perspective. It’s our opportunity to say, “We have heard you. We have an obligation to provide you tools that are convenient, easy to use, and, quite frankly, delight you.”

But the portal is not the only channel. We recognize that we have to be in lots of different places from the adoption perspective. The portal is not the only place every patient is going. There will be opportunities for us to populate what I refer to as the book-now button. And the book-now button cannot be exclusive to the Northwell digital front door. 

I need to have that book-now button in the hands of every payer agent who is on the phone talking to a patient, or in their digital channel or membership. I need to have it out in the Zocdocs of the world, and in any other open scheduling application out there. 

I need to have ratings and reviews. We need to be multichannel in funneling patients in, but once we get you in we have to give you tools and resources that surprise and delight you, and that make re-engagement with somebody else harder, because we make it so easy for you to use our health system. 

And we have to be portable so you can take it with you if you need to go somewhere. The concept is that we have to be full service, and we want to give you all of the tools so you can be happy about the service you are getting — not just the clinical outcome but the administrative service, too.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HealthPay24.

How HPC supports ‘continuous integration of new ideas’ for optimizing Formula 1 car design

The next BriefingsDirect extreme use-case for high-performance computing (HPC) examines how the strictly governed redesign of Formula 1 race cars relies on data center innovation to coax out the best in fluid dynamics analysis and refinement.

We’ll now learn how Alfa Romeo Racing (formerly Alfa Romeo Sauber F1 Team) in Hinwil, Switzerland leverages the latest in IT to bring hard-to-find but momentous design improvements — from simulation, to wind tunnel, to test track, and ultimately, to victory. The goal: To produce cars that are glued to the asphalt and best slice through the air.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to describe the challenges and solutions from the compute-intensive design of Formula 1 cars is Francesco Del Citto, Head of Computational Fluid Dynamics Methodology for Alfa Romeo Racing, and Peter Widmer, Worldwide Category Manager for Moonshot/Edgeline and Internet of Things (IoT) at Hewlett Packard Enterprise (HPE). The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Why does Alfa Romeo Racing need to prepare for another car design?

Del Citto

Del Citto: Effectively, it’s a continuous design process. We never stop, especially on the aerodynamic side. And what every Formula 1 team does is dictated by each race season and by the specific planning and concept of your car in terms of performance. 

For Formula 1 racing, the most important and discriminating factor in terms of performance is aerodynamics. Every Formula 1 team puts a lot of effort in designing the aerodynamic shape of their cars. That includes for brake cooling, engine cooling, and everything else. So all the airflow around and inside of the car is meticulously simulated to extract the maximum performance.

Gardner: This therefore becomes as much an engineering competition as it is a racing competition.

Engineered to race

Del Citto: Actually, it’s both. On the track, it’s clearly a racing competition between drivers and teams. But before you ever get to the track, it is an engineering competition in which the engineers both design the cars as well as the methods used to design the cars. Each Formula 1 team has its own closely guarded methodologies and processes – and they are each unique.

Gardner: When I first heard about fluid dynamics and aerodynamic optimization for cars, I was thinking primarily about reduction of friction. But this is about a lot more, such as the cooling but also making the car behave like a reverse airplane wing.

Tell us why the aerodynamic impacts are much more complicated than people might have appreciated.

Del Citto: It is very complicated. Most of the speed and lap-time reductions you gain are not on the straightaways. You gain over your competitors in how the car behaves in the corners. If you can increase the force of the air acting on the car — to push the car down onto the ground — then you have more force preventing the car from moving out of line in the corners.

Why use the force of the air? Because it is free. It doesn’t come with any extra weight. But it is difficult to gain such extra inertial control forces. You must generate them in an efficient way, without being penalized too much from friction.

It’s also difficult to generate such forces without breaking the rules, because there are rules. There are limits for designing the shapes of the car. You cannot do whatever you want. Still, within these rules, you have to try to extract the maximum benefits. 

The force the car generates is called downforce, which is the opposite of the lift force from the design of an airplane. The airplane has wings designed to lift. The racing car is designed to be pushed down to the ground. The more you can push to the ground, the more grip you have between the tires and the asphalt and the faster you can go in the corners before the friction gives up and you just slide.
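
As a rough, hedged illustration of why corners matter so much, downforce follows the same square-of-speed law as lift. The coefficient and reference area below are illustrative round numbers, not Alfa Romeo Racing figures.

```python
# Back-of-the-envelope downforce estimate with assumed values.
rho = 1.2      # air density, kg/m^3 (it drops at altitude, e.g. Mexico City)
cz_a = 3.5     # downforce coefficient x reference area, m^2 (assumed)
g = 9.81       # m/s^2

for kmh in (180, 250, 300):
    v = kmh / 3.6                          # convert to m/s
    downforce = 0.5 * rho * cz_a * v**2    # same form as lift, pointed down
    print(f"{kmh} km/h -> {downforce/1000:.1f} kN "
          f"(~{downforce/g:.0f} kg pressing the car onto the track)")

# Because downforce grows with the square of speed, the gains show up most
# in fast corners -- which is exactly where lap time is found.
```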

Gardner: And how fast do these cars go nowadays?

Del Citto: They are very fast on the straight, around 360-370 km/hour (224-230 mph), especially in Mexico City, where the air is thin due to the altitude. You have less resistance and they have a very long straight there, so this is where you get the maximum speeds. 

But what is really impressive is the corner speed. In the corners you can now have a side acceleration force that is four to five times the force of gravity. It’s like being in a jet fighter plane. It’s really, really high.

Widmer: They wear their safety belts not only to hold them in in case of an accident, but also for when they brake and steer. Otherwise, they could be catapulted out of the car because the forces are close to 5 g. The efficiency of the car is really impressive, not only from the acceleration or high speeds. The other, invisible forces also differentiate a Formula 1 car from a street car.

Gardner: Peter, because this is an engineering competition, we know the simulations result in impactful improvements. And that then falls back on the performance of the data center and its level of innovation. Why is the high-performance computing environment such an essential part of the Formula 1 team?

Widmer

Widmer: Finding tens of thousandths of a second on the racetrack, where a lap time can be one minute or less, pushes the design of the cars to the extreme edge. To find the best design solution requires computer-aided design (CAD) guidance — and that’s where the data center plays an important part.

Those computational fluid dynamics (CFD) simulations take place in the data center. That’s why we are so happy to work together with Alfa Romeo Racing as a technology partner.

Gardner: Francesco, do you have constraints on what you can do with the computers as well as what you can do with the cars?

Limits to compute for cars

Del Citto: Yes, there are limits in all aspects of the car’s design, and especially in the aerodynamic research. That’s because aerodynamics is where you can extract more performance — but it’s also where you can spend more money.

The Formula 1 governing body, the FIA, a few years ago put in place ways of controlling the money spent on aerodynamic research. Instead of imposing a budget cap, they decided to put a limit on the resources you can use. The resources are both the wind tunnel and the computational fluid dynamics. It’s a tradeoff between the two: the more wind tunnel you use, the less computational power you can use, and vice versa. So each team has its sweet spot, depending on its strategy. 
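
Here is a hedged sketch of the shape of that trade-off. The FIA's actual Aerodynamic Testing Restrictions use their own units and limits; the budget numbers below are invented purely to illustrate the decision each team faces.

```python
# Illustrative only: a linear trade between wind-tunnel use and CFD capacity.
WIND_TUNNEL_BUDGET = 60.0   # assumed wind-tunnel runs per testing period
CFD_BUDGET = 30.0           # assumed CFD solver units per testing period

def remaining_cfd(wind_tunnel_runs_used):
    """The more tunnel you use, the less solver capacity remains."""
    fraction_spent = wind_tunnel_runs_used / WIND_TUNNEL_BUDGET
    return max(0.0, CFD_BUDGET * (1.0 - fraction_spent))

for runs in (0, 20, 40, 60):
    print(f"{runs:>2} tunnel runs used -> {remaining_cfd(runs):.1f} CFD units left")

# Each team picks its own sweet spot on this line, which is why methodology
# (how much to trust simulation versus the tunnel) is itself a competitive choice.
```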

You have restrictions in how much computational capacity you can use to solve your simulations. You can do a lot of post-processing and pre-processing, but you cannot extract too much from that. The solving part, in which it tells you the performance results of the new car design, is what is limited.

Gardner: Peter, how does that translate into an HPE HPC equation? How do you continuously innovate to get the most from the data center, but without breaking the rules?

Widmer: We work with a competency center on the HPC to determine the right combination of CPU, throughput, and whatever it takes to get the end results, which are limited by the regulations.

We are very open on the platform requirements for not only Alfa Romeo Racing, but for all of the teams, and that’s based on the most efficient combination of CPU, memory, networking, and other infrastructure so that we can offer the CFD use-case.

It takes know-how about how to tune the CPUs, about the specifics of the CFD applications, and knowledge of the regulations, which together lead to success in CFD for Formula 1.

Gardner: Let’s hear more about that recipe for success. 

Memory makes the difference

Widmer: It’s an Intel Skylake CPU, which includes graphic cards onboard. That obviously is not used for the CFD use-case, but the memory we do use as a level-four memory cache. That then provides us extra performance, which is not coming from the CPU, which is regulated. Due to the high-density packaging of the HPE Moonshot solution — where we can put 45 compute notes in a 4.30 rack chassis — this is quite compact. And it’s just topped out at about 5,000-plus cores.

Del Citto: Yes, 5,760 cores. As Peter was saying before, the key factor here is the software. There are three main CFD software applications used by all the Formula 1 teams. 

The main limitation for this kind of software is always memory bandwidth, not computational power. It’s not about clock frequency. This is why the level-four cache gives the extra performance, even compared to a higher-spec Intel server CPU. The lower-spec, low-energy CPU version gives us the extra performance we need because of that extra memory cache.
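
A roofline-style estimate makes the point about bandwidth versus clock speed. All figures below are illustrative assumptions, not measured Moonshot or Alfa Romeo numbers.

```python
# Illustrative roofline sketch: a kernel can't run faster than either the
# compute roof or the memory roof, whichever is lower.
def attainable_gflops(peak_gflops, bandwidth_gbs, flops_per_byte):
    return min(peak_gflops, bandwidth_gbs * flops_per_byte)

cfd_intensity = 0.25  # FLOP per byte moved -- typical order for unstructured CFD

# "Big" server CPU: high peak compute, ordinary DRAM bandwidth (assumed values).
big_cpu = attainable_gflops(peak_gflops=800, bandwidth_gbs=100,
                            flops_per_byte=cfd_intensity)

# Lower-spec CPU whose on-package memory acts as a level-four cache, giving
# higher effective bandwidth to the working set (assumed values).
cached_cpu = attainable_gflops(peak_gflops=300, bandwidth_gbs=200,
                               flops_per_byte=cfd_intensity)

print(f"High-clock server CPU: ~{big_cpu:.0f} GFLOP/s attainable")
print(f"Cache-assisted CPU   : ~{cached_cpu:.0f} GFLOP/s attainable")
# 100 GB/s * 0.25 = 25 GFLOP/s versus 200 GB/s * 0.25 = 50 GFLOP/s: the memory
# roof, not peak compute, decides which node finishes the simulation first.
```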

Gardner: And this isn’t some workload you can get off of a public cloud. You need to have this on-premises? 

Del Citto: That’s right. The HPC facility is completely owned and run by us for the Formula 1 team. It’s used for research and even for track analysis data. We use it for multiple purposes, but it’s fully dedicated to the team.

It is not in the cloud. We have designed a building with a lot of electrical and cooling capacity. Consider that the wind tunnel fan — only the fan — uses 3 megawatts. We need to have a lot of electricity there.

Gardner: Do you use the wind tunnel to cool the data center?

Del Citto: Sort of. We use the same water to cool the wind tunnel and the data center. But the wind tunnel has to be cooled because you need the air at a constant temperature to have consistent tests.

Gardner: And Peter, this configuration that HPE has put together isn’t just a one-off. You’re providing the basic Moonshot design for other Formula 1 teams as well?

A winning platform

Widmer: Yes, the solution and fit-for-regulations design was so compelling that we managed to get 6 out of 10 teams to use the platform. We can say that at least the first three teams are on our customer list. Maybe the other ones will come to us as well, but who knows?

We are proud that we can deliver a platform to a sport known for such heavy competition and that is very technology-oriented. It’s not comparable to any other sport because you must consistently evolve, develop, and build new stuff. The evolution never stops in Formula 1 racing.

For a vendor like HPE, it’s really a very nice environment. If they have a new idea that can give a team a small competitive advantage, we can help them do it. And that’s been the case for 10 years now.

Let’s figure out how much faster we can go, and then let’s go for it. These teams are literally open-minded to new solutions, and they are eager to learn about what’s coming down the street in technology and how could we get some benefits out of it. So that’s really the nice story around it.

Gardner: Francesco, you mentioned this is a continuous journey. You are always looking for new improvements, and always redesigning.

Now that you have a sophisticated HPC environment for CFD and simulations, what about taking advantage of HPC data center for data analysis? For using artificial intelligence (AI) and machine learning (ML)?

Is that the next stage you can go to with these powerful applications? Do you further combine the data analysis and CFD to push the performance needle even further?

Del Citto: We generate tons of data — from experiments, the wind tunnel, the CFD side, and from the track. The cars are full of sensors. During a practice run, there are hundreds of pressure sensors around the car. In the wind tunnel, there are 700 sensors constantly running. So, as you can imagine, we have accumulated a lot of data.

Now, the natural step will be how we can use it. Yes, this is something everyone is considering. I don’t know where this will bring us. There is nothing else I can comment on at the moment.

Gardner: If they can put rules around the extent to which you can use a data center for AI, for example, it could be very powerful.

Del Citto: It could be very powerful, yes. You are suggesting something to the rule-makers now. Obviously, we have to work with what we have now and see what will come next. We don’t know yet, but this is something we are keeping our eyes on, yes.

Gardner: Good luck on your redesign for the 2019 season of Formula 1 racing, which begins in March 2019.

Widmer: Thanks a lot.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

Data-driven and intelligent healthcare processes improve patient outcomes while making the IT increasingly invisible

The next BriefingsDirect intelligent solutions discussion explores how healthcare organizations are using the latest digital technologies to transform patient care and experiences.

When it comes to healthcare, time is of the essence and every second counts, but healthcare is a different game today. Doctors and clinicians, once able to focus exclusively on patients, are now being pulled into administrative tasks that can eat into their capability to deliver care.

To give them back their precious time, innovative healthcare organizations are turning to a new breed of intelligent digital workspace technologies. We are now joined by two leaders who will share their thoughts on how these solutions change the game and help transform healthcare as we know it.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

Please welcome Mick Murphy, Vice President and Chief Technology Officer at WellSpan Health, and Christian Boucher, Director and Strategist-Evangelist for Healthcare Solutions at Citrix. The interview is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

An integrated healthcare system with more than 19,000 employees serving Central Pennsylvania and Northern Maryland, WellSpan Health consists of 1,500 physicians and clinicians, a regional behavioral health organization, a homecare organization, eight respected hospitals and more than 170 patient care locations.

Here are some excerpts:

Gardner: Christian, precision medicine is but one example of emerging trends that target improved patient care, in this case specifically to treat illnesses with more specialized and direct knowledge. What is it about intelligent workspace solutions that helps new approaches, such as precision medicine, deliver more successful outcomes?

Boucher: We investigated precision medicine to better understand how such intricate care was being delivered. Because every individual is different — they have their own needs, whether on the medication side, the support side, or the genomic side — physicians and healthcare are beginning to identify better ways to treat patients as a customized experience. This comes not only in the clinical setting, but also when the patients get home. Knowing this helped us formulate our concept of the intelligent workspace.

Boucher

So, we decided to look at how users consume resources. As an IT organization — and I supported a healthcare organization for 12 years before joining Citrix — it was always our role to deliver resources to our users and anticipate how they needed to consume them. It’s not enough to define how they utilize those resources, but to identify how and when they need to access them, and then to make it as simple and seamless as possible.

With the intelligent workspace we are looking to deliver that customized experience to not only the organizations that deploy our solutions, but also to the users who are consuming them. That means being able to understand how and where the doctors and nurses are consuming resources and being able to customize that experience in real-time using our analytics engines and machine learning (ML). This allows us to preemptively deliver computing resources, applications, and data in real-time.

For example, when it comes to a patient walking into a clinic, I can understand through our analytics engine that this specific clinic will need to utilize three applications. So before that patient walks in, we can have those apps spinning up in the background. That helps minimize the time to actual access.

Every minute we can subtract from a technology interaction is another minute we can give back to our clinicians to work with the patients and spend direct healthcare time with them.

Gardner: Understanding the context and the specific patient in more detail requires bringing together a lot of assets and resources on the back end. But doing so proactively can be powerful and ultimately reduces the complexity for the people on the front lines. 

Mick, WellSpan Health has been out front on seeking digital workspace technology for such better patient outcomes. What were some of the challenges you faced? Why did you need to change the way things were?

IT increases doctor-patient interaction 

Murphy: There are a couple of things that drive us. One is productivity and giving time back to clinicians so that they can focus on patients. There is a lot of value in bringing more information to the clinical space. The challenge is that the physicians and nurses can end up interacting more with computers than with the patients.

Murphy

We don’t think about this in minutes, but in 9-second increments. That may sound a little crazy, but when we talk about a 15-minute visit with a primary care doctor, that’s 900 seconds. And if you think about 9 seconds, that’s 1 percent of that visit.

We are looking to give back multiple percentage points of such a visit so that the physician is not interacting with the computer, they are interacting directly with the patient. They should be able to quickly get the information they need from the medical record and then swivel back and focus directly on the patient.

Gardner: It’s ironic that you have to rely on digital technology and integration — pulling together disparate resources and assets — in order to then move past interacting with the computers. 

Murphy: Optimally the technology fades into the background. Many of us in technology may like to have the attention, but at the end of the day if the technology just works, that’s really what we are striving for. 

We want to make sure that as soon as a physician wants something, they get it. Part of that requires the ability to quickly get into the medical record, for example. With the digital workspace, an emphasis for us was to improve on our old systems. We were at 38 seconds to get to the medical records, but we have been able to cut that to under 4 seconds.

This gets back to what Christian was talking about. We know when a physician walks into an exam room that they are going to need to get into the medical records. So we spin up a Citrix session in advance. We have already connected to the electronic health records (EHRs). All we are waiting for is the person to walk in the door. And as soon as they drop their ID badge onto a reader they are quickly and securely into that electronic medical record. They don’t spend any time doing searches, and whatever applications are needed to run are ready for them.
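
The pattern behind that badge tap can be sketched in a few lines. This is a hypothetical illustration of pre-launching plus fast attach; none of the function names here are real Citrix APIs, and the scheduling data is invented.

```python
# Sessions are warmed ahead of the visit, so a badge tap only has to attach,
# not launch. All names and data are hypothetical stand-ins.
import time

PRELAUNCHED = {}   # exam_room -> "warm" session handle

def prelaunch_for_schedule(schedule):
    """Before clinic hours, warm up an EHR session for each booked exam room."""
    for exam_room in schedule:
        PRELAUNCHED[exam_room] = f"ehr-session-for-{exam_room}"  # stand-in handle

def badge_tap(clinician_id, exam_room):
    """On a badge tap, hand the clinician the already-running session."""
    start = time.perf_counter()
    session = PRELAUNCHED.get(exam_room) or "cold-start-session"  # fallback path
    return session, time.perf_counter() - start

prelaunch_for_schedule(["exam-3", "exam-4"])
session, elapsed = badge_tap("dr-smith", "exam-3")
print(session, f"attached in {elapsed*1000:.3f} ms of application logic")
# The expensive work (connecting and authenticating to the EHR, loading the app)
# happened before the physician walked in; the tap itself is nearly instantaneous.
```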

Gardner: Christian, having technology in the background, anticipating needs, operating in the context of a process — this is all complex. But the better you do it, the better the outcome in terms of the speed and the access to the right information at the right time.

What goes on in the background to make these outcomes better for the interactions between the physicians, clinicians, and patients?

Boucher: For years, IT has worked with physicians and clinicians to identify ways to increase productivity and give time back to focus more on patient care. What we do is leverage newer technologies. We use artificial intelligence (AI), ML, and analytics and drill down deeper into what is happening — and not only on a generic physician workflow.

It can’t just be generic. There may be 20 doctors at a hospital, but they all work differently. They all have preferences in how they consume information, and they perform at different levels, depending on the technology they interact with. Some doctors want to work with tablets and jump from one screen to the next depending upon their specific workflow. Others are more comfortable working with full-on laptops or working on a desktop.

We have to understand that and deliver an experience that each clinician can decide is best-suited for their specific work style. This is really key. If you go from one floor to another in a hospital and watch how nurses work differently — from the emergency room to the neonatal intensive care unit (NICU) — the workflows are considerably different. 

Not only do we have to deliver those key applications, we have to be mindful of how each of those different groups interacts with the technologies. It’s not just applications. It’s not just accessing the health record. It’s hardware, software, and location.

We have to be able to not only deliver those experiences but predict in real-time how they will be consumed to expedite the processes for them to get back into the patient-focused arena. 

Work outside the walls

Murphy: That’s a great point. You mentioned tablets. I don’t know what it is about physicians, but a lot of their kids seem to swim. So a lot of our doctors spend time at swim meets. And if you are on-call and are at a swim meet, you have a lot of time when your child is not in the pool. We wanted to give them secure access [to work while at such a location]. 

It’s really important, of course, that we make sure that medical records are private and secure. We are now able to say to our physicians, “Hey, grab your tablet, take it with you to the swim meet. You will be able to stay at the swim meet if you get a call or you get paged. You will be able to pop out that tablet, access the medical records – and all of that access stays inside of our data center.”

All they are looking at is a pretty picture of what’s going on inside the data center at that point. And so that prescription refill that’s urgent, they are able to handle that there without having to leave and take time away from their kids. 

We are able to improve the quality of life for our physicians because they are under intense pressure with healthcare the way it is today.

Boucher: I agree with that. As we look at how work is being done, there is no predefined workspace anymore — especially in healthcare where you have these on-call physicians.

Just look at business operations as well. We are able to offset internal resources for billing. The work does not just get done in the hospital anymore. We are looking for ways to extend that same secure delivery of applications and data outside the four walls, especially if you have 19 hospitals. 

As you find leverage points for organizations to be able to attract resources that may not fall inside the hospital walls, it’s key for us to be more flexible in how we can allow organizations to deliver those resources outside those walls.

Gardner: Christian, you nailed it when you talked about how adoption is essential. Mick, how have digital workspace solutions helped people use these apps? What are the adoption patterns now that you can give flexibility and customize the experience?

Faster workflow equals healthier data

Murphy: Our adoption is pretty strong. To be clear, it’s required that you interact with electronic health records. There isn’t really an option to opt out. But what we have seen is that by making this more effective and faster, we have seen better compliance with things like securing workstations. Going back to privacy, we want to make sure that electronic health data is protected.

And if it takes me too long to get back into a work context, well, then I may be tempted not to lock that workstation when I step away for just a moment. And then that moment can become an extended period, and that would be dangerous for us. Knowing that I am going to get back to where I was in less than four seconds — and I am not even going to have to do anything other than touch my badge to get there — means we see that folks secure their workstations with great frequency. So we feel like we are safer than we were. That’s a major improvement. 

Gardner: Mick, tell us more about the way you use workspaces to allow people to authenticate easily regardless of where they are.

Murphy: We have combined a couple of technologies. We use smart badges with a localized reader, with the readers scattered about for the folks who need to touch multiple workstations. 

So I myself, as an executive, for example, can log in to one machine by typing in my password. But for clinicians going from place to place, we have them log in once a day, and then, as long as they retain their badge, they keep getting back in. All they have to do is touch their badge to a reader and it drops them right back into their workspace. 

Gardner: We began our conversation talking about precision medicine, but there are some other healthcare trends afoot now, too. Transparency about the financial side of healthcare interactions is increasingly coming into play, for example.

We have new kinds of copays and coinsurance, and it’s complex. Physicians and clinicians are being asked more to be part of the financial discussion with patients. That requires a whole new level of integration and back-end work to make that information available in these useful tools. 

Taking care of business

Murphy: That is a big challenge. It’s something we are investing in. Already we are extending our website to allow patients to get on and say, “Hey, what’s this going to cost?” What the person really wants to know is, “What are my out-of-pocket costs going to be?” And that depends on that individual. 

We haven’t automated that yet end-to-end, but we have created a place where a patient can come on and say, “Hey, this is what I am going to need to have done. Can you tell me what it’s going to cost?”

We actually can turn that back around [with answers]. We have to get a human being involved, but we make that available either by phone or through our website. 

Gardner: Christian, we are seeing that the digital experience and the workspace experience in the healthcare process are now being directed back to the patient, for patient experience and digital experience benefits. Is the intelligent workspace approach that provider organizations like WellSpan are using putting them in an advantageous position to extend the digital experience to the patient — wherever they are — as well as to clinicians and physicians?

Boucher: In some scenarios we have seen some of our customers extend some of Citrix’s resources outside to customers. A lot of electronic health records now include patient portals as part of their ecosystem. We see a lot of customers leveraging that side of the house for electronic health records.

We understand that regardless of the industry, the finance side and the back-office side play a major role in any organization’s offerings. It’s just as important to be able to get paid for something as it is to deliver care or deliver any resources that your organization may deliver. 

So one of the key aspects for us was understanding how the workspace approach transforms over the next year. One of the things we are doing on our end is looking at those extended workflows.

We made an acquisition in the last six months, a software company, [Sapho], that essentially creates micro apps. What that really means is we may have processes on ancillary systems; they could be software-as-a-service (SaaS)-based, they could be web-based applications, they could be on-premises installations of old client/server technologies. But this technology allows us to create micro experiences within the application.

So just say a process for verifying billing takes seven steps, and you have to login to a system, and you have to navigate through menus, and then you get to the point where you can hit a button to say, “Okay, this is going to work.”

What we have done is take that entire workflow — maybe it’s 10 clicks, plus a password — and create a micro app that goes through that entire process and gives you a prompt to do it all in one or two steps. 

So for every application that we can integrate to — and there are 150 or so — we can take those workflows, which in some cases can take five minutes to walk through, and turn them into a 30-second interaction with [the Sapho] technology. Our goal is to look beyond just general workflows and extend that out into these ancillary programs, where normal everyday activities don’t need to take as long as they do, and simplify that process for our end users to really optimize their workflows during the day.
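
Here is a hedged sketch of that micro-app idea: the steps a person used to click through are chained behind the scenes, and the user sees only one action. The billing system, its steps, and every function name below are hypothetical.

```python
# Hypothetical steps in a legacy billing system, stubbed out for illustration.
def login(system):            return {"session": f"token-for-{system}"}
def open_worklist(session):   return [{"claim": "C-1001", "amount": 184.20}]
def load_claim(session, c):   return {**c, "status": "ready-for-verification"}
def verify(session, claim):   return {"claim": claim["claim"], "verified": True}

def billing_verification_microapp(claim_id):
    """Collapse a multi-step billing verification into a single action."""
    session = login("legacy-billing")["session"]
    worklist = open_worklist(session)
    claim = next(c for c in worklist if c["claim"] == claim_id)
    detail = load_claim(session, claim)
    return verify(session, detail)   # the only step the user ever sees

print(billing_verification_microapp("C-1001"))
# The user gets a one-tap card ("Verify claim C-1001?") instead of navigating
# menus in the source system -- minutes of clicking become a few seconds.
```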

Gardner: This sounds like moving in the direction of simplifying processes, using software robots as a way of automating things, compressing the time it takes, and simplifying things — all at the same time.

Murphy: It’s fascinating. It sounds like a great direction. We are completely transparent, and that’s a future for us. It sounds like I need to get together with Christian after this interview.

Gardner: Let’s revisit the idea of security and compliance. Regulations are always there, data sharing is paramount, but protecting that data can be a guard rail or a limiter in how well you can share information.

Mick, how are you able to take that patient experience with the clinician and enrich it with all the data and resources you can regardless of whether they are at the pool, at home, on the road, and yet at the same time have compliance and feel confident about your posture when it comes to risk?

Access control brings security, compliance

Murphy: We feel pretty good about this for a couple of reasons. One is, as I mentioned, the application is still running in our data center. The next question is, “Well, who can get access to that?”

One way is strong passwords, but as we all know with phishing those can be compromised. So we have gone with multifactor authentication. We feel pretty good about remote access, and once you have access we are not letting you pull stuff down onto your local device. You are just seeing what’s on the screen, but you are not pulling files down or anything of that nature. So, we have a lot of confidence in that approach.

Boucher: Security is always a moving target, and there may be certain situations when I access technology and I have full access to do what I please. I can copy and paste out of applications, I can screenshot, and I may be able to print specific records. But there may be times within that same workflow — but a different work style — where I may be remote-accessing technologies and those security parameters change in real-time.

As an organization, I don’t feel comfortable allowing user X to be able to print patient records when they are not on a trusted network, or they are working from home, or on an unknown device.

So we at Citrix understand those changing factors, and we understand that our border now is the Internet. If we are allowing access from home, we are extending our resources out to the Internet. So it really gives us a lot more to think about.

We have built into our solutions granular control that uses ML and analytics solutions. When you access something from inside the office, you have a certain amount of privileges as the end user. But as soon as you extend out that same access outside of the organization, in real-time we can flip those permissions and stop allowing users to print or screenshot or copy and paste between applications.
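
A simplified sketch of that kind of context-aware control appears below. Real products express this as policy rather than code, and the specific rules here are invented for illustration only.

```python
# Illustrative context-aware permissions: the riskier the access context,
# the fewer things the same session is allowed to do.
def session_permissions(on_trusted_network, managed_device):
    perms = {"view": True, "print": False, "copy_paste": False, "download": False}
    if on_trusted_network and managed_device:
        perms.update(print=True, copy_paste=True)   # inside the hospital walls
    return perms

print("In hospital :", session_permissions(True, True))
print("From home   :", session_permissions(False, True))
print("Unknown PC  :", session_permissions(False, False))
# The same clinician and the same application, but printing and copy/paste are
# switched off the moment the session leaves the trusted environment.
```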

And that’s invisible to the end-user. It all happens in the back-end in real-time. Security is something that we take extremely seriously at Citrix, and we understand that our customers do as well. So, giving them those controls allows them to be a lot more nimble in how they deploy solutions.

Murphy: I agree with that. Another thing that we like to do is have technology control and help people be safe. A lot of this isn’t about the bad actor, it’s about somebody who’s just trying to do the right thing — but they don’t realize the risk that they are taking. We like to put in technology safeguards. For example, if you are working at home, you are going to have some guardrails around you that are tighter than the guardrails when you are standing in our hospital. 

Gardner: Let’s revisit one of our core premises, which is the notion of giving time back to the clinicians, to the physicians, to improve the patient experience and outcomes. Do you have other examples of intelligent and digital workspace solutions that help give time back? Are there other ways that you’re seeing the improvement in the quality of care and the attention span that can be directed at that patient and their situation?

The top of your license

Murphy: We talk a lot in healthcare about working at the top of your license. We try to push tasks to the lowest skill level needed to do something. 

When you come in for a visit, rather than having the physician look up your record, we have the medical assistant that rooms you and asks why you are there. They open the record, they ask you a few questions. They get all that in place. Then they secure the workstation. So it’s locked when the physician walks in and they drop their badge and get right in to the electronic medical record in four seconds.

That doctor can then immediately turn to you and say, “Hey, how are you doing today? What brings you in?” And they can just go right into the care conversation. The technology tees everything up so that the focus is on the patient.

Gardner: I appreciate that because the last thing I want to do is fill out another blank clipboard, telling them my name, my age, date of birth, and the fact that I had my appendix out in 1984. I don’t want to do that four times in a day. It’s so great to know that the information is going to follow me across the process.

Murphy: And, Dana, you are better than me, because I had a tonsillectomy in ‘82, ‘83, ‘84? It depends on which time I answered the survey correctly, right?

All systems go with spatial computing 

Boucher: As we look forward a few years, that tighter integration between the technologies and our clinicians is going to become more intertwined. We will start talking about spatial computing and these new [augmented reality] interfaces between doctors and health records systems or ambulatory systems. Spatial computing can become more of a real-time factor in how care is delivered.

And these are just some of the things we are talking about in our labs, in better understanding how workflows are created. But imagine being able to walk into a room with no more than a smart watch on my wrist that’s essentially carrying my passport, and being able to utilize proximity-based authentication into those systems and interact with technology without having to log in and do all the multifactor authentications.

And then take a step further by having these interfaces between the technology in the room, the electronic records, and your bed-flow systems. So as soon as I walk into a room, I no longer have to navigate within the EHR to find out which patient is in the room. By them being in the room and interfacing with bed flow, or having a smart patient ID badge, I can automatically navigate to that patient in real-time.
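
A hypothetical sketch of that room-aware workflow: a badge or watch proximity event plus the bed-flow system's room-to-patient map removes the "find the patient in the EHR" step. All systems, IDs, and lookups below are invented for the example.

```python
# Invented room-to-patient map standing in for a bed-flow system.
BED_FLOW = {"room-412": "patient-7781", "room-413": "patient-7790"}

def on_proximity_event(clinician_id, room_id):
    """When a clinician's badge is detected in a room, open that room's patient."""
    patient_id = BED_FLOW.get(room_id)
    if patient_id is None:
        return f"{clinician_id}: no patient assigned to {room_id}, open worklist"
    return f"{clinician_id}: EHR auto-navigated to {patient_id} in {room_id}"

print(on_proximity_event("rn-204", "room-412"))
print(on_proximity_event("rn-204", "room-499"))
# Authentication and patient lookup both ride on context the hospital already
# has, so the clinician starts the encounter instead of starting a search.
```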

In reality, I am removing all of the administrative tasks from a clinician workflow. Whether it’s Internet of Things (IoT)-based devices or smart devices in rooms, they will help complete half of that workflow for you before you even step in.

Those are some of the things we look at for our intelligent workspace in our micro app design and our interfaces across different applications. Those are the kind of ways that we see our solutions being able to help clinicians and organizations deliver better care.

Gardner: And there is going to be ever-more data. It’s always increasing, whether it’s genomic information, a smart device that picks up tracking information about an individual’s health, or more from population information across different types of diseases and the protocols for addressing them.

Mick, we are facing more complexity, more data, and more information. That’s great because it can help us do better things in medicine. But it also needs to be managed because it can be overwhelming.

What steps should we be taking along the way so that information becomes useful and actionable rather than overwhelming?

AI as medical assistant

Murphy: This is a real opportunity for AI around an actual smart clinical assistant: something that helps comb through all the data. There’s genomic data, drug-drug interaction data, and we need to identify what’s most important to get that human judgment teed up.

These are the things that I think you should look at, versus, “Oh, here are all the possible things you could look at.” Instead we want, “Here are the things that you should really focus on,” or that seem most relevant. So it’s really using computing to assist clinicians rather than tell them what to do, but at least help them with where to focus.

Gardner: Christian, where would that AI exist? Is that something we’re going to be putting into the doctor’s office, or is that going to be something in a cloud or data center? How does AI manifest itself to accomplish what Mick just described?

Boucher: AI leverages intense computing power, so we are talking about significant IT resources internally. While we do see some organizations trying to bring quantum computing-based solutions into their organization and leveraging that, what I see is probably more of a hosted solution at this point. That’s because of the expense but also because of the technology, of when you start talking about distributed computing and being able to leverage multiple input solutions.

If you talk about an Epic or Cerner, I’m sure that they are working on technologies like that within their own solutions — or at least that allow their information to be shared within that.

I think we’re in the infancy of that AI trend. But we will see more-and-more technology play a factor in that. We could see some organizations partnering together to build out solutions. It’s hard to say at this point, but we know there is a lot of traction right now and unfortunately, they are mostly high-tech companies trying to leverage their algorithms and their solutions to deliver that, which at some point, I would guarantee that they’ll be mass produced and ready for purchase.

Murphy: AI could be everything from learning to just applying rules. I might not classify applying rules as AI, but I would say it’s rudimentary AI. For example, we have a rule set, an algorithm for sepsis. It enables us to monitor a variety of things about a patient — vital signs, lab results, and various data points that are difficult for any one human to be looking at across the entire set of patients in our hospitals at any given time.

So we have a computer watching that. And when certain criteria are met in this algorithm, it reports to a centralized team staffed with nurses. The nurses can then look at that individual patient and say, “Does this look like a false alarm or does this look like something that we really need to pursue?” And based off of that, they send someone to the bedside.
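
To show the shape of that approach, here is an intentionally simple, illustrative rules sketch. It is not WellSpan's actual sepsis algorithm; the thresholds are textbook-style examples, and the escalation rule is assumed.

```python
# Illustrative only: evaluate each patient's latest data against simple
# criteria and route hits to a human team for judgment.
def sepsis_flags(vitals):
    """Count criteria met; thresholds here are illustrative examples only."""
    flags = []
    if vitals["temp_c"] > 38.0 or vitals["temp_c"] < 36.0: flags.append("temperature")
    if vitals["heart_rate"] > 90:                          flags.append("heart rate")
    if vitals["resp_rate"] > 20:                           flags.append("respiratory rate")
    if vitals["wbc"] > 12.0 or vitals["wbc"] < 4.0:        flags.append("white cell count")
    return flags

census = {
    "patient-A": {"temp_c": 38.6, "heart_rate": 104, "resp_rate": 24, "wbc": 13.1},
    "patient-B": {"temp_c": 36.9, "heart_rate": 78,  "resp_rate": 16, "wbc": 7.4},
}
for patient, vitals in census.items():
    flags = sepsis_flags(vitals)
    if len(flags) >= 2:   # escalation threshold (assumed for the example)
        print(f"ALERT {patient}: review by central nursing team -> {flags}")

# The computer does the tireless watching across every bed; the nurses supply
# the judgment about whether to send someone to the bedside.
```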

We’ve had dramatic improvements with sepsis. So there are some really easy technical things to do — but you have to engage with them, with human beings, to get the team involved and make that happen.

Gardner: The intelligent digital workspaces aren't just helping cut time, they are going to be the front end to help coordinate some of these advanced services that are coming down that can have a really significant impact on the quality of care and also the cost of care, so that's very exciting.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Citrix Systems.



Want to manage your total cloud costs better? Emphasize the ‘Ops’ in DevOps, says Futurum analyst Daniel Newman

The next BriefingsDirect Voice of the Analyst interview explores new ways that businesses can gain the most control and economic payback from various cloud computing models.

We’ll now hear from an IT industry analyst on how developers and IT operators can find newfound common ground to make hybrid cloud the best long-term economic value for their organizations.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to help explore ways a managed and orchestrated cloud lifecycle culture should be sought across enterprise IT organizations is Daniel Newman, Principal Analyst and Founding Partner at Futurum Research. The interview is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Daniel, many tools have been delivered over the years for improving software development in the cloud. Recently, containerization and management of containers has been a big part of that.

Now, we’re also seeing IT operators tasked with making the most of cloud, hybrid cloud, and multi-cloud around DevOps – and they need better tools, too.

Has there been a divide or lag between what developers have been able to do in the public cloud environment and what operators must be able to do? If so, is that gap growing or shrinking now that new types of tools for automation, orchestration, and composability of infrastructure and cloud services are arriving?

Out of the shadow, into the cloud 

Newman: Your question lends itself to the concept of shadow IT. The users of this shadow IT find a way to get what they need to get things done. They have had a period of uncanny freedom.

Newman

But this has led to a couple of things. First of all, generally nobody knows what anybody else is doing within the organization. The developers have been able to creatively find tools.

On the other hand, IT has been cast inside of a box. And they say, “Here is the toolset you get. Here are your limitations. Here is how we want you to go about things. These are the policies.”

And in the data center world, that’s how everything gets built. This is the confined set of restrictions that makes a data center a data center.

But in a developer’s world, it’s always been about minimum viable product. It’s been about how to develop using tools that do what they need them to do and getting the code out as quickly as possible. And when it’s all in the cloud, the end-user of the application doesn’t know which cloud it’s running on, they just know they’re getting access to the app.

Basically we now have two worlds colliding. You have a world of strict, confined policies — and that’s the “ops” side of DevOps. You also have the developers who have been given free rein to do what they need to do; to get what they need to get done, done.

Get Dev and Ops to collaborate 

Gardner: So, we need to keep that creativity and innovation going for the developers so they can satisfy their requirements. At the same time, we need to put in guard rails, to make it all sustainable.

Otherwise we see not a minimal viable cloud – but out-of-control expenses, out-of-control governance and security, and difficulty taking advantage of both private cloud and public cloud, or a hybrid affair, when you want to make that choice.

How do we begin to make this a case of worlds collaborating instead of worlds colliding?

Newman: It’s a great question. We have tended to point DevOps toward “dev.” It’s really been about the development, and the “ops” side is secondary. It’s like capital D, lowercase o.

The thing is, we’re now having a massive shift that requires more orchestration and coordination between these groups.

How to Make Hybrid IT Simple

You mentioned out-of-control expenses. I spoke earlier about DevOps and developers having the free rein – to do what they need to do, put it where they need to put it, containers, clouds, tools, whatever they need, and just get it out because that’s what impacts their customers.

If you have an application where people buy things on the web and you need to get that app out, it may be a little more expensive to deploy it without the support of Ops, but you feel the pressure to get it done quickly.

Now, Ops can come in and say, “Well, you know … what about a flex consumption-based model, what about multi-cloud, what about using containers to create more portability?”

“What if we can keep it within the constraints of a budget and work together with you? And, by the way, we can help you understand which applications are running on which cloud and provide you the optimal [aggregate cloud use] plan.”

Let’s be very honest, a developer doesn’t care about all of that. … They are typically not paid or compensated in any way that leads to optimizing on cost. That’s what the Ops people do.

Such orchestration — just like almost all larger digital transformation efforts — starts when you have shared goals. The problem is, they call it a DevOps group — but Dev has one set of goals and Ops has different ones.

What you’re seeing is the need for new composable tools for cloud services, which we saw at such events as the recent Hewlett Packard Enterprise (HPE) Discover conference. They are launching these tools, giving the Ops people more control over things, and — by the way — giving developers more visibility than has existed in the past.

There is a big opportunity [for better cloud use economics] through better orchestration and collaboration, but it comes down to the age-old challenges inside of any IT organization — and that is having the Dev and the Ops people share the same goals. These new tools may give them more of a reason to start working in that way.

Gardner: The more composability the operations people have, the easier it is for them to define a path that the developers can stay inside of without encumbering the developers.

We may be at the point in the maturity of the industry where both sides can get what they want. It's simply a matter of putting that together — the chocolate and peanut butter, if you will. It becomes more of a complete DevOps.

But there is another part of this people often don’t talk about, and that’s the data placement component. When we examine the lifecycle of a modern application, we’re not just developing it and staging it where it stays static. It has to be built upon and improved, we are doing iterations, we are doing Agile methods.

We also have to think about the data the application is consuming and creating in the same way. That dynamic data use pattern needs to fit into a larger data management philosophy and architecture that includes multi-cloud support.

I think it’s becoming DevDataOps— not just DevOps these days. The operations people need to be able to put in requirements about how that data is managed within the confines of that application’s deployment, yet kept secure, and in compliance with regulations and localization requirements.

DevDataOps emerges

Newman: We’ve launched the DevDataOps category right now! That’s actually a really great point, because if you think about where does all that live — meaning IT orchestration of the infrastructure choices and whether that’s in the cloud or on-premises – there has to be enough of the right kind of storage.

Developers are usually worried about data from the sense of what can they do with that data to improve and enhance the applications. When you add in elements like machine learning (ML) and artificial intelligence (AI), that’s going to just up the compute and storage requirements. You have the edge and Internet of Things (IoT) to consider now too for data. Most applications are collecting more data in real-time. With all of these complexities, you have to ask, “Who really owns this data?”

Well, the IT part of DevOps, the “Ops,” typically worries about capacity and resources performance for data. But are they really worried about the data in these new models? It brings in that needed third category because the Dev person doesn’t necessarily deal with the data lifecycle. The need to best use that data is a business unit imperative, a marketing-level issue, a sales-level data requirement. It can include all the data that’s created inside of a cloud instance of SAP or Salesforce.

How to Solve Cost and Utilization Challenges of Hybrid Cloud

Just think about how many people need to be involved in orchestration to maximize that. Culturally speaking, it goes back to shared tools, shared visibility, and shared goals. It's also now about more orchestration required across more external groups. So your DevOps group just got bigger, because the data deluge is going to be the most valuable resource any company has. It will be, if it isn't already today, the most influential variable in what your company becomes.

You can’t just leave that to developers and operators of IT. It becomes core to business unit leadership, and they need to have an impact. The business leadership should be asking, “We have all this data. What are we doing with it? How are we managing it? Where does it live? How do we pour it between different clouds? What stays on-premises and what goes off? How do we govern it? How can we have governance over privacy and compliance?”

I would say most companies really struggle to keep up with compliance because there are so many rules about what kind of data you have, where it can live, how it should be managed, and how long it should be stored. 

I think you bring up a great point, Dana. I could probably rattle on about this for a long, long time. You’ve just added a whole new element to DevOps, right here on this podcast. I don’t know that it has to do with specifically Dev or Ops, but I think it’s Dev+Ops+Data — a new leadership element for meaningful digital transformation.

Gardner: We talked about trying to bridge the gap between development and Ops, but I think there are other gaps, too. One is between data lifecycle management — for backup and recovery, and for keeping data in the lowest-cost storage environment, for example — and the other group, the data scientists who are warehousing that data, caching it, and grabbing more data from outside, third-party sources to do more analytics for the entire company. But these data strategies are too often still divorced.

These data science people and what the developers and operators are doing aren’t necessarily in sync. So, we might have another category, which would be Dev+Data+DataScience+Ops.

Add Data Analytics to the Composition 

Newman: Now we’re going four groups. You are firstly talking about the data from the running applications. That’s managed through pure orchestration in DevOps, and that works fine through composability tools. Those tools provide IT the capability to add guard rails to the developers, so they are not doing things in the shadows, but instead do things in coordination.

The other data category is that bigger analytical data. It includes open data, third-party data, and historical data that's been collected and stored inside of instances of enterprise resource planning (ERP) apps and customer relationship management (CRM) apps for 20 or 30 years. It's a gold mine of information. Now we have to figure out an extract process and incorporate that data into almost every enterprise-level application that developers are building. Right now Dev and Ops don't really have a clue what is out there and available across that category because that's being managed somewhere else, through an analytics group of the company.

Gardner: Or, developers will have to create an entirely different class of applications for analytics alone, as well as integrating the analytics services into all of the existing apps.

Newman: One of the HPE partners I've worked with in the past is SAS, and companies such as SAS and SAP are going to become much more closely aligned with infrastructure. Your DevOps is going to become your analytics Ops, too.

How to Achieve Composability Across Your Data Center

Hardware companies have built software apps to run their hardware, but they haven't historically been building software apps to run the data that sits on the hardware. That's been managed by the businesses running business intelligence software, such as the ones I mentioned.

There is an opportunity for a new level of coordination to take place at the vendor level. Alliances and partnerships aren't new. But seeing them done in a way that's about getting the maximum amount of usable data from one system into every application — that's futuristic, and it needs to be worked on today.

Gardner: The bottom line is that there are many moving parts of IT that remain disjointed. But we are at the point now with composability and automation of getting an uber-view over services and processes to start making these new connections – technically, culturally, and organizationally.

What I have seen from HPE around the HPE Composable Cloud vision moves a big step in that direction. It might be geared toward operators, but, ultimately it’s geared toward the entire enterprise, and gives the business an ability to coordinate, manage, and gain insights into all these different facets of a digital business.

Newman: We’ve been talking about where things can go, and it’s exciting. But let’s take a step back.

Multi-cloud is a really great concept. Hyper-converged infrastructure is all really nice, and there has been massive movement in this area in the last couple of years. Companies right now still struggle with the resources to run multi-cloud. They tend to have maybe one public cloud and their on-premises operations. They have their own expertise, and they have endless contracts and partnerships.

They don’t know which the best-cloud approach is because they are not necessarily getting that total information. It depends on all of the relationships, the disparate resources they have across Dev and Ops, and the data can change on a week-to-week basis. One cloud may have been perfect a month ago, yet all of a sudden you change the way an application is running and consuming data, and it’s now in a different cloud.

What HPE is doing with HPE Composable Cloud takes the cloud plus composable infrastructure and, working through HPE OneSphere and HPE OneView, brings them all into a single view. We’re in a software and user experience world.

The tools that deliver the most usable and valuable dashboard-type of cloud use data in one spot are going to win the battle. You need that view in front of you for quick deployment, with quick builds, portability, and container management. HPE is setting itself in a good position for how we do this in one place.

How to Remove Complexity From Multi-Cloud and Hybrid IT

Give me one view, give me one screen to look at, and I think your Dev and Ops people — and everybody in between, including all your new data and data science friends — will appreciate that view. HPE is on a good track, and I look forward to seeing what they do in the future.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.




Price transparency in healthcare to regain patient trust requires accuracy via better use of technology


The next BriefingsDirect healthcare finance insights discussion explores the impacts from increased cost transparency for medical services.

The recently mandated publishing of hospital charges for medical procedures is but one example of rapid regulatory and market changes. The emergence of more data about costs across the health provider marketplace could be a major step toward educated choices — and ultimately more efficiency and lower total expenditures.

But early-stage cost transparency also runs the risk of out-of-context information that offers little actionable insight into actual consumer costs and obligations. And unfiltered information requirements also place new burdens on physicians, caregivers, and providers – in areas that have more to do with economics than healthcare.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about the pluses and minuses of increased cost transparency in the healthcare sector, we are joined by our expert panel:

The panel is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: For better or worse, we are well into an era of new transparency about medical costs. Heather, why is transparency such a top concern right now? 

Kawamoto

Kawamoto: It’s largely due to a cost shift. Insurance companies are shifting a larger portion of each payment onto patients. With that there has been a significant rise in high-deductible health plans — not only in the amount of the deductible, but also in the number of patients on high-deductible plans.

And when patients get, sadly, more surprise bills, we start to hear about it in the media. We also have the onset this month of the IPPS/LTCH PPS final rule from the Centers for Medicare and Medicaid Services (CMS) [part of the U.S. Department of Health and Human Services].

The New York Times did a recent story about this, and that’s created buzz. And then people start saying, “Hey, I know I have a medical service coming up, I probably need to call in and actually find out how much my service is going to be.”

Gardner: It seems like the consumer, the patient, needs to be far more proactive in thinking about their care, not just in terms of, “Oh, how do I get better? Or how do I stay as healthy as I can?” But in asking, “How do I pay for this in the best possible way?”

That economic component wasn’t the case that long ago. You would get care and you didn’t give much thought to price or how it was billed.

Joann, as somebody who provides care, what’s changed that makes it necessary for patients to be proactive about their health economics?

Know before you owe

Barnes-Lague

Barnes-Lague: It’s the consumer-driven health plans, where patients are now responsible for more. They have to make a decision: “Do I buy my groceries, or do I have an MRI?”

The shift in healthcare makes us go after the patient portion, because insurance no longer pays 100 percent. Patients now have a lot of skin in the game. And they have to start thinking, “Do I really need this procedure, or can it wait?”

Gardner: And we get this information-rush from other parts of our lives. We have so much more information available to us when we buy groceries. If we do it online, we can compare and contrast, we can comparison shop, we can even get analysis brought to the table. It can be a good thing.

Julie, you are trying to help people make better paying decisions. If we have to live with more cost transparency, how can technology be a constructive part of it?

Gerdeman

Gerdeman: It’s actually a tremendous opportunity for technology to help patients and providers. We live in an experience economy, and in that economy everyone is used to having full transparency. We’re willing to pay for faster service, faster delivery.

We have highly personalized experiences. And all of that should be the same in our healthcare experiences. This is what people have come to expect. And that’s why, for us, it’s so important to provide personalized, consumer-friendly digital payment options. 

Sanborn: As someone who has been watching these high-deductible health plans unfold, data has come out saying the average American household can’t afford a $500 medical bill, that an unexpected $500 medical bill would drastically impact that household’s finances for months. So people are looking to understand upfront what they are going to owe. 

At the same time, patients are growing tired of the back-and-forth between the provider and the payer, with everyone kicking the can back and forth between them, saying, “Well, I don’t know that. Your provider should know that.” And the provider says, “Well, your health plan is the one that arbitrates the price of your care. Why don’t you go ask them?” Patients are getting really, really tired of that.

Learn How to Meet Patient Demands for Convenient Payment Options for Healthcare Services

Now the patients have the bullhorn, and they are saying, “I don’t care whose responsibility it is to inform me. Someone needs to inform me, and I want it now.” And in a consumer-driven healthcare space, which is what’s evolving now, consumers are going to go where they get that retail-like experience.

That’s why we are seeing the rise in urgent care centers, walk-in clinics, and places where they don’t have to wait. They can instead book an appointment on their phone and go to the appointment 20 minutes later. Patients have the opportunity to pick where they get their care and they know it. At the same time, they know they can demand transparency because it’s time. 

Gardner: So transparency can be a force for good. It can help people make better decisions, be more efficient, and as a result drive their cost down. But transparency can put too much information in front of people, perhaps at a time when they are not really in a mindset to absorb it.

What are you doing at CVS, Alena, to help people make better decisions, but not overload them? 

Clear information increases options

Harrison: The key to good transparency tools is that they have to be 100 percent accurate. Secondly, the information has to be clear, actionable, and relevant to the patient.

If we gave patients 10 data points about the price of a drug — and sometimes there are 10 prices depending on how you look at it — it would overwhelm folks. It would confuse them, and we could lose that engagement. Providing simple, clear data that is accurate and actionable shows them the options specific to their benefit plan. That is what we can do to help consumers navigate through this very complex web in our healthcare system.

Gardner: Recondo helps people create and deliver estimates throughout this process. How does that help in providing the right information, at the right time, in the right context?

Kawamoto: It’s critical to provide [estimate information] when a patient schedules their service, because that gives them the opportunity — if there is a financial question or concern — to say, “Okay, I don’t know that I can pay for that. Is there another location where the price might be different? What are my financial options in terms of the payment plan or some sort of assistance?”

Enabling providers to proactively communicate that information to patients as they schedule a service or in advance gives patients an opportunity to shop. They know they are going to be meeting with an orthopedic surgeon because they need knee arthroscopy.

In advance of that, they should be able to get some idea of what they are going to owe, relative to their specific benefit information. It puts them in that position to engage with the orthopedic surgeon to say, “I looked at the facility and it’s actually going to be $3,000. What are my options?” Now, that provider can be a part of the cost discussion. I think that is critical. 
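To make the estimate idea concrete, here is a minimal sketch of a simplified benefit calculation (deductible, then coinsurance, capped by an out-of-pocket maximum); the plan terms and dollar figures are hypothetical, and real estimates depend on payer contracts and eligibility responses.

```python
# Minimal sketch of an out-of-pocket estimate under a simplified benefit
# design. All dollar figures and plan terms are invented for illustration.

def estimate_patient_cost(allowed_amount, deductible_remaining,
                          coinsurance_rate, oop_remaining):
    """Estimate what the patient owes for one service under a simple plan."""
    deductible_part = min(allowed_amount, deductible_remaining)
    coinsurance_part = (allowed_amount - deductible_part) * coinsurance_rate
    return min(deductible_part + coinsurance_part, oop_remaining)

# Example: a $3,000 negotiated rate for knee arthroscopy, $1,000 of deductible
# left, 20 percent coinsurance, $4,500 left before the out-of-pocket maximum.
print(estimate_patient_cost(3000, 1000, 0.20, 4500))   # -> 1400.0
```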

Barnes-Lague: As providers we have to be okay with patients making that decision, of saying, “Maybe I won’t have that service now.” That’s consumer-driven. And sometimes that hurts our volume. 

We may have had a hard time understanding that in the beginning, when we shared estimates and feared that the patients wouldn’t come. Well, would you rather trick them and then have bad debt?

It’s about being comfortable with the patient making educated decisions. Perhaps they will come back for your MRI in December when their deductibles are met, and they can better afford it.

Gardner: Part of this solution requires the physician or practitioner to be educated enough to help the patient sort out the finances, as well as the care and medical treatments. As someone who oversees a lot of clinicians, technicians, and physicians, do you see them as the primary point for more transparency to the patient?

Barnes-Lague: That would be the ideal solution, to have the physicians who are referring these very expensive services to begin having those conversations. Often patients are kind of robotic with what their doctors tell them. 

We have to tell them, “You have a choice. You have a choice to make some phone calls. You have a choice to do your own price shopping.” We would love it if the referring physicians began having those price-transparency conversations early, right in their offices. 

Gardner: So the new dual-major: Economics and pre-med?

Julie, your background is in technology. You and I both know there are lots of occupations where people have complex decisions to make. And they have to be given the trust and the support to make well-informed decisions.

Whether you are a purchasing agent, chief executive, or chief marketing officer, there are tools and data to help you. There have been great strides made in solving some of these problems. Is that what we are going to see applied to these medical decisions across the spectrum of payer, provider, and patient? 

Easy-to-access, secure data builds trust

Gerdeman: This field is ripe for disruption. And technology, particularly emerging technology, can make a big difference in providing transparency. 

A lot of my colleagues here have talked about trust. To me, the reason everybody is requiring transparency is to build trust. It goes back to that trusted relationship between the provider and the patient.

The data should be available to everyone. It’s now time to present the data in a very clear, simple, and actionable way for them to make decisions. The consumer can make an informed decision, and the provider can know what the consumer is facing.

Gardner: Yet to work, that data needs to be protected. It needs to adhere to multiple regulations in multiple jurisdictions, and compliance is a moving target because the regulations change so often. 

Beth, what do we do to solve the data availability problem? Everybody knows data is how to solve it. It’s about more data. But nobody wants to own and control that data.

Sanborn

Sanborn: Yes, it’s the $64,000 question. How do you own all that data and protect it at the same time? We know that healthcare is one of the most attacked industries when it comes to cyber criminals, ransomware, and phishing.

I hear all the time from experts that as much as the human element drives healthcare, as far as data and its protection [the human element] is also the greatest vulnerability. Most of the attacks you hear about happen because someone clicked on a link in an email or left their laptop somewhere. These are basic human errors that can have catastrophic consequences depending on who is on the receiving end of that error. 

Technology is, of course, a huge part of the future, but you can’t let technology develop faster than the protections that have to go with it. And so any developer, any innovator who is trying to help move this space forward has to make cybersecurity a grassroots foundational part of anything that they innovate.

It’s not enough to say, “My tool can help you do this, this, and this.” You have to be able to say, “Well, my tool will help you do this, this, and this, and this is how we are going to protect you along the way.” That has to be part of, not just the conversation, but every single solution.

Gardner: Alena, at CVS, do you see that data solution as a major hurdle to overcome? Meaning the controlling, managing, and protection of the data — but also making it available to every nook and cranny that it needs to get to? 

Harrison: That’s always a key focus for us, and it’s frankly ingrained in every single thing we do. To give a sense of what we are putting out there, the price transparency tools that we have developed are all directly connected to our claims system. It’s the only way we can make sure that the patient out-of-pocket costs we provide are 100 percent accurate. They must reflect what that patient would pay as they go to their local pharmacy.

See the New Best Practice of Driving Patient Loyalty Through Estimation

But making sure that our vendor partners have a robust and very rigorous process around security is paramount. It takes time to do that, and that’s one of the challenges we all face. 

Gardner: So we have a lot going on with new transparency regulations, and more information coming out. We know that we have to make it secure, and we are going to have to overcome that. So it’s early still.

It seems to me, though, there are examples of the tools already developed and how they can be impactful; they can work. 

Joann at Shields, do you have any examples of what benefits can happen when you bring in the right tools for transparency and for making good decisions? 

Transparency upfront benefits bottom line

Barnes-Lague: Yes, we bring in more revenue and we bring it in timely. We used to be at about 60 percent collected from the patient’s side overall. Since we implemented tools, we are at 85 percent collected, a 400 percent increase in our overall revenue.

We have saved $4.5 million in [advance procedure] denials, just based on eligibility, authorization, and things like that. We are bringing in more money and we don’t require as much labor because of the automation. We are staffed around the automation now. 

Gardner: Julie, how does it work? How do better tools and more information in advance help collect more money for a medical transaction? 

Gerdeman: It works in a couple of ways. First, from a patient-facing perspective, they have the access to pay whenever and wherever they are. Having that access and availability is critical.

Also they need to be connected. An estimate — like Heather talked about — has to be available from the very beginning, so the patient can make a decision from it.

And then finally, it’s about options. All of these things help drive adoption if you give a patient options and clarity upfront. They have a choice of how to pay and they have the knowledge about costs. That adoption drives success.

So if you implement the tools appropriately you will see immediate impact. The patients adopt it, the staff adopts it, and then it drives up the collections that Joann is talking about. 

Gardner: Heather, we have seen in other industries that tracking decision processes and behaviors leads to understanding use patterns. From them, incentivization can come into play. Have you seen that? How can incentives and transparency improve the overall economic benefits?

Incentivization improves savings

Kawamoto: Being able to communicate to patients what their anticipated out-of-pocket costs will be is powerful. A lot of organizations have created the means where they say to the patient, “If you pay this amount in advance of your service, you will actually get a discount.” That puts the patient in a position to say, “I could save $200 if I decide to pay this today.” That’s a key component of it. They know they are going to get a better cost if they pay sooner, and then many of them are incented to do that.

Gardner: Any other thoughts about incentives, Alena? 

Harrison: Yes. An indirect incentive, but still quite relevant, is that our price transparency tools are available to all of our CVS Caremark members. We are seeing about 230,000 searches a month on our website.

When members search for the drugs they are taking, if there are lower-cost alternative options, we see members order one of those lower-cost drugs on their next refill 20 percent of the time. That results in an average savings of $120 per prescription fill for those patients. As you can imagine, over the course of several months, that savings really starts to add up.

Gardner: We have come back to the idea of the out-of-pocket costs. The higher the deductible, the lower the premiums. People are incentivized therefore to go to lower premiums. But then, heaven forbid, they have an illness, and then they have to start thinking about, “Oh my gosh, how do I best manage that out-of-pocket deductible?”

Nowadays, with technologies like machine learning (ML), artificial intelligence (AI), and big data analytics, we are seeing prescriptive or even recommendation types of technologies. How far do we need to go before we can start to bring some of those technologies — making good recommendations based on data rather than intuition or even a lack of informed decision making — to medical finance decisions? How do we get to that point where we can be prescriptive in automated recommendations, rather than people slogging through this by themselves?

Automated advice advances

Gerdeman: At HealthPay24 we are looking at predictive analytics and what role the predictive capability can play in helping make recommendations for patients. That’s not necessarily on the clinical or pharmaceutical side, but we know when a patient makes an appointment and gets an estimate what their propensity to pay will be.

Proactively we can offer them options based on what we know ahead of time. They don’t even have to worry about it. They can just say, “Okay, here are my choices. I have only saved up $500; therefore, I am going to take advantage of a loan or a payment plan.” And I do believe that technology will help.
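A hedged sketch of what Gerdeman describes might look like the following; the inputs, thresholds, and option names are invented for illustration and are not HealthPay24’s actual model.

```python
# Minimal sketch: use what is known up front (estimate size, past payment
# behavior) to suggest payment options before the visit. All rules and
# thresholds are hypothetical.

def suggest_payment_options(estimate, paid_in_full_before, prior_plan_completed):
    """Return payment options ordered by likely fit for this patient."""
    options = []
    if estimate <= 500 or paid_in_full_before:
        options.append("pay in full (prompt-pay discount)")
    if estimate > 500:
        months = 6 if estimate <= 2000 else 12
        options.append(f"interest-free payment plan over {months} months")
    if estimate > 2000 and not prior_plan_completed:
        options.append("screen for financial assistance or patient financing")
    return options

for option in suggest_payment_options(estimate=3000,
                                      paid_in_full_before=False,
                                      prior_plan_completed=False):
    print(option)
```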

On the AI side, it’s already starting. As you talk to providers, they are using it for repetitive processes. But I think there is even more opportunity on the cognitive side of AI to play [a role] in hospitals. So there is a big opportunity. 

Gardner: We already see this in financial markets. People get more information, they get recommendations, and there is arbitrage. It’s not either/or. It’s what are the circumstances? What’s the credit we can offer? How do we make the most efficient transaction for all parties?

So, as in other transactions, we have to gain more comfort with the combination of economics and medical procedures. Is that part of the culture shift? You have to be a crass consumer and you have to be looking out for your health.

Any thoughts about the need to be both a savvy consumer as well as a patient? 

Kawamoto: It’s critical. To Julie’s point, we are now looking through our data and finding legitimate savings opportunities for patients, and we’re proactively reaching out to those patients. Of course, at the end of the day, the decision is always in the provider’s hands — and it should be, because not all of us are clinicians. I certainly am not. But allowing patients to prompt that fuller conversation helps drive the process, so the burden isn’t just on the provider. This is critical.

Gardner: Before we close out, any recommendations? How should the industry best prepare for more transparency around procedures and payments in medical environments? Joann, what do you think people should be thinking about to better prepare themselves as providers for this new era of transparency? 

Lead with clear communication 

Barnes-Lague: Culture is very important within the organization. You need to continue talking about it, because things are shifting. Let’s talk about the burden to the provider, now that patients are responsible for more. There is no other product that you can purchase without paying upfront, but you can walk away from healthcare without paying for it.

The more technology you implement, the more transparency you can provide, the more conversations you can have with those patients – these not only help the patients. You as providers are in business for revenue. This helps bring in the revenue that you have lost with the shift to consumer-driven health plans.

Gardner: Heather, as someone who provides tools to providers, what should they be thinking about when it comes to a new era of transparency?

View a Webinar on How Accurate Financial Data Helps Providers Make Informed Decisions

Kawamoto: While there have been tools available to providers, now we have to make those tools available to patients. Providers are, in many cases, the first line of communication to patients. But before that patient even schedules, if they are in a position to know they need a service, they can go out and self-shop. 

That’s what providers need to be thinking about. How do I get even further out into the decision-making process? How do we engage with that patient at that early point, which is going to build trust, as well as ensure that revenue is coming to your particular facility?

Gardner: Beth, what advice do you have for consumers, the patients? What should they be thinking about to take advantage of transparency?

Take care of physicians and finances

Sanborn: First, I want to advocate for the physicians. We hear all the time about change fatigue and burnout; burnout is as hot a topic as transparency. If providers are going to be put in the position of having to have financial conversations with patients, I think health system leaders need to be aware of that and make sure that providers are properly educated. What do they need to know so that they can accurately communicate with patients? And leaders need to understand how that’s going to affect a physician workload that is already onerous and at times damaging. So, along with Joann’s comments about culture, there needs to be a culture around ushering physicians into that role.

From a consumer standpoint, when we look at the law that just went into effect, patients need to understand what they are looking at. The price list that the hospital is publishing is a chargemaster. It’s a naked price from a hospital. It’s not what they are going to pay, and so we need to eradicate the sticker shock that I am sure is happening at first glance.

Gardner: The patient needs to self-educate about what’s net-net and what’s gross when it comes to these prices?

Sanborn: Right. You can put these prices in plain terms. The chargemaster is what a hospital charges. But remember you have insurance. There are discounts for self-pay. There could be other incentives or subsidies that you are eligible for.

So please don’t have a heart attack, literally, when you look at this price and go, “Oh, my gosh, is that what I am responsible for?” Patients need to be educated on what they are looking at, and then understand the options available to them as far as what they are actually going to pay.

And the other thing is benefits literacy. Payers need to make sure they are reaching out to their consumers and making sure their consumers understand how the benefits work so that they can advocate for themselves. 

Gardner: Alena at CVS, as a provider of pharmaceutical services and goods, what advice do you have about making the best of transparency?

Harrison: Beth hit the nail on the head with a lot of her points. We see similar brute-force regulation happening in the prescription drug space. So pharmaceutical manufacturers now need to publish their “sticker” prices.

Little do most people know, the sticker price is something no one pays. Payers don’t pay it. Patients certainly don’t pay it. The pharmacy doesn’t pay it. And so it is so critical as this information becomes available to make sure that your customers, consumers, and members understand what they are looking at. You as an organization should be prepared to support them through the process of navigating this additional information.

Gardner: Julie, what should people be thinking about on the vendor side, the people providing these tools, now that transparency is a necessary part of the process? What should the tool providers be thinking about to help people navigate this?

Gerdeman: It comes back to the user experience — providing a simple, clear, and consumer friendly experience through the tools. That is what’s going to drive usage, adoption, and loyalty.

View Provider Success Stories on Driving Usage, Adoption, and Loyalty Among Patients

Technology is a great way for providers to drive patient loyalty, and that is where it’s going to make a difference. That’s where you are going to engage them. You are going to win hearts and minds. They are going to want to come back because they had a great clinical experience. They feel better, they are healthier now, and you want the rest of their experience financially to match that great clinical experience. 

Anything we can do in the tools themselves to be predictive, clear, beautiful, and simple will make all the difference.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HealthPay24.



A new Mastercard global payments model creates a template for an agile, secure, and compliant hybrid cloud

The next BriefingsDirect cloud adoption best practices discussion focuses on some of the strictest security and performance requirements that are newly being met for an innovative global finance services deployment.

We’ll now explore how a major financial transactions provider is exploiting cloud models to extend a distributed real-time payment capability across the globe. Due to the needs for localized data storage, privacy regulations compliance, and lightning-fast transaction speeds, this extreme cloud-use formula pushes the boundaries — and possibilities — for hybrid cloud solutions.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

Stay with us now as we hear from an executive at Mastercard and a cloud deployment strategist about a new, cutting-edge use for cloud infrastructure. Please welcome Paolo Pelizzoli, Executive Vice President and Chief Operating Officer at Realtime Payments International for Mastercard, and Robert Christiansen, Vice President and Cloud Strategist at Cloud Technology Partners (CTP), a Hewlett Packard Enterprise (HPE) company. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What is happening with cloud adoption that newly satisfies such major concerns as strict security, localized data, and top-rate performance? Robert, what’s allowing for a new leading edge when it comes to the public clouds’ use?

Christiansen: A number of new use cases have been made public. Front runners like Capital One [Financial Corp.] and some other organizations have taken core applications that would otherwise be considered sacred and are moving them to cloud platforms. Those moves have become more and more evident and visible. The Capital One CIO, Robert Alexander, has been very vocal about that.

Christiansen

So now others have followed suit. And the US federal government regulators have been much more accepting around the audit controls. We are seeing a lot more governance and automation happening as well. A number of the business control objectives – from security to the actual technologies to the implementations — are becoming more accepted practices today for cloud deployment.

So, by default, folks like Paolo at Mastercard are considering the new solutions that could give them a competitive edge. We are just seeing a lot more acceptance of cloud models over the last 18 months.

Gardner: Paolo, is increased adoption a matter of gaining more confidence in cloud, or are there proof points you look for that opens the gates for more cloud adoption?

Compliance challenges cloud 

Pelizzoli: As we see what’s happening in the world around nationalism, the on-the-soil [data sovereignty] requirements have become much more prevalent. It will continue, so we need the ability to reach those countries, deploy quickly, and allow data persistence to occur there.

Pelizzoli

The adoption side of it is a double-edged sword. I think everybody wants to get there, and everybody intuitively knows that they can get there. But there are a lot of controls around privacy, as well as SOX and SOC 1 compliance reporting, and everything else that needs to be adjusted to take the cloud into account. And if the cloud is rerouting traffic because one zone goes down and it flips to another zone, is that still within the same borders, is it still compliant, and can you prove that?

So while technologically this all can be done, from a compliance perspective there are still a lot of different boxes left to check before someone can allow payments data to flow actively across the cloud — because that’s really the panacea.

Gardner: We have often seen a lag between what technology is capable of and what regulations, standards, and best practices allow. Are we beginning to see a compression of that lag? Are regulators, in effect, catching up to what the technology is capable of?

Pelizzoli: The technology is still way out in the front. The regulators have a lot on their plates. We can start moving as long as we adhere to all the regulations, but the regulations between countries and within some countries will continue to have a lagging effect. That being said, you are beginning to see governments understand how sanctions occur and they want their own networks within their own borders.

Those are the types of things that require a full-fledged payments network, one that predated the public Internet, to begin to gain certain new features, functions, and capabilities. We are now basically having to redo that payments-grade network.

Gardner: Robert, the technology is highly capable. We have a major player like Mastercard interested in solving their new globalization requirements using cloud. What can help close the adoption gap? Does hybrid cloud help solve the log-jam?

Christiansen: The regionalization issues are upfront, if not the number-one requirement, as Paolo has been talking about. I think about South Korea. We just had a meeting with the largest banking folks there. They are planning now for their adoption of public cloud, whether it’s Microsoft Azure, Amazon Web Services (AWS), or Google Cloud. But the laws are just now making it available.

Prior to January 1, 2019, the laws prohibited public cloud use for financial services companies, so things are changing. There is a lot of that kind of thing going on around the globe. The strategy seems to be very focused on making the compute, network, and storage localized and regionalized. And that’s going to require technology grounded in some sort of connectivity across on-premises and public, while still putting the proper security in place.

Learn More About Software-Defined and Hybrid Cloud Solutions That Reduce Complexity

So, you may see more use of things like OpenShift or Pivotal’s Cloud Foundry platform, and some overlay that allows folks to take advantage of that, so that you can push down an appliance, like a piece of equipment, into a specific territory.

I’m not certain as to the cost that you incur as a result of adding such an additional local layer. But from a rollout perspective, this is an upfront conversation. Most financial organizations that globalize want to be able to develop and deploy in one way while also having regional, localized on-premises services. And they want it to get done as if in a public cloud. That is happening in a multiple number of regions.

Gardner: Paolo, please tell us more about International Realtime Payments. Are you set up specifically to solve this type of regional-global deployment problem, or is there a larger mandate? What’s the reason for this organization?

Hybrid help from data center to the edge

Pelizzoli: Mastercard made an acquisition a number of years ago of Vocalink. Vocalink did real-time secure interbank funds transfer, and linkage to the automated clearing house (ACH) mechanism for the United Kingdom (UK), including the BACS and LINK extensions to facilitate payments across the banking system. Because it’s nationally critical infrastructure, and it’s bank-to-bank secure funds transfer with liquidity checks in place, we have extended the capabilities. We can go through and perform the same nationally critical functions for other governments in other countries.

Vocalink has now been integrated into Mastercard, and Realtime Payments will extend the overall reach to include the debit, credit, loyalty, and gift “rails” that Mastercard has traditionally been known for.

I absolutely agree that you want to develop one way and then be able to deploy to multiple locations. As hybrid cloud has arrived, with the advent of Microsoft Azure Stack and more recently AWS’s Outposts, it gives you the cloud inside of your data center with the same capabilities, the same consoles, and the same scripting and automation, et cetera.

As we see those mechanisms become richer and more robust, we will go through and be deploying that approach to any and all of our resources — even being embedded at the edge within a point of sale (POS) device.

As we examine the different requirements from government regulations, it really comes down to managing personally identifiable information.

So, if you can secure the transaction information, by abstracting out all the other stuff and doing some interesting cryptography that only those governments know about, the [transaction] flow will still go through [the cloud] but the data will still be there, at the edge, and on the device or appliance.

We already provide for detection and other value-added services for the assurance of the banks, all the way down to the consumers, to protect them. As we start going through and seeing globalization — but also the regionalization due to regulation — it will be interesting to uncover fraudulent activity. We already have unique insights into that.

No more noisy neighbors

Christiansen: Getting back to the hybrid strategy, AWS Outposts and Azure Stack have created the opportunity for such globalization at speed. Someone can plug in a network and power cable and get a public cloud-like experience, yet it’s on an on-premises device. That opens a significant number of doors.

You eliminate multi-tenancy issues, for example, which are a huge obstacle when it comes to compliance. In addition, you have to address the “noisy neighbor” problems, performance issues, failovers, and other things like that which are caused by multi-tenancy.

If you’re able to simply deploy a cloud appliance that is self-aware, you have a whole other trajectory toward use of the cloud technology. I am actively encouraged to see what Microsoft and Amazon can do to press that further. I just wanted to tag that onto what Paolo was talking about.

Pelizzoli: Right, and these self-contained deployments can use Kubernetes. In that way, everything that’s required to go through and run autonomously — even the software-defined networks (SDNs) – can be deployed via containers. It actually knows where its point of persistence needs to be, for data sovereignty compliance, regardless of where it actually ends up being deployed.

This comes back to an earlier comment about the technology being quite far ahead. It is still maturing. I don’t think it is fully mature to everybody’s liking yet. But there are some very, very encouraging steps.

As long as we go in with our eyes wide open, there are certain things that will allow us to go through and use those technologies. We still have some legacy stuff pinned to bare-metal hardware. But as things start behaving in a hybrid cloud fashion as we’re describing, and once we get all the security and guidelines set up, we can migrate off of those legacy systems at an accelerated pace.

Gardner: It seems to me that Realtime Payments International could be a bellwether use case for such global hybrid cloud adoption. What then are the checkboxes you need to sign off on in order to be able to use cloud to solve your problems?

Perpetual personal data protection

Pelizzoli: I can’t give you all the criteria, but the persistence layer needs to be highly encrypted. The transports need to be highly encrypted. Every time anything is persisted, it has to go through a regulatory set of checks, just to make sure that it’s allowed to do what it’s being asked to do. We need a lot of cleanliness in the way metrics are captured so that you can’t use a metric to get back to a person.
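As a minimal sketch of the two controls Pelizzoli mentions (a policy check before anything is persisted, and metrics that cannot be traced back to a person), the following uses an invented policy table and a simple salted hash; a production payments system would rely on managed key material and far richer policies.

```python
# Minimal sketch: every write passes a residency/policy check first, and
# metrics are keyed one-way so counts cannot be mapped back to a person.
# The policy table, record types, and salting scheme are illustrative only.

import hashlib

ALLOWED_REGIONS = {"card_payment": {"eu-west-1", "eu-central-1"}}

def check_before_persist(record_type, target_region):
    """Raise if this record type may not be stored in the target region."""
    if target_region not in ALLOWED_REGIONS.get(record_type, set()):
        raise PermissionError(f"{record_type} may not persist in {target_region}")

def metric_key(account_id, salt="rotate-me-per-deployment"):
    """One-way key for metrics so aggregates cannot identify a person."""
    return hashlib.sha256((salt + account_id).encode()).hexdigest()[:16]

check_before_persist("card_payment", "eu-west-1")        # passes
print(metric_key("account-12345"))                        # pseudonymous key
# check_before_persist("card_payment", "us-east-1")       # would raise
```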

If nothing else, we have learned a lot from the recent [data intrusion] announcements by Facebook, Marriott, and others. The data is quite prevalent out there. And payments data, just like your hospital data, is the most personal.

As we start figuring out the nuances of regulation around an individual service, it must be externalized. We have to be able to literally inject solutions to regulatory requirements – and not by coding it. We can’t be creating any payments that are ambiguous.

Learn More About Software-Defined and Hybrid Cloud Solutions That Reduce Complexity

That’s why we are starting to see a lot of effort going into how artificial intelligence (AI) can help. AI could check services and configurations to test for every possibility so that there isn’t a “hole” that somebody can go through with a certain amount of credentials.

As we go forward, those are the types of things that — when we are in a public cloud — we need to account for. When we were all internal, we had a lot of perimeter defenses. The new perimeter becomes more nebulous in a public cloud. You can create virtual private clouds, but you need to be very wary that you are expanding time factors or latency.

Gardner: If you can check off these security and performance requirements, and you are able to start exploiting the hybrid cloud continuum across different localities, what do you get? What are the business outcomes you’re seeking?

Common cloud consistency 

Pelizzoli: A couple of things. One is agility, in terms of being able to deploy to two adjacent countries, if one country has a major outage. That means ease of access to a payments-grade network — without having to go through and put in hardware, which will invariably fail.

Also, the ability to scale quickly. There is an expected peak season for payments, such as around the Christmas holidays. But there could also be an unexpected peak based on bad news — not a peak season, but a peak day. How do you have your systems scale within one country that doesn't normally produce a lot of transactions but, all of a sudden, is producing 18 times the volume?
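
A minimal sketch of that scaling arithmetic, using assumed per-instance capacity and headroom figures, shows why sizing capacity to observed demand beats pre-building for the worst possible day:

```python
# Hypothetical sketch: size capacity to observed per-country throughput rather
# than pre-building for the worst possible day. All figures are assumptions.
import math

CAPACITY_PER_INSTANCE = 500   # transactions/second one instance can absorb (assumed)
HEADROOM = 1.3                # keep ~30 percent spare so a spike does not saturate instantly

def instances_needed(observed_tps: float) -> int:
    return max(1, math.ceil(observed_tps * HEADROOM / CAPACITY_PER_INSTANCE))

normal_tps = 800                 # a quiet country on a normal day
spike_tps = normal_tps * 18      # the "peak day" described above

print(instances_needed(normal_tps))  # -> 3 instances
print(instances_needed(spike_tps))   # -> 38 instances during the spike
```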

Those types of things give us a different development paradigm. We have a lot of developers. A [common cloud approach] would give us consistency, and the ability to be clean in how we automate deployment; the testing side of it, the security checks, etc.

Before, there were a lot of different ways of doing development, depending on the language and the target. Bringing that together would allow increased velocity and reduced cost, in most cases. And what I mean by “most cases” is I can use only what I need and scale as I require. I don’t have to build for the worst possible day and then potentially never hit it. So, I could use my capacity more efficiently.

Gardner: Robert, it sounds like major financial applications, like a global real-time payment solution, are getting from the cloud what startups and cloud-native organizations have taken for granted. We’re now able to take the benefits of cloud to some of the most extreme and complex use cases. 

Cloud-driven global agility

Christiansen: That’s a really good observation, Dana. A healthcare organization could use the same technologies to leverage an industrial-strength transaction platform that allows them to deliver healthcare solutions globally. And they could deem it as a future-proof infrastructure solution. 

One of the big advantages of the public cloud has been the isolation of all those things that many central IT teams have had to do day-in and day-out: patching releases, upgrading processes, constantly working on the refresh cycle. They call it painting the Golden Gate Bridge — once you finish painting the bridge, you have to go back and do it all over again. A lot of effort and money goes into that refresh process.

And so they are asking themselves, “Hey, how can we take our $3 or $4 billion IT spend, and take x amount of that and begin applying it toward innovation?”


And if someone can take a piece out of that equation, all things are eligible. Everyone is asking the same question: "How do I compete globally in a way that allows me to build agility transformation into my organization?" Right now there is so much rigidity, yet the things Paolo was talking about — the industrial-grade network and transaction framework needed to get this done — cannot be relinquished.

So people are asking a lot of the same questions. They come in and ask us at CTP, “Hey, what use-cases are actually in place today where I can start leveraging portions of the public cloud so I can start knocking off pieces?”

Paolo, how do you use your existing infrastructure, and what portion of cloud enablement can you bring to the table? Is it cloud-first, where you say, “Hey, everything is up for grabs?” Or are you more isolated into using cloud only in a certain segment?

Follow a paved path of patterns

Pelizzoli: Obviously, the endgame is to be in the cloud 100 percent. That’s utopian. How do we get there? There is analysis being done. It depends if we are talking about real-time payments, which is actually more prepared to go into the cloud than some of the core processing that handles most of North America and Europe from an individual credit card or debit card swipe. Some of those core pieces need more rewiring to take advantage of the cloud.

When we look at it, we are decomposing all of the legacy systems and seeing how well they fit into what we call a paved path of patterns. If there is a paved path for a specific type of pattern, we put it on the list of things to transition and rebuild as a cloud-native service. And then we run it alongside its parent for a while, to test it through stressful periods and through forced chaos. If the segment goes down, where does it flip over to? And what is the recovery time?
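
One way to picture the "run it alongside its parent" step is a shadow-call harness: send the same request to the legacy service and its cloud-native replacement, compare the answers, and time the recovery path under forced chaos. The sketch below is purely illustrative; both services, the failure injection, and the recovery path are stand-ins, not Mastercard code.

```python
# Hypothetical shadow-run harness: call the legacy parent and the cloud-native
# candidate with the same request, compare results, and time the failover path
# when chaos is injected. Both services here are stand-ins.
import random
import time

def legacy_service(request: dict) -> dict:
    return {"status": "ok", "fee": round(request["amount"] * 0.01, 2)}

def cloud_native_service(request: dict, chaos: bool = False) -> dict:
    if chaos and random.random() < 0.5:        # forced chaos: simulated zone failure
        raise ConnectionError("availability zone lost")
    return {"status": "ok", "fee": round(request["amount"] * 0.01, 2)}

def shadow_call(request: dict) -> dict:
    expected = legacy_service(request)          # the parent stays the system of record
    start = time.perf_counter()
    try:
        candidate = cloud_native_service(request, chaos=True)
    except ConnectionError:
        candidate = cloud_native_service(request, chaos=False)   # recovery path
    recovery_ms = (time.perf_counter() - start) * 1000
    print(f"results match: {candidate == expected}, recovery: {recovery_ms:.2f} ms")
    return expected

if __name__ == "__main__":
    shadow_call({"amount": 125.00})
```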

The one thing we cannot do is in any way increase latency. In fact, we have some very aggressive targets to reduce latency wherever we can. We also want to improve the recovery and security of the individual components, which we end up calling value-added services.

There are some basic services we have to provide, and then value-added services, which people can opt in or opt out of. We do have a plan and strategy to go through and prioritize that list.

Gardner: Paolo, as you master hybrid cloud, you must have visibility and monitoring across these different models. It’s a new kind of monitoring, a new kind of management.

What do you look to from CTP and HPE to help attain new levels of insight so you can measure what’s going on, and therefore optimize and automate?

Pelizzoli: CTP has been a very good and integral part of our first steps into the cloud. 

Now, I will give you one disclaimer. We have some companies that are Mastercard companies that are already in the cloud, and were born in the cloud. So we have experience with AWS, we have experience with Azure, and we have some experience with Google Cloud Platform.

It’s not that Mastercard isn’t in the cloud already, it is. But when you start taking the entire plant and moving it, we want to make sure that the security controls, which CTP has been helping ratify, get extended into the cloud — and where appropriate, actually removed, because there are better ones in the cloud today.

Extend the cloud management office 

Now, the next phase is to start building out a cloud management office. Our cloud management office was created early last year. It is now getting the appropriate checks and audits from finance, the application teams, the architecture team, security teams, and so on.

As that list of prioritized applications comes through, they have the appropriate paved path, checks, and balances. If there are any exceptions, they get fiercely debated and will either get a pass or not. But even if one does not, it can still sit within our on-premises version of the cloud; it's just more protected.

As we route all the traffic, that is where there is going to be a lot of checks within the different network hops that it has to take to prevent certain information from getting outside when it’s not appropriate.

Gardner: And is there something of a wish list that you might have for how to better fulfill the mandate of that cloud management office?

Pelizzoli: We have CTP, which HPE purchased along with RedPixie. They cover, between those two acquisitions, all of the public cloud providers.

Now, the cloud providers themselves are selling you the next feature or function to move themselves ahead of their competitors. CTP and RedPixie take the common denominator across all of them to make sure that you are not carrying promises from one cloud provider over to another, or assuming that everybody is moving at the same pace.

They also provide implementation capabilities, migration capabilities, and testing capabilities through the larger HPE organization. The fact is we have strong relationships with Microsoft and with Amazon, and so does HPE. If we can bring the collective muscle of Mastercard, HPE, and the cloud providers together, we can move mountains.

Gardner: We hear folks like Paolo describe their vision of what’s possible when you can use the cloud providers in an orchestrated, concerted, and value-added approach. 

Other people in the market may not understand what is going on across multi-cloud management requirements. What would you want them to know, Robert?

O brave new hybrid world

Christiansen: A hybrid world is the true reality. The sheer complexity of the enterprise, no matter what industry you are in, has created these application centers of gravity. Latency issues between applications that could be moved to cloud or not, and constraints on where the data resides, have created huge gravity issues, so enterprises are unable to take advantage of the frameworks that the public clouds provide.

So, the reality is that the public cloud is going to have to come down into the four walls of the enterprise. As a result, we are seeing an explosion of interest in a common abstraction — there is going to be some open-source framework that lets all clouds communicate, talk, and behave alike.

Over the past decade, the on-premises and OpenStack world has been decommissioning the whole legacy technology stack, moving it off to the side as a priority, as they seek to adopt cloud. The reality now is that we have regional, government, and data privacy issues, we have got all sorts of things that are pulling it all back internally again. 

Out of all this chaos is going to rise the phoenix of some sort of common framework. There has to be. There is no other way out of this. We are already seeing organizations such as Paolo’s at Mastercard develop a mandate to take the agile step forward.

They want somebody to provide the ability to gain more business value versus the technology, to manage and keep track of infrastructure, and to future-proof that platform. But at the same time, they want a technology position where they can use common frameworks, common languages, things that give interoperability across multiple platforms. That’s where you are seeing a huge amount of investment. 

I don't know if you saw that HashiCorp recently got $100 million in additional funding, and they have a valuation of almost $2 billion. This is a company that specializes in exactly that space. And we are going to see more of that.

Learn More About Software-Defined and Hybrid Cloud Solutions That Reduce Complexity

And as folks like Mastercard drive the requirements, the all-in on one public cloud mentality is going to quickly evaporate. These platforms absolutely have to learn how to play together and get along with on-premises, as well as between themselves.

Gardner: Paolo, any last thoughts about how we get cloud providers to be team players rather than walking around with sharp elbows?

Tech that plays well with others

Pelizzoli: I think it's actually going to be the technology that is allowed to run on these cloud platforms that takes care of a lot of it.

I mentioned Kubernetes and Docker earlier, and there are others out there. The fact that they can isolate themselves from the cloud provider itself is where it will neutralize some of the sharp elbowing that goes on.

Now, there are going to be features that keep coming up that companies like ours will look at, and we will start putting workloads where the latest cutting-edge feature gives us a competitive advantage and then wait for other cloud providers to catch up. And when they do, we can then deploy out on those. But those will be very conscious decisions.

I don't think there is one cloud that fits all, but where appropriate we will be absolutely multi-cloud. Where there is a defining difference, we will select the cloud provider that best suits that area to cover that specific capability.

Gardner: It sounds like these extreme use cases, and the very important requirements that organizations like Mastercard have, will compel this marketplace to continue to flourish rather than become one-size-fits-all. It's an interesting time, in which the maturation of applications and use cases is starting to create more of a democratization of cloud in the marketplace.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


Where the rubber meets the road: How users see the IT4IT standard building competitive business advantage

The next BriefingsDirect IT operations strategy panel discussion explores how the IT4IT™ Reference Architecture for IT management creates demonstrated business benefits — in many ways, across many types of organizations.

Since its delivery in 2015 by The Open Group, IT4IT has focused on defining, sourcing, consuming, and managing services across the IT function's value stream to its stakeholders. Among its earliest and most ardent users are IT vendors, startups, and global professional services providers.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about how this variety of highly efficient businesses and their IT organizations make the most of IT4IT — often as a complementary mix of frameworks and methodologies — we are now joined by our panel:

The panel discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. Here are some excerpts:

Gardner: Big trends are buffeting business in 2019. Companies of all kinds need to attain digital transformation faster, make their businesses more intelligent and responsive to their markets, and improve end user experiences. So, software development, applications lifecycles, and optimizing how IT departments operate are more important than ever. And they need to operate as a coordinated team, not in silos.

Lars, why is the IT4IT standard so powerful given these requirements that most businesses face?

Rossen: There are a number of reasons, but the starting point is the fact that it’s truly end-to-end. IT4IT starts from the planning stage — how to convert your strategy into actionable projects that are being measured in the right manner — all the way to development, delivery of the service, how to consume it, and at the end of the day, to run it.

There are many other frameworks. They are often very process-oriented, or capability-oriented. But IT4IT gives you a framework that underpins it all. Every IT organization needs to have such a framework in place and be rationalized and well-integrated. And IT4IT can deliver that.

Gardner: And IT4IT is designed to help IT organizations elevate themselves in terms of the impact they have on the overall business.

Mark, when you encounter someone who asks, "IT4IT? What is that?" what's your elevator pitch? How do you describe it so that a lay audience can understand it?

Bodman: I pitch it as a framework for managing IT and leave it at that. I might also say it’s an operating model because that’s something a chief information officer (CIO) or a business person might know.

If it’s an individual contributor in one of the value streams, I say it’s a broader framework than what you are doing. For example, if they are a DevOps guy, or a maybe a Scaled Agile Framework (SAFe) guy, or even a test engineer, I explain that it’s a more comprehensive framework. It goes back to the nature of IT4IT being a hub of many different frameworks — and all designed as one architecture.

Gardner: Is there an analog in business, or even in culture, for what IT4IT is to an enterprise?

Rossen: The analogy I have is that you go to The Lord of the Rings, and IT4IT is the “one ring to rule them all.” It actually combines everything you need.

Gardner: Why do companies need this now? What are the problems they’re facing that requires one framework to rule them all?

Everyone, everything on the same page

Esler: A lot of our clients have implemented a lot of different kinds of software — automation software, orchestration software, and portals. They are sharing more information, more data. But they haven’t changed their operating model.

Using IT4IT is a good way to see where your gaps are, what you are doing well, what you are not doing so well, and how to improve on that. It gives you a really good foundation for knowing the business of IT.

Bennett: What we are hearing in the field is that IT departments are generally drowning at this point. You have a myriad of factors, some of which are their fault and some of which aren't. The compliance world is getting nightmare-strict. The privacy laws that are coming in are straining what are already resource-constrained organizations. At the same time, budgets are being cut.

The other side of it is the users are demanding more from IT, as a strategic element as opposed to simply a support organization. As a result, they are drowning on a daily basis. Their operating model is — they are still running on wooden wheels. They have not changed any of their foundational elements.

If your family has a spending problem, you don’t stop spending, you go on a budget. You put in an Excel spreadsheet, get all the data into one place, pull it together, and you figure out what’s going on. Then you can execute change. That’s what we do from an IT perspective. It’s simply getting everything in the same place, on the same page, and talking the same language. Then we can start executing change to survive.

Peruse a Full Library of IT4IT Reference Architecture Publications

Gardner: Because IT in the past could operate in silos, there would be specialization. Now we need a team-sport approach. Mark, how does IT4IT help that?

Bodman: An analogy is the medical profession. You have specialists, and you have generalist doctors. You go to the generalist when you don't really know where the problem is. Then you go to a specialist with a very specific skill set and the tools to go deep. IT4IT is aimed at that generalist layer, with pointers to the specialists.

Gardner: IT4IT has been available since October 2015, which is a few years in the market. We are now seeing different types of adoption patterns — from small- to medium-sized businesses (SMBs) up to enterprises. What are some "rubber meets the road" points, where the value is compelling and understood, that then drive this deeper into the organization?

Where do you see IT4IT as an accelerant to larger business-level improvements?

Success via stability

Vijaykumar: When we look at the industry in general there are a lot of disruptive innovations, such as cloud computing taking hold. You have other trends like big data, too. These are driving a paradigm shift in the way IT is perceived. So, IT is not only a supporting function to the business anymore — it’s a business enabler and a competitive driver.

Now you need stability from IT, and IT needs to function with the same level of rigor as a bank or manufacturer. If you look at those businesses, they have reference architectures that span several decades. That stability was missing in IT, and that is where IT4IT fills a gap — we have come up with a reference architecture.

What does that mean? When you implement new tooling solutions or you come up with new enterprise applications, you don’t need to rip apart and replace everything. You could still use the same underlying architecture. You retain most of the things — even when you advance to a different solution. That is where a lot of value gets created.

Esler: One thing you have to remember, too, is that this is not just about new stuff. It's not just about artificial intelligence (AI), the Internet of Things (IoT), big data, and all of that kind of stuff — the new, shiny stuff. There is still a lot of old stuff out there that has to be managed in the same way. You have to have a framework like IT4IT that allows you to have a hybrid environment to manage it all.

Gardner: The framework to rule all frameworks.

Rossen: That also goes back to the concept of multi-modal IT. Some people say, “Okay, I have new tools for the new way of doing stuff, and I keep my old tools for the old stuff.”

But, in the real world, these things need to work together. The services depend on each other. If you have a new smart banking application, and you still have a COBOL mainframe application that it needs to communicate with, if you don’t have a single way of managing these two worlds you cannot keep up with the necessary speed, stability, and security.

Gardner: One of the things that impresses me about IT4IT is that any kind of organization can find value and use it from the get-go. Jerrod, for a start-up or an SMB, where are you seeing the value that IT4IT brings?

Solutions for any size business

Bennett: SMBs have less pain, but proportionally it’s the same, exact problem. Larger enterprises have enormous pain, the midsize guys have medium pain, but it’s the same mess.

But the SMBs have an opportunity to get a lot more value because they can implement a lot more of this a lot faster. They can even rip up the foundation and start over, a greenfield approach. Most large organizations simply do not have that capability.

The same kind of change applies everywhere — in big data, for example, how much data is going to be created in the next five years versus the last five years? That's universal; everyone is dealing with these problems.

Gardner: At the other end of the scale, Mark, big multinational corporations with sprawling IT departments and thousands of developers — they need to rationalize, they need to limit the number of tools, find a fit-for-purpose approach. How does IT4IT help them? 

Bodman: It helps you understand which areas to rationalize first. That's important because you are not going to do everything at once. You are going to focus on your biggest pain points.

The other element is the legacy element. You can’t change everything at once. There are going to be bigger rocks, and then smaller rocks. Then there are areas where you will see folks innovate, especially when it comes to the DevOps, new languages, and new platforms that you deploy new capabilities on.

What IT4IT allows is for you to increasingly interchange those parts. A big value proposition of IT4IT is standardizing those components and the interfaces. Afterward, you can change out one component without disrupting the entire value chain.
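
Here is a small, hypothetical sketch of that idea, with made-up vendor classes standing in for real tools: the value-stream code depends only on a narrow interface, so one component can be swapped for another without touching the rest of the chain.

```python
# Hypothetical sketch: the value stream depends only on a narrow interface, so
# one vendor's tool can be swapped for another without touching the chain.
from typing import Protocol

class IncidentBackend(Protocol):
    def open_incident(self, summary: str) -> str: ...

class VendorA:
    def open_incident(self, summary: str) -> str:
        return f"A-{abs(hash(summary)) % 10000}"

class VendorB:
    def open_incident(self, summary: str) -> str:
        return f"B-{abs(hash(summary)) % 10000}"

def raise_from_alert(backend: IncidentBackend, alert: str) -> str:
    # The value-stream logic sees only the interface, never vendor specifics.
    return backend.open_incident(f"Auto-raised from alert: {alert}")

if __name__ == "__main__":
    print(raise_from_alert(VendorA(), "CPU saturation on node 7"))
    print(raise_from_alert(VendorB(), "CPU saturation on node 7"))  # swapped, nothing else changes
```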

Gardner: Rob, complexity is inherent in IT. They have a lot on their plate. How does the IT4IT Reference Architecture help them manage complexity?

Reference architecture connects everything

Akershoek: You are right, there is growing complexity. We have more services to manage, more changes and releases, and more IT data. That's why it's essential for an IT organization of any size to structure and standardize how you manage IT from a broader perspective. It's like creating a bigger picture.

Most organizations have multiple teams working on different tools and components in a whole value chain. I may have specialized people for security, monitoring, the service desk, development, for risk and compliance, and for portfolio management. They tend to optimize their own silo with their own practices. That’s what IT4IT can help you with — creating a bigger picture. Everything should be connected.

Esler: I have used IT4IT to help get rid of those very same kinds of silos. I did it via a workshop format. I took the reference architecture from IT4IT and I got a certain number of people — and I was very specific about the people I wanted — in the room. In doing this kind of thing, you have to have the right people in the room.

We had people for service management, security, infrastructure, and networking — a whole broad range across IT. We placed them around the table, and I took them through the IT4IT Reference Architecture. As I described each of the terms, each representing a function, they began to talk among themselves, saying, "Yes, I had a piece of that. I had this piece of this other thing. You have a piece of that, and this piece of this."

It started them thinking about the larger functions, that there are groups performing not just the individual pieces, like service management or infrastructure.

Peruse a Full Library of IT4IT Reference Architecture Publications

Gardner: IT4IT then is not muscling out other aspects of IT, such as Information Technology Infrastructure Library (ITIL), The Open Group Architecture Framework (TOGAF), and SAFe. Is there a harmonizing opportunity here? How does IT4IT fit into a larger context among these other powerful tools, approaches, and methodologies?

Rossen: That's an excellent question, especially given that a lot of people into SAFe might say they don't need IT4IT, that SAFe is solving their whole problem. But once you get to discussing it, you see that SAFe doesn't give you any recommendation about how tools need to be connected to create the automated pipeline that SAFe relies on. So IT4IT actually complements SAFe very well. And that's the same story again and again with the other ones.

The IT4IT framework can help bring those two things — ITIL and SAFe — together without changing the IT organizations using them. ITIL can still be relevant for the helpdesk, et cetera, and SAFe can still function — and they can collaborate better.

Gardner: Varun, another important aspect to maturity and capability for IT organizations is to become more DevOps-oriented. How does DevOps benefit from IT4IT? What’s the relationship?

Go with the data flow

Vijaykumar: When we talk about DevOps, organizations typically focus on the entire service design lifecycle and how it moves into transition. But the relationship sometimes gets lost between how a service is conceptualized and how it is translated into a design. We need to use IT4IT to establish traceability, to make sure that all the artifacts and all the information flow through the pipeline and across the IT value chain.

The way we position the IT4IT framework to organizations and customers is very important. A lot of times people ask me, “Is this going to replace ITIL?” Or, “How is it different from DevOps?”

The simplest way to answer those questions is to tell them that this is not something that provides narrative guidance. It's not a process framework, but rather an information framework. We are essentially prescribing the way data needs to flow across the entire IT value chain, and how information needs to be exchanged.

It defines how those integrations are established. And that is vital to having an effective DevOps framework because you are essentially relying on traceability to ensure that people receive the right information to accept services, and then support those services once they are designed.
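
The traceability idea can be pictured as a simple linked data model in which each artifact records the artifact it came from, so a running service can be traced back to the requirement that justified it. This is an illustrative sketch, not the IT4IT data model itself; the artifact kinds and names are assumptions.

```python
# Hypothetical sketch of traceability: each artifact links to its parent, so a
# running service can be traced back to the requirement that justified it.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Artifact:
    kind: str                        # e.g. "requirement", "design", "build", "service"
    name: str
    parent: Optional["Artifact"] = None

    def trace(self) -> List[str]:
        chain, node = [], self
        while node is not None:
            chain.append(f"{node.kind}:{node.name}")
            node = node.parent
        return list(reversed(chain))

requirement = Artifact("requirement", "faster-settlement")
design = Artifact("design", "settlement-api-v2", parent=requirement)
build = Artifact("build", "settlement-api-v2 build 417", parent=design)
service = Artifact("service", "settlement-api (production)", parent=build)

print(" -> ".join(service.trace()))
# requirement:faster-settlement -> design:settlement-api-v2 -> build:... -> service:...
```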

Gardner: Let’s think about successful adoption, of where IT4IT is compelling to the overall business. Jerrod, among your customers where does IT4IT help them?

Holistic strategy benefits business

Bennett: I will give an example. I hate the word, but “synergy” is all over this. Breaking down silos and having all this stuff in one place — or at least in one process, one information framework — helps the larger processes get better.

The classic example is Agile development. Development runs in a silo, they sit in a black box generally, in another building somewhere. Their entire methodology of getting more efficient is simply to work faster.

So, they implement sprints, or Agile, or scrum, or you name it. And what you recognize is they didn't have a resource problem; they had a throughput problem. The throughput problem can be slightly solved using some of these methodologies, by squeezing a little bit more out of their cycles.

But what you find, really, is they are developing the wrong thing. They don’t have a strategic element to their businesses. They simply develop whatever the heck they decide is important. Only now they develop it really efficiently. But the output on the other side is still not very beneficial to the business.

If you input a little bit of strategy in front of that and get the business to decide what it is that they want you to develop – then all of a sudden your throughput goes through the roof. And that’s because you have broken down barriers and brought together the [major business elements], and it didn’t take a lot. A little bit of demand management with an approval process can make development 50 percent more efficient — if you can simply get them working on what’s important.

It’s not enough to continue to stab at these small problems while no one has yet said, “Okay, timeout. There is a lot more to this information that we need.” You can take inspiration from the manufacturing crisis in the 1980s. Making an automobile engine conveyor line faster isn’t going to help if you are building the wrong engines or you can’t get the parts in. You have to view it holistically. Once you view it holistically, you can go back and make the assembly lines work faster. Do that and sky is the limit.

Gardner: So IT4IT helps foster “simultaneous IT operations,” a nice and modern follow-on to simultaneous engineering innovations of the past.

Mark, you use IT4IT internally at ServiceNow. How does IT4IT help ServiceNow be a better IT services company?

IT to create and consume products

Bodman: A lot of the activities at ServiceNow are for creating the IT Service Management (ITSM) products that we sell on the market, but we also consume them. As a product manager, a lot of my job is interfacing with other product managers, dealing with integration points, and having data discussions.

As we make the product better, we automatically make our IT organization better because we are consuming it. Our customer is our IT shop, and we deploy our products to manage our products. It’s a very nice, natural, and recursive relationship. As the company gets better at product management, we can get more products out there. And that’s the goal for many IT shops. You are not creating IT for IT’s sake, you are creating IT to provide products to your customers.

Gardner: Rob, at Fruition Partners, a DXC Technology company, you have many clients that use IT4IT. Do you have a use case that demonstrates how powerful it can be?

Akershoek: Yes, I have a good example of an insurance organization that has been forced to significantly reduce the cost of developing and maintaining IT services.

Initially, they said, "Oh, we are going to automate and monitor DevOps." When I showed them IT4IT they said, "Well, we are already doing that." And I said, "Why don't you have the results yet?" And they said, "Well, we are working on it, come back in three months."


But after that period of time, they still were not succeeding with speed. We said, "Use IT4IT, take it to specific application teams, and then move to cloud, in this case, Azure Cloud. Show that you can do it end-to-end, from strategy into operation, in three months' time, and demonstrate that it works."

And that’s what has been done, it saved time and created transparency. With that outcome they realized, “Oh, we would have never been able to achieve that if we had continued the way we did it in the past.” 

Gardner: John, at HPE Pointnext, you are involved with digital transformation, the highest order of strategic endeavors and among the most important for companies nowadays. When you are trying to transform an organization – to become more digital, data-driven, intelligent, and responsive — how does IT4IT help?

Esler: When companies do big, strategic things to try and become a digital enterprise, they implement a lot of tools to help. That includes automation and orchestration tools to make things go faster and get more services out.

But they forget about the operating model underneath it all and they don’t see the value. A big drug company I worked with was expecting a 30 percent cost reduction after implementing such tools, and they didn’t get it. And they were scratching their heads, asking, “Why?”

We went in and used IT4IT as a foundation, along with some tools that HPE has, to help them understand — across different domains, depending on the level of service they want to provide to their customers — where they needed change. They were able to see what that kind of organization looks like when it's all said and done.

Gardner: Lars, Micro Focus has 4,000 to 5,000 developers and needs to put software out in a timely fashion. How has IT4IT helped you internally to become a better development organization?

Streamlining increases productivity

Rossen: We used what is by now a standard technique in IT4IT: rationalization. Over a year, we managed to consolidate onto a single tool chain that 80 percent of the developers are on.

With that, we are now much more agile in delivering products to market. Instead of taking a year, we can easily deliver the same every three months. We also handle hot fixes and changes; we probably have 20 releases a day. And on top of that, we can do a lot more sharing of components. We can align much more to a common strategy for how all our products are developed and delivered to our customers. It's been a massive change.

Gardner: Before we close out, I’d like to think about the future. We have established that IT4IT has backward compatibility, that if you are a legacy-oriented IT department, the reference architecture for IT management can be very powerful for alignment to newer services development and use.

But there are so many new things coming on, such as AIOps, AI, machine learning (ML), and data-driven and analytics-driven business applications. We are also finding increased hybrid cloud and multi-cloud complexity across deployment models. And better managing total costs to best operate across such a hybrid IT environment is also very important.

So, let’s take a pause and say, “Okay, how does IT4IT operate as a powerful influence two to three years from now?” Is IT4IT something that provides future-proofing benefits?

The future belongs to IT4IT 

Bennett: Nothing is future-proof, but I would argue that we really needed IT4IT 20 years ago — and we didn’t have it. And we are now in a pretty big mess.

There is nothing magical here. It's been well thought-out and well-written, but there is nothing new in there. IT4IT is how it ought to have been done for a while; it just took a group of people getting together to sit down and architect it out, end-to-end.

Theoretically it could have been done in the 1980s, and it would still be relevant because they were doing the same thing. There isn't anything new in IT; there are lots of new-fangled toys, but that's all just minutiae. The foundation hasn't changed. I would argue that in 2040 IT4IT will still be relevant.

Peruse a Full Library of IT4IT Reference Architecture Publications

Gardner: Varun, do you feel that organizations that adopt IT4IT are in a better position to grow, adapt, and implement newer technologies and approaches? 

Vijaykumar: Yes, definitely, because IT4IT — although it caters to traditional IT operating models — also introduces a lot of new concepts that were not in existence earlier. Look at concepts like service brokering, catalog aggregation, and bringing in the role of a service integrator. All of these may have been in existence, but there was no real structure around them.

IT4IT provides a consolidated framework for us to embrace all of these capabilities and to drive improvements in the industry. Coupled with advances in computing — where everything gets delivered on the fly, and where end users and consumers expect a lot more out of IT — I think IT4IT helps in that direction as well.

Gardner: Lars, looking to the future, how do you think IT4IT will be appreciated by a highly data-driven organization?

Rossen: Well, IT4IT was a data architecture to begin with. So, in that sense it was the first time that IT itself got a data architecture that was generic. Hopefully that gives it a long future.

I also like to think about it as being like roads we are building. We now have the roads to do whatever we want. Eventually you stop caring about it, it’s just there. I hope that 20 years from now nobody will be discussing this, they will just be doing it.

The data model advantage

Gardner: Another important aspect to running a well-greased IT organization — despite the complexity and growing responsibility — is to be better organized and to better understand yourself. That means having better data models about IT. Do you think that IT4IT-oriented shops have an advantage when it comes to better data models about IT?

Bodman: Yes, absolutely. One of the things we just produced within the [IT4IT reference architecture data model] is guidance on reporting key performance indicators (KPIs). We are now able to show what kinds of KPIs you can get from the data model — and be very prescriptive about it.
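
For example, once incidents are recorded consistently in a shared data model, a KPI such as mean time to restore (MTTR) can be computed the same way by every team and benchmarked meaningfully. The records below are made-up sample data, and MTTR is used only as an illustrative KPI, not as the specific guidance in the standard.

```python
# Hypothetical sketch: with incidents recorded in one shared data model, a KPI
# such as mean time to restore (MTTR) is computed the same way everywhere.
from datetime import datetime
from statistics import mean

incidents = [  # made-up sample records
    {"opened": datetime(2019, 3, 1, 9, 0),  "restored": datetime(2019, 3, 1, 10, 30)},
    {"opened": datetime(2019, 3, 2, 14, 0), "restored": datetime(2019, 3, 2, 14, 45)},
    {"opened": datetime(2019, 3, 5, 22, 0), "restored": datetime(2019, 3, 6, 1, 0)},
]

def mttr_hours(records) -> float:
    return mean((r["restored"] - r["opened"]).total_seconds() / 3600 for r in records)

print(f"MTTR: {mttr_hours(incidents):.2f} hours")  # -> MTTR: 1.75 hours
```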

In the past there had been different camps and different ways of measuring and doing things. Of course, it’s hard to benchmark yourself comprehensively that way, so it’s really important to have consistency there in a way that allows you to really improve.


The second part — and this is something new in IT4IT that is fundamental — is the "Request to Fulfill (R2F)" value stream. It's now possible to have a top-line, self-service way to engage with IT through a catalog that is easy to consume and focused on a specific experience. That's an element that has been missing. It may have been out there in pockets, but now it's baked in. It's just fabric, taught in schools, and you basically implement it.

Rossen: The new R2F capability allows an IT organization to transform, from being a cost center that does what people ask, to becoming a service provider and eventually a service broker, which is where you really want to be.

Esler: I started in this industry in the mainframe days. The concept of shared services was prevalent, so time-sharing, right? It’s the same thing. It hasn’t really changed. It’s evolved and going through different changes, but the advent of the PC in the 1980s didn’t change the model that much.

Now with hyperconvergence, it’s moving back to that mainframe-like thing where you define a machine by software. You can define a data center by software.

Peruse a Full Library of IT4IT Reference Architecture Publications

Gardner: For those listening and reading and who are intrigued by IT4IT and would like to learn more, where can they go and find out more about where the rubber meets the IT road?

Akershoek: The best way is to go to The Open Group website. There's a lot of information on the reference architecture itself, case studies, and video materials.

As for how to get started, you can typically start very small. Look at the materials, try to understand how you currently operate your IT organization, and plot it against the reference architecture.

That provides an immediate sense of what you may be missing, where you are duplicating effort, or where too much is going on without governance. You can begin to create a picture of your IT organization. That's the first step toward co-creating a bigger picture with your own organization and deciding where you want to go next.
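
A first pass at that kind of gap analysis can be as simple as mapping current tools to reference functions and flagging what is missing or covered twice. The function names and tools below are illustrative placeholders, not the official IT4IT component list.

```python
# Hypothetical first-pass gap analysis: map today's tools to reference
# functions and flag what is missing or covered twice. Names are placeholders.
REFERENCE_FUNCTIONS = [
    "portfolio demand", "build and test", "release composition",
    "service monitoring", "incident management", "chargeback",
]

current_tooling = {
    "build and test": ["Jenkins"],
    "service monitoring": ["ToolX", "ToolY"],   # two tools covering the same function
    "incident management": ["ServiceDesk"],
}

gaps = [f for f in REFERENCE_FUNCTIONS if f not in current_tooling]
duplicates = {f: tools for f, tools in current_tooling.items() if len(tools) > 1}

print("Gaps:", gaps)
print("Duplicated coverage:", duplicates)
```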

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: The Open Group.


IT kit sustainability: A business advantage and balm for the planet

The next BriefingsDirect sustainable resources improvement interview examines how more companies are plunging into the circular economy to make the most of their existing IT and business assets.

We’ll now hear how more enterprises are optimizing their IT kit and finding innovative means to reduce waste — as well as reduce energy consumption and therefore their carbon footprint. Stay with us as we learn how a circular economy mindset both improves sustainability as a benefit to individual companies as well as the overall environment.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to help us explore the latest approaches to sustainable IT is William McDonough, Chief Executive of McDonough Innovation and Founder of William McDonough and Partners, and Gabrielle Ginér, Head of Environmental Sustainability for BT Group, based in London. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: William, what are the top trends driving the need for reducing waste, redundancy, and inefficiency in the IT department and data center?

McDonough

McDonough: Materials and energy are both fundamental, and I think people who work in IT systems that are often optimized have difficulty with the concept of waste. What this is about is eliminating the entire concept of waste. So, one thing’s waste is another thing’s food — and so when we don’t waste time, we have food for thought.

A lot of people realize that it’s great to do the right thing, and that would be to not destroy the planet in the process of what you do every day. But it’s also great to do it the right way. When we see the idea of redesigning things to be safe and healthy, and then we find ways to circulate them ad infinitum, we are designing for next use — instead of end of life. So it’s an exciting thing.

Gardner: If my example as an individual is any indication, I have this closet full of stuff that’s been building up for probably 15 years. I have phones and PCs and cables and modems in there that are outdated but that I just haven’t gotten around to dealing with. If that’s the indication on an individual home level, I can hardly imagine the scale of this at the enterprise and business level globally. How big is it?

Devices designed for reuse

McDonough: It's as big as you think it is, everywhere. What we are looking at is that design is the first signal of human intention. If we design these things to be disassembled and reusable, we therefore design for next use. That's the fundamental shift: we are now designing differently. We don't design for one-time use: take, make, waste. We instead design for what's next.

And it’s really important, especially in IT, because these things, in a certain way, they are ephemeral. We call them durables, but they are actually only meant to last a certain amount of time before we move onto the next big thing.

Learn How to Begin Your IT Circular Economy Journey

If we designed your phone at any point in the last 25 years, the odds of you using that same phone for 25 years are pretty low. The notion that we can design these things to become useful again quickly is really part of the new system. We now see the recycling of phone boards that go all the way back to base materials in very cost-effective ways. You can mine gold at $210 a ton out there, or you can mine phone boards at about $27,000 a ton. So that's pretty exciting.

Gardner: There are clearly economic rationales for doing the right thing. Gabrielle, tell us why this is important to BT as a telecommunications leader.

Ginér

Ginér: We have seen change in how we deal with and talk to consumers about this. We actually encourage them now to return their phones. We are paying for them. Customers can just walk into a store and get money back. That’s a really powerful incentive for people to return their phones.

Gardner: This concept of design for reuse and recovery is part of the cradle-to-cradle design concept that you helped establish, William. Tell us how your book, Cradle to Cradle, leads to the idea of a circular economy.

Reuse renews the planet

McDonough: When we first posited Cradle to Cradle, we said you can look at the Earth and realize there are two fundamental systems at play. One is the biological system of which we are a part, the natural systems. And in those systems waste equals food. It wants to be safe and healthy, including the things you wear, the water, the food, all those things, those are biological nutrients.

Then we have technology. Once we started banging on rocks and making metals and plastics and things like that, that’s really technical nutrition. It’s another metabolism. So we don’t want to get the two confused. 

When we talk about lifecycle, we should remember that living things have a lifecycle. But your telephone is not a living thing — and yet we talk about it having a lifecycle, and then an end of life. Well, wait a minute, it's not alive. It talks to you, but it's not alive. So really it's a product, or a service.

In Cradle to Cradle, we say there are things that our biology needs to be safe, healthy, and able to go back to the soil safely. And then there is technology. Technology needs to come back into technology and be used over and over again. It's for our use.

And so, this brings up the concept we introduced, which is product-as-a-service. What you actually want from the phone is not 4,600 different kinds of chemicals. You want a telephone you can talk into for a certain period of time. And it’s a service you want, really. And we see this being products-as-services, and that becomes the circular economy.

Once you see that, you design it for that use. Instead of saying, "Design for end-of-life; I am going to throw it in a landfill," or something, you say, "I design it for next use." That means it's designed for disassembly. We know we are going to use it again. It becomes part of a circular economy, which will grow the economy because we are doing it again and again.

Gardner: This approach seems to be win-win-win. There are lots of incentives, lots of rationales for not only doing well, but for doing good as companies. For example, Hewlett Packard Enterprise (HPE) recently announced a big initiative about this.

Another part of this in the IT field that people don't appreciate is the amount of energy that goes into massive data centers. The hyperscale cloud companies are each investing billions of dollars a year in new data centers. It financially behooves them to consume less energy, but the amount of energy that data centers need is growing at a fantastic rate, and it is therefore becoming a larger percentage of the overall carbon footprint.

William, do carbon and energy also need to be considered in this whole circular economy equation?

Intelligent energy management

McDonough: Clearly with the issues concerning climate and energy management, yes. If our energy is coming from fossil fuels, we have fugitive carbon in the atmosphere. That’s something that’s now toxic. We know that. A toxin is material in the wrong place, wrong dose, wrong duration, so this has to be dealt with.

Some major IT companies are leading in this, including Apple, Google, Facebook, and BT. This is quite phenomenal, really. They are reducing their energy consumption by being efficient. They are also adding renewables to their mix, to the point that renewables are going to be a major part of their power use — renewably sourced and carbon-free. That's really interesting.

Learn How to Begin Your IT Circular Economy Journey

When we realize the dynamic of the energy required to move data — and that the people who do this have the possibility of doing it with renewably powered means — this is a harbinger of something really critical. We can do this with renewable energy while still using electricity. It's not like asking some heating plant to shift gears quickly or some transportation system to change its power systems; those things are good too, but this industry is based on being intelligent and understanding the statistical significance of what you do.

Gardner: Gabrielle, how is BT looking at the carbon and energy equation and helping to be more sustainable, not only in its own operations, but across your supply chain, all the companies that you work with as partners and vendors?

Ginér: Back to William’s point, two things stand out. One, we are focused on being more energy efficient. Even though we are seeing data traffic grow by around 40 percent per year, we now have nine consecutive years of reducing energy consumption in our networks.

To the second point around renewable energy, we have an ambition to be using 100 percent renewable electricity by 2020. Last year we were at 81 percent, and I am pleased to say that we did a couple of new deals recently, and we are now up at 96 percent. So, we are getting there in terms of the renewables.

What’s been remarkable is how we have seen companies come together in coalitions that have really driven the demand and supply of renewable energy, which has been absolutely fantastic.

As for how we work with our suppliers like HPE, for example, as a customer we have a really important role to play in sending demand signals to our suppliers of what we are looking for. And obviously we are looking for our suppliers to be more sustainable. The initiatives that HPE announced recently in Madrid are absolutely fantastic and are what we are looking for.

Gardner: It’s great to hear about companies like BT that are taking a bellwether approach to this leadership position. HPE is being aggressive in terms of how it encourages companies to recycle and use more data center kit that’s been reconditioned so that you get more and more life out of the same resources.

But if you are not aggressive, if you are not on the leadership trajectory in terms of sustainability, what’s the likely outcome in a few years?

Smart, sustainable IT 

McDonough: This is a key question. When a supplier company like HPE says, “We are going to care about this,” what I like about that is it’s a signal that they are providing services. A lot of the companies — when they are trying to survive in business or trying to move through different agendas to manage modern commerce — they may not have time to figure out how to get renewably powered.

But for the ones that do know how to manage those things, it becomes just part of a service. That's a really elegant thing. So, a company like HPE says, "Okay, how many problems of yours can we solve? Oh, we will solve that one for you, too. Here, you do what you do, we will do what we do — and we will all do this together." The notion that it becomes part of the service is a very elegant thing.


Gardner: A lot of companies have sustainability organizations, like BT. But how closely are they aligned with the IT organization? Do IT organizations need to create their own sustainability leaders? How should companies drive more of the point of the arrow in the IT department's direction?

McDonough: IT is really critical now because it's at the core of operations. It touches all the information that's moving through the system. That's the place where we can inform the activities and our intentions. But the point today is that, as we see artificial intelligence (AI) coming in, we have to remember there is this thing called human intelligence that goes with it, and there is a natural intelligence that goes with being in the world.

We should begin with our values of what is the right thing to do. We talked about what’s right and wrong, or what’s good and bad. Aristotle talked about what is less and more; truth in number. So, when we combine these two, you really have to begin with your values first. Do the right thing, and then go to the value, and do it all the right way.

And that means, let’s not get confused. Because if you are being less bad and you think it’s good, you have to stop and think because you are being bad by definition, just less so. So, we get confused.

Circular Economy Report Guides Users to Increased Sustainability

What we really want to be is more good. Let’s do less bad for sure, but let’s also go out and do more good. And the statistical reference points for data are going to come through the IT to help us determine that. So, the IT department is actually the traffic control for good corporate behavior. 

Gardner: Gabrielle, some thoughts about why sustainability is an important driver for BT in general, and maybe some insights into how the IT function in particular can benefit?

Ginér: I don’t think we need a separate sustainability function for IT. It comes back to what William mentioned about values. For BT, sustainability is part of the company’s ethos. We want to see that throughout our organization. I sit in a central team, but we work closely with IT. It’s part of sharing a common vision and a common goal.

Positive actions, profitable results

Gardner: For those organizations planning on a hybrid IT future, where they are making decisions about how much public cloud, private cloud, and traditional IT — perhaps they should be factoring more about sustainability in terms of a lifecycle of products and the all-important carbon and energy equation.

How do we put numbers on this in ways that IT people can justify against the all-important total cost of ownership and return on investment calculations across hybrid IT choices?

McDonough: Since the only constant in modern business life is high-speed change, you have to have change built into your day-to-day operations. And so, what is the change? The change will have an impact. The question is will it have a positive impact or a negative impact? If we look at the business, we want a positive impact economically; for the environment, we would like to have a positive impact there, too.


When you look at all of that together as one top-line behavior, you realize it's about revenue generation, not just about profit derivation. So, you are not just trying to squeeze out every penny to get profit, which is what's left over; that's the manager's job. You are trying to figure out what's the right thing to do and bring in the revenue; that's the executive's job.

The executives see this and realize it’s about revenue generation actually. And so, we can balance our CAPEX and our OPEX and we can do optimization across it. That means a lot of equipment that’s sitting out there that might be suboptimal is still serviceable. It’s a valuable asset. Let it run but be ready to refurbish it when the time comes. In the meantime, you are going to shift to the faster, better systems that are optimized across the entire platform. Because then you start saving energy, you start saving money, and that’s all there is to it.

Circular Economy Report Guides Users to Increased Sustainability

Gardner: It seems like we are at the right time in the economy, and in the evolution of IT, for the economics to be working in favor of sustainability initiatives. It's no coincidence that HPE is talking more about the economics of IT as well as sustainability issues. They are very closely linked.

Do you have studies at BT that help you make the economic case for sustainability, and not just that it’s the good or proper thing to do?

Ginér: Oh, yes, most definitely. Just last year through our Energy Efficiency Program, we saved 29 million pounds, and since we began looking at this in 2009-2010, we have saved more than 250 million pounds. So, there is definitely an economic case for being energy efficient.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


Industrial-strength wearables combine with collaboration cloud to bring anywhere expertise to intelligent-edge work

The next BriefingsDirect industrial-edge innovation use-case examines how RealWear, Inc. and Hewlett Packard Enterprise (HPE) MyRoom combine to provide workers in harsh conditions ease in accessing and interacting with the best intelligence.

Stay with us to learn how a hands-free, voice-activated, and multimedia wearable computer solves the last few feet issue for delivering a business’ best data and visual assets to some of its most critical onsite workers.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

Here to describe the new high-water mark for wearable augmented collaboration technologies are Jan Josephson, Sales Director for EMEA at RealWear, and John “JT” Thurgood, Director of Sales for UK, Ireland, and Benelux at RealWear. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: A variety of technologies have come together to create the RealWear solution. Tell us why nowadays a hands-free, wearable computer needs to support multimedia and collaboration solutions to get the job done.

Thurgood: Over time, our industrial workers have moved through a digitization journey as they find the best ways to maintain and manage equipment in the field. They need a range of tools and data to do that. So, it could be an engineer wearing personal protective equipment in the field. He may be up on scaffolding. He typically needs a big bundle of paperwork, such as visual schematics, and all kinds of authorization documents. This is typically what an engineer takes into the field. What we are trying to do is make his life easier.

Thurgood

You can imagine it. An engineer gets to an industrial site, gets permission to be near the equipment, and has his schematics and drawings he takes into that often-harsh environment. His hands are full. He’s trying to balance and juggle everything while trying to work his way through that authorization process prior to actually getting on and doing the job – of being an engineer or a technician.

We take that need for physical documentation away from him and put it on an Android device, which is totally voice-controlled and hands-free. A gyroscope built into the device allows specific and appropriate access to all of those documents. He can even freeze at particular points in the document. He can refer to it visually by glancing down, because the screen is just below eye-line.

The information is available but not interfering from a safety perspective, and it’s not stopping him from doing his job. He has that screen access while working with his hands. The speakers in the unit also help guide him via verbal instructions through whatever the process may be, and he doesn’t even have to be looking at documentation.
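As a purely illustrative sketch of that hands-free interaction model, where head pitch scrolls the document and a spoken "freeze" command holds the view steady, here is a minimal Python mock-up. Every class, method, and parameter name below is hypothetical; it is not RealWear's actual software or API.

```python
class DocumentViewer:
    """Hypothetical model of hands-free document navigation (not RealWear's API)."""

    def __init__(self, scroll_per_degree=2.0):
        self.scroll_per_degree = scroll_per_degree
        self.offset = 0.0      # scroll position, in lines from the top of the document
        self.frozen = False    # a spoken "freeze" holds the current view steady

    def on_voice_command(self, command):
        if command == "freeze":
            self.frozen = True
        elif command == "release":
            self.frozen = False

    def on_head_pitch(self, degrees):
        """Glancing down (positive pitch) scrolls forward; input is ignored while frozen."""
        if not self.frozen:
            self.offset = max(0.0, self.offset + degrees * self.scroll_per_degree)


viewer = DocumentViewer()
viewer.on_head_pitch(3.0)             # glance down: scroll ahead by 6 lines
viewer.on_voice_command("freeze")
viewer.on_head_pitch(5.0)             # no effect while the view is frozen
print(viewer.offset)                  # 6.0
```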

Learn More About Software-Defined and Hybrid Cloud Solutions That Reduce Complexity

He can follow work orders and processes. And, if he hits a brick wall — he gets to a problem where, even after following work processes and going through the documentation, it still doesn't look right — what does he do? Well, he needs to phone a buddy, right? The way he does that is through the visual remote guidance (VRG) MyRoom solution from HPE.

He gets the appropriate expert on the line, and that expert can be thousands of miles away. The expert can see what’s going on through the 16-megapixel camera on the RealWear device. And he can talk him through the problem, even in harsh conditions because there are four noise-canceling microphones on the device. So, the expert can give detailed, real-time guidance as to how to solve the problem.

You know, Dana, typically that would take weeks of waiting for an expert to be available. The cost involved in getting the guy on-site to go and resolve the issue is expensive. Now we are enabling that end-technician to get any assistance he needs, once he is at the right place, at the right time.

Gardner: What was the impetus to create the RealWear HMT-1? Was there a specific use case or demand that spurred the design?

Military inspiration, enterprise adoption

Thurgood: Our chief technology officer (CTO), Dr. Chris Parkinson, was working in another organization that was focused on manufacturing military-grade screens. He saw an application opportunity for that in the enterprise environment.

And it now has wide applicability — whether it's in the oil and gas industry, automotive, or construction. I've even had journalists wanting to use this device, like having a mobile cameraman.

He foresaw a wide range of use-cases, and so worked with a team — with our chief executive officer (CEO), Andy Lowery — to pull together a device. The design is IP66-rated, it's hardened, and it can be used in all weather, from -20°C to 50°C, to do all sorts of different jobs.


The impetus was that there was nothing in the marketplace that provides these capabilities. People today are using iPads and tablets to do their jobs, but their hands are full. You can’t do the rest of the tasks that you may need to do using your hands.

We now have more than 10,000 RealWear devices in the field in all sorts of industrial areas. I have named a few verticals, but we’re discovering new verticals day-by-day.

Gardner: Jan, what were some of the requirements that led you to collaborate with HPE MyRoom and VRG? Why was that such a good fit?

Josephson: There are a couple of things HPE does extremely well in this field. For these remote-expert applications in particular, HPE designed their applications really well from a user experience (UX) perspective.

Josephson

At the end of the day, we have users out there and many of them are not necessarily engineers. So the UX side of an application is very important. You can’t have a lot of things clogging up your screen and making things too complicated. The interface has to be super simple.

The other thing that is really important for our customers is the way HPE does compression with their networked applications. This is essential because many times — if you are out on an oil rig or in the middle of nowhere — you don’t have the luxury of Wi-Fi or a 4G network. You are in the field.

The HPE solution, due to the compression, enables very high-quality video even at very-low bandwidth. This is very important for a lot of our customers. HPE is also taking their platform and enabling it to operate on-premises. That is becoming important because of security requirements. Some of the large users want a complete solution inside of their firewall.

So it’s a very impressive piece of software, and we’re very happy that we are in this partnership with HPE MyRoom.

Gardner: In effect, it’s a cloud application now — but it can become a hybrid application, too.

Connected from the core to the edge

Thurgood: What’s really unique, too, is that HPE has now built-in object recognition within the toolset. So imagine you’re wearing the RealWear HMT-1, you’re looking at a pump, a gas filter, or some industrial object. The technology is now able to identify that object and provide you with the exact work orders and documentation related to it.

We’re now able to expand out from the historic use-case of expert remote visual guidance support into doing so much more. HPE has really pushed the boundaries out on the solution.

Gardner: It’s a striking example of the newfound power of connecting a core cloud capability with an edge device, and with full interactivity. Ultimately, this model brings the power of artificial intelligence (AI) running on a data center to that edge, and so combines it with the best of human intelligence and dexterity. It’s the best of all worlds.

JT, how is this device going to spur new kinds of edge intelligence?

Thurgood: It’s another great question because 5G is now coming to bear as well as Wi-Fi. So, all of a sudden, almost no matter where you are, you can have devices that are always connected via broadband. The connectivity will become ubiquitous.

Learn More About Software-Defined and Hybrid Cloud Solutions That Reduce Complexity

Now, what does that do? It means never having an offline device. All of the data, all of your Internet of Things (IoT) analytics and augmented and assisted reality will all be made available to that remote user.

So, we are looking at the superhuman versions of engineers and technicians. Historically you had a guy with paperwork. Now, if he’s always connected, he always has all the right documentation and is able to act and resolve tasks with all of the power and the assistance he needs. And it’s always available right now.

So, yes, we are going to see more intellectual value being moved down to the remote, edge user.

At RealWear, we see ourselves as a knowledge-transfer company. We want the user of this device to be the conduit through which you can feed all cloud-analyzed data. As time goes by, some of the applications will reside in the cloud as well as on the local device. For higher-order analytics there is a hell of a lot of churning of data required to provide the best end results. So, that’s our prediction.

Gardner: When you can extend the best intelligence to any expert around the world, it's a very powerful concept.

For those listening to or reading this podcast, please describe the HMT-1 device. It’s fairly small and resides within a helmet.

Using your headwear

Thurgood: We have a horseshoe-shaped device with a screen out in front. Typically, it’s worn within a hat. Let’s imagine, you have a standard cap on your head. It attaches to the cap with two clips on the sides. You then have a screen that protrudes from the front of the device that is held just below your eye-line. The camera is mounted on the side. It becomes a head-worn tablet computer.

It can be worn in hard hats, bump caps, normal baseball caps, or just with straps (and no hat). It performs regardless of the environment you are in — be that in wind, rain, gales, such as working out on an offshore oil and gas rig. Or if you are an automotive technician, working in a noisy garage, it simply complements the protective equipment you need to use in the field.

Gardner: When you can bring this level of intelligence and instant access to experts to the edge, wherever it is, you're talking about new economics. These types of industrial use cases often involve processes where downtime means huge amounts of money lost. Intercepting a problem quickly and solving it fast can make a huge difference.

Do you have examples that provide a sense of the qualitative and quantitative benefits when this is put to good use?

Thurgood: There are a number of examples. Take automotive to start with. If you have a problem with your vehicle today, you typically take it to a dealership. That dealer will try to resolve the issue as quickly as it can. Let’s say the dealership can’t. There is a fault on the car that needs some expert assistance. Today, the dealership phones the head office and says, “Hey, I need an expert to come down and join us. When can you join us?” And there is typically a long delay.

So, what does that mean? That means my vehicle is off the road. It means I have to have a replacement vehicle. And that expert has to come out from head office to spend time traveling to be on-site to resolve the issue.

What can happen now using the RealWear device in conjunction with the HPE VRG MyRoom is that the technician contacts the expert engineer remotely and gets immediate feedback and assistance on resolving the fault. As you can imagine, the customer experience is vastly improved based on resolving the issue in minutes – and not hours, days, or even weeks.

Josephson: It’s a good example because everyone can relate to a car. Also, nowadays the car manufacturers are pushing a lot more technology into the cars. They are almost computers on wheels. When a car has a problem, chances are very slim you will have the skill-set needed in that local garage.

The whole automotive industry has a big challenge because they have all of these people in the field who need to learn a lot. Doing it the traditional way — of getting them all into a classroom for six weeks — just doesn’t cut it. So, it’s now all about incident-based, real-time learning.

Another benefit is that we can record everything in MyRoom. So if I have a session that solves a particular problem, I can take that recording and get one-to-many value rather than one-to-one. I can begin building up my intellectual property, my FAQs, and better customer service. A whole range of value is being created here.

Gardner: You’re creating an archive, not just a spot solution. That archive can then be easily accessible at the right time and any place.

Josephson: Right.

Gardner: For those listeners wondering whether RealWear and VRG are applicable to their vertical industry, or their particular problem set, what are a couple of key questions they might ask themselves?

Shared know-how saves time and money

Thurgood: Do your technicians and engineers need to use their hands? Do they need to be hands-free? If so, you need a device like this. It’s voice-controlled, it’s mounted on your head.

Do they wear personal protective equipment (PPE)? Do they have to wear gloves? If so, it's really difficult to use a stylus or poke the screen of a tablet. With RealWear, we provide a totally hands-free, eyes-forward, very safe deployment of knowledge-transfer technology in the field.

If you need your hands free in the field, or if you’re working outdoors, up on towers and so on, it’s a good use of the device.

Josephson: Also, if your business includes field engineers who travel, how many travel days do you lose going back because something was forgotten, or because the right skill-set wasn't there on the first trip?

If instead you can always have someone available via the device to validate what you think is wrong, and potentially fix it, that's a huge savings. Fewer return or duplicate trips.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


How the data science profession is growing in value and impact across the business world

The next BriefingsDirect business trends panel discussion explores how the role of the data scientist in the enterprise is expanding in both importance and influence.

Data scientists are now among the most highly sought-after professionals, and they are being called on to work more closely than ever with enterprise strategists to predict emerging trends, optimize outcomes, and create entirely new kinds of business value.  

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To learn more about modern data scientists, how they operate, and why a new level of business analysis professional certification has been created by The Open Group, we are joined by Martin Fleming, Vice President, Chief Analytics Officer, and Chief Economist at IBM; Maureen Norton, IBM Global Data Scientist Professional Lead, Distinguished Market Intelligence Professional, and author of Analytics Across the Enterprise; and George Stark, Distinguished Engineer for IT Operations Analytics at IBM. The panel is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: We are now characterizing the data scientist as a profession. Why have we elevated the role to this level, Martin? 

Fleming

Fleming: The benefits we have from the technology that’s now available allow us to bring together the more traditional skills in the space of mathematics and statistics with computer science and data engineering. The technology wasn’t as useful just 18 months ago. It’s all about the very rapid pace of change in technology.

Gardner: Data scientists used to be behind-the-scenes people; sneakers, beards, white lab coats, if you will. What’s changed to now make them more prominent?

Norton

Norton: Today’s data scientists are consulting with the major leaders in each corporation and enterprise. They are consultants to them. So they are not in the back room, mulling around in the data anymore. They’re taking the insights they’re able to glean and support with facts and using them to provide recommendations and to provide insights into the business.

Gardner: Most companies now recognize that being data-driven is an imperative. They can’t succeed in today’s world without being data-driven. But many have a hard time getting there. It’s easier said than done. How can the data scientist as a professional close that gap?

Stark

Stark: The biggest drawback in integration of data sources is having disparate data systems. The financial system is always separate from the operational system, which is separate from the human resources (HR) system. And you need to combine those and make sure they’re all in the same units, in the same timeframe, and all combined in a way that can answer two questions. You have to answer, “So what?” And you have to answer, “What if?” And that’s really the challenge of data science.
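A minimal sketch of that kind of cross-system alignment, assuming pandas and hypothetical finance, operations, and HR extracts, might look like this. The column names, units, and scenario are illustrative only and are not IBM's actual tooling.

```python
import pandas as pd

# Hypothetical extracts from three disparate systems (column names are illustrative)
finance = pd.DataFrame({"month": ["2019-01", "2019-02"], "revenue_usd_k": [1200, 1350]})
operations = pd.DataFrame({"month": ["2019-01", "2019-02"], "output_units": [48000, 51000]})
hr = pd.DataFrame({"month": ["2019-01", "2019-02"], "headcount": [110, 118]})

# Put everything in the same units and the same timeframe before combining
finance["revenue_usd"] = finance["revenue_usd_k"] * 1000

combined = (finance[["month", "revenue_usd"]]
            .merge(operations, on="month")
            .merge(hr, on="month"))

# "So what?" -- a derived measure that answers a business question
combined["revenue_per_employee"] = combined["revenue_usd"] / combined["headcount"]

# "What if?" -- a simple scenario: 5 percent more output at the same headcount
combined["output_if_5pct_more"] = combined["output_units"] * 1.05
print(combined)
```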

Gardner: An awful lot still has to go on behind the scenes before you get to the point where the “a-ha” moments and the strategic inputs take place.

Martin, how will the nature of work change now that the data scientist as a profession is arriving – and probably just at the right time?

Fleming: The insights that data scientists provide allow organizations to understand where the opportunities are to improve productivity, how they can make workers more effective and productive, and how to create more value. This enhances the role of the individual employees. And it's that value creation, the integration of the data that George talked about, and the use of analytic tools that's driving fundamental changes across many organizations.

Captain of the data team

Gardner: Is there any standardization as to how the data scientist is being organized within companies? Do they typically report to a certain C-suite executive or another? Has that settled out yet? Or are we still in a period of churn as to where the data scientist, as a professional, fits in?


Norton: We’re still seeing a fair amount of churn. Different organizing approaches have been tried. For example, the centralized center of excellence that supports other business units across a company has a lot of believers and followers.

The economies of scale in that approach help. It's difficult to find one person with all of the skills you might need. I'm describing the role of consultant to the presidents of companies. Sometimes you can't find all of that in one individual — but you can build teams that have complementary skills. We like to say that data science is a team sport.

Gardner: George, are we focusing the new data scientist certification on the group or the individual? Have we progressed from the individual to the group yet?

Stark: I don’t believe we are there yet. We’re still certifying at the individual level. But as Maureen said, and as Martin alluded to, the group approach has a large effect on how you get certified and what kinds of solutions you come up with. 

Gardner: Does the certification also address the managerial side of this group, with the data scientist certified in organizing that group or office in a methodical, proven way?

Learn How to Become Certified as a Data Scientist

Fleming: The certification we are announcing focuses not only on the technical skills of a data scientist, but also on project management and project leadership. So as data scientists progress through their careers, the more senior folks are certainly in a position to take on significant leadership and management roles.

And we are seeing over time, as George referenced, a structure beginning to appear. First in the technology industry, and over time, we’ll see it in other industries. But the technology firms whose names we are all familiar with are the ones who have really taken the lead in putting the structure together.

Gardner: How has the “day in the life” of the typical data scientist changed in the last 10 years?

Stark: It’s scary to say, but I have been a data scientist for 30 years. I began writing my own Fortran 77 code to integrate datasets to do eigenvalues and eigenvectors and build models that would discriminate among key objects and allow us to predict what something was.

The difference today is that I can do that in an afternoon. We have the tools, datasets, and all the capabilities with visualization tools, SPSS, IBM Watson, and Tableau. Things that used to take me months now take a day and a half. It's incredible, the change.
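As a rough illustration of the workflow he describes, an eigen-decomposition feeding a simple discriminant model, here is a minimal Python sketch using NumPy and scikit-learn. The synthetic dataset is hypothetical; the point is only how little code that now takes.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical dataset: 200 samples, 5 features, two classes of objects
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0, 1, (100, 5)), rng.normal(1.5, 1, (100, 5))])
y = np.array([0] * 100 + [1] * 100)

# Eigenvalues and eigenvectors of the covariance matrix
# (the kind of step that once meant hand-written Fortran)
cov = np.cov(X, rowvar=False)
eigenvalues, eigenvectors = np.linalg.eigh(cov)

# A simple discriminant model to predict which class an observation belongs to
model = LinearDiscriminantAnalysis().fit(X, y)

print(eigenvalues)
print(model.score(X, y))   # training accuracy of the discriminant model
```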

Gardner: Do you as a modern data scientist find yourself interpreting what the data science can do for the business people? Or are you interpreting what the business people need, and bringing that back to the data scientists? Or perhaps both?

Collaboration is key

Stark: It’s absolutely both. I was recently with a client, and we told them, “Here are some things we can do today.” And they said, “Well, what I really need is something that does this.” And I said, “Oh, well, we can do that. Here’s how we would do it.” And we showed them the roadmap. So it’s both. I will take that information back to my team and say, “Hey, we now need to build this.”

Gardner: Is there still a language, culture, or organizational divide? It seems to me that you’re talking apples and oranges when it comes to business requirements and what the data and technology can produce. How can we create a Rosetta Stone effect here?

Norton: In the certification, we are focused on ensuring that data scientists understand the business problems. Everything begins from that.

Knowing how to ask the right questions, to scope the problem, and to be able to then translate is essential. You have to look at the available data and infer some, to come up with insights and a solution. It's increasingly important that you begin with the problem. You don't begin with your solution and say, "I have this many things I can work with." It's more like, "How are we going to solve this and draw on the innovation and creativity of the team?"

Gardner: I have been around long enough to remember when the notion of a chief information officer (CIO) was new and fresh. There are some similarities to what I remember from those conversations in what I’m hearing now. Should we think about the data scientist as a “chief” something, at the same level as a chief technology officer (CTO) or a CIO?

Chief Data Officer defined 

Fleming: There are certainly a number of organizations that have roles such as mine, where we’ve combined economics and analytics. Amazon has done it on a larger scale, given the nature of their business, with supply chains, pricing, and recommendation engines. But other firms in the technology industry have as well.

We have found that there are still three separate needs, if you will. There is an infrastructure need that CIO teams are focused on. There are significant data governance and management needs that typically chief data officers (CDOs) are focused on. And there are substantial analytics capabilities that typically chief analytics officers (CAOs) are focused on.

It’s certainly possible in many organizations to combine those roles. But in an organization the size of IBM, and other large entities, it’s very difficult because of the complexity and requirements across those three different functional areas to have that all embodied in a single individual.

Gardner: In that spectrum you just laid out – analytics, data, and systems — where does The Open Group process for a certified data scientist fit in?

Fleming: It’s really on the analytics side. A lot of what CDOs do is data engineering, creating data platforms. At IBM, we use the term Watson Data Platform because it’s built on a certain technology that’s in the public cloud. But that’s an entirely separate challenge from being able to create the analytics tools and deliver the business insights and business value that Maureen and George referred to.

Gardner: I should think this is also going to be of pertinent interest to government agencies, to nonprofits, to quasi-public-private organizations, alliances, and so forth.

Given that this has societal-level impacts, what should we think about in improving the data scientists’ career path? Do we have the means of delivering the individuals needed from our current educational tracks? How do education and certification relate to each other?

Academic avenues to certification

Fleming: A number of universities have over the past three or four years launched programs for a master’s degree in data science. We are now seeing the first graduates of those programs, and we are recruiting and hiring.

I think this will be the first year that we bring in folks who have completed a master’s in data science program. As we all know, universities change very slowly. It’s the early days, but demand will continue to grow. We have barely scratched the surface in terms of the kinds of positions and roles across different industries. 

That growth in demand will cause many university programs to grow and expand to feed that career track. It takes 15 years to create a profession, so we are in the early days of this.

Norton: With the new certification, we are doing outreach to universities because several of them have master’s in data analytics programs. They do significant capstone-type projects, with real clients and real data, to solve real problems.

We want to provide a path for them into certification so that students can earn, for example, their first project profile, or experience profile, while they are still in school.

Gardner: George, on the organic side — inside of companies where people find a variety of tracks to data scientist — where do the prospects come from? How does organic development of a data scientist professional happen inside of companies?

Stark: At IBM, in our group, Global Services, in particular, we've developed a training program with a set of badges. They get rewarded for achievement in various levels of education. But you still need to have projects you've done with the techniques you've learned through education to get to certification.

Having education is not enough. You have to apply it to get certified.

Gardner: This is a great career path, and there is tremendous demand in the market. It also strikes me as a very fulfilling and rewarding career path. What sorts of impacts can these individuals have?

Learn How to Become Certified as a Data Scientist

Fleming: Businesses have traditionally been managed through a profit-and-loss statement, an income statement, for the most part. There are, of course, other data sources — but they’re largely independent of each other. These include sales opportunity information in a CRM system, supply chain information in ERP systems, and financial information portrayed in an income statement. These get the most rigorous attention, shall we say.

We’re now in a position to create much richer views of the activity businesses are engaged in. We can integrate across more datasets now, including human resources data. In addition, the nature of machine learning (ML) and artificial intelligence (AI) are predictive. We are in a position to be able to not only bring the data together, we can provide a richer view of what’s transpiring at any point in time, and also generate a better view of where businesses are moving to.

It may be about defining a sought-after destination, or there may be a need to close gaps. But understanding where the business is headed in the next 3, 6, 9, and 12 months is a significant value-creation opportunity.

Gardner: Are we then thinking about a data scientist as someone who can help define what the new, best business initiatives should be? Rather than finding those through intuition, or gut instinct, or the highest paid person’s opinion, can we use the systems to tell us where our next product should come from?

Pioneers of insight

Norton: That’s certainly the direction we are headed. We will have systems that augment that kind of decision-making. I view data scientists as pioneers. They’re able to go into big datadark data, and a lot of different places and push the boundaries to come out with insights that can inform in ways that were not possible before.

It’s a very rewarding career path because there is so much value and promise that a data scientist can bring. They will solve problems that hadn’t been addressed before.

It’s a very exciting career path. We’re excited to be launching the certification program to help data scientists gain a clear path and to make sure they can demonstrate the right skills.

I’s a very rewarding career path because there is so much value and promise that a data scientist can bring. They will solve problems that hadn’t been addressed before.

Gardner: George, is this one of the better ways to change the world in the next 30 years?

Stark: I think so. If we can get more people to do data science and understand its value, I’d be really happy. It’s been fun for 30 years for me. I have had a great time.

Gardner: What comes next on the technology side that will empower the data scientists of tomorrow? We hear about things like quantum computing, distributed ledger, and other new capabilities on the horizon.

Future forecast: clouds

Fleming: In the immediate future, new benefits are largely coming because we have both public cloud and private cloud in a hybrid structure, which brings the data, compute, and the APIs together in one place. And that allows for the kinds of tools and capabilities that are necessary to significantly improve the performance and productivity of organizations.

Blockchain is making enormous progress and very quickly. It’s essentially a data management and storage improvement, but then that opens up the opportunity for further ML and AI applications to be built on top of it. That’s moving very quickly. 

Quantum computing is further down the road. But it will change the nature of computing. It's going to take some time to get there, but it is nonetheless very important and is part of what we are looking at over the horizon.

Gardner: Maureen, what do you see on the technology side as most interesting in terms of where things could lead in the next few years for data science?

Norton: The continued evolution of AI is pushing boundaries. One of the really interesting areas is the emphasis on transparency and ethics, to make sure that the systems are not introducing or perpetuating a bias. There is some really exciting work going on in that area that will be fun to watch going forward. 

Gardner: The data scientist needs to consider not just what can be done, but what should be done. Is that governance angle brought into the certification process now, or is that something that will come later?

Stark: It’s brought into the certification now when we ask about how were things validated and how did the modules get implemented in the environment? That’s one of the things that data scientists need to answer as part of the certification. We also believe that in the future we are going to need some sort of code of ethics, some sort of methods for bias-detection and analysis, the measurement of those things that don’t exist today and that will have to.

Gardner: Do you have any examples of data scientists doing work that’s new, novel, and exciting?

Rock star potential

Fleming: We have a team led by a very intelligent and aggressive young woman who has put together a significant product recommendation tool for IBM. Folks familiar with IBM know it has a large number of products and offerings. In any given client situation, the seller wants to be able to recommend the offering that's most useful to that client's situation.

And our recommendation engines can now make those recommendations to the sellers. That capability hasn't existed in the past and is now creating enormous value — not only for the clients but for IBM as well.

Gardner: Maureen, do any examples jump to mind that illustrate the potential of the data scientist?

Norton: We wrote a book, Analytics Across the Enterprise, to explain examples across nine different business units. There have been some great examples in terms of finance, sales, marketing, and supply chain.

Learn How to Become Certified as a Data Scientist

Gardner: Any use-case scenario come to mind where the certification may have been useful?

Norton: Certification would have been useful to an individual in the past because it helps map out how to become the best practitioner you can be. We have three different levels of certification going up to the thought leader. It’s designed to help that professional grow within it.

Stark: A young man who works for me in Brazil built a model for one of our manufacturing clients that identifies problematic infrastructure components and recommends actions to take on those components. And when the client implemented the model, they saw a 60 percent reduction in certain incidents and a 40,000-hour-a-month increase in availability for their supply chain. And we didn’t have a certification for him then — but we will have now. 

Gardner: So really big improvement. It shows that being a data scientist means you’re impactful and it puts you in the limelight.


Stark: And it was pretty spectacular because the CIO for that company stood up in front of his whole company — and in front of a group of analysts — and called him out as the data scientist that solved this problem for their company. So, yeah, he was a rock star for a couple days. 

Gardner: For those folks who might be more intrigued with a career path toward certification as a data scientist, where might they go for more information? What are the next steps when it comes to the process through The Open Group, with IBM, and the industry at large? 

Where to begin

Norton: The Open Group officially launched this in January, so anyone can go to The Open Group website and check under certifications. They will be able to read the information about how to apply. Some companies are accredited, and others can get accredited for running a version of the certification themselves. 

IBM recently went through the certification process. We have built an internal process that matches with The Open Group. People can apply either directly to The Open Group or, if they happen to be within IBM or one of the other companies who will certify, they can apply that way and get the equivalent of it being from The Open Group. 

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: The Open Group.


Why enterprises should approach procurement of hybrid IT in entirely new ways

The next BriefingsDirect hybrid IT management strategies interview explores new ways that businesses should procure and consume IT-as-a-service. We’ll now hear from an IT industry analyst on why changes in cloud deployment models are forcing a rethinking of IT economics — and maybe even the very nature of acquiring and cost-optimizing digital business services.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to help us explore the everything-as-a-service business model is Rhett Dillingham, Vice President and Senior Analyst at Moor Insights and Strategy. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What is driving change in the procurement of hybrid- and multi-cloud services?

Dillingham

Dillingham: What began as organic adoption — from the developers and business units seeking agility and speed — is now coming back around to the IT-focused topics of governance, orchestration across platforms, and modernization of private infrastructure.

There is also interest in hybrid cloud, as well as multi-cloud management and governance. Those amount to complexities that the public clouds are not set up for and are not able to address because they are focused on their own platforms.

Learn How to Better Manage Multi-Cloud Sprawl

Gardner: So the way you acquire IT these days isn’t apples or oranges, public or private, it’s more like … fruit salad. There are so many different ways to acquire IT services that it’s hard to measure and to optimize. 

Dillingham: And there are trade-offs. Some organizations are focused on and adopt a single public cloud vendor. But others see that as a long-term risk in management, resourcing, and maintaining flexibility as a business. So they’re adopting multiple cloud vendors, which is becoming the more popular strategic orientation.

Gardner: For those organizations that don’t want mismanaged “fruit salad” — that are trying to homogenize their acquisition of IT services even as they use hybrid cloud approaches — does this require a reevaluation of how IT in total is financed? 

Champion the cloud

Dillingham: Absolutely, and that’s something you can address, regardless of whether you’re adopting a single cloud or multiple clouds. The more you use multiple resources, the more you are going to consider tools that address multiple infrastructures — and not base your capabilities on a single vendor’s toolset. You are going to go with a cloud management vendor that produces tools that comprehensively address security, compliance, cost management, and monitoring, et cetera.

Gardner: Does the function of IT acquisitions now move outside of IT? Should companies be thinking about a chief procurement officer (CPO) or chief financial officer (CFO) becoming a part of the IT purchasing equation?

Dillingham: By virtue of the way cloud has been adopted — more by the business units — they got ahead of IT in many cases. That is now being pulled back toward gaining the fuller financial view. The move doesn't make the IT decision-maker into a CFO as much as turn them into a champion of IT. And IT goes back to being the governance arm, where traditionally they have been managing cost, security, and compliance.

It’s natural for the business units and developers to now look to IT for the right tools and capabilities, not necessarily to shed accountability but because that is the traditional role of IT, to enable those capabilities. IT is therefore set up for procurement.

IT is best set up to look at the big picture across vendors and across infrastructures rather than the individual team-by-team or business unit-by-business unit decisions that have been made so far. They need to aggregate the cloud strategy at the highest organizational level.

Gardner: A central tenet of good procurement is to look for volume discounts and to buy in bulk. Perhaps having that holistic and strategic approach to acquiring cloud services lends itself to a better bargaining position? 

Learn How to Make Hybrid IT Simple

Dillingham: That’s absolutely the pitch of a cloud-by-cloud vendor approach, and there are trade-offs. You can certainly aggregate more spend on a single cloud vendor and potentially achieve more discounts in use by that aggregation.

The rebuttal is that on a long-term basis, your negotiating leverage in that relationship is constrained versus if you have adopted multiple cloud infrastructures and can dialogue across vendors on pricing and discounting.

Now, that may turn into more of an 80/20-, 90/10-split than a 50/50-split, but at least by having some cross-infrastructure capability — by setting yourself up with orchestration, monitoring, and governance tools that run across multiple clouds — you are at least in a strategic position from a competitive sourcing perspective.

The trade-off is the cost-aggregation and training necessary to understand how to use those different infrastructures — because they do have different interfaces, APIs, and the automation is different.

Gardner: I think that’s why we’ve seen vendors like Hewlett Packard Enterprise (HPE) put an increased emphasis on multi-cloud economics, and not just the capability to compose cloud services. The issues we’re bringing up force IT to rethink the financial implications, too. Are the vendors on to something here when it comes to providing insight and experience in managing a multi-cloud market?

Follow the multi-cloud tour guide

Dillingham: Absolutely, and certainly from the perspective that when we talk multi-cloud, we are not just talking multiple public clouds. There is a reality of large existing investments in private infrastructure that continue for various purposes. That on-premises technology also needs cost optimization, security, compliance, auditability, and customization of infrastructure for certain workloads.


That means the ultimate toolset to be considered needs to work across both public and private infrastructures. A vendor that’s looking beyond just public cloud, like HPE, and delivers a multi-cloud and hybrid cloud management orientation is set up to be a potential tour guide and strategic consultative adviser. 

And that consultative input is very valuable when you see how much pattern-matching there is across customers – and not just within the same industry but across industries. The best insights will come from knowing what it looks like to triage application portfolios, which migrations you want across cloud infrastructures, and the proper setup of comprehensive governance, control processes, and education structures.

Gardner: Right. I’m sure there are systems integrators, in addition to some vendors, that are going to help make the transition from traditional IT procurement to everything-as-a service. Their lessons learned will be very valuable.

That’s more intelligent than trying to do this on your own or go down a dark alley and make mistakes, because as we know, the cloud providers are probably not going to stand up and wave a flag if you’re spending too much money with them.

How to Solve Cost and Utilization Challenges of Hybrid Cloud

Dillingham: Yes, and the patterns of progression in cloud orientation are clear for those consultative partners, based on dozens of implementations and executions. From that experience they are far more thoroughly aware of the patterns and how to avoid falling into the traps and pitfalls along the way, more so than a single organization could expect, internally, to be savvy about.

Gardner: It’s a fast-moving target. The cloud providers are bringing out new services all the time. There are literally thousands of different cloud service SKUs for infrastructure-as-a-service, for storage-as-a-service, and for other APIs and third-party services. It becomes very complex, very dynamic.

Do you have any advice for how companies should be better managing cloud adoption? It seems to me there should be collaboration at a higher level, or a different type of management, when it comes to optimizing for multi-cloud and hybrid-cloud economics.

Cloud collaboration strategy 

Dillingham: That really comes back to the requirements of the specific IT organization. The more business units there are in the organization, the more IT is critical in driving collaboration at the highest organizational level and in being responsible for the overall cloud strategy.

Remove Complexity From Multi-Cloud and Hybrid IT

The cloud strategy across the topics of platform selection, governance, process, and people skills — that's the type of collaboration needed. It flows into the recommendations from the consultancies on how to avoid the traps and pitfalls, manage expectations and goals, and deliver clear outcomes on project execution. And it means making sure that security and compliance are considered and involved from a functional perspective – all the way down the list of what makes it a long-term success.

The decision of what advice to bring in is really about the topic and the selection on the menu. Have you considered the uber strategy and approach? How well have you triaged your application portfolio? How can you best match capabilities to apps across infrastructures and platforms?

Do you have migration planning? How about migration execution? Those can be similar or separate items. You also have development methodologies, and the software platform choices to best support all of that along with security and compliance expertise. These are all aspects certain consultancies will have expertise on more than others, and not many are going to be strong across all of them. 

Gardner: It certainly sounds like a lot of planning and perhaps reevaluating the ways of the past. 

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


Manufacturer gains advantage by expanding IoT footprint from many machines to many insights

The next BriefingsDirect manufacturing modernization and optimization case study centers on how a Canadian maker of containers leverages the Internet of Things (IoT) to create a positive cycle of insights and applied learning.

We will now hear how CuBE Packaging Solutions, Inc. in Aurora, Ontario has deployed edge intelligence to make 21 formerly isolated machines act as a single, coordinated system as it churns out millions of reusable package units per month.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Stay with us as we explore how harnessing edge data with more centralized real-time analytics integration cooks up the winning formula for an ongoing journey of efficiency, quality control, and end-to-end factory improvement.

Here to describe the challenges and solutions for injecting IoT into a plastic container’s production advancement success journey is Len Chopin, President at CuBE Packaging Solutions. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Len, what are the top trends and requirements driving the need for you to bring more insights into your production process?

Chopin

Chopin: The very competitive nature of injection molding requires us to utilize the intelligent edge to stay ahead of the competition. By tapping into and optimizing the equipment, we gain on downtime efficiencies, improved throughput, and all the things that help drive more to the bottom line.

Gardner: And this is a win-win because you’re not only improving quality but you’re able to improve the volume of output. So it’s sort of the double-benefit of better and bigger.

Chopin: Correct. Driving volume is key. When we are running, we are making money, and we are profitable. By optimizing that production, we are even that much more profitable. And by using analytics and protocols for preventive and predictive maintenance, the IoT solutions drive an increase in the uptime of the equipment.

Gardner: Why are sensors in and of themselves not enough to attain intelligence at the edge?

Chopin: The sensors are reactive. They give you information. It's good information. But leaving it up to the people to interpret the sensor data takes time. Utilizing analytics, by pooling the data and looking for trends, means IoT pushes to us what we need to know, and when.

Otherwise we tend to look at a lot of information that’s not useful. Utilizing the intelligent edge means it’s pushing to us the information we need, when we need it, so we can react appropriately with the right resources.

Gardner: In order to understand the benefits of doing this well, let's talk about the state at CuBE Packaging when you didn't have this sensor intelligence. You weren't processing the data, and you weren't creating a cycle of improvement.

Learn How to Automate and Drive Insights From Your IIoT Data and Apps

Chopin: That was firefighting mode. You really have no idea of what’s running, how it’s running, is it trending down, is it fast enough, and is it about to go down. It equates to flying blind, with blinders on. It’s really hard in a manufacturing environment to run a business that way. A lot of people do it, and it’s affordable — but not very economical. It really doesn’t drive more value to the bottom line.

Gardner: What have been the biggest challenges in moving beyond that previous “firefighting” state to implementing a full IoT capability?

Chopin: The dynamic within our area in Canada is resources. There is lot of technology out there. We rise to the top by learning all about what we can do at the edge, how we can best apply it, and how we can implement that into a meaningful roadmap with the right resources and technical skills of our IT staff.

It’s a new venture for us, so it’s definitely been a journey. It is challenging. Getting that roadmap and then sticking to the roadmap is challenging, but as we go through the journey we learn the more relevant things. It’s been a dynamic roadmap, which it has to be as the technology evolves and we delve into the world of IoT, which is quite fascinating for us.

Gardner: What would you say has been the hardest nut to crack? Is it the people, the process, or the technology? Which has been the most challenging?

Trust the IoT process 

Chopin: I think the process, the execution. But we found that once you deploy IoT, and you begin collecting data and embarking on analytics, the creative juices start flowing among a lot of the people who previously were uninterested in the whole process.

But then they help steer the ship, and some will change the direction slightly or identify a need that we previously didn’t know about – a more valuable way than the path we were on. So people are definitely part of the solution, not part of the problem. For us, it’s about executing to their new expectations and applying the information and technology to find solutions to their specific problems.

We have had really good buy-in with the people, and it’s just become about layering on the technical resources to help them execute their vision.

Gardner: You have referred to becoming, “the Google of manufacturing.” What do you mean by that, and how has Hewlett Packard Enterprise (HPE) supported you in gaining that capability and intelligence?

People are definitely part of the solution, not part of the problem. For us, it’s about executing to their new expectations and applying information and technology to find solutions to specific problems.

Chopin: “The Google of manufacturing” was first coined by our owner, JR. It’s his vision so it’s my job to bring it to fruition. The concept is that there’s a lot of cool stuff out there, and we see that IoT is really fascinating.

My job is to take that technology and turn it into an investment with a return on investment (ROI) from execution. How is that all going to help the business? The “Google of manufacturing” is about growth for us — by using any technology that we see fit and having the leadership to be open to those new ideas and concepts. Even without having a clear vision of the roadmap, it means focusing on the end results. It’s been a unique situation. So far it’s been very good for us.

Gardner: How has HPE helped in your ability to exploit technologies both at the edge and at the data center core?

Chopin: We just embarked on a large equipment expansion [with HPE], which is doubling our throughput. Our IT backbone, our core, was just like our previous equipment — very old, antiquated, and not cutting edge at all. It was a burden as opposed to an asset.

Part of moving to IoT was putting in a solid platform, which HPE has provided. We work with our integrator and a project team that mapped out our core for the future. It’s not just built for today’s needs — it’s built for expansion capabilities. It’s built for year-two, year-three. Even if we’re not fully utilizing it today — it has been built for the future.

HPE has more things coming down the pipeline that are built on and integrated to this core, so that there are no real limitations to the system. No longer will we have to scrap an old system and put a new one in. It’s now scalable, which we think of as the platform for becoming the “Google of manufacturing,” and which is going to be critical for us.

Gardner: Future-proofing infrastructure is always one of my top requirements. All right, please tell us about CuBE Packaging, your organization's size, what you're doing, and what your end products are.

The CuBE takeaway

Chopin: We have a 170,000-square-foot facility, with about 120 employees producing injection-molded plastic containers for the food service industry, for home-meal replacement, and takeout markets, distributed to Canada as well as the US, which is obviously a huge and diverse market.

We also have a focus on sustainability. Our products are reusable and recyclable. They are a premier product that come with a premier price. They are also customizable and brandable, which has been a key to CuBE’s success. We partner with restaurants, with sophisticated customers, who see a value in the specific branding and of having a robust packaging solution.

Gardner: Len, you mentioned that you’re in a competitive industry and that margin is therefore under pressure. For you to improve your bottom line, how do you account for more productivity? How are you turning what we have described in terms of an IoT and data capability into that economic improvement to your business outcome?

Chopin: I refer to this as having a plant within a plant. There is always a lot more you can squeeze out of an operation by knowing what it's up to, not day-by-day but minute-by-minute. Our process runs quite quickly, so slippage in machine cycle times can occur rapidly. We must catch the small slippages, predict failures, or spot when something is out of technical specification from the injection molding standpoint; otherwise we could be producing a poor-quality product.

Getting a handle on what the machines are doing, minute by minute, gives us the advantage to better utilize the assets and the people, to optimize uptime, and to improve our quality, so we get more of the best product to market. So it really does drive value right to the bottom line.

Learn How to Automate and Drive Insights From Your IIoT Data and Apps

Gardner: A big buzzword in the industry now is artificial intelligence (AI). We are seeing lots of companies dabble in that. But you need to put in place certain things before you can take advantage of those capabilities that not only react but have the intelligence to prescribe new processes for doing things even more efficiently.

Are you working in conjunction with your integrator and HPE to allow you to exploit AI when it becomes mature enough for organizations like your own?

AI adds uptime 

Chopin: We are already embarking on using sensors for things that were seemingly unrelated. For example, we are picking up data points off of peripheral equipment that feed into the molding process. This provides us a better handle on those inputs to the process, inputs to the actual equipment, rather than focusing on the throughput and of how many parts we get in a given day.

For us, the AI is about that equipment uptime and of preventing any of it going down. By utilizing the inputs to the machines, it can notify us in advance, when we need to be notified.

Rather than monitoring equipment performance manually, with a clipboard and a pen, we can check on run conditions or temperatures of equipment up on the roof that is critical to the operation. The AI will hopefully alert us to things that we don’t know about or don’t see because they could be at the far end of the operation. Yet there is a codependency on a lot of that upstream equipment that feeds the downstream equipment.

So for us, gaining transparency into that end-to-end process, with enough intelligence built in to say, “Hey, you have a problem — not yet, but you’re going to have a problem,” allows us to react before the problem occurs and causes a production upset.
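To make that concrete, here is a minimal sketch of the kind of early-warning check being described, using hypothetical limits and readings rather than CuBE’s actual equipment or software. It fits a simple trend to recent temperature samples from a piece of upstream equipment and raises an alert when the projection crosses a limit, before the limit is actually breached:

```python
# Illustrative only: hypothetical limit, horizon, and readings, not CuBE's system.
from collections import deque

LIMIT_C = 85.0          # assumed temperature limit for a critical rooftop unit
HORIZON_MINUTES = 30    # how far ahead we want the warning to look

def predicts_breach(readings, limit=LIMIT_C, horizon=HORIZON_MINUTES):
    """Fit a simple per-minute trend to recent samples and project it forward."""
    n = len(readings)
    if n < 5:                            # need a few points before trusting a trend
        return False
    mean_x = (n - 1) / 2
    mean_y = sum(readings) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(readings))
    den = sum((x - mean_x) ** 2 for x in range(n))
    slope = num / den if den else 0.0    # roughly degrees per minute
    projected = readings[-1] + slope * horizon
    return projected >= limit

recent = deque(maxlen=60)                # the last hour of one-minute samples
for temp_c in [71.2, 71.8, 72.5, 73.1, 74.0, 74.9, 75.7, 76.6]:
    recent.append(temp_c)
    if predicts_breach(list(recent)):
        print(f"Early warning: projected to exceed {LIMIT_C} C within "
              f"{HORIZON_MINUTES} minutes (latest reading {temp_c} C)")
        break
```

In a real deployment the readings would stream from the plant’s sensors and the alert would feed an operator dashboard or notification system; the point is simply that the warning fires on the projected value, not the current one.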


Gardner: You can attain a total data picture across your entire product lifecycle, and your entire production facility. Having that allows you to really leverage AI.

Sounds like that means a lot of data over a long period of time. Is there anything happening at that data center core, around storage, that makes it more attractive to do this sooner rather than later?

Chopin: As I mentioned previously, there are a lot of data points coming off the machines. The bulk of it is useless, other than from a historical standpoint. So we utilize that data selectively — not pushing forward what we don’t need, but taking just the relevant points. We piggyback on the programmable logic controllers to gather only the data that we need, and then we further streamline that data to give us what we’re looking for within the process.

It’s like pushing out only the needle from the haystack, as opposed to pushing the whole haystack forward. That’s the analogy we use.
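As a rough illustration of that filtering idea (with made-up tag names and deadbands, not CuBE’s actual PLC configuration), an edge collector might forward a reading only when it belongs to a tag that matters and has moved meaningfully since the last value sent:

```python
# Illustrative only: assumed tag names and deadbands, not CuBE's PLC configuration.
RELEVANT_TAGS = {          # tag -> minimum change worth forwarding (deadband)
    "barrel_temp_c": 0.5,
    "injection_pressure_bar": 2.0,
    "cycle_time_s": 0.1,
}
_last_sent = {}            # last value forwarded per tag

def filter_for_forwarding(sample: dict) -> dict:
    """Return only the data points worth pushing upstream from this polling cycle."""
    to_send = {}
    for tag, deadband in RELEVANT_TAGS.items():
        if tag not in sample:
            continue
        value = sample[tag]
        previous = _last_sent.get(tag)
        if previous is None or abs(value - previous) >= deadband:
            to_send[tag] = value
            _last_sent[tag] = value
    return to_send

# One polling cycle: dozens of raw points may come off the PLC, few go forward.
raw = {"barrel_temp_c": 232.4, "injection_pressure_bar": 810.0,
       "cycle_time_s": 14.2, "cabinet_fan_rpm": 1450, "door_open": 0}
print(filter_for_forwarding(raw))   # first cycle: all relevant tags are new
raw["barrel_temp_c"] = 232.6        # moves within the deadband
print(filter_for_forwarding(raw))   # second cycle: nothing worth forwarding
```

Everything else stays local or is kept only for history, which is the needle-not-the-haystack behavior described above.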

Gardner: So being more intelligent about how you gather intelligence?

Chopin: Absolutely! Yes.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.



Why enterprises struggle with adopting public cloud as a culture

The next BriefingsDirect digital strategies interview examines why many businesses struggle with cloud computing adoption, and how they could improve by attaining a culture directed at cloud consumption and total productivity.

Due to inertia, a lack of skills, and even outright hostility, some enterprises are stumbling in their march to cloud use because of behavior and perception — not actual technology hurdles.

We will now hear from an observer of cloud adoption patterns on why a cultural solution to adoption may be more important than any other aspect of digital business transformation.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to help us explore why cloud inertia can derail business advancement is Edwin Yuen, Senior Analyst for Cloud Services and Orchestration, Data Protection, and DevOps at Enterprise Strategy Group (ESG). [Note: Since this podcast was recorded on Nov. 15, 2018, Yuen has become principal product marketing manager at Amazon Web Services.] The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Edwin, why are enterprises finding themselves unready for public cloud adoption culturally?

Yuen: Culturally, the issue with public cloud adoption is whether IT is prepared to bring in the public cloud. I bring up IT because public cloud usage is actually really high within business organizations.

Yuen

At ESG, we have found that cloud use is pretty significant — well over 80 percent are using some sort of public cloud service. It’s very high for both infrastructure-as-a-service (IaaS) and platform-as-a-service (PaaS).

But the key here is, “What is the role of IT?” We see a lot of business end-users and others essentially doing shadow IT, going around IT if they feel their needs are not being met. That actually increases the friction between IT and the business.

It also leads to people going into the public cloud before they are ready, before there’s been a proper evaluation – from a technical, cost, or even a convenience point of view. That can potentially derail things.

But there is an opportunity for IT to reset the boundaries and meet the needs of the end users by thoughtfully getting into the public cloud.

Gardner: We may be at the point of having too much of a good thing. Even if people are doing great things with cloud computing inside an organization, unless it’s strategically oriented to process, fulfillment, and a business outcome, the benefits can be lost.

Plan before you go to public cloud

Yuen: When line of business (LOB) or other groups don’t work with core IT in going to the public cloud, they get some advantages from it — but they are not getting the full advantage. It’s like upgrading from a seven- or eight-year-old smartphone but only moving up to the fifth- or sixth-best phone on the market. It’s a significant upgrade, but it’s not the optimal way to do it. They’re not getting the full benefits.

The question is, “Are you getting the most out of it, and is that thoughtfulness there?” You want to maximize the capabilities and advantages you get from public cloud and minimize the inconvenience and cost. Planning is absolutely critical for that — and it involves core IT.

So how do you bring about a cultural shift that says, “Yes, we are going into the public cloud. We are not trying to stop you. We are not being overly negative. But what we are trying to do is optimize it for everybody across the board, so that we as a company can get the most out of it because there are so many benefits — not just incremental benefits that you get from immediately jumping in.”

Learn About Comparing App Deployment Options

Gardner: IT needs to take a different role, putting in guardrails around cloud services consumption, compliance, and security. It seems to me that procurement is also a mature function in most enterprises. They may also want to step in.

When you have people buying goods individually on a credit card, you don’t necessarily take advantage of volume purchasing, or you don’t recognize that you can buy things in bulk and distribute them and get better deals or terms. Yet procurement groups are very good at that.

Is there an opportunity to better conduct cloud consumption like with procuring any other business service, with all the checks, balances, and best practices?

Cut cloud costs, buy in bulk

Yuen: Absolutely, and that’s an excellent point. I think people often leave out procurement, auditing, acquisitions, or whichever department handles purchasing when they go to the cloud. That function becomes critically important organizationally, especially from the end-user point of view.

From the organizational point of view, you can lose economies of scale. A lot of cloud providers will offer those economies of scale via an enterprise agreement, which allows that purchasing power to be exercised.

Yet if individuals go out on their own and leave procurement behind, it’s like shopping at a retailer for groceries without ever checking for sales or coupons. Buying in volume is just smarter; centralizing purchasing across the entire organization lets you leverage it, and it obviously becomes a better cost for the line of business. Cloud is really a consumption-based model, so the planning needs to be there.

We’ve talked to a lot of organizations. As they jump into cloud, they expect cost savings, but sometimes they get an increase in cost because once you have that consumption model available, people just go ahead and consume.


And what that generates is variability in the cost of consumption; variability in the cost of cloud. A lot of companies very quickly realize that they don’t have variable budgets — they have fixed budgets. So they need to think about how they use cloud, and what the consumption will cost, across an entire year. You can’t just go about your work with some flexibility and then find that you are out of budget in the second half of the year, in the third or fourth quarter of the fiscal year.

You can’t budget on open-ended consumption. It requires a balance across the organization, where you have flexibility enough to be active — and go into the cloud. But you also need to understand what the costs are throughout the entire lifecycle, especially if you have fixed budgets.
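A minimal sketch of that lifecycle view, using invented numbers rather than any particular company’s figures, is to project year-end spend from the consumption run rate and compare it against the fixed budget early, while there is still time to adjust:

```python
# Illustrative numbers only: a fixed annual budget and five months of actual spend.
ANNUAL_BUDGET = 1_200_000                                     # hypothetical, USD
monthly_spend = [88_000, 94_000, 103_000, 111_000, 119_000]   # Jan through May

months_elapsed = len(monthly_spend)
spent_to_date = sum(monthly_spend)
run_rate = spent_to_date / months_elapsed              # average monthly burn so far
projected_year_end = spent_to_date + run_rate * (12 - months_elapsed)

print(f"Spent to date:      ${spent_to_date:,.0f}")
print(f"Projected year-end: ${projected_year_end:,.0f}")
if projected_year_end > ANNUAL_BUDGET:
    overrun = projected_year_end - ANNUAL_BUDGET
    print(f"Warning: on pace to exceed the fixed budget by ${overrun:,.0f}")
```

A real finance team would refine this with reserved-instance commitments, seasonality, and per-team allocations, but even a simple run-rate check surfaces an overrun while it can still be planned for.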

Gardner: If you get to the fourth quarter and you run out of funds, you can’t exactly turn off the mission-critical applications either. You have to find a way to pay for that, and that can wreak havoc, particularly in a public company.

In the public sector, in particular, they are very much geared to a CAPEX budget. In cities, states, and the federal government, they have to bond large purchases, and do that in advance. So, there is dissonance culturally in terms of the economics around cloud and major buying patterns.

Yuen: We absolutely see that. There was an assumption by many that you would simply want to go to an OPEX model and leave the CAPEX model behind. Realistically, what you’re doing is leaving the CAPEX model behind from a consumption point of view — but you’re not leaving it behind from a budgeting and a planning point of view.

The economic reality is that it just doesn’t work that way. People need to be more flexible, and that’s exactly what the providers have been adapting to. The providers will also have to let you consume in a way that allows you to lock down costs. But that only happens when the organization works together on its total requirements, as opposed to simply going out and using the cloud.

The key for organizational change is to drive a culture where you have flexibility and agility but work within the culture to know what you want to do ahead of time. Then the organization can do the proper planning to be fiscally responsible, and fiscally execute on the operational plan.

Gardner: Going to a cloud model really does force behavioral changes at the holistic business level. IT needs to think more like procurement. Procurement needs to get more technical and savvier about how to acquire and use cloud computing services. This gets complex. There are literally thousands, if not tens of thousands, of SKUs and different types of services you could acquire from any of the major public cloud providers.

Then, of course, the LOB people need to be thinking differently about how they use and consume services. They need to think about whether they should coordinate with developers for customization or not. It’s rather complex.

So let’s identify where the cultural divide is. Is it between IT of the old caliber and the new version of IT? Is it a divide between the line of business people and IT? Between development and operations? All the above? How serious is this cultural divide?

Holistic communication plans 

Yuen: It really is all of the above, and in varying areas. What we are seeing is that the traditional roles within an organization have been monolithic: end-users were consumers, central IT was the provider, and finances were handled by acquisitions and administration. Now what we are seeing is that everybody needs to work together and have a much more holistic plan. There needs to be a new level of communication between those groups, and more give-and-take.

It’s similar to running a restaurant. In the past, we had a diner, the end user, who said: “I want this food.” The chef says, “I am going to cook this food.” The management says, “This food costs this much.” They never really talked to each other.

They would do some back-and-forth dialog, but there wasn’t a holistic understanding of the actual need. And, to be perfectly honest, not everybody was totally satisfied. The diners were not totally satisfied with the meal because it wasn’t made the way they wanted, and they weren’t going to pay for something they didn’t actually want. Finance fixed the menu prices, but they would have liked to charge a little bit more. The chef really wanted to cook a little differently or have the ability to shift things around.

Read Why IT Operations And Developers Should Get Along

The key for improved cloud adoption is opening the lines of communication, bridging the divides, and gaining new levels of understanding. As in the restaurant analogy, the chef says, “Well, I can add these ingredients, but it will change the flavor and it might increase the cost.” And then the finance people say, “Well, if we make better food, then more people will eat it.” Or, “If we lower prices, we will get more economies of scale.” Or, “If we raise prices, we will drive the volume of diners down.” It’s all about that balance, and it’s an open discussion among those three parts of the organization.

This is the digital transformation we are seeing across the board. It’s about IT being more flexible, listening to the needs of the end users, and being willing to be agile in providing services. In exchange, the end users come to IT first, so IT understands where the cloud use is going and can be responsive. IT knows better what the users want. It becomes not just a matter of wanting solutions faster, but of knowing by how much. They can negotiate based on actual requirements.

And then they all work with operations and other teams and say, “Hey, can we get those resources? Should we put them on-premises or off-premises? Should we purchase it? Should we use CAPEX, or should we use OPEX?” It becomes about changing the organization’s communication across the board and having the ability to look at it from more than just one point of view. And, honestly, most organizations really need help in that. 

It’s not just scheduling a meeting and sitting at a table. Most organizations are looking for solutions and software to help. They need to bridge the gap and provide a translation layer, where management software can come together and say, “Hey, here are the costs related to the capacity that we need.” Then everyone sits together and says, “Okay, if we need more capacity and the cost turns into this and the capacity turns into that, you can do the analysis. You can determine if it’s better in the cloud or better on-premises.” But it’s about more than just bringing people together to communicate. It has to provide them the information they need to have a shared discussion and gain common ground to work together.

Gardner: Edwin, tell us about yourself and ESG.

Pathway to the cloud 

Yuen: Enterprise Strategy Group is a research and analyst firm. We do a lot of work with both vendors and customers, covering a wide range of topics, through both custom and syndicated research. That research backs a lot of the findings we draw on in discussions about where the market is going.

I cover cloud orchestration and services, data protection, and DevOps, which is really the whole spectrum of how people manage resources and how to get the most out of the cloud — the great optimization of all of that. 

As background, I have worked at Hewlett Packard Enterprise (HPE), Microsoft, and several startups. I have seen this whole process come together for the growth of the cloud, and I have seen the different changes, from virtualization to great desktops, and how IT and end-users have had to change.

This is a really exciting time as public cloud becomes more than just an idea. It’s like when we first had the Internet. We are not just talking about cloud; we are talking about what we are doing in the cloud and how the cloud helps us. That’s a sign of the maturity of the market, but also a sign of what we need to do in order to change, in order to get the best out of it.


Gardner: Your title is even an indicator that you have to rethink things — not just in slices or categories or buckets — but in the holistic sense. Your long, but very apropos, job title really shows what we have been talking about: that companies need to be thinking differently.

So that gets me to the issue of skills. The typical IT person — and I don’t want to get into too much generalization or even stereotyping — seems content to find a requirement set, beaver along in their cubicle, maybe not be too extroverted in terms of their skills or temperament, and really get the job done. It is detail-oriented, highly focused work.

But in order to accomplish what companies need to do now — to cross-pollinate, break down boundaries, think outside of the box — that requires different skills, not just technical but business; not just business but extroverted or organizationally aggressive in terms of opening up channels with other groups inside the company, even helping people get out of their comfort zone.

So what do you think is the next step when it comes to finding the needed talent and skills to create this new digitally transformed business environment?

Curious IT benefits business

Yuen: In order to find that skill set, you need to expand your boundaries in two ways.

One is the ability to take your natural interest in learning and expand it. A lot of us, especially in the IT industry, have been pushed to specialize in certain things, get certifications, and go as deep as possible. We have closed our eyes to learning about other technologies or other areas.

Most technical people, in general, are actually fairly inquisitive. We have the latest iPhone or Android. We are generally interested. We want to know the market because we want to make the best decisions for ourselves.

We need to apply that generally within our business lives and in our jobs in terms of going beyond IT. We need to understand the different technologies out there. We don’t have to be masters of them, we just need to understand them. If we need to do specialization, we go ahead. But we need to take our natural curiosity — especially in our private lives — and expand that into our work lives and get interested in other areas.

The second area is accepting that you don’t have to be the expert in everything. I think that’s another skill a lot of people in business should have. We don’t want to speak up or learn if we fear we can’t be the best, or that we might get corrected if we are wrong.

But we really need to go ahead and learn those new areas that we are not expert in. We may never be experts, but we want to get that secondary perspective. We want to understand where finance is coming from in terms of budgetary realities. We need to learn about how they do the budget, what the budget is, and what influences the costs.

If we want to understand the end users’ needs, we need to learn more about what their requirements are, how an application affects them, and how it affects their daily lives. So that when we go to the table and they say, “I need this,” you have that base understanding and know their role.

Having greater knowledge, and expanding it, allows you to learn a lot more as you branch out from your own area. You will discover areas that you might become interested in, or that your company needs. That’s where you go ahead, double down, and take your existing learning capabilities really, really deep.

A good example: if I have a traditional IT infrastructure, maybe I have learned virtual machines, but now I am faced with cloud virtual machines, containers and Kubernetes, and serverless. You may not be sure in what direction to go, and with analysis paralysis you may not do anything.

What you should do is learn about each of those, how it relates, and what your skills are. If one of those technologies booms suddenly, or it becomes an important point, then you can very quickly make a pivot and learn it — as opposed to just isolating yourself.

So the ability to learn and expand your skills, closing the skills gap as you go, creates opportunities for everybody.

Gardner: Well, we are not operating in a complete vacuum. The older infrastructure vendors are looking to compete and remain viable in the new cloud era. They are trying to bring out solutions that automate. And so are the cloud vendors.

What are you seeing from cloud providers and the vendors as they try to ameliorate these issues? Will new tools, capabilities, and automation help gain that holistic, strategic focus on the people and the process?

Cloud coordinators needed 

Yuen: The providers and vendors are delivering the tools and interfaces to do what we call automation and orchestration. Sometimes those two terms get mixed together, but generally I see them as separate. Automation is taking an existing task, or a series of tasks or a process, and making it into a single, one-button-click type of thing. The best way I would describe it is almost like an Excel macro: you have steps 1, 2, 3, and 4, and I am going to run 1, 2, 3, and 4 as a script with a single button.

But orchestration is taking those processes and coordinating them. What if I need to have decision points in that coordination? What if I need to decide when to run this and when not to run that? The cloud providers are delivering the APIs, entry points, and the data feedback so you have the process information. You can only automate based on the information coming in. We are not blindly saying we are going to do one, two, and three, or A, B, and C; we are going to react based on the situation.

So we really must rely on the cloud providers to deliver that level of information and the APIs to execute on what we want to do. 
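A small sketch can make the distinction concrete. The steps, metric, and threshold below are hypothetical, but they show the difference being drawn: automation runs a fixed sequence like a macro, while orchestration coordinates those same tasks and branches on data fed back from a provider’s APIs.

```python
# Illustrative only: the steps, metric, and threshold are assumptions for the sketch.
def provision():  print("provision instance")
def deploy():     print("deploy application")
def smoke_test(): print("run smoke tests")
def scale_out():  print("add capacity")
def scale_in():   print("remove idle capacity")

def automate():
    """Automation: a fixed 1-2-3 sequence behind a single button, like a macro."""
    for step in (provision, deploy, smoke_test):
        step()

def orchestrate(get_cpu_utilization):
    """Orchestration: coordinate the same tasks, but branch on live feedback."""
    automate()
    utilization = get_cpu_utilization()      # data point a provider API would return
    if utilization > 0.80:
        scale_out()
    elif utilization < 0.20:
        scale_in()
    else:
        print("within target band; no action")

orchestrate(get_cpu_utilization=lambda: 0.87)   # stand-in for a real metrics call
```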

And, meanwhile, the vendors are creating the ability to bring all of those tools together as an information center, or what we traditionally have called a monitoring tool. But it’s really cloud management where we see across all of the different clouds. We can see all of the workloads and requirements. Then we can build out the automation and orchestration around that.


Some people are concerned that if we build a lot of automation and orchestration, they will automate themselves out of a job. But realistically, what we have seen with cloud and with orchestration is that IT is getting more complex, not less. Different environments, different terminologies, different ways to automate, the complexities of integrating more than just the steps IT owns – this has created a whole new area for IT professionals to get into. Instead of deciding which button to press and doing the task, they will automate the tasks. Then we are left to focus on determining the proper orchestration, and on coordinating among all the other areas.

So as the management has gone up a level, the skills and the capabilities for the operators are also going to go up.

Gardner: It seems to me that this is a unique time in the long history of IT. We can now apply those management principles and tools not just to multicloud or public cloud, but across private cloud, legacy, bare-metal, virtualization, managed service providers, and SaaS applications. Do you share my optimism that if you can, in effect, adjust to cloud heterogeneity, you can encompass all of IT heterogeneity and get comprehensive, data-driven insights and management for your entire IT apparatus, regardless of where it resides, how it operates, and how it’s paid for?

Seeing through the clouds

Yuen: Absolutely! That’s where we are going to end up. It’s an inverse of the mindset that we currently have in IT, which is: we maintain a specific type of infrastructure, we optimize and modify it, and then, we hope, the end result impacts the application in a positive way.

What we are doing now is we are inverting that thinking. We are managing applications and the applications help deliver the proper experience. That’s what we are monitoring the most, and it doesn’t matter what the underlying infrastructure or the systems are. It’s not that we don’t care, we just don’t care necessarily what the systems are.

How to Put the Right People In the Right Roles For Digital Transformation

Once we care about the application, then we look at the underlying infrastructure, and then we optimize that. And that infrastructure could be in the public cloud, across multiple providers, it could be in a private cloud, or a traditional backend and large mainframe systems.

It’s not that we don’t care about those backend systems. In fact, we care just as much as we did before; it’s just that we don’t have to have that one-to-one alignment. Our orientation isn’t system-based, it’s application-based. The underlying systems could potentially be anything — and the vendors, with their systems management software, are extending that out.

So it doesn’t matter if it’s a VMware system, or a bare metal system, or a public cloud. We are just managing the end-result relative to how those systems operate. We are going to let the tools go ahead and make sure they execute.

Our ability to understand and monitor is going to be critical. It’s going to allow us to extend out and manage across all the different environments effectively. But most importantly, it’s all affecting the application at the top. So you’re becoming a purveyor, providing better service to the end-users and to finance.
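As a simple illustration of that application-first orientation (hypothetical service names and numbers), the same application-level objective can be evaluated for every instance of an app, regardless of which environment it runs in:

```python
# Illustrative only: hypothetical application, environments, latencies, and SLO.
APP_LATENCY_SLO_MS = 250    # the application-level objective we actually care about

measurements = [            # latest p95 latency per deployment, wherever it runs
    {"app": "order-service", "environment": "public-cloud",   "p95_ms": 180},
    {"app": "order-service", "environment": "on-prem-vmware", "p95_ms": 210},
    {"app": "order-service", "environment": "bare-metal",     "p95_ms": 320},
]

for m in measurements:
    status = "OK" if m["p95_ms"] <= APP_LATENCY_SLO_MS else "SLO breach"
    print(f'{m["app"]} on {m["environment"]}: {m["p95_ms"]} ms p95 -> {status}')
```

The check is identical for every environment; only the breach, not the underlying system, drives the follow-up work.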

Gardner: While you’re getting a better view application-by-application, you’re also getting a better opportunity to do data analysis across all of these different deployments. You can find ways of corralling that data and its metadata and moving workloads into the storage environment that best suits the task, and the economics, of the moment.

Not only is there an application workload benefit, but you can argue that there is an opportunity to finally get a comprehensive view of all of the IT data and then manage that data into the right view — whether it’s for a system-of-record benefit, application support, or advanced analytics, and even out to the edge.

Do you share my view that the applications revolution you are describing also is impacting how data is going to be managed, corralled, and utilized?

Data-driven decision making

Yuen: It is, and that data viewpoint is applicable in many ways. It’s one of the reasons why data protection and analysis of that data becomes incredibly important. From the positive side, we are going to get a wealth of data that we need in order to do the optimizations.

If I want to know the best location for my apps, I need all the information to understand that. Now that we are getting that data in, it can be passed to machine learning (ML) or artificial intelligence (AI) systems that can make decisions for us going forward. Once we train the models, they can be self-learning, self-healing, and self-operating. That’s going to relieve a lot of work from us.

Data also impacts the end-users. People are taking in data, and they understand that they can use it for secondary uses. It can be used for development; it can be used for sales. I can make copies of that data so I don’t have to touch the production data all the time. There is so much insight I can provide to the end users. In fact, the explosion of data is a leading cause of increased IT complexity.

We want to maximize the information that we get out of all that data, to maximize the information the end-users are getting out of it, and also leverage our tools to minimize the negative impact it has for management.

Gardner: What should enterprises be doing differently in order to recognize the opportunity, but not fall to the wayside in terms of these culture and adoption issues?

Come together, right now, over cloud

Yuen: The number one thing is to start talking and to develop a measured, sustainable approach to going into the cloud. Come together and have that discussion, and don’t be afraid to have it, whether you’re still getting ready for cloud or you’ve already gone in and need to rein it back in. No matter what you need to do, you want that centralized approach, because it is not going to be a one-time thing. You don’t make a cloud plan and then not revisit it for 20 years — you live it. It’s an ongoing, living, breathing thing, and you’re always going to be making adjustments.

But bring the team together, develop a plan, build an approach to cloud that you’re going to be living with. Consider how you want to make decisions and bring that together with how you want to interact with each other. That plan is going to help build the communication plan and build the organization to help make that cultural shift.

Companies honestly need to do an assessment of what they have. It’s surprising that a lot of companies just don’t know how much cloud they are using. They don’t know where it’s going. And even if it’s not in the cloud yet, they don’t know what they need.

A lot of the work is understanding what you have. Once you build out the plan of what you want to do, you essentially get your house in order: you understand what you have, you know where you want to go and where you are, and then you can begin that journey.

The biggest problem we have right now is companies that try to do both at the same time. They move forward without planning it out. They may move forward without understanding what they already have, and that leads to inefficiencies, cultural conflicts, and the organizational skills-gap issues that we talked about.

Learn About Smooth Transitions to Multicloud
Gain Insights to Easier Digital Transformations

So again: lay out a plan and understand what you have; those are the first two steps. Then look for solutions to help you understand and capture information about the resources you already have and how you are using them. By pulling those things together, you can really go forward and get the best use out of cloud.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.



Who, if anyone, is in charge of multi-cloud business optimization?