How REI used automation to cloudify infrastructure and rapidly adjust its digital pandemic response

Like many retailers, Recreational Equipment, Inc. (REI) faced drastic and rapid change when the COVID-19 pandemic struck. REI’s marketing leaders wanted to make sure the company’s e-commerce capabilities would rise to the challenge. They expected a nearly overnight 150 percent jump in REI’s purely digital business.

Fortunately, REI’s IT leadership had already pushed their systems to a high level of automation, which allowed the Seattle-based merchandiser to turn on a dime and devote much more of its private cloud to the new e-commerce workload demands.

The next BriefingsDirect Voice of Innovation interview uncovers how REI kept its digital customers and business leadership happy, even as the world around them was suddenly shifting.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To explore what works for making IT agile and responsive enough to re-factor a private cloud at breakneck speed, we’re joined by Bryan Sullins, Senior Cloud Systems Engineer at REI in Seattle. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: When the pandemic required you to hop-to, how did REI manage to have the IT infrastructure to actually move at the true pace of business? What put you in a position to be able to act as you did?

Digital retail demands rise 

Sullins: In addition to the pandemic stay-at-home orders a couple months ago, we also had a large sale previously scheduled for the middle of May. It’s the largest sale of the year, our anniversary sale.

And ramping up to that, our marketing and sales department realized that we would have a huge uptick in online sales. People really wanted to get outside, because they could do so without breaking any of the social distancing rules.

For example, bicycle sales were up 310 percent compared to the same time last year. So in ramping up for that, we anticipated our online presence at rei.com was going to go up by 150 percent, but we wanted to scale up by 200 percent to be sure. In order to do that, we had to reallocate a bunch of ESXi hosts in VMware vSphere. We either had to stand up new ones or reallocate from other clusters and put them into what we call our digital retail presence.

As a result of our fully automated process, using Hewlett Packard Enterprise (HPE) OneView, Synergy, and Image Streamer, we were able to reallocate 6 of the 17 total hosts needed. We were able to do that in 18 minutes, all at once — and that’s single touch: launching the automation, pulling them from one cluster, decommissioning them, and placing them all the way into the digital retail clusters.

We also had to move some from our legacy platform — they aren’t on HPE Synergy yet — and those took an additional three days. But those are in transition; we are moving to that fully automated platform all around.

Gardner: That’s amazing because just a few years ago that sort of rapid and automated transition would have been unheard of. Even at a slow pace you weren’t guaranteed to have the performance and operations you wanted.

If you were not able to do this using automation – if the pandemic had hit, heaven forbid, five or seven years ago – what would have been the outcome?

We needed to make sure we had the infrastructure capacity so that nothing failed under a heavy load. We were able to do it in the time-frame, and be able to get some sleep.

Sullins: There were actually two outcomes from this. The first is the fairly obvious issue of not being able to handle the online traffic on our rei.com retail presence. It could have been that people weren’t able to put stuff into a shopping cart, or that inventory didn’t decrement, and so on. It could have been a very broad range of things. We needed to make sure we had the infrastructure capacity so that none of that failed under a heavy load. That was the first part.

Gardner: Right, and when you have people in the heat of a purchasing moment, if you’re not there and it’s not working, they have other options. Not only would you lose that sale, you might lose that customer, and your brand suffers as well.

Sullins: Oh, without a doubt, without a doubt.

The other issue, of course, would have been if we did not meet our deadline. We had just under a week to get this accomplished. And if we had to do this without a fully automated approach, we would have had to return to our managers and say, “Yeah, so like we can’t do it that quickly.” But with our approach, we were able to do it all in the time frame — and be able to get some sleep in the interim. So it was a win-win.

Gardner: So digital transformation pays off after all?

Sullins: Without a doubt.

Gardner: Before we learn more about your journey to IT infrastructure automation, tell us about REI, your investments in advanced automation, and why you consider yourself a data-driven digital business?

Automation all the way 

Sullins: Well, a lot of that precedes me by quite a bit. Going back to the early 2000s, based on what my managers tell me, there was a huge push for REI to become an IT organization that just happens to do retail. The priority is on IT being a driving force behind everything we do, and that is something that, at the time, REI really needed to do. There are other competitors, which we won’t name, but you probably know who they are. REI needed to stay ahead of that curve.

So since then there have been constant sweeping and cyclical changes for that digital transformation. The most recent one is the push for automating all things. So that’s the priority we have. It’s our marching orders.

Gardner: In addition to your company, culture, and technology, tell us about yourself, Bryan. What is it about your background and personal development that led you to be in a position to act so forthrightly and swiftly?

Sullins: I got my start in IT back in 1999. I was a public school teacher before that, and then I made the transition to doing IT training. I did IT training from 1999 to about 2012. During those years, I got a lot of technology certifications, because in the IT training world you have to.

I began with what was, at the time, called the Microsoft Certified Solutions Expert (MCSE) certification. Then I also did the Linux Professional Institute. I really glommed on to Linux. I wanted to set myself apart from the rest of the field back then, so I went all-in on Linux.

And then, 2008-2009-ish, I jumped on the VMware train and went all-in on VMware and did the official VMware curriculum. I taught that for about three years. Then, in 2012, I made the transition from IT training into actually doing this for real as an engineer working at Dell. At the time, Dell had an infrastructure-as-a-service (IaaS) healthcare cloud that was fairly large – 1,200-plus ESXi hosts. We were also responsible for the storage and for the 90-plus storage area network (SAN) arrays as well.

In a large environment, you really have to automate. It’s been the focus of my career. I typically jump right into new technology. 

In an environment that large, you really have to automate. I cut my teeth on automating through PowerCLI and Ansible. Since then, about 2015, it’s been the focus of my career. I’m not saying I’m a guru, by any means, but it’s been a focus of my career.

Then, in 2018, REI came calling. I jumped on that opportunity because they are a super-awesome company, and right off the bat I got free rein: if you want to automate it, then you automate it. And I have been doing that ever since August of 2018.

Gardner: What helped you make the transition from training to cloud engineer? 

Sullins: I typically jump right into new technology. I don’t know if that comes from the training or if that’s just me as a person. But one of the positives I’ve gotten from the training world is that you learn 100 percent of the feature base that’s available with said technology. I was able to take what I learned and knew from VMware and then say, “Okay, well, now I am going to get the real-world experience to back that up as well.” So it was a good transition.

Gardner: Let’s look at how other organizations can anticipate the shift to automation. What are some of the challenges that organizations typically face when it comes to being agile with their infrastructure?

Manage resistance to cloud 

Sullins: The challenges that I have seen aren’t usually technical. Usually the technologies that people use to automate things are ready at hand. Many are free: Ansible, for example, is free. PowerCLI is free. Jenkins is free.

So, people can start doing that tomorrow. But the real challenge is in changing people’s mindset about a more automated approach. I think that it’s tough to overcome. It’s what I call provisioning by council. More traditional on-premises approaches have application owners who want to roll out x number of virtual machines (VMs), with all their particular specs and whatnot. And then a council of people typically looks at that and kind of scratches their chin and says, “Okay, we approve.” But if you need to scale up, that council approach becomes a sort of gate-keeping process.

With a more automated approach, like we have at REI, we use a cloud management platform to automate the processes. We use that to enable self-service VMs instead of having a roll out by council, where some of the VMs can take days or weeks to roll out because you have a lot of human beings touching them along the way. We have a lot of that process pre-approved, so everybody has already said, “Okay, we are okay with the roll out. We are okay with the way it’s done.” And then we can roll that out in 7 to 10 minutes rather than having a ticket-based model where somebody gets to it when they can. Self-service models are able to do that much better.
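To make the self-service contrast concrete, here is a minimal Ansible sketch of the kind of pre-approved roll-out task such a portal might trigger behind the scenes. It is illustrative only: the vCenter address, datacenter, cluster, template name, and sizing are hypothetical placeholders, not REI’s actual Morpheus-driven workflow.

```yaml
# Illustrative only: a pre-approved VM roll-out a self-service portal might trigger.
# All names (vCenter host, datacenter, cluster, template) are hypothetical placeholders.
- name: Roll out a pre-approved VM on request
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Clone a VM from an approved golden template
      community.vmware.vmware_guest:
        hostname: "vcenter.example.internal"
        username: "{{ vcenter_user }}"
        password: "{{ vcenter_password }}"
        validate_certs: false
        datacenter: "DC1"
        cluster: "digital-retail"
        name: "{{ requested_vm_name }}"
        template: "rhel8-golden"
        state: poweredon
        hardware:
          num_cpus: 2
          memory_mb: 4096
        wait_for_ip_address: true
```

Because everything in a playbook like this is pre-approved, the only human step left is the request itself, which is what turns a days-long council process into a roll-out measured in minutes.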

But that all takes a pretty big shift in psychology. A lot of people are used to being the gatekeeper. It can make them uncomfortable to change. Fortunately for me, a lot of the people at REI are on-board with this sort of approach. But I think that resistance can be something a lot of people run into.

Gardner: You can’t just buy automation in a box off of a shelf. You have to deal with an accumulation of manual processes and habits. Why is moving beyond the manual processes culture so important?

Sullins: I call it a private cloud because that means there is a healthy level of competition between what’s going on in the public cloud and what we do in that data center.

The public cloud team has the capability of “selling” their solution side-by-side with ours. When you have application owners who are technically adept — and pretty much all of them are at REI — they can be tempted to say, “Well, I don’t want to wait a week or two to get a VM. I want to create one right now out on the public cloud.”

There is a healthy level of competition between what’s going on in the public cloud and what we do in the data center. We offer our customers a spectrum of services. And now they can do that in an automated way. That’s a big win. 

That’s a big challenge for us. So what we are trying to accomplish — and we have had success so far through the transition — is to offer our customers a spectrum of services. So that’s great.

The stakeholders consuming that now gain flexibility. They can say, “Okay, yeah, I have this application. I want to run it in the public cloud, but I can’t based on the needs for that application. We have to run it on-premises.” And now they can do that in an automated way. That’s a big win, and that’s what people expect now, quite honestly.

Gardner: They want the look and feel of a public cloud but with all the benefits of the private cloud. It’s up to you to provide that. Let’s find out how you did.

How did you overcome the challenges that we talked about and what are the investments that you made in tools, platforms, and an ecosystem of players that accomplished it?

Sullins: As I mentioned previously, a lot of our utilities are “free”: the Ansibles of the world, PowerCLI, and whatnot. We also use Morpheus for self-service and for automating things on what I call the front end, the customer-facing side. The issue you have there is that you don’t get that control of scaling up before you provision the VM. You have to monitor for usage and then scale up on the backend, and do it seamlessly. The end users aren’t supposed to know that you are scaling up. I don’t want them to know. It’s not their job to know. I want to remain out of their way.

In order to do that, we’ve used a combination of technologies. HPE actually has a GitHub link for a lot of Ansible playbooks that plug right in. And then the underlying, hardware-adjacent management ecosystem platform is HPE OneView with HPE Synergy and Image Streamer. With a combination of all of those technologies we were able to accomplish that 18-minute roll-out of our various titles.
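As a rough illustration of what those HPE-provided playbooks do, the sketch below applies a OneView server profile template to a Synergy compute module. It is modeled on the modules published in HPE’s oneview-ansible examples on GitHub, but the profile, template, and hardware names, and the exact parameter set shown, are assumptions for illustration rather than REI’s production playbooks.

```yaml
# Sketch modeled on HPE's published oneview-ansible examples; names are hypothetical.
# The config file holds the OneView appliance address and credentials.
- name: Provision a Synergy compute module from a server profile template
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Create a server profile from a template and assign it to a blade
      oneview_server_profile:
        config: "/etc/oneview/oneview_config.json"
        state: present
        data:
          name: "esxi-cattle-021"                        # cattle-style host name (hypothetical)
          serverProfileTemplateName: "ESXi-ImageStreamer-Template"
          serverHardwareName: "Frame1, bay 5"
```

In the Image Streamer scenario described here, the template carries the golden image, which is how a single automated run can take a bare blade most of the way to a cluster-ready ESXi host.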

Gardner: Even though you have an integrated platform and solutions approach, it sounds like you have also made the leap from ushering pets through the process to herding cattle. If you understand my metaphor, what has allowed you to stop treating each instance as a pet and instead herd this stuff through on an automated basis?

From brittle pets to agile cattle 

Sullins: There is a psychological challenge with that. In the more traditional approach – and the VMware shop listeners are going to be very well aware of this — I may need to have a four-node cluster with a number of CPUs, a certain amount of RAM, and so on. And that four-node cluster is static. Yes, if I need to add a fifth down the line I can do that, but for that four-node cluster, that’s its home, sometimes for the entire lifecycle of that particular host.

With our approach, we treat our ESXi hosts as cattle. The HPE OneView-Synergy-Image Streamer technology allows us to do that in conjunction with those tools we mentioned previously, for the end point in particular.

So rather than have a cluster, and it’s static and it stays that way — it might have a naming convention that indicates what cluster it’s in and where — in reality we have cattle-based DNS names for ESXi hosts. At any time, the understanding throughout the organization, or at least for the people who need to know, is that any host can be pulled from one cluster automatically and placed into another, particularly when it comes to resource usage on that cluster. My dream is that the robots will do this automatically.

So if a cluster goes into the yellow, with its capacity usage crossing a threshold, the robot would interpret that and say, “Oh, well, I have another cluster over here with a host that is underutilized. I’m going to pull it into the cluster that’s in the yellow and then bring it back into the green again.” This would happen all while we sleep. When we wake up in the morning, we’d say, “Oh, hey, look at that. The robots moved that over.”
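The “robots” described here boil down to a simple control loop: watch cluster utilization, and when a cluster crosses its threshold, pull a host from an underutilized cluster. The sketch below shows only that decision step in Ansible form; the utilization figures, threshold, and host name are placeholders, and the final task is a stand-in for the kind of reallocation playbooks discussed above.

```yaml
# Hypothetical sketch of the threshold-driven rebalancing loop described above.
# Utilization inputs, threshold, and host name are placeholders; in practice the
# usage figures would come from monitoring and the last task would call the
# real host-reallocation automation.
- name: Rebalance ESXi capacity between clusters (illustrative only)
  hosts: localhost
  gather_facts: false
  vars:
    cpu_yellow_threshold_pct: 75
    busy_cluster_usage_pct: 82      # placeholder monitoring input
    donor_cluster_usage_pct: 31     # placeholder monitoring input
    donor_host: "esxi-cattle-017.example.internal"
  tasks:
    - name: Decide whether a host should be pulled from the donor cluster
      ansible.builtin.set_fact:
        rebalance_needed: "{{ busy_cluster_usage_pct | int > cpu_yellow_threshold_pct | int }}"

    - name: Kick off the existing host-reallocation automation when needed
      ansible.builtin.debug:
        msg: "Would move {{ donor_host }} into the busy cluster"
      when: rebalance_needed | bool
```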

Gardner: Algorithmic operations. It sounds very exciting.

Automation begets more automation 

Sullins: Yes, we have the push-button automation in place for that. It’s the next level of what that engine is that’s going to make those decisions and do all of those things.

Gardner: And that raises another issue. When you take the plunge into IT automation, making your way down the Chisholm Trail with your cattle, all of a sudden it becomes easier along the way. The automation begets more automation. As you learn and grow, does it become more automated along the way?

Sullins: Yes. Just to put an exclamation point on this topic, imagine the situation we opened the podcast with, which is, “Okay, we have to reallocate a bunch of hosts for rei.com.” If it’s fully automated, and we have robots making those decisions, the response is instantaneous. “Oh, hey, we want to scale up by 200 percent on rei.com.” We can say, “Okay, go ahead, roll out your VM. The system will react accordingly. It will add physical hosts as you see fit, and we don’t have to do anything, we have already done the work with the automation.” Right?

But to the automation begetting automation, which is a great way of putting it, by the way, there are always opportunities for more automation. And on a career side note, I want to dispel the myth that you automate your way out of a job. That is a complete and total myth. I’m not saying it never happens, where people get laid off as a result of automation, but that’s relatively rare, because when you automate something, that automation is going to need to be maintained as things change over time.

The other piece of that is that a lot of times you have different organizations at various states of automation. Once you get your head above water, to where it’s, “Okay, we have this process and now it’s become trivial because it’s been automated,” you can concentrate on automating more things — or on new things that need to be automated. And whether that’s the process for only VMs, a new feature base, monitoring, or auto-scaling — whatever it is — you have the capability from day one to further automate these processes.

Gardner: What was it specifically about the HPE OneView and Synergy that allowed you to move past the manual processes, firefighting, and culture of gatekeeping into more herding of cattle and being progressively automated?

Sullins: It was two things. The Image Streamer was number one. To date, we don’t run a PXE boot infrastructure — not that we can’t, it’s just not something that we have traditionally done. We needed a more standard process for doing that, and Image Streamer fit and solved that problem.

The second piece is the provided Ansible playbooks that HPE has to kick off the entire process. If you are somewhat versed in how HPE does things through OneView, you have a server profile that you can impose on a blade, and that can be fully automated through Ansible.

Image Streamer allows us to say, “Okay, we build a gold image. We can apply that gold image to any frame in the cluster.” We needed a more standard process, and Image Streamer solved that problem.

And, by the way, you don’t have to use Image Streamer to use Ansible automation. This is really more of an HPE OneView approach, whereby you can actually use it to do automated profiles and whatnot. But the Image Streamer is really what allows us to say, “Okay, we build a gold image. We can apply that gold image to any frame in the cluster.” That’s the first part of it, and the rest is configuring the other side.

Gardner: Bryan, it sounds like the HPE Composable Infrastructure approach works well with others. You are able to have it your way because you like Ansible, and you have a history of certain products and skills in your organization. Does the HPE Composable Infrastructure fit well into an ecosystem? Is it flexible enough to integrate with a variety of different approaches and partners?

Sullins: It has been so far, yes. We have anticipated leveraging HPE for our bare metal Linux infrastructure. One of the additional driving forces and big initiatives right now is Kubernetes. We are going all-in on Kubernetes in our private cloud, as well as in some of our worker nodes. We eventually plan on running those as bare metal. And HPE OneView, along with Image Streamer, is something that we can leverage for that as well. So there is flexibility, absolutely, yes.

Coordinating containers 

Gardner: It’s interesting, you have seen the transition from having VMware and other hypervisor sprawl to finding a way to manage and automate all of that. Do you see the same thing playing out for containers, with the powerful endgame of being able to automate containers, too?

Sullins: Right. We have been utilizing Rancher as part of our coordination tooling for our Kubernetes infrastructure, and utilizing vSphere for that.

As far as the containerization approach, REI has been doing containers since before containers were a big thing. Our containerization platform has been around since at least 2015. So REI has been pretty cutting edge as far as that is concerned.

And now that Kubernetes has won the orchestration wars, as it were, we are looking to standardize that for people who want to do things online, which is to say, going back to the digital transformation journey.

Basically, the industry has caught up with what our super-awesome developers have done with containerization. But we are looking to transition the heavy lifting of maintaining a platform away from the developers. Now that we have a standard approach with Kubernetes, they don’t have to worry so much about it. They can just develop what they need to develop. It will be a big win for us.

Gardner: As you look back at your automation journey, have you developed a philosophy about automation? How should this best work in the future?

Trust as foundation of automation 

Sullins: Right. Have you read Gene Kim’s The Unicorn Project? Well, there is also his The Phoenix Project. My take from that is the whole idea of trust, of trusting other people. And I think that is big.

I see that quite a bit in multiple organizations. For REI, we are going to work as a team and we trust each other. So we have a pretty good culture. But I would imagine that in some places that is still a big challenge.

And if you take a look at The Unicorn Project, a lot of the issues have to do with trusting other human beings. Something happened, somebody made a mistake, and it caused an outage. So they lock it up and lock it away and say only certain people can do that. And then if you multiply that happening multiple times — and then different individuals walking that down — it leads to not being able to automate processes without somebody approving it, right?

Gardner: I can’t imagine you would have been capable of transitioning your private cloud for more online activity the way you did if you didn’t have that trust built into your culture.

Sullins: Yes, and the big challenge that might still come up is the idea of trusting your end users, too. Once you go into the realm of self-service, you come up against the typical what-ifs. What if somebody adds a zero and they meant to only roll out 4 VMs but they roll out 40? That’s possible. How do you create guardrails that are seamless? If you can, then you can trust your users. You decrease the risk and can take that leap of faith that bad things won’t happen.

Gardner: Tell us about your wish list for what comes next. What would you like HPE to be doing? 

Small steps and teamwork rewards 

Sullins: My approach is to first automate one thing and then work out from there. You don’t have to boil the ocean. Start with something small and work your way up.

As far as next steps, we want auto-scaling at the physical layer, with the robots doing all of that. The robots will scale things up and down while we sleep.

We will continue to do application programming interface (API)-capable automation with anything that has a REST API. If we can connect to that and manipulate it, we can do pretty much whatever automation we want. 
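Ansible’s built-in uri module is one common way to drive that kind of REST automation. The sketch below is generic and hypothetical; the endpoint URLs, token variable, and payload are placeholders rather than any specific product’s API.

```yaml
# Generic pattern for automating any REST-capable endpoint from Ansible.
# The URLs, token, and payload are hypothetical placeholders.
- name: Drive a REST API from the same automation pipeline
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Query the current state of a resource
      ansible.builtin.uri:
        url: "https://api.example.internal/v1/clusters/digital-retail"
        method: GET
        headers:
          Authorization: "Bearer {{ api_token }}"
        return_content: true
        status_code: 200
      register: cluster_state

    - name: Push a change back through the same API
      ansible.builtin.uri:
        url: "https://api.example.internal/v1/clusters/digital-retail/scale"
        method: POST
        body_format: json
        body:
          desired_hosts: 17
        headers:
          Authorization: "Bearer {{ api_token }}"
        status_code: [200, 202]
```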

We are also containerizing all things. So if any application can be containerized properly, we containerize it.

As far as what decision-making engine we have to do the auto-scaling on the physical layer, we haven’t really decided upon what that is. We have some ideas but we are still looking for that.

Gardner: How about more predictive analytics using artificial intelligence (AI) with the data that you have emanating from your data center? Maybe AIOps?

Sullins: Well, without a doubt. I, for one, haven’t done any sort of deep dive into that, but I know it’s all the rage right now. I would be open to pretty much anything that will encompass what I just talked about. If that’s HPE InfoSight, then that’s what it is. I don’t have a lot of experience quite honestly with InfoSight as of yet. We do have it installed in a proof of concept (POC) form, although a lot of the priorities for that have been shifted due to COVID-19. We hope to revisit that pretty soon, so absolutely.

Gardner: To close out, you were ahead of the curve on digital transformation. That allowed you to be very agile when it came time to react to the COVID-19 pandemic.  What did that get you? Do you have any results?

Sullins: Yes, as a matter of fact, our boss’s boss, his boss — so three bosses up from me — he actually sits in on our load testing. It was an all-hands-on-deck situation during that May online sale. He said that it was the most seamless one that he had ever seen. There were almost no issues with this one.

We had done what we needed on the infrastructure side to make sure that we met dynamic demands. It was very successful. We went past our goals, so it was a win-win all the way around.

What I attribute that to is, yes, we had done what we needed on the infrastructure side to make sure that we met dynamic demands. Also, everybody worked as a team. Everybody, all the way up the stack, from our infrastructure contribution, to the hypervisor and hardware layer, all the way on up to the application layer and the containers, and all of our DevOps stuff. It was very successful. We went past the goals we had set for the sale, so it was a win-win all the way around.

Gardner: Even though you were going through this terrible period of adjustment, that’s very impressive.

Sullins: Yes.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


How the right data and AI deliver insights and reassurance on the path to a new normal

The next BriefingsDirect Voice of AI Innovation podcast explores how businesses and IT strategists are planning their path to a new normal throughout the COVID-19 pandemic and recovery.

By leveraging the latest tools and gaining data-driven inferences, architects and analysts are effectively managing the pandemic response — and giving more people better ways to improve their path to the new normal. Artificial intelligence (AI) and data science are proving increasingly impactful and indispensable.

Stay with us as we examine how AI forms the indispensable pandemic response team member for helping businesses reduce risk of failure and innovate with confidence. To learn more about the analytics, solutions, and methods that support advantageous reactivity — amid unprecedented change — we are joined by two experts.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

Please welcome Arti Garg, Head of Advanced AI Solutions and Technologies, at Hewlett Packard Enterprise (HPE), and Glyn Bowden, Chief Technologist for AI and Data, at HPE Pointnext Services. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: We’re in uncharted waters in dealing with the complexities of the novel coronavirus pandemic. Arti, why should we look to data science and AI to help when there’s not much of a historical record to rely on?  

Garg: Because we don’t have a historical record, I think data science and AI are proving to be particularly useful right now in understanding this new disease and how we might potentially better treat it, manage it, and find a vaccine for it. And that’s because at this moment in time, raw data that are being collected from medical offices and through research labs are the foundation of what we know about the pandemic.

This is an interesting time because, when you know a disease, medical studies and medical research are often conducted in a very controlled way. You try to control the environment in which you gather data, but unfortunately, right now, we can’t do that. We don’t have the time to wait.

And so instead, AI — particularly some of the more advanced AI techniques — can be helpful in dealing with unstructured data or data of multiple different formats. It’s therefore becoming very important in the medical research community to use AI to better understand the disease. It’s enabling some unexpected and very fruitful collaborations, from what I’ve seen.

Gardner: Glyn, do you also see AI delivering more, even though we’re in uncharted waters?

Bowden: The benefit of something like machine learning (ML), for example, which is a subset of AI, is that it is very good at handling many, many features. With a human being approaching these projects, there are only so many things you can keep in your head at once in terms of the variables you need to consider when building a model to understand something.

But when you apply ML, you are able to cope with millions or billions of features simultaneously — and then simulate models using that information. So it really does add the power of a million scientists to the same problem we were trying to face alone before.

Gardner: And is this AI benefit something that we can apply in many different avenues? Are we also modeling better planning around operations, or is this more research and development? Is it both?

Data scientists are collaborating directly with medical science researchers and learning how to incorporate subject matter expertise into data science models. 

Garg: There are two ways to answer the question of what’s happening with the use of AI in response to the pandemic. One relates to the practice of data science itself.

Right now, data scientists are collaborating directly with medical science researchers and learning how to incorporate subject matter expertise into data science models. This has been one of the challenges preventing businesses from adopting AI in more complex applications. But now we’re developing some of the best practices that will help us use AI in a lot of domains.

In addition, businesses are considering the use of AI to help them manage their businesses and operations going forward. That includes things such as using computer vision (CV) to ensure that social distancing happens with their workforce, or other types of compliance we might be asked to do in the future.

Gardner: Are the pressures of the current environment allowing AI and data science benefits to impact more people? We’ve been talking about the democratization of AI for some time. Is this happening more now?

More data, opinions, options 

Bowden: Absolutely, and that’s both a positive and a negative. The data around the pandemic has been made available to the general public. Anyone looking at news sites or newspapers and consuming information from public channels — accessing the disease incidence reports from Johns Hopkins University, for example — has a steady stream of it. But those data sources are all over the place and are being thrown to a public that is only just now becoming data-savvy and data-literate.

As they consume this information, add their context, and get a personal point of view, that is then pushed back into the community again — because as you get data-centric you want to share it.

So we have a wide public feed — not only from universities and scholars, but from the general public, who are now acting as public data scientists. I think that’s creating a huge movement. 

Garg: I agree. Making such data available exposes pretty much anyone to these amazing data portals, like Johns Hopkins University has made available. This is great because it allows a lot of people to participate.

It can also be a challenge because, as I mentioned, when you’re dealing with complex problems you need to be able to incorporate subject matter expertise into the models you’re building and in how you interpret the data you are analyzing.

And so, unfortunately, we’ve already seen some cases — blog posts or other types of analysis — that get a lot of attention in social media but are later found to be not taking into account things that people who had spent their careers studying epidemiology, for example, might know and understand.

Gardner: Recently, I’ve seen articles where people now are calling this a misinformation pandemic. Yet businesses and governments need good, hard inference information and data to operate responsibly, to make the best decisions, and to reduce risk.

What obstacles should people overcome to make data science and AI useful and integral in a crisis situation?

Garg: One of the things that’s underappreciated is the need for a foundation, a data platform, that makes data managed and accessible so you can contextualize it and make stronger decisions based on it. That’s going to be critical. It’s always critical in leveraging data to make better decisions. And it can mean a larger investment than people might expect, but it really pays off if you want to be a data-driven organization.

Know where data comes from 

Bowden: There are a plethora of obstacles. The kind that Arti is referring to, and that is being made more obvious in the pandemic, is the way we don’t focus on the provenance of the data. So, where does the data come from? That doesn’t always get examined, and as we were talking about a second ago, the context might not be there.

All of that can be gleaned from knowing the source of the data. The source of the data tends to come from the metadata that surrounds it. So the metadata is the data that describes the data. It could be about when the data was generated, who generated it, what it was generated for, and who the intended consumer is. All of that could be part of the metadata.

Organizations need to look at these data sources because that’s ultimately how you determine the trustworthiness and value of that data.

We don’t focus on the provenance of the data. Where does the data come from? That doesn’t always get examined, and the context might not be there.

Now it could be that you are taking data from external sources to aggregate with internal sources. And so the data platform piece that Arti was referring to applies to properly bringing those data pieces together. It shouldn’t just be you running data silos and treating them as you always treated them. It’s about aggregation of those data pieces. But you need to be able to trust those sources in order to be able to bring them together in a meaningful way.

So understanding the provenance of the data, understanding where it came from or where it was produced — that’s key to knowing how to bring it together in that data platform.

Gardner: Along the lines of necessity being the mother of invention, it seems to me that a crisis is also an opportunity to change culture in ways that are difficult otherwise. Are we seeing accelerants given the current environment to the use of AI and data?

AI adoption on the rise 

Garg: I will answer that question from two different perspectives. One is certainly the research community. Many medical researchers, for example, are doing a lot of work that is becoming more prominent in people’s eyes right now.

I can tell you from working with researchers in this community and knowing many of them, that the medical research community has been interested and excited to adopt advanced AI techniques, big data techniques, into their research. 

It’s not that they are doing it for the first time, but definitely I see an acceleration of the desire and necessity to make use of non-traditional techniques for analyzing their data. I think it’s unlikely that they are going to go back to not using those for other types of studies as well.

In addition, you are definitely going to see AI utilized and become part of our new normal in the future, if you will. We are already hearing from customers and vendors about wanting to use things such as CV to monitor social distancing in places like airports where thermal scanning might already be used. We’re also seeing more interest in using that in retail.

So some AI solutions will become a common part of our day-to-day lives.

Gardner: Glyn, a more receptive environment to AI now?

Bowden: I think so, yes. The general public are particularly becoming used to AI playing a huge role. The mystery around it is beginning to fade and it is becoming far more accepted that AI is something that can be trusted.

It does have its limitations. It’s not going to turn into Terminator and take over the world.

The fact that we are seeing AI more in our day-to-day lives means people are beginning to depend on the results of AI, at least in understanding the pandemic, and that drives acceptance.

The general public are particularly becoming used to AI playing a huge role. The mystery around it is beginning to fade and it is becoming far more accepted that AI is something that can be trusted.

When you start looking at how it will enable people to get back to somewhat of a normal existence — to go to the store more often, to be able to start traveling again, and to be able to return to the office — there is that dependency that Arti mentioned around video analytics to ensure social distancing or temperatures of people using thermal detection. All of that will allow people to move on with their lives and so AI will become more accepted.

I think AI softens the blow of what some people might see as a civil liberty being eroded. It softens the blow by saying, in effect, “This is the benefit already, and this is as far as it goes.” So it at least frames discussions in a way that wasn’t happening before.

Garg: One of the really valuable things happening right now is how major news publications have been publishing amazing, very informative infographics, both in terms of the analysis they provide of the data and very specific things like how restaurants are recovering in areas that have stay-in-place orders.

In addition to providing nice visualizations of the data, some of the major news publications have been very responsible by providing captions and context. It’s very heartening in some cases to look at the comments sections associated with some of these infographics as the general public really starts to grapple with the benefits and limitations of AI, how to contextualize it and use it to make informed decisions while also recognizing that you can go too far and over-interpret the information.

Gardner: Speaking of informed decisions, to what degree are you seeing the C-suite — the top executives in many businesses — look to their dashboards and query datasets in new ways? Are we seeing data-driven innovation at the top of decision-making as well?

Data inspires C-suite innovation 

Bowden: The C-suite is definitely taking a lot of notice of what’s happening in the sense that they are seeing how valuable the aggregation of data is and how it’s forwarding responses to things like this.

So they are beginning to look internally at what data sources are available within their own organizations. They are thinking about how to bring this together so they can get a better view of not only the tactical decisions they have to make but also, using the macro environmental data, how to start making strategic decisions. I think the value is being demonstrated for them in plain sight.

So rather than having to experiment to see if there is going to be value, there is a full expectation that value will be delivered, and now the experiment is how much they can draw from this data.

Garg: It’s a little early to see how much this is going change their decision-making, especially because frankly we are in a moment when a lot of the C-suite was already exploring AI and opening up to its possibilities in a way they hadn’t even a year ago.

And so there is an issue of timing here. It’s hard to know which is the cause and which is just a coincidence. But, for sure, to Glyn’s point, they are dealing with more change.

Gardner: For IT organizations, many of them are going to be facing some decisions about where to put their resources. They are going to be facing budget pressures. For IT to rise and provide the foundation needed to enable what we have been talking about in terms of AI in different sectors and in different ways, what should they be thinking about?

How can IT make sure they are accelerating the benefits of data science at a time when they need to be even more choosy about how they spend their dollars?

IT wields the sword to deliver DX 

Bowden: IT particularly has never had so much focus as right now, and budgets are probably responding in a similar way. This is because everyone now has to look at their digital strategy and their digital presence — and move as much as they can online to be resistant to pandemics and at-risk situations like this.

So IT has to have the sword, if you like, in that battle. They have to fix the digital strategy. They have to deliver on that digital promise. And there is an immediate expectation of customers that things just will be available online.

With the pandemic, there is now an AI movement that will get driven purely from the fact that so much more commerce and business are going to be digitized. We need to enable that digital strategy. 

If you look at students in universities, for example, they assume that it will be a very quick fix to start joining Zoom calls and to be able to meet that issue right away. Well, actually there is a much bigger infrastructure that has to sit behind those things in order to be able to enable that digital strategy.

So, there is now an AI movement that will get driven purely from the fact that so much more commerce and business is going to be digitized.

Gardner: Let’s look to some more examples and associated metrics. Where do you see AI and data science really shining? Are there some poster children, if you will, of how organizations — either named or unnamed — are putting AI and data science to use in the pandemic to mitigate the crisis or foster a new normal?

Garg: It’s hard to say how the different types of video analytics and CV techniques are going to facilitate reopening in a safe manner. But that’s what I have heard about the most at this time in terms of customers adopting AI.

In general, we are at very early stages of how organizations are going to decide to adopt AI. And so, for sure, the research community is scrambling to take advantage of this, but for businesses it’s going to take time to adopt AI more broadly. If you do it right, it can be transformational. Yet transformational usually means that a lot of things need to change — not just the solution that you have deployed.

Bowden: There’s a plethora of examples from the medical side, such as how we have been able to do gene analysis, and those sorts of things, to understand the virus very quickly. That’s well-known and well-covered.

The bit that’s less well covered is AI supporting decision-making by governments, councils, and civil bodies. They are taking not only the data on how many people are getting sick and how many people are in hospital, which is very important to understand where the disease is, but augmenting that with socioeconomic data. That means you can understand, for example, where an aging population might live, or where a poor population might live because there’s less employment in that area.

The impact of what will happen to their jobs, what will happen if they lose transport links, and the impact if they lose access to healthcare — all of that is being better understood by the AI models.

As we focus on not just the health data but also the economic data and social data, we have a much better understanding of how society will react, which has been guiding the principles that the governments have been using to respond.

So when people look at the government and say, “Well, they have come out with one thing and now they are changing their minds,” that’s normally a data-driven decision and people aren’t necessarily seeing it that way.

So AI is playing a massive role in getting society to understand the impact of the virus — not just from a medical perspective, but from everything else and to help the people.

Gardner: Glyn, this might be more apparent to the Pointnext organization, but how is AI benefiting the operational services side? Service and support providers have been put under tremendous additional strain and demand, and enterprises are looking for efficiency and adaptability.

Are they pointing the AI focus at their IT systems? How does the data they use for running their own operations come to their aid? Is there an AIOps part to this story? 

AI needs people, processes 

Bowden: Absolutely, and there is definitely a drive toward AIOps now.

When you look at an operational organization within an IT group today, it’s surprising how much of it is still human-based. It’s a personal eyeball looking at a graph and then determining a trend from that graph. Or it’s the gut feeling a storage administrator has when they know their system is getting full and they have an idea in the back of their head that something similar happened seasonally last year. The organization makes decisions that way.

We are therefore seeing systems such as HPE’s InfoSight start to be more prominent in the way people make those decisions. That allows plugging into an ecosystem whereby you can see the trend of your systems over a long time, where you can use AI modeling as well as advanced analytics to understand the behavior of a system over time, and see what the impact of things — like everybody suddenly starting to work remotely — does to the systems from a data perspective. 

So the models, too, need to catch up in that sense. But absolutely, AIOps is desirable. If it’s not there today, it’s certainly something that people are pursuing a lot more aggressively than they were before the pandemic. 

Gardner: As we look to the future, for those organizations that want to be more data-driven and do it quickly, any words of wisdom with 20/20 hindsight? How do you encourage enterprises — and small businesses as well — to better prepare themselves to use AI and data science?

Garg: Whenever I think about an organization adopting AI, it’s not just the AI solution itself but all of the organizational processes — and most importantly the people in an organization and preparing them for the adoption of AI. 

I advise organizations that want to use AI and corporate data-driven decision-making to, first of all, make sure you are solving a really important problem for your organization. Sometimes the goal of adopting AI becomes more important than the goal of solving some kind of problem. So I always encourage any AI initiative to be focused on really high-value efforts. 

Use your AI initiative to do something really valuable to your organization and spend a lot of time thinking about how to make it fit into the way your organization currently works. Make it enhance the day-to-day experience of your employees because, at the end of the day, your people are your most valuable assets. 


Those are important non-technical things that are non-specific to the AI solution itself that organizations should think about if they want the shift to being AI-driven and data-driven to be successful. 

For the AI itself, I suggest using the simplest-possible model, solution, and method of analyzing your data that you can. I cannot tell you the number of times where I have heard an organization come in saying that they want to use a very complex AI technique to solve a problem that if you look at it sideways you realize could be solved with a checklist or a simple spreadsheet. So the other rule of thumb with AI is to keep it as simple as possible. That will prevent you from incurring a lot of overhead. 

Gardner: Glyn, how should organizations prepare to integrate data science and AI into more parts of their overall planning, management, and operations? 

Bowden: You have to have a use case with an outcome in mind. It’s very important that you have a metric to determine whether it’s successful or not, and for the amount of value you add by bringing in AI. Because, as Arti said, a lot of these problems can be solved in multiple ways; AI isn’t the only way and often isn’t the best way. Just because it exists in that domain doesn’t necessarily mean it should be used.

AI isn’t an on/off switch; it’s an iteration. You can start with something small and then build into bigger and bigger components that bring more data to bear on the problem, and then add new features that lead to new functions and outcomes.

The second part is AI isn’t an on/off switch; it’s an iteration. You can start with something small and then build into bigger and bigger components that bring more and more data to bear on the problem, as well as then adding new features that lead to new functions and outcomes.

The other part of it is: AI is part of an ecosystem; it never exists in isolation. You don’t just drop in an AI system on its own and it solves a problem. You have to plug it into other existing systems around the business. It has data sources that feed it so that it can come to some decision.

Unless you think about what happens beyond that — whether it’s visualizing something for a human being who will make a decision, or automating a decision — it could really just be like hiring the smartest person you can find and locking them in a room.

Pandemic’s positive impact

Gardner: I would like to close out our discussion with a riff on the adage of, “You can bring a horse to water but you can’t make them drink.” And that means trust in the data outcomes and people who are thirsty for more analytics and who want to use it.

How can we look with reassurance at the pandemic as having a positive impact on AI in that people want more data-driven analytics and will trust it? How do we encourage the perception to use AI? How is this current environment impacting that? 

Garg: So many people are checking the trackers of how the pandemic is spreading and learning through a lot of major news publications, which are doing a great job of explaining this. They are learning through the tracking to see how stay-in-place orders affect the spread of the disease in their community. You are seeing that already.

We are seeing growth in trust in how analyzing data can help make better decisions. As I mentioned earlier, this leads to a better understanding of the limitations of data and a willingness to engage with that data output as not just black-or-white types of things. 

As Glyn mentioned, it’s an iterative process, understanding how to make sense of data and how to build models to interpret the information that’s locked in the data. And I think we are seeing that.

We are seeing a growing desire to view this as more than some kind of black box that sits in some data center — and I don’t even know where it is — that someone is going to program, and that’s going to give me a result that will affect me. For some people that might be a positive thing, but for other people it might be a scary thing.

People are now much more willing to engage with the complexities of data science. I think that’s generally a positive thing for people wanting to incorporate it in their lives more because it becomes familiar and less other, if you will. 

Gardner: Glyn, perceptions of trust as an accelerant to the use of yet more analytics and more AI?

Bowden: The trust comes from the fact that so many different data sources are out there. So many different organizations have made the data available that there is a consistent view of where the data works and where it doesn’t. And that’s built up the capability of people to accept that not all models work the first time, that experimentation does happen, and it is an iterative approach that gets to the end goal. 

I have worked with customers who, when they saw a first experiment fall flat because it didn’t quite hit the accuracy or targets they were looking for, they ended the experiment. Whereas now I think we are seeing in real time on a massive scale that it’s all about iteration. It doesn’t necessarily work the first time. You need to recalibrate, move on, and do refinement. You bring in new data sources to get the extra value.

What we are seeing throughout this pandemic is that the more expertise and data science you throw at a problem, the better the outcome at the end. It’s not about that first result. It’s about the direction of the results, and the upward trend of success.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


Data science helps hospitals improve patient payments and experiences while boosting revenue

The next BriefingsDirect healthcare finance insights discussion explores new ways of analyzing healthcare revenue trends to improve both patient billing and services.

Stay with us as we explore new approaches to healthcare revenue cycle management and outcomes that give patients more options and providers more revenue clarity.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about the next generation of data-driven patient payments process improvements, we’re joined by Jake Intrator, Managing Consultant for Data and Services at Mastercard, and Julie Gerdeman, CEO of HealthPay24. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Julie, what’s driving healthcare providers to seek new and better ways of analyzing data to better manage patient billing? What’s wrong with the status quo?

Gerdeman: Dana, we are in such an interesting time, particularly in the US, with this being an election time. There is such a high level of visibility — really a spotlight on healthcare. There is a lot of change happening, such as in regulations, that highlights interoperability of data and price transparency for patients.

And there’s ongoing change on the insurance reimbursement side, with payer plans that seem to change and evolve every year. There are also trends changing provider compensation, including value-based care and pay-for-performance.


On the consumer-patient side, there is significant pressure in the market. Statistics show that 62 percent of patients say knowing their out-of-pocket costs in advance will impact their likelihood of pursuing care. So the visibility and transparency of costs — that price expectation — is very, very important and is driving consumerism into healthcare like we have never seen before due to rising costs to patients.

Finally, there is more competition. Where I live in Pennsylvania, I can drive a five-mile radius and access a multitude of different health providers in different systems. That level of competition is unlike anything we have seen before.

Healthcare’s sea change

Gardner: Jake, why is healthcare revenue management difficult? Is it different from other industries? Do they lag in their use of technology? Why is the healthcare industry in the spotlight, as Julie pointed out?

Intrator: The word that Julie used that was really meaningful to me was consumerism. There is a shift across healthcare where patients are responsible for a much larger proportion of their bills than they ever used to be.

And so, as things shift away from hospitals working with payers to receive dollars in an efficient, easy process — now the revenue is coming from patients. That means there needs to be new processes and new solutions to make it a more pleasant experience for patients to be able to pay. We need to enable people to pay when they want to pay, in the ways that they want to pay.


That’s something we have keyed on to, as a payments organization. That’s also what led us to work with HealthPay24. 

Gardner: It’s fascinating. If we are going to a consumer-type model for healthcare, why not take advantage of what consumers have been doing with their other financing, such as getting reports every month on their bills? It seems like there is a great lesson to be learned from what we all do with our credit cards. Julie, is that what’s going to happen?

Consumer in driver’s seat 

Gerdeman: Yes, definitely. It’s interesting that healthcare has been sitting in a time warp. Historically, there remain many manual processes and functions in the health revenue cycle. That’s attributable to a piecemeal approach — different segments of the revenue cycle were tackled at different times, or were reshaped by acquisitions. I read recently that there are still eight billion faxes happening in healthcare.

So that consumer-level experience, as Jake indicated, is where it’s going — and where we need to go even faster.

Technology provides the transparency and interoperability of data. Investment in IT is happening, but it needs to happen even more.

Gardner: Wherever there is waste, inefficiency, or a lack of clarity, there is an opportunity to fix that for all involved. But what are the stakes? How much waste or mismanagement are we talking about? 

Intrator: The one statistic that sticks out to me is that care providers fail to collect as much as 80 percent of balances on older bills. So that’s a pretty substantial amount — and a large opportunity. Julie, do you have more? 

Gerdeman: I actually have a statistic that’s staggering. An estimated $265 billion is wasted on administrative complexity. Another $230 billion to $240 billion is attributed to what’s termed pricing failure, which means price increases that aren’t in line with the current market. The stakes are very high and the opportunity is very large.

We have data that shows more than 50 percent of chief financial officers (CFOs) want better access to data and better dashboards to understand the scope of the problem. As we were talking about consumerism, Mastercard is just phenomenal in understanding consumer behavior. Think about the personalized experiences that organizations like Mastercard provide — or Google, Amazon, Disney, and Netflix. Everything is becoming so personalized in our consumer lives.

But healthcare? We are not there yet. It’s not a personalized experience where providers know in advance what a consumer or patient wants. HealthPay24 and Mastercard are coming together to get us much closer to that. But, truly, it’s a big opportunity.

Intrator: I agree. Payers and providers haven’t figured out how they enable personalized experiences. It’s something that patients are starting to expect from the way they interact with companies like Netflix, Disney, and Mastercard. It’s becoming table-stakes. It’s really exciting that we are partnering to figure out how to bring that to healthcare payers and providers alike.

Gardner: Julie, you mentioned that patients want upfront information about what their procedures are going to cost. They want to know their obligation before they go through a medical event. But oftentimes the providers don’t know in advance what those costs are going to be.

So we have ambiguity. And one of the things that’s always worked great for ambiguity in other industries is to look at the data, extrapolate, and get analytics involved. So, how are data-driven analytics coming to the rescue? How will that help?

Data to the rescue 

Gerdeman: Historical data allows for a forward-looking view. For HealthPay24, for example, we have been involved in patient payments for 20 years. It makes us a pioneer in the space. It gives us 20 years of data, information, and trends that we can look at. To me, data is absolutely critical.

Having come out of the spend management technology industry, I know that in the categories of direct and indirect materials there have long been well-defined goods and services that are priced and purchased accordingly.

But the ambiguity of patient healthcare payments and patient responsibility presents a new challenge. What artificial intelligence (AI) and algorithms provide is the capability to help anticipate and predict. That offers something much more applicable to a patient at a consumer level.

Gardner: Jake, when you have the data you can use it. Are we still at the point of putting the data together? Or are we now already able to deliver those AI- and machine learning (ML)-driven outcomes?

Intrator: Hospitals still don’t feel like they are making the best use of data. They tie that both to not having access to the data and not yet having the talent, resources, and tools to leverage it effectively. This is top of mind for many people in healthcare.

In seeking to help them, there are two places where I divide the use of analytics. The first is ahead of time. By using patient estimator tools, can you understand what somebody might owe? That’s a really tricky question. We are grappling with it at Mastercard.

By working with HealthPay24, we have developed a solution that is ready and working today on the other half of the process. For example, somebody comes to the hospital. They know that they have some amount of patient payment responsibility. What’s the right way for a hospital to interact with that person? What are the payment options that should be available to them? Are they paying upfront? Are they paying over a period of time? What channels are you using to communicate? What options are you giving to them? Answering those questions gets a lot smarter when you incorporate data and analytics. And that’s exactly what we are doing today.

Gardner: Well, we have been dancing around and alluding to the joint-solution. Let’s learn more about what’s going on between HealthPay24 and Mastercard. Tell us about your approach. Are we in a proof of concept (POC) or is this generally available?

Win-win for patients and providers 

Gerdeman: We are currently in a POC phase, working with initial customers on the predictive analytics capability that marries the Mastercard Test and Learn platform with the HealthPay24 platform, and executing what the analytics recommend within our platform.

Jake, go ahead and give an overview of Test and Learn, and then we can talk about how we have come together to do some great work for our customers.

Intrator: Sure. Test and Learn is a platform that Mastercard uses with a large number of partner clients to measure the impact of business decisions. We approach that through in-market experiments. You can do it in a retail context where you are changing prices or you can do it in the healthcare context where you are trying different initiatives to focus on patient payments. 

That’s how we brought it to bear within the HealthPay24 context. We are working together along with their provider partners to understand the tactics that they are using to drive payments. What’s working, what’s working for the right patient, and what’s working at the right time for the right patients? 
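
To make that test-and-control idea concrete, here is a minimal sketch of how the lift from a payment-plan initiative might be measured once an in-market experiment has run. The function and the dollar figures are hypothetical illustrations, not the Test and Learn platform itself, which handles matching, significance testing, and much more.

from statistics import mean

def payment_lift(test_payments, control_payments):
    """Relative lift in average collections for patients who got the new offer."""
    test_avg = mean(test_payments)
    control_avg = mean(control_payments)
    return (test_avg - control_avg) / control_avg

# Hypothetical per-patient amounts collected during a trial: patients offered
# a payment plan versus similar patients who were not.
offered_plan = [180, 240, 60, 300, 210, 150]
no_plan = [120, 90, 0, 200, 110, 80]

print(f"Observed lift in collections: {payment_lift(offered_plan, no_plan):.0%}")

In practice the control group would be matched on patient and account characteristics so that the comparison isolates the effect of the offer rather than differences in the patients themselves.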

Gerdeman: It’s important for the audience to understand that the end-goal is revenue collection and the big opportunity providers have to collect more. The marriage of Test and Learn with HealthPay24 provides the intelligence to allow providers to collect more, but it also offers more options to patients based on that intelligence and creates a better patient experience in the end.

If a particular patient will reliably take a payment plan and make those payments consistently, versus never paying off a large lump-sum bill, the intelligence in the platform will say, “This patient should be offered a payment plan consistently,” and the provider ends up collecting all of the revenue.

That’s what we are super-excited about. The POC is showing greater revenue collection by offering flexibility in the options that patients truly want and need.

Gardner: Let’s unpack this a little bit. So we have HealthPay24 as chocolate and Mastercard’s Test and Learn platform as peanut butter, and we are putting them together to make a whole greater than the sum of the parts. What’s the chocolate? What’s the peanut butter? And what’s the greater whole?

Like peanut butter and chocolate 

Intrator: One of the things that’s made working with HealthPay24 so exciting for us is that they sit in the center of all of the data and the payment flows. They have the capability to directly guide the patient to the best possible experience.

They are hands-on with the patients. They can implement all of these great learnings through our analytics. We can’t do that on our own. We can do the analytics, but we are not the infrastructure that enables what’s happening in the real world.

That’s HealthPay24. They are in the real world. When you have the data flowing back and forth, we can help measure what’s working and come up with new ideas and hypotheses about how to try different payment programs. 

It’s been a really important chocolate and peanut butter combination where you have HealthPay24 interacting with patients and us providing the analytics in the background to inform how that’s happening.

Gerdeman: Jake said it really well. It is a beautiful combination. Years ago, the hot thing was propensity to pay, and, yes, providers still talk about that. The best practice then was to pull a soft or even hard credit check on a patient to determine their propensity to pay and potentially offer financial assistance, even charity, given the needs of the patient.

But this takes it to a whole other level. That’s why the combination is magical. What makes it so different is there doesn’t need to be that old way of thinking. It’s truly proactive through the data we have in working with providers and the unique capabilities of Mastercard Test and Learn. We bring those together and offer proactively the right option for that specific patient-consumer.

It’s super exciting because payment plans are just one example. The platform is phenomenal and the capabilities are broad. The next financial application is discounts.

Through HealthPay24, providers could configure discounts based on their own policies and thresholds. But, if you know that a particular patient will pay the amount when offered the discount through the platform, that should be offered every time. The intelligence gives us the capability to know that, to offer it, and for the provider to collect that discounted amount, which might be more than that amount going to bad debt and never being collected.

Intrator: If you are able to drive behavior with those discounts, is it 10 percent or 20 percent? If you give away an additional 10 percent, how does that change the number of people reacting to it? If you give away more, you had better hope that you are getting more people to pay more quickly.

Those are exactly the sorts of analytical questions we can answer with Test and Learn and with HealthPay24 leading the charge on implementing those solutions. I am really excited to see how this continues to solve more problems going forward.
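
The trade-off Jake raises can be framed as a simple expected-value calculation: a deeper discount collects less per paying patient but may convert more patients into payers. The balances and response rates below are invented purely to illustrate the arithmetic; the real answer comes from measured patient behavior.

def expected_collection(balance, pay_rate, discount):
    """Expected dollars collected per account under a given discount offer."""
    return balance * (1 - discount) * pay_rate

balance = 1_000  # hypothetical average patient balance
scenarios = {
    "no discount": {"discount": 0.00, "pay_rate": 0.35},
    "10% discount": {"discount": 0.10, "pay_rate": 0.55},
    "20% discount": {"discount": 0.20, "pay_rate": 0.60},
}

for name, s in scenarios.items():
    value = expected_collection(balance, s["pay_rate"], s["discount"])
    print(f"{name}: expected collection of ${value:,.0f} per account")

Under these made-up numbers the 10 percent discount wins, because the extra payers outweigh the revenue given away, while the 20 percent discount does not. That is exactly the kind of question the analytics are meant to settle with real data.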

Gardner: It’s interesting because in the state of healthcare now, more and more people, at least in the United States, have larger bills regardless of their coverage. There are more co-pays, more often there are large deductibles, with different deductibles for each member of a family, for example, and varying deductibles depending on the type of procedures. So, it seems like many more people will be facing more out-of-pocket items when it comes to healthcare. This impacts literally tens of millions of people. 

So we have created this new chocolate confection, which is wonderful, but the proof is in the eating. When are patient-consumers going to get more options, not only for discounts, but perhaps for financing? If you would like to spread the payments out, does it work in both ways, both discounts as well as in payment plans with interest over time? 

Flexibility plus privacy

Gerdeman: In HealthPay24, we currently have all of the above — depending on what the provider wants to offer, their patient base, and the needs and demographics. Yes, they can offer payment plans, discounts, and lines of credit. That’s already embedded in the platform. It creates an opportunity for all the different options and the flexibility we talked about. 

Earlier I mentioned personalization, and this gets us much closer to personalization of the financial experience in healthcare. There is so much happening on the clinical side, with great advances around clinical care and how to personalize it. This combination gets us to the personalization of offers and options for patients and payments like we have never seen in the past.

Gardner: Jake, for those listening and reading who may be starting to feel a little concerned about all this information — not just their healthcare, but now their finances — being bandied about among payers, providers, and insurers: Are we going to protect that financial information? How should people feel about this in terms of privacy and comfort level?

Intrator: That is a question and a problem near and dear to Mastercard. We aspire to be a leader in data privacy, and we really do put a lot of work and effort into it, allowing people to have ownership of their data and to feel comfortable. I think that’s something we deeply believe in. It’s been a focus throughout our conversations with HealthPay24 to make sure that we are doing it right on both sides.

Gardner: Now that you have this POC in progress, what have been some of the outcomes? It seems to me that over time, the more data you work with, the more benefits you see, and the more people adopt it, and so on. Where are we now, and do we have some insight into how powerful this is?

Gerdeman: We do. In fact, one example is a 400-bed hospital in the Northeast US that, through the combination of Mastercard Test and Learn and HealthPay24, was able to identify 25,000 unpaid accounts. Just by targeting 5,000 of the 25,000, they were able to identify an incremental $1 million in collections for the hospital.

That is very significant in that they are targeting just the top 5,000 in a conservative approach. They now know that, through this intelligence and by offering the right plans to the right people, they have the capability to collect $1 million more for their bottom line.

Intrator: That certainly captures the big picture and the big story. I can also zoom in on a couple of specific numbers that we saw in the POC. As we tackled it, we wanted to understand a couple of different metrics, such as increases in payments. We saw substantial increases from payment plans: people are paying more than 60 percent more on their bills compared to similar patients who haven’t received payment plans. 

Then we zoomed in a step farther. We wanted to understand the specific types of patients who benefited most from receiving a payment plan, and how that could guide us going forward. We were able to dig in and build a predictive model, and that’s exactly what Julie was talking about: for those top 25,000 accounts, how much we think each is going to pay, and the relative prioritization. Hospitals have limited resources, so how do you make sure that you are focusing most appropriately?
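
The prioritization Jake describes can be pictured as scoring each unpaid account by its expected recoverable dollars and working the top of the list first. The sketch below is a simplified stand-in: the payment probabilities would come from the predictive model, and the account data here is invented.

from dataclasses import dataclass

@dataclass
class Account:
    account_id: str
    balance: float
    pay_probability: float  # in practice, produced by the trained model

    @property
    def expected_recovery(self) -> float:
        return self.balance * self.pay_probability

def prioritize(accounts, top_n):
    """Rank accounts by expected recoverable dollars and keep the top_n."""
    return sorted(accounts, key=lambda a: a.expected_recovery, reverse=True)[:top_n]

# Hypothetical accounts; in the scenario above there would be roughly 25,000,
# with outreach focused on the top 5,000.
accounts = [
    Account("A-001", 2_400, 0.15),
    Account("A-002", 600, 0.70),
    Account("A-003", 1_200, 0.40),
    Account("A-004", 300, 0.90),
]

for acct in prioritize(accounts, top_n=2):
    print(acct.account_id, f"expected ${acct.expected_recovery:,.0f}")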

Gardner: Now that we have gotten through this trial period, does this scale? Is this something you can apply to almost any provider organization? If I am a provider organization, how might I start to take advantage of this? How does this go to market?

Personalized patient experiences 

Gerdeman: It absolutely does scale. It applies across all providers; actually, it applies across many industries as well. For any provider who wants to collect more and wants additional intelligence around patient behavior, patient payments, and collection behavior, it really is a terrific solution. And it scales as we integrate the technologies. I am a huge believer in best-of-breed ecosystems. This technology integrates into the HealthPay24 solution. The recommendations are intelligent and already in the platform for providers.

Gardner: And how about that grassroots demand? Should people start going into their clinics and emergency departments and say, “Hey, I want the plan that I heard about. I want to have financing. I want you to give me all my options.” Should people be advocating for that level of consumerism now when they go into a healthcare environment?

Gerdeman: You know, Dana, they already are. We are at a tipping point in the disruption of healthcare. This kind of grassroots demand of consumerism and a consumer personalized experience — it’s only a matter of time. You mentioned data privacy earlier. There is a very interesting debate happening in healthcare around the balance between sharing data, which is so important for care, billing, and payment, with the protection of privacy. We take all of that very seriously. 

Nonetheless, I feel the demand from providers as well as patients will only get greater.

Gardner: Before we close out, let’s extrapolate from the data we have. How will things be different two or three years from now, when more organizations embrace these processes and platforms?

Intrator: The industry is going to be a lot smarter in a couple of years. The more we learn from these analytics and incorporate it into the decisions that are happening every day, the more it is going to feel like it fits you as a patient. It’s going to improve the patient experience substantially.

Personally, I am really excited to see where it goes. There are going to be new solutions that we haven’t heard about yet. I am closely following everything that goes on.

Gerdeman: This is heading to an experience for patients where from the moment they seek care, they research care, they are known. They are presented with a curated, personalized experience from the clinical aspect of their encounter all the way through the billing and payment. They will be presented with recommendations based on who they are, what they need, and what their expectations are. 

That’s the excitement around AI and ML and how it’s going to be leveraged in the future. I am with Jake. It’s going to look very different in healthcare experiences for consumers over the next few years.

Gardner: And for those interested in learning more about this pilot program, about the Mastercard Test and Learn platform and HealthPay24’s platform, where might they go? Are there any press releases, white papers? What sort of information is available?

Gerdeman: We have a great case study from the POC that we are currently running. We are happy to work with anyone who is interested, just contact us via our website at HealthPay24 or through Mastercard.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HealthPay24.

How modern IT operational services enable self-managing, self-healing, and self-optimizing IT

General digital business transformation and managing the new normal around the COVID-19 pandemic have hugely impacted how businesses and IT operate. Faced with mounting complexity, rapid change, and shrinking budgets, IT operational services must be smarter and more efficient than ever.

The next BriefingsDirect Voice of Innovation podcast examines how Hewlett Packard Enterprise (HPE) Pointnext Services is reinventing the experience of IT support to increasingly rely on automation and analytics to help enable continued customer success as IT enters a new era. 

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to share the HPE Pointnext Services vision for the future of IT operational services are Gerry Nolan, Director of Portfolio Product Management, Operational Service Portfolio, at HPE Pointnext Services, and Ronaldo Pinto, Director of Portfolio Product Management, Operational Service Portfolio, at HPE Pointnext Services. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Gerry, is it fair to say that IT has never had a more integral part of nearly all businesses and that therefore the intelligent support of IT has never been more critical?

Nolan: We’ve never seen a time like this, Dana. Pretty much every aspect of our life has now moved to digital. It was already moving that way. Everyone is spending more hours per day in various collaboration platforms, going through various digital interactions, and we’re seeing that in our business as well.

That applies to whether you are ordering a pizza, booking time at your gym, getting your morning coffee — pretty much your life has changed forever. We see that dramatically impacting the IT space and the customers we deal with.

So, yes, it’s a unique time, we have never seen it before, and we believe things will never be the same again.

Gardner: So, we are reliant on technology for commerce, healthcare, finance, and many of the scientific activities to combat the pandemic, not to mention more remote education and more remote work — basically every facet of our modern life.

Consequently, how enterprise IT uses services and support has entered a new phase, a new era. Please explain why a digital environment requires giving more tools and opportunities to the people delivering the new operational services.

Nolan: The IT landscape is very dynamic. There is an expanding array of technology choices, which brings more complexity. Of course, the move to cloud and edge computing introduces new requirements from an IT operations point of view.

For example, a retail customer that I just met with — they don’t even have a four-walls data center anymore, most of their IT is distributed throughout their retail stores — and another customer, a large telco, is installing edge-related servers on their electricity pylons on the sides of mountains in very remote areas. These types of use-cases need very different operational processes, approaches, and skills. 

Then we got hit with COVID-19, and that brings a whole new set of challenges, with locking down of IT environments, huge increases in remote workforces, all creating problems with network capacity, performance, and security challenges.

As a result, we are seeing customers needing more help than ever while they try and maintain their businesses. At the same time, they need to plan and evolve for the medium- to long-term. They need solutions both for today — to help in this unique lockdown mode — but also to accelerate transformation efforts to move to a digitally enabled customer experience.

Gardner: Ronaldo, this obviously requires more than a traditional helpdesk and telephone support. Where does the operational experience, of even changing the culture around support, kick in? How do we get to a new experience?

Pinto: Dana, many people associate traditional support with telephone support, but today it needs to be much more. As Gerry described, we are moving toward a very distributed, remote, low-touch to no-touch world, and COVID-19, the pandemic, has just accelerated that.

To operate in such an environment, companies depend on an increasing number of tools and technologies. You have more variables today, just to control and maintain your performance. So it’s extremely important to arm the people that provide technical support with the latest artificial intelligence (AI) tools and digital infrastructure so they continue to be effective in the work they do.

Gardner: Gerry, how has the pandemic and emphasis on remote services accelerated people’s willingness to delve into the newer technologies around automation, AI, predictive analytics and AIOps? Are people more open now to all of that?

Nolan: No question, Dana. Consider any great customer experience that you have today — from dealing with your mobile phone provider to, in my case recently, my utility company. The great experiences offer a variety of ways to access the information and the help you may need on a 24-7 basis. Typically, this has involved a whole range of different elements — from a portal or an app, to some central hub — for how you engage. That can include getting a more personalized dashboard of information and help. Those experiences also often have different engagement options, including access to live people who can answer questions and provide guidance to solve issues. That central hub also provides a wealth of helpful, useful information and can be AI-enabled to provide predictive alerts via dashboards.

There are companies that still provide only a single channel, such as, for example, the utility company I had to call yesterday, which kept me on hold for 45 minutes until I hung up. I tried the website, and they had multiple websites. I sent an e-mail; I am still waiting for a response!

The great customer experiences have multiple elements and dimensions to them. They have great people you can talk to. You have multiple ways of getting to those people. They have a great app or website with all sorts of information and help available, personalized to your needs.

That’s the way of the future. Those companies that are successful and have already started on that path are seeing great success. Those that have not are struggling — especially in this climate. Now, not only is there more need to go digital, the pressure on revenue limits the investment dollars available to move in that direction if you haven’t already done so.

So, yes, there’s a multitude of different challenges here we are dealing with.

Gardner: It’s amazing nowadays when you deal, as a customer, with companies, how you can recognize almost instantly the ones that have invested in digital business transformation and are able to do a lot of different things under duress — and those who didn’t. It’s rather stark.

Ronaldo, dealing with these complexities isn’t just a technology issue. Oftentimes it includes a multi-vendor aspect, a large ecosystem of suppliers. Pointing fingers isn’t going to help if you’re in a time-constrained situation, a crisis situation.

How do the new operational experiences include the capability to bring in many vendors and still provide a seamless experience back to the customer?

Seamless collaborations succeed 

Pinto: HPE has historically been a collaborative company. If you look at our customers today, they have best-of-breed environments, and there are many emerging tools that make those environments more efficient. We also work with several startups.

So, it’s extremely important for us to serve our customers by being able to collaborate seamlessly with all of those companies. We have done that in the past and we are expanding the operational capabilities, including tools we have today, to better understand performance, integration between our products, and with third-party products. We can streamline all of that collaboration.

Gardner: And, of course, the complexity extends across hybrid environments, from edge to cloud — multi-cloud, private cloud, hybrid cloud. Is that multi-vendor and multi-architecture mix something that you’re encountering a lot?

Nolan: Today, every customer has a multi-vendor IT landscape. There are various phases of maturity in terms of dealing with legacy environments. But they are dealing with new IT on-premises technologies, they are trying to deploy cloud, or they may be moving to public cloud. There’s a plethora of use cases we see globally with our customers.

And the classic issue, as you point out, is when there’s a problem, the finger-pointing or the blame-game starts. Even triaging and isolating problems in these types of environments can be a challenge, let alone having the expertise to fix the issue. Whether it’s in the hardware, software layer, or on somebody else’s platform, it’s difficult. Most vendors, of course, have different service level agreements (SLAs), different role names, different processes, and different contractual and pricing structures.

So, the whole engagement model, even the vocabulary they use, can be quite different; ourselves included, by the way. So, the more vendors you have to work with, the more dimensions you have to manage.

And then, of course, COVID-19 hits and our customers working with multiple vendors have to rely on how all those vendors are reacting to the current climate. And they’re not all reacting in a consistent fashion. The more vendors you have, the more work and time it’s going to take — and the more cost involved.

We call it the power of one. Our customers see huge value in working with a partner who provides a single point of contact, that single throat to choke or hand to shake, and a single focal point for dealing with issues. You can have a single contract, a single invoice, and a single team to work with. It saves a lot of time and it saves a lot of money.

Organizations already in that position are seeing significant benefits. Our multi-vendor business is growing very, very well. And we see that moving into the future as companies try to focus on their core business, whatever that might be, and let IT take care of itself.

Edge to cloud to data center 

Pinto: To your question, Dana, on hybrid environments, it’s not only hybrid, it’s edge to cloud and to the data center. I can give you two examples.

We have a large department store customer with the technology in each of the many stores. We support not only the edge environments in those stores but all the way through to their data center. There are also hybrid environments for data management where you typically have primary storage, secondary storage, and your archiving strategy. All of that is managed by a multitude of backup and data-movement software.

The customer should not have to worry about each individual component, but rather about a single, end-to-end solution. We help customers abstract that by supporting the end-to-end data environment and collaborating with the third-party software vendors or platform vendors that will inevitably be part of the solution.

Gardner: Gerry, earlier you mentioned your own experience with a utility company. You were expecting a multi-channel opportunity to engage with them. How does the IT operational services as an experience become inclusive of such things? Why does that need to be all-inclusive across the solutions and support approaches?

Have it your way

Nolan: An alternative example that I can give is my bank. I have a couple of different banks that I work with, but one in particular invested early in a digital platform. They didn’t replace their brick and mortar models. They still have lots of branches, lots of high-tech ATMs that allow for all types of self-serve.

But they also have a really cool app and website, which they’ve had for a number of years. They didn’t introduce digital as a way of closing down their branches, they keep all of those options available because different people like to integrate and work with their service providers in different ways, and we see that in IT, too.

The key elements to delivering a successful, AI-enabled experience in the IT space include having lots of expertise and knowledge available across the IT environment, not just on a single set of products.

Of course, a digital platform provides that personalized view. It includes things like dashboards of what’s in my environment, along with ongoing alerts and predictions — maybe capacity is running out or maybe costs are beyond what was forecast. Or maybe I have ways of optimizing my costs, insights around updates to my software or licenses, or some systems might be reaching the end of their support life. All sorts of information should be available to me in a personalized way.

And then in terms of accessing experts, the old model is to get on the phone, like I was yesterday for 45 minutes talking to somebody, and in my case, I wasn’t successful. But customers, in some cases, they like to deal with the experts through a chat window or maybe live on the phone. Others like to watch expert technical tips and technique videos. So, we have developed an extensive video library of experts wherein you can pick and choose and listen to some tips and techniques about how to deal with certain key topics we see that customers are interested in.

Then there are moderated forums. Customers actually like sharing their experiences with each other. Our experts get involved, partners and end-customers mix and match, and you get a very rich dialogue around particular topics, best practices, ideas, or problems that somebody else has already solved.

AI is at the heart of all of this because it’s constantly learning. It’s like a self-propelling mechanism that just gets better over time. The more people come on board, the more knowledge it gains, the more questions they ask, the more answers are provided.

The whole thing just gets better and better over time. It’s key, of course, to have that wide portfolio of help for customers. If they have a strategy, make it work better; if they don’t have a strategy and need help building one, we can help them do that all the way through to designing and implementing those solutions.

And then they can get the ongoing support, which is where Ronaldo and I spend most of our life. But it’s important as a vendor or as a partner to be able to offer customers help across the value chain or across the lifecycle, depending on where they need that help.

Gardner: Ronaldo, let’s dig more deeply into the specifics of the new HPE Pointnext Services operational services approach, modernizing operations for the future of IT. What does it include?

Meet customers’ modernization terms 

Pinto: We are doing all of this modernization with the customer in mind. What is really important for us is not only what we accomplish, but how we accomplish it. At the end of any interaction, the customer needs to feel that their time was used effectively. HPE shows a legitimate concern for the customer’s success and for the customer feeling positive at the end of the interaction. 

Gerry mentioned the AI tools and alerts. We are integrating all of the sensor telemetry we get from products in the field, all the way up to our operational processes in the back end, so that customers can accomplish whatever they need with us on their own terms.

For example, if there’s an alert or a performance degradation in a product, we provide tools to dig deeper and understand why. “Hey, maybe it’s a component in the infrastructure that needs to be updated or replaced?” We are integrating all of what we see into our back-end operational processes so that we can even detect issues before the customer does. Then we notify the customer that an action needs to be performed and, if needed, we dispatch the part replacement.

If the customer needs someone at the site to do the replacement, no problem. The customer can schedule that visit easily in a web interface and we will show up in the window that the customer chooses.

It’s offering the customer, as Gerry mentioned, multiple channels and multiple ways to interact. For customers, it means they may prefer a remote automated web interface or the personal touch of a support engineer, but it should be on the customers’ own terms. 
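
As a rough illustration of the flow Ronaldo describes, the sketch below triages a telemetry alert and raises a dispatch record when the reading is severe enough, leaving the on-site window for the customer to choose. The device names, thresholds, and ticket fields are hypothetical and do not represent an HPE interface.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Alert:
    device_id: str
    metric: str
    value: float
    threshold: float

def triage(alert: Alert) -> str:
    """Classify an alert; a real system would use richer rules or a model."""
    if alert.value >= alert.threshold * 1.2:
        return "critical"
    if alert.value >= alert.threshold:
        return "warning"
    return "info"

def handle_alert(alert: Alert) -> Optional[dict]:
    severity = triage(alert)
    print(f"[{severity}] {alert.device_id} {alert.metric}={alert.value}")
    if severity == "critical":
        # Raise a dispatch record; the customer books the on-site window later.
        return {"device": alert.device_id, "action": "dispatch_part",
                "scheduling": "customer_selects_window"}
    return None

print(handle_alert(Alert("host-17", "fan_speed_rpm", 9800, 8000)))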

Gardner: I have seen in the release information you provide to analysts like myself the concept of a digital customer platform. What do you mean by a digital customer platform when it comes to operational services?

A focused digital platform 

Nolan: It’s all of the things that Ronaldo just mentioned coming together in a single place. Going back to my bank example: they give you a credit card, and you typically have a single place that you go from a digital point of view. It’s either an app and/or a website, and that provides you all of this personalized information that’s honed to your specific needs and your specific use case.

For us, from a digital point of view and from a customization platform, we want to provide a single place regardless of your use case. So, whether you are a warranty level customer or a consumption customer, buying your IT on a pay-as-you-go basis, all of the help you need, all of the information, dashboards, all of the ways of engaging with us as a partner, it’s all through a single portal. That’s what we mean when we say the digital platform, that central place that brings it all to life for you as a customer.

Gardner: Why is the consumption-based approach important? How has that changed the game?

Pinto: It’s the same idea: to provide customers the option to consume and use IT on their own terms. HPE pioneered the hybrid IT consumption model. Behind that is Pointnext, through all the services we provide — whether the customer chooses to consume on an as-a-service basis, consuming an outcome, or in the traditional way, where the customer takes ownership of the underlying infrastructure. We automate the more transactional, repeatable tasks and help the customer focus on innovation and on meeting their business objectives through IT. That is consistent across all the consumption models.

Nolan: What’s important to recognize here is that, as a customer, you want choice, and choice is good. If the only option you have is, for example, a public cloud solution, then guess what? For every problem you as a customer have, that public cloud provider has one toolbox: a public cloud solution. 

I have just been speaking with a large insurance company and they are moving toward a cloud-first strategy, which their board decided they needed. So, everything in their mind needs to move to the cloud. And it’s interesting because they decided the way they are going to partner to get that done is directly with a public cloud vendor. And guess what? Every problem, every workload in that organization is now directed toward moving to public cloud, even where that may not be the best outcome for the customer. To Ronaldo’s point, you want to be assessing all of your workloads and deciding where is the best placement of that workload.

You might want to do that work inside your firewall and on your network because certain work will get done better and more cost-effectively there, and for all sorts of security, network latency, and data regulatory reasons. Having multiple different choices — on-premises, CAPEX, or as-a-service — is important. Your partner should be able to offer all those choices. We at HPE, as Ronaldo said, pioneered the IT as-a-service model. We already have that in place.

Our HPE GreenLake offering allows you to buy and consume all of your IT on a pay-as-you-go basis. We just send you a monthly bill for whatever IT you have used. Everything is included in that bill — your hardware, software, and all of the services, including support. You don’t really need to worry about it.

You care instead about the outcomes. You just want the IT to take care of itself, and you get your bill. Then you can easily align that cost with the revenue coming in. As the revenue goes up, you are using more IT, you pay more; revenue goes down, you are using less IT, you pay less. Fairly simple, but very powerful and very popular.

Gardner: Yes, in the past we have heard so many complaints about unexpected bills, maintenance add-ons, and complex licensing. So, this is something that’s been an ongoing issue for decades. 

Now with COVID-19 and so many people working remotely, can you provide an example of bringing the best minds on the solutions side to wherever a problem is?

Room with a data center view 

Nolan: One that comes to mind sounds like a simplistic use case, but it’s valuable in today’s climate, with the IT lockdown. Inside of HPE, we use multiple collaboration environments. But we own our own collaboration platform, HPE MyRoom.

We launched a feature in that collaboration platform called Visual Remote Guidance, which allows us to collaborate like we are in the customer’s data center virtually. We can turn on the smart device on the customer side, and they can be enabled, through the camera, to actually see the IT situation they are dealing with. 

It might be an installation of some hardware. It could be resolving some technical problem. There are a variety of different use cases we are seeing. Of course, when a system causes a problem and the company has locked-down their entire IT department, they don’t want to see engineers coming in from either HPE or one of our partners.

This solution immediately became very useful in helping customers because we now have thousands of remote experts available in various locations around the world. Now, they can instantly connect with the customer. They can be the hands and eyes working with the customer. Then the customer can perform the action, guided all the way through the process by their remote HPE expert. And that’s using a well-proven digital collaboration platform that we have had for years. By just introducing that one new additional feature, it has helped tremendously. 

Many customers were struggling with installing complex solutions. Because they needed to get it done and yet didn’t want to bring anybody onto their site, we can take advantage of our remote experts and get the work done. Our experts guide them through, step by step, and test the whole thing. It’s proving to be very effective. It’s used extensively now around the world. All of our agents have this on their desktop and they can initiate with any customer, in any conversation. So, it’s pretty powerful.

Gardner: Yes, so you have socialized isolation, but you have intense technology collaboration at the same time. 

Ronaldo, HPE InfoSight and automation have gone a long way to helping organizations get in front of maintenance issues, to be proactive and prescriptive. Can you flesh out any examples of where the combination of automation, AI, AIOps, and HPE InfoSight have come together in a way that helps people get past a problem before it gets out of hand?

Stop problems before they start

Pinto: Yes, absolutely. We are integrating all our telemetry from the sensors in our technology with our back-end operational processes. That is InfoSight, a very powerful AI and machine learning (ML) tool. By collecting from sensors — more than 100 data points from our products every few seconds — and processing all of that data on the back end, we can be informed by trends we see in our installed base and gather knowledge from our product experts and data scientists. 

That allows us to get in front of situations that could result in outages in the environment. For example, a virtual storage volume could be running out of capacity. That could lead to storage devices going offline, bringing down the whole environment. So, we can get ahead of that and fix the problem for the customer before it gets to a business-degradation situation. 

We are expanding the InfoSight capabilities on a daily basis throughout the HPE portfolio. We also should be able to identify, based on the telemetry of the products, what workloads the customer is running and help the customer better utilize the underlying resources in the context of a specific workload. We could even identify an improvement opportunity in the underlying infrastructure to improve the performance of that workload. 
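
The storage-capacity example lends itself to a simple illustration: fit a straight line to recent utilization samples and project when the volume would fill up. The readings below are invented, and InfoSight's actual models are far more sophisticated, but the sketch shows why frequent telemetry makes this kind of early warning possible.

def days_until_full(samples, capacity_pct=100.0):
    """samples: list of (day, used_pct) points; returns projected days remaining."""
    n = len(samples)
    mean_x = sum(d for d, _ in samples) / n
    mean_y = sum(u for _, u in samples) / n
    slope = sum((d - mean_x) * (u - mean_y) for d, u in samples) / \
        sum((d - mean_x) ** 2 for d, _ in samples)
    if slope <= 0:
        return None  # utilization is flat or shrinking; no projected exhaustion
    _, latest_used = samples[-1]
    return (capacity_pct - latest_used) / slope

# Hypothetical daily utilization readings for one volume (day, percent used).
history = [(0, 71.0), (1, 72.2), (2, 73.1), (3, 74.4), (4, 75.2)]
print(f"Projected days until the volume is full: {days_until_full(history):.0f}")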

Gardner: So, it is problem solving as well as a drive for continual IT improvement, refinement, and optimization, which is a lot different than a break-fix mentality. How will the whole concept of operational services shift in your opinion from break-fix to more of optimization and continuous improvement? 

Pinto: I think you just touched on probably the most important point, Dana. Data center technology today is increasingly redundant and resilient, so break-fix is quickly becoming table stakes. 

The metaphor that I use many times is airlines. In the past, security or safety of the airline was something very important. Today it’s basically table stakes. You assume that the airline operates at the highest standards of safety. So, with break-fix it’s the same. HPE is automating all of the break-fix operations to allow customers to focus on what adds the most value to their businesses, which is delivering the business outcomes based on the technology — and further innovating. 

The pace of innovation in this business is unprecedented, both in terms of tools and technologies available to operate your environment as well as time-to-market of the digital applications that are the revenue generators for our customers. 

Gardner: Gerry, anything additional to offer in terms of the vision of an optimization drive rather than a maintenance drive? 

Innovate to your ideal state 

Nolan: Totally. It’s all about trying to stay ahead of the business requirements.

For example, last night Ronaldo and I were speaking with a customer with a global footprint. They happen to be in a pretty good state, but it was interesting talking to them about what a new desired state would look like. We work closely with customers as we innovate and build better service capabilities. We are trying to find out from our customers what their ideal state is, because it’s all about delivering the customer experience that maps to each customer’s use case — and every customer is different. 

I also just met with a casino operator, which at the moment is in a bit of a tough space, but they have been acquiring other casinos and opening new casinos in different parts of the world. Their challenge is completely different than my friend in the insurance industry who was going to cloud-first.

The casino business is all about security, and a lot of regulation. In his case, they were buying companies, so they are also buying all of this IT. They need help controlling it. They are in the casino business, they are not really in the business of IT, but IT is still critical to their success. And now they are in a pandemic-driven shutdown, so they are trying to figure out how to manage and understand all of the IT they have.

Others in this social isolation climate need to keep the business running. Now, as they are starting to open up, they need help with all sorts of issues, such as how to monitor customers coming into their facilities. How do they keep staff safe in terms of making sure they stay six feet apart? And HPE has a wealth of new offerings in that space. We can help customers deal with opening up and getting back to work. 

Whether you are operating an old environment, a new environment, or are in a post COVID-19 journey — trying to get through this pandemic situation, which is going to take years — there are all sorts of different aspects you need to consider as an organization. Trying to paint your new vision for what an ideal IT experience feels like — and then finding partners like HPE who can help deliver that — is really the way forward. 

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

Work in a COVID-19 world: Back to the office won’t mean back to normal

Businesses around the globe now face uncharted waters when it comes to planning the new normal for where and how their employees return to work.

The complex maze of risk factors, precautions, constantly changing pandemic impacts — and the need for boosting employee satisfaction and productivity — are proving a daunting challenge.

The next BriefingsDirect new normal of work discussion explores how companies can make better decisions and develop adept policies on where and how to work for safety, peace of mind, and economic recovery.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To share their recent findings and chart new ways to think about working through a pandemic, we’re joined by Donna Kimmel, Executive Vice President and Chief People Officer at Citrix, and Tony Gomes, Executive Vice President and Chief Legal Officer at Citrix. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Donna, the back-to-work plans are some of the most difficult decisions businesses and workers have faced. Workers are not only concerned for themselves; they are worried about the impacts on their families and communities. Businesses, of course, are facing unprecedented change in how they manage their people and processes.

So even though there are few precedents — and we’re really only in the first few months of the COVID-19 pandemic — how has Citrix begun to develop guidelines for you and your customers for an acceptable return to work?

Move forward with values

Kimmel: It really starts with a foundation that’s incredibly important to Tony, me, and our leadership team. It starts with our values and culture. Who are we? How do we operate? What’s important to us? Because that enables us to frame the responses to everything we do. As you indicated, this is a daunting task — and it’s a humbling task.

When we focus on our culture and our values — putting our people, their health, and their safety first — that enables us to focus on business continuity and ultimately our customers and communities. Without that focus on values we wouldn’t be able to make as easily the decisions we’re making. We also realized as a part of this that there are no easy answers and no one-size-fits-all solutions.

We recognized that the framework we utilize around the world has to be adapted based on the various sites, locations, and business needs across our organization. And, as we’ve acknowledged in the past, we also realized that this really means it “takes a village.”

The number of employees partnering with us across multiple disciplines in the organization is tremendous. We have partnership not only from legal and human resources (HR), but also our finance organization, IT, real estate and facilities, our global risk team, procurement, communications, our travel organization, and all of our functional leaders and site leaders. It’s a tremendous effort to put this together, to create what we believe is the right thing to do — in terms of managing the health and safety of our employees — as we bring them back into the workforce.

Gomes: Yes, we’ve tapped the entire Citrix global organization for talent. And we have found two things. One, when it comes to going through this, be open to innovation and answers from all parts of the organization.

For example, our sites in the Asia-Pacific region that have been dealing the longest with COVID-19 and are in the process of returning to the office, they have been innovative. They are teaching the rest of our site leaders the best ways to go about reopening their sites. So even as the corporate leaders are here in the US, we’re learning an awful lot from our colleagues on the ground in the Asia-Pacific region.

Two, be aware that this process is going to call upon you to have skills on your team that you may not have had before. That means experts on business continuity, for example, but also medical experts.

One of the decisions Donna and I made early on is that we needed to bring medical expertise to our team. Donna, through her relationships and her team, along with a top-notch benefits consultant, found great medical resources and expertise for us to rely on. That’s an example of calling upon new talents, and it’s causing us to look for innovation in every corner of the organization.

Gardner: Citrix has conducted some recent studies that help us understand where the employees are coming from. Tell us about the state of their thinking right now.

Get comfortable to get to work

Kimmel: Citrix did a study, polling about 2,000 US workers. We found that at least 67 percent of the respondents did not feel comfortable returning to the office for at least one month.

And in examining the sentiment of what it would take for them to feel comfortable coming back into the office, some 51 percent indicated that there has to be testing and screening. Another 46 percent prefer to wait until a [novel coronavirus] vaccine is ready. And 82 percent were looking for some kind of contact-tracing to make sure that we could at least connect with other individuals if there was an issue.

This was an external study, but as we talk with our own employees — through our own surveys, roundtable discussions, group dialogues, and feedback we get from our managers — we are finding similar results. Many of our employees, though they would like to be able to come back to the office, recognize that coming back immediately, post-COVID-19, is not going to be to the same office that they left. We recognize that we need to make sure we’re creating a safe environment, one conducive for them to be productive in the office.

Gardner: Tony, what jumped out to you as interesting and telling in these recent findings?

Gomes: Donna hit on it, which is how closely the results of this external study align with our own experience: what we’re listening for and hearing from our global workforce, and what our own internal surveys are telling us.

We’ve been taking that feedback and building that into the way we’re approaching the reopening decision-making process.

For example, we know that employees are concerned about whether the cities, states, and countries they live and work in have adequate testing. Is there adequate contact-tracing? Are the medical facilities capable of supporting COVID-19 patients in a non-crisis mode?

So we built all of that into our decision-making. Every time we analyze whether an office or campus is ready for a phased reopening approach, we first look at those factors, along with whether governments have lifted their lockdown orders.

We’re trying to be clear, communicating with employees that, “Hey, we are looking at all of this.” In that way it becomes a feedback loop. We hear the concern. We build the concern into our processes. We communicate back to the employees that our decisions are being made on the basis of what they express to us and are concerned about.

But it’s really amazing to see the alignment of the external study and what we’re hearing internally.

Kimmel: Tony is right on about understanding the concerns of our employees. They want to have a sense of confidence that the setup of the office will be appropriate for them.

We’re also trying to provide choice to our employees. Even as we’ll be looking at the critical roles that need to come back, we want to make sure that employees have the opportunity to self-select in terms of understanding what it will be like to work in the office in that environment.

We also know that employees have specific concerns: Maybe they have their own health concerns, or family members living with them have health issues that put them at greater risk, or society isn’t back to normal functioning, so at-home caregiving is still an issue. Parents just came through homeschooling, but they may still need to arrange summer day camps or provide other support such as elder care.

We also recognize that some people are just nervous and don’t feel comfortable. So we’re trying to put our employees’ minds at ease by providing them a good look at what it will be like — and feel like — to come back to the office. They should know the safety and security that we’re putting into place on their behalf, but still also providing them with a feeling of comfort to make a decision on what they think is right based on their own circumstances.

Gardner: It strikes me that organizations, while planning, need to also stay agile, to be flexible, and perhaps recognize that being able to react is more important than coming up with the final answer quickly. Is that your understanding as well, that organizations need to come up with new ways of being able to adapt rapidly and do whatever the changing circumstances require?

Cross-train your functionality 

Gomes: Absolutely, Dana. What Donna and I have tried to do is build a strong cross-functional team that has a lot of capacity across all of the functional areas. Then we try to create decision-making frameworks from the top down.

We then set some basic planning assumptions, or answer some of the big questions, especially in terms of the level of care that we’re going to provide to each employee across the globe. Those include areas such as social distancing, personal protective equipment (PPE), things like that, that we’re going to make sure that every employee has across the globe. 

But then it’s a different decision based on how that gets implemented at each and every site: when, where, and who leads. Who has a bigger or smaller team, and how do they influence or control the process? How much support comes from corporate headquarters versus what is taken on through local initiatives?

Those are very different from site-to-site, along with the conditions they are responding to. The public health conditions are dynamic and different in every location — and they are constantly changing. And that’s where you need to give your teams the ability to respond and foster that active-response capacity.

Kimmel: We’ve worked really hard to make sure that we’re making faster, timely decisions, but we also recognize that we may not have all the information. We’ve done a lot of digging, a lot of research, and have great information. We’re very transparent with our employees in terms of where we are, what information we have at the time that we’re making the decisions, and we recognize that because it’s moving so quickly we may have to adapt those decisions.

As Tony indicated, that can be based on a site, a region, a country, or medical circumstances and new medical information. So, again, it goes back to our ability to live our values and what’s important to us. That includes transparency of decisions, of bringing employees along on the journey so that they understand how and why we’ve arrived at those decisions. And then when we need to shift them, they will understand why we’ve made a shift.

One of the positive byproducts or outcomes of this situation is being able to pivot to make good and fast decisions and being transparent about where and why we need to make them so that we can continue to pivot if necessary.

Gardner: Of course, some of those big decisions initially meant having more people than ever working remotely and from their homes. A lot of business executives weren’t always on board with that. Now that we’ve gone through it, what have we learned?

Are people able to get their work done? They seem to be cautious about wanting to come back without the proper precautions in place. But even if we continue to work remotely, the work seems to be getting done.

Donna, what’s your impression about letting people continue to work at home? Has that been okay at Citrix?

Work from home, the office, or hybrid 

Kimmel: Tony and I and the rest of the leadership team certainly recognized as we were all thrust into this that we would be 100 percent work-from-home (WFH). We all realized and learned very quickly that there were very few, if any, roles that were so critical that they had to be done in the office.

Because remote work is part of the Citrix brand, we were able to enable employees to work securely and access their information from anywhere, anytime. We recognized, all of a sudden, that we were capable of doing that in more areas than we had recognized.

We’re now able to say, “Okay, what might be the new normal beyond this?” We recognize that there will be re-integration back into our worksites done in the current COVID-19 environment.

But beyond COVID, post-vaccines, as we think about our business continuity going forward, I do think that we will be moving, very purposefully, into a more hybrid work arrangement. That means new, innovative, in-office opportunities, because we still want people to be working face-to-face and to have those in-person collisions, as we call them. Those are hard, if not impossible, to replicate over videoconferencing.

But there can be a new balance between in-office and remote work, and a fine-tuning of our own practices, that will enable us to be as effective as possible in both environments.

So, no doubt, we have already started to undertake that as a post-COVID approach. We are asking what it will look like for us, and then how do we then make sure from a philosophical and a strategy perspective that the right practices are put into place to enable it.

This has been a big, forced experiment. We looked at it and said, “Wow, we did it. We’ve done really well. We’ve been very fortunate.”

Home is where the productivity is

Gomes: Donna’s team has designed some great surveys with great participation across the global workforce. It’s revealed that a very high percentage of our employees feel as productive — if not even more productive — working from home rather than working from the office.

And the thing is, when you peel back the onion and you look at specific teams and specific locations, and what they can accomplish through this, it’s just really amazing.

For example, Donna and I, earlier this morning, were on a videoconference with our site leadership team in Bangalore, India where we have our second-largest office, which has quite a few functions. That campus represents all of the Citrix functions, spread across a number of buildings. We were looking at detailed information about the productivity of our product engineering teams over their last agile planning interval, their continuous integration interval, and how they are planning for their next interval.

We looked at information coming from our order processing team in Bangalore and also from our support team. And what we saw is increased productivity across those teams. We’re looking at not just anecdotal information, but objective data showing more code check-ins, fewer bugs, and more on-time delivery of new functionality within the interval that we had just completed.

We are just tremendously proud of what our teams are accomplishing during this time of global, personal, family, and societal stress. But there is something more here. Donna has put her finger on it, which is there is a way to drive increased productivity by creating these environments where more people can work from home.

There are challenges, and Donna’s team is especially focused on the challenges of remote management. How do you replace the casual interactions that can lead to innovation and creative thinking? How do you deal with team members or teams that rely on more in-person interaction for their team dynamic?

So there are challenges we need to address. But we have also uncovered something I think here that’s pretty powerful — and we are seeing it, not just anecdotally, but through actual data. 

Gardner: As we query more companies about their productivity over the past few months, we will probably see more instances where working at home has actually been a benefit. 

I know from the employee perspective that many people feel that they save money and time by not commuting. They are not paying for transportation. They have more of a work balance with their lives. They have more control, in a sense, over their lives.

The technology has been there for some time, Donna, to allow this. It was really a cultural hurdle we had to overcome, and the pandemic pushed us over it. Not that a pandemic is a good thing, but the results allow us to test models that now show how technology and processes can allow for higher productivity when working from home.

Will what you are experiencing at Citrix follow through to other companies? 

Kimmel: Oh, yes, definitely. I have been on a number of calls with my peers at other companies. Everyone is talking about what’s next and how they design this into their organizations.

We recognize all of the benefits, Dana, that you just indicated. We recognize that those benefits are things that we want to be able to capture. New employees coming into the workforce, the Gen Zs and the Millennials, are looking for flexibility to be able to balance that work and life and integrate it in a more productive way for themselves. Part of that is a bit of a push in terms of what we are hearing from employees. 

It also enables us to tap into new talent pools: folks who may not live near a particular office but have tremendous skills to offer, and those with varying disabilities who may not be able to commute or don’t live near offices. There are a number of ways for us to tap into more workers who have the skills that we are looking for but don’t actually live near offices. So again, all of that I think is quite helpful to us.

Legal lessons for employers

Gardner: Tony, what are some of the legal implications if we have a voluntary return to work? What does that mean for companies? Are there issues about not being able to force people, or not being able to fire them, or flexibly manage them? 

Gomes: One of the things that we have seen, Dana, during this pandemic, is significant change in employee relations laws around the globe. This is not just in the United States, but everywhere. Governments are trying to protect employees, preserve jobs, and provide guidance to employers to clarify how existing legal requirements apply in this pandemic.

For example, here in the United States both the Occupational Safety and Health Administration (OSHA) and the Equal Employment Opportunity Commission (EEOC) have put out guidelines that address things such as PPE. What criteria do employers need to meet when they are providing PPE to employees? How do you work within the Americans with Disabilities Act (ADA) requirements when offering employees the ability to come back to the office? How do you permit them to opt out without calling them out, without highlighting that they may have an underlying medical condition that you as an employer are obligated to maintain as confidential and allow the employee to keep confidential?

Another big area that impacts the employer-employee relationship, that is changing in this environment, is privacy laws – especially laws and regulatory requirements that impact the way that employers request, manage, and store personal health information about employees. 

Just recently a new bill was introduced in the US Congress to try to address these issues, provide employees greater protection, and provide employers more certainty, especially in areas such as the digital processing and storage of personal health information through things such as contact-tracing apps.

Gardner: Donna, we have only been at this for a few months, adjusting to this new world, this new normal. What have we learned about what works and what doesn’t work?

Is there anything that jumps out to you that says this is a definite thing you want to do, or something you should probably avoid, when it comes to managing work-life balance in the new normal?

Place trust in the new normal 

Kimmel: One, we learned that this can be done. That shifts the mental models some of us came in with, that for any employee engagement you would prefer to have face-to-face-only interactions. And so this taught us something.

It also helped us build trust in each other, and trust in leadership, because we continue to make decisions based on our values. We have been very transparent with employees, with phenomenal amounts of communication we put out there — two-way, with high empathy, and building better relationships. That also means better collaboration and relationship-building, not only between team members, but between managers and employees. It has been a really strong outcome.

And again, that’s part of the empathy, the opportunity for empathy, as you learn more about each other’s families. You are meeting them as they run by on the video. You are hearing about the struggles that people face. And so managers, employees, and team members are working with each other to help mitigate those as much as possible. 

Those are some big aspects of what we have learned. And, as I mentioned earlier, we have benefitted from our ability to make decisions faster, acknowledging various risks, and using the detailed information such as what Tony’s team brings to the table to help us make good decisions at any given time. Those are some of the benefits and positive outcomes we have seen. 

The challenges are when we go into the post-COVID-19 phase, we recognize that children may be back to school. Caregiving resources may be in place, so we may not be dealing with as many of those challenges. But we recognize there is sometimes still isolation and loneliness that can arise from working remotely.

People are human. We are creatures who want to be near each other and with each other. So we still need to find that balance to make sure everyone feels like they are included, involved, and contributing to the success of the organization. We must increase and improve our managers’ ability to lead productively in this environment. I think that is also really important.

And we must look for ways to drive collaboration, not only when people come back into the office, which we know how to do well, but also while we are apart. That means having the right technology tools, from white-boarding techniques onward, that enable us to collaborate even more from a WFH and remote perspective.

So it will be about the fine-tuning of enabling success, stronger success, more impactful success in that environment.

Gardner: Tony, what do you see as things that are working and maybe some things that are not that people should be thinking about? 

Level-up by listening to people 

Gomes: One of the things that’s really working is a high level of productivity that we are seeing — unexpectedly high — even though about 98 percent of our company has been working from home for eight weeks-plus. So that’s one. 

The other thing that is really working is our approach to investing in our employees and listening to our employees. I mean this very tangibly, whether it’s the stipend we provide employees to go out and buy the equipment they need to work from home more comfortably and productively, or the support for charities, organizations, and small businesses. It is truly tangible investment in employees, and it is integrated, multichannel listening: taking the feedback from employees, processing it, putting it into your processes, and feeding it back to them. That’s really worked.

And again, the proof is in the high level of productivity and the very high level of satisfaction despite the very challenging environment. Donna mentioned some of the challenges. One of the bigger ones we see right now is obviously for employees who have childcare and other family care responsibilities in the middle of this pandemic while trying to work, and who many times are even more productive than they ever have been for us when working in the office.

So again, it’s nice to say we invest in our employees and we expect our employees to reciprocate, but we are actually seeing this in action. We have made very tangible investments and we see it coming back to us. 

Mind and body together win the race 

On the other hand, we have to be really careful about a couple of things. One, this is a long-term game, an ultramarathon, where we are only in the first quarter, if you will. It feels like we should be down at the two-minute warning, but we are really in the first quarter of this game. We have a long way to go before we get to viable therapeutics and viable, widely available effective vaccines that will allow us to truly come back to the work and social life we had before this crisis. So we have got to be prepared mentally to run this ultramarathon, and we have to help and coach our teams to have that mindset.

As Donna alluded to, this is also going to be a challenge in mental health. This is going to be very difficult because of its length, severity, and multifaceted impact — not just on employees but across society. So being supportive and empathetic to the mental health challenges many of us are going to face is going to be very important.

View this as a long-term challenge and pay attention to the mental health of your employees and teams as much as you are paying attention to their physical health. 

Kimmel: It’s been incredibly important for us to focus on mental health for our employees. We have tried to pull together as many resources as possible, not only for our employees but for our managers who tend to be in the squeeze point, because they themselves may be experiencing some of these same issues and pressures. 

And then they also carry that caring sense of responsibility for their employees, which adds to the pressure. So, for us, paying attention to that and making sure we have the right resources is really important to our strategy. I can’t agree more, this is absolutely a marathon, not a sprint. 

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Citrix.

How HPE Pointnext Services ushers businesses to the new normal via an inclusive nine-step plan

The next edition of the BriefingsDirect Voice of Innovation podcast series explores a new model of nine steps that IT organizations can take amid the COVID-19 pandemic to attain a new business normal.

As enterprises develop an IT response to the novel coronavirus challenge, they face both immediate and longer-term crisis management challenges. There are many benefits to simultaneously steadying the business amid unprecedented disruption — and readying the company to succeed in a changed world.

Join us as we examine a Hewlett Packard Enterprise (HPE) Pointnext Services nine-step plan designed to navigate the immediate crisis and — in parallel — plan for your organization’s optimum future.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

Here to share the Pointnext model and its positive impact on your business’ ongoing health are Rohit Dixit, Senior Vice President and General Manager, Worldwide Advisory and Professional Services at HPE Pointnext Services, and Craig Partridge, Senior Director, Worldwide Advisory and Transformation Practice, HPE Pointnext Services. The timely discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Rohit, as you were crafting your nine-step model, what was the inspiration? How did this come about?

Dixit: We had been working, obviously, on engaging with our customers as the new situation was evolving, with conversations about how they should react. We saw a lot of different customers and clients engaging in very different ways. Some showed best practices; others did not.

We heard these conversations and observed how people were reacting. We compared that to our experiences managing large IT organizations and with working with many customers in the past. We then put all of those learnings together and collated them into this nine-step model.

It comes a bit out of our past experience, but with a lot of input and conversations with customers now, and then structuring all of that into a set of best practices.

Gardner: Of course, at Pointnext Services you are used to managing risk, thinking a lot about security incident management, for example. How is reacting to the pandemic different? Is this a different type of risk?

Dixit: Oh, it’s a very different kind of risk, for sure, Dana. It’s hitting businesses from so many different directions. Usually the risk is either a cyber threat, for example, or a discontinuity, or some kind of disruption you are dealing with. This one is coming at us from many, many different directions at the same time.

Then, on top of that, customers are seeing cybersecurity issues pop up. Cyber-attacks have actually increased. So yeah, it’s affecting everybody — from end-users all the way to the core of the business and to the supply chain. It’s definitely multi-dimensional.

Gardner: You are in a unique position, working with so many different clients. You can observe what’s working and what’s not working and then apply that back rather quickly. How is that going? Are you able to turn around rapidly from what you are learning in the field and apply that to these steps?

Dixit: Dana, by using the nine steps as a guide, we have focused immediately on what we call the triage step. We can understand the most important things we should be doing right now for the safety of employees, how we can contribute back to the community, and how to keep the most essential business operations running.

That’s been the primary area of focus. But now as that triage step stabilizes a little bit, what we are seeing is the customers trying to think, if not long-term, at least medium-term. What does this lead to? What are the next steps? Those are the two conversations we are having with our customers — and within ourselves as well, because obviously we are as impacted as everybody else is. Working through that in a step-by-step manner is the basis of the nine steps for the new normal model.

Gardner: Craig, I imagine that as these enterprises and IT departments are grappling with the momentary crisis, they might tend to lose that long-term view. How do you help them look at both the big picture in the long term as well as focus on today’s issues?

Partridge: I want to pick up on something that Rohit alluded to. We have never seen this kind of disruption before. And you asked why this is different. Although HPE has learned a lot from helping customers manage things like their security posture and cyber threats, you have to understand that for most customers those are issues for their organization alone. It’s about their ability to maintain a security posture, what’s vulnerable in that conversation, and the risks they are mitigating for the impact that is directly associated with their organization.

What we have never seen before is the global economy being put on pause. So it’s not just the effect on how an individual organization continues to be able to transact and protect revenue, protect core services, and continue to be able to be viable. It’s all of their ecosystem, it’s their entire supply chain, and it’s the global economy that’s being put on hold here.

When Rohit talks to these different dimensions, this is absolutely different. So we might have learned methods, have pragmatic ways to get through the forest fire now, and have ways to think about the future. But this is on a completely different scale. That’s the challenge customers are having right now and that’s why we are trying to help them out.

Gardner: Rohit, you have taken your nine steps and you have put them into two buckets, a two-mode approach. Why was that required and the right way to go?

One step at a time, now to the future 

Dixit: The model consists of nine steps across two modes: the first is immediate crisis management, and the second is bridging to the new normal.

In the first mode, immediate crisis management, you do the triage that we were talking about. You adjust your operations to the most critical, life-sustaining kinds of activities. When you are in that mode, you stabilize and then finally you sustain on an ongoing basis.

And then the second mode is the bridge to the new normal. Here you are adjusting in parallel to what you are observing in the world around you, but you also start to align to a point of view with the business. Within IT, it means using that observation and that alignment to design a new point of view about the future, about the business, and where it’s going. You ask, how should IT be supporting the new direction of the business?

Next comes a transformation to that new end-state and then optimizing that end-state. Honestly, in many ways, that means preparing for whatever the next shock is going to be because at some point there will be another disruption on the horizon.

So that’s how we divided up the model. The two modes are critical for a couple of reasons. First, you can’t take a long-term approach while a crisis unfolds. You need to keep your employees safe, keep the most critical functions going, and that’s priority number one.

The governance you put around the crisis management processes, and the teams you put there, have to be very different. They are focused on the here and the now.

In parallel, though, you can’t live in crisis-mode forever. You have to start thinking about getting to the new normal. If you wait for the crisis to completely pass before you do that, you will miss the learnings that come out of all of this, and the speed and expediency you need to get to the new normal — and to adapt to a world that has changed.

That’s why we talk about the two-mode approach, which deals with the here and the now — but at the same time prepares you for the mid- to long term as well.
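
To make the structure of the model easier to reference, here is a minimal sketch of the two modes and their steps as a simple data structure. It reflects only the steps as they are described in this conversation; the exact step names, especially for the bridge-to-the-new-normal mode, are paraphrased assumptions rather than an official HPE Pointnext artifact.

```python
# A minimal sketch of the two-mode, nine-step model as described above.
# Step names (especially for mode 2) are paraphrased from the conversation
# and are assumptions, not an official HPE Pointnext definition.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Mode:
    name: str
    governance: str            # how tightly the work in this mode is governed
    steps: List[str] = field(default_factory=list)

crisis_management = Mode(
    name="Immediate crisis management",
    governance="light touch; small autonomous teams; speed over perfection",
    steps=["Triage", "Adjust", "Stabilize", "Sustain"],
)

bridge_to_new_normal = Mode(
    name="Bridge to the new normal",
    governance="coordinated, cross-functional, traditional transformation cadence",
    steps=["Observe", "Align", "Design", "Transform", "Optimize"],
)

two_mode_model = [crisis_management, bridge_to_new_normal]

if __name__ == "__main__":
    for mode in two_mode_model:
        print(f"{mode.name} ({mode.governance}):")
        for number, step in enumerate(mode.steps, start=1):
            print(f"  step {number}: {step}")
```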

Gardner: Craig, when you are in the heat of firefighting you can lose track of governance, management, planning architecture, and the methodologies. How are your clients dealing with keeping this managed even though you are in an intense moment?  How does that relate to what we refer to as minimum viable operations? How do we keep at minimum-viable and govern at the same time?

Security and speed needed 

Partridge: That’s a really key point, isn’t it? We are trained for a technology-driven operating model, to be very secure, safe, and predictable. And we manage change very carefully — even when we are doing things at an extreme pace, we are doing it in a very predictable way.

What this nine-step model introduces is that when you start running to the fire of immediate crisis management, you have to loosen the governance model because you need extreme speed in your response. So you need small teams that can act autonomously, with a light governance model, to go to those particular fires and make very quick decisions.

And so, you are going to make some wrong decisions — and that’s okay because speed trumps perfection in this mode. But it doesn’t take away from that second team coming onstream and looking at the longer term. That’s the more traditional cadence of what we do as technologists and strategists. It’s just that now, looking forward, it’s a future landscape that is a radically different one.

And so ideas that might have been on hold or may not have been core to the value proposition before suddenly spring up as ideas that you can start to imagine your future being based around.

Those things are key in the model, the idea of two modes and two speeds. In crisis mode, don’t worry about getting everything right; it’s more about protecting critical systems and being able to continue to transact. But in the future, start looking at the opportunities that may not have been available to you in the past.

Gardner: How about being able to maintain a culture of innovation and creativity? We have seen in past crises some of the great inventions of technology and science. People when placed in a moment of need actually dig down deep in their minds and come up with some very creative and new thinking. How do we foster that level of innovation while also maintaining governance and the capability to react quickly?

Creativity on the rise 

Partridge: I couldn’t agree more. As an industry and as individuals, we are typically very creative. Certainly technologists are very creative people in the application of technologies, of different use cases, and business outcomes. That creativity doesn’t go away. I love the phrase, “Necessity is the mother of invention,” the idea that in a crisis those are the moments when you are most innovative, you are most creative, and people are coming to the fore.

For many of our customers, the ideas on how to respond — not just tactically, but strategically to these big disruptive moments — might already be on the table. People are already in the organization with the notion of how to create value in the new normal.

These moments bring those people to the surface, don’t they? They make champions out of innovators. Maybe they didn’t have the right moment in time or the right space to be that creative in the past.

Or maybe it’s a permission thing for many customers. They just didn’t have the permission. What’s key in these big, disruptive events is to create an environment where innovation is fostered and there are no sacred cows, so that the people who had ideas in the past but were told, “Well, that will never work; it’s not core to the business model, it’s not core to driving innovation and productivity,” get the space to come to the fore with those ideas. Create those kinds of new governance models.

Dixit: I would actually say that this is a great opportunity, right? Discontinuities in how we work create great cracks through which big innovations can be driven.

The phrase that I like to use is, “Never waste a crisis,” because a crisis creates discontinuities and opportunities. It’s a mindset thing. If we go through this crisis playing defense – and just trying to maintain what we already have, tweak it a little bit – that will be very unfortunate.

This goes back to Craig’s point about sacred cows. We had a conversation with a customer who was talking about their hybrid IT mix, what apps and what workloads should run where. They had reached an uneasy balance between risk and innovation. Their mix settled at a certain point of public, private, on-premises, and consumption-based sources.

But now they are finding that, because the environment has changed so much, they can revisit that mix from scratch. They have learned new things, and they want to bring more things on-premises. Or, they have learned something new and they decided to place some data in the cloud or use new Internet of things (IoT) and new artificial intelligence (AI) models.

The point is we shouldn’t approach this in just a defensive mode. We should approach it in an innovative mode, in a great-opportunity-being-presented-to-us-mode, because that’s exactly what it is.

Nine steps, two modes, one plan 

Gardner: And getting back to how this came about, the nine steps plan, Rohit, were you thinking of a specific industry or segment? Were you thinking public sector, private sector? Do these nine steps apply equally to everyone?

Dixit: That’s a good question, Dana. When we drew up the nine steps model, we drew from multiple industries. I think the model is applicable across all industries and across all segments — large enterprise and small- to medium-sized businesses (SMBs) as well.

The way it gets applied might be slightly different because for an enterprise their focus is more on the transaction, the monetary, and keeping revenue streams going in addition to, of course, the safety of their employees and communities.

But the public sector, they approach it very differently. They have national priorities, and citizen welfare is much more important. By the way, availability of cash, for example, might be different based on an SMB versus enterprise versus public sector.

But the applicability is across all, it’s just the way you apply the steps and how you bridge to the new normal. For example, what you would prioritize in the triage mode might be different for an industry or segment, but the applicability is very broad.

Partridge: I completely agree about the universal applicability of the nine steps model. For many industries, cash is going to be a big constraint right now. Just surviving through the next few months — to continue to transact and exchange value — is going to be the hard yards.

There are some industries where, at the moment, they are probably going to get some significant investment. Think about the public sector — education, healthcare, and areas where critical national infrastructure is being stressed, like the telecom providers delivering communication services, because everybody is relying on that much more.

So it’s not just that the nine-step model is universally applicable. Some industries are absolutely going to have the capability to invest because suddenly what they do is priority number one: sustaining citizen welfare and health services, and allowing us to communicate and collaborate across the great distances we now work across.

So, I think it’s universally applicable and I think there is a story in each of the sectors which is probably a little bit different than others that we should consider.

Stay on track, prioritize safety 

Gardner: Craig, you mentioned earlier that mistakes will be made and that it’s okay. It’s part of the process when you are dealing in a crisis management environment. But are there key priorities that should influence and drive the decision-making — what keeps people on track?

Partridge: That’s a really good question, Dana. How do we prioritize some of the triage and adjust steps during the early phases of the crisis management mode of the model? A number of things have emerged that are universally applicable in those moments. And it starts, of course, with the safety of your people. And by your people, I mean not just your employees and, of course, your customers, but also the people you interact with. In the government sector, it’s the citizens that you look after, and their welfare.

From inside of HPE, everything has been geared around the safety and welfare of the people and how we must protect that. That has to be number one in how you prioritize.

The second area you talked about before, the minimum viable operating model. So it’s about aligning the decisions you make in order to sustain the capability to continue to be productive in whichever way you can.

You’re focusing on things that create immediate revenue or immediate revenue-generating operations, anything that goes into continuing to get cash into the organization. Continuing to drive revenue is going to be really key. Keep that high on the priority list.

A third area would be around contractual commitments. Despite the global pandemic pausing movement in many economies around the world, there are still contractual commitments in play. So you want to make sure that your minimum viable operating model allows you to make good on the commitments you have with your customers.

Also, in the triage stage, think about your organization’s security posture. That’s clearly going to feature heavily in how you make priority decisions. You have a distributed workforce now. You have a completely different remote connectivity model, and that’s going to open you up to all sorts of vulnerabilities that you need to consider.

Anything around critical customer support is key. So anything that enables you to continue to support your customers in a way that you would like to be supported yourself. Reach out to that customer, make sure they are well, safe, and are coping. What can you do to step in to help them through that process? I think that’s the key.

I will just conclude on prioritization with preserving the core transactional services that enable organizations to breathe; what we might describe as the oxygen apps, such as the enterprise resource planning (ERP) systems of the world, the finance systems, and the things that allow cash to flow in and out of the transactions and orders that need to be fulfilled. Those kinds of core systems need protection in these moments. So that would be my list of priorities.
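
To show how these priorities could be applied in practice, here is a small, hypothetical sketch that scores workloads against the criteria Craig lists above. The weights, tags, and workload names are illustrative assumptions for the sake of the example, not HPE guidance.

```python
# Hypothetical triage scoring against the priorities discussed above.
# Weights and workload names are illustrative assumptions, not HPE guidance.
PRIORITY_WEIGHTS = {
    "people_safety": 5,           # safety of employees, customers, citizens
    "minimum_viable_ops": 4,      # keeps the organization able to operate
    "revenue_generating": 4,      # keeps cash flowing into the organization
    "contractual_commitment": 3,  # obligations still in force during the pause
    "security_posture": 3,        # exposure from a newly distributed workforce
    "customer_support": 3,        # critical support channels stay open
    "oxygen_app": 4,              # core transactional systems (ERP, finance)
}

def triage_score(tags):
    """Sum the weights of every priority a workload touches."""
    return sum(PRIORITY_WEIGHTS.get(tag, 0) for tag in tags)

workloads = {
    "remote-access VPN": ["people_safety", "minimum_viable_ops", "security_posture"],
    "e-commerce storefront": ["revenue_generating", "minimum_viable_ops"],
    "ERP and finance systems": ["oxygen_app", "contractual_commitment"],
    "customer support call center": ["customer_support", "contractual_commitment"],
}

if __name__ == "__main__":
    ranked = sorted(workloads.items(), key=lambda kv: triage_score(kv[1]), reverse=True)
    for name, tags in ranked:
        print(f"{triage_score(tags):2d}  {name}")
```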

Gardner: Rohit, critical customer support services is near the top of requirements for many. I know from my personal experience that it’s frustrating when I go to a supplier and find that they are no longer taking phone calls or that there is a long waiting line. How are you helping your organizations factor in customer support? And I imagine, you have to do it yourself, for your own organization, at HPE Pointnext Services.

Communicate clearly, remotely 

Dixit: Yes, absolutely. The first one is the one that you alluded to, the communications channels. How do we make sure that people can communicate and collaborate even though they are remote? How can we help with those kinds of things? Remote desktops, for example, have become extremely critical, as well as things like shared secure storage, which is critical so that people can exchange information and share data. And then wrapping around all of that remote connectivity, collaboration, and storage is a security angle to make sure you do all of it in a protected, secure manner.

Those are the kinds of things we are very much focused on — not just for ourselves, but also for our customers. We’re finding different levels of maturity in terms of their current adoption of any of these services across different industries and segments. So we are intersecting the customers at different points of their maturity and then moving them up that maturity stack for fully remote communication, collaboration, and then becoming much more secure in that.

Gardner: Rohit, how should teams organize themselves around these nine steps? We’ve talked about process and technology, but there is also the people side of the equation. What are you advising around team organization in order to follow these nine steps and preparing for the new normal?

Dixit: This is for me one of the most fascinating aspects of the model. In our triage step we borrowed a lot of our thinking from the way hospitals do triage. And we learned in that triage model that quick, immediate reaction means you need small teams that can work with autonomous decision-making. And you don’t want to overlay on that initially a restrictive governance model. The quick reaction through the “fog of war,” or whatever you want to call it, is extremely critical in that stage.

You set up small, autonomous teams that function and make decisions independently, and you keep a light-touch governance model that feeds in broader direction, shares information, and captures learnings so that you remain very flexible.

Now, the fascinating aspect of this is that — as you bridge to the new normal, as you start to think about the mid- to long-term — the mode of operation becomes very different. You need somebody to collect all the information. You need somebody who is able to coordinate across the business, across IT, and the different functions, partners, and the customers. Then you can create a point of view about what the future holds.

What do we think the future mode of operations is going to look like from a business perspective? Translate that into IT needs, create a transformation plan, and start to execute on that plan. That is not the skirmish-by-skirmish approach you take in immediate crisis management; you are taking a much more deliberate transformation approach.

And what we find is, these modes of operations are very different. In fact, we advocate that you put two different teams on them. You can’t have the crisis management also involved in long-term planning and vice versa. It’s too much to handle and it’s very conflicting in the way it’s approached. So we suggest that you have two different approaches, two different governance models, two different teams that at some point in the future will come together.

Gardner: Craig, while you’re putting these small teams to work, are you able to see leadership qualities in people that maybe you didn’t have an opportunity to see before? Is this an opportunity for individuals to step up — and for managers to start looking for the type of leadership qualities — in this cauldron of crisis that will be of great value later?

Tech leaders emerge 

Partridge: I think that’s a fantastic observation because never do you see leadership qualities on display more than when people are in such pressurized situations. These are moments when decisions need to be made rapidly, and where people have to have the confidence to acknowledge that sometimes those decisions may be wrong. The kind of leadership qualities you are going to see exhibited through this nine-step model are exactly the kind that will give you a short list of potential next leaders of the organization.

With any of these moments of crisis management and long-term planning, those that step forward and take on that burden and start to lead the organization through the thinking, process, strategy, and the vision are going to be that pool of the next talent. So nurture them through this process because they could lead you well into the future.

Gardner: And I suppose this is also a time when we can look for technologies that are innovative and work in a pinch to be elevated in priority. I think we’re accelerating adoption patterns in this crisis mode.

So what about the information technology, Craig? Are we starting to see more use of cloud-first, software as a service (SaaS) models, multi-cloud, and hybrid IT? How are the various models of IT now available manifesting themselves in terms of being applicable now in the crisis?

Partridge: This global pandemic is maybe the first one that’s going to showcase why technology has become such an integral part of how customers build, deliver, and create their value propositions. First, the most immediate area where technology has come into play is that massively distributed workforce now working from home. How was that possible even 10 years ago? How is it possible for an organization of 50,000 employees to suddenly have 70 percent to 80 percent of that workforce now communicating and collaborating online using virtual sessions?

The technology that underpins all of that remote experience has absolutely come to the fore. Then there are some technologies, which you may not see, but which are absolutely critical to how, as a society, we will respond to this.

Think about all of the data modeling and the number crunching that’s going on in these high-performance compute (HPC) platforms out there actively searching for the cure and the remedy to the novel coronavirus crisis. And the scientific field and HPC have become absolutely key to that.

You mentioned as-a-service models, and absolutely the capability to instantly consume and to match that with what you pay has two benefits. Not only does it keep costs aligned, which is something people are really going to focus on, but it might ease some of that economic pressure because, as we know, in those kinds of models technology is not consumed as an upfront capital asset. The cost is deferred over its useful life, easing the economic stresses that customers are going to have.
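
To make that cash-flow point concrete, here is a simple, hypothetical comparison of an upfront capital purchase versus a consumption-based model. All of the figures, rates, and utilization numbers are illustrative assumptions chosen only to show how pay-per-use spreads spend over time and tracks actual demand.

```python
# Hypothetical comparison of upfront capital spend versus pay-per-use consumption.
# Every figure below is an illustrative assumption, not real pricing.
CAPEX_UPFRONT = 600_000            # buy the capacity on day one
RATE_PER_UNIT = 0.02               # consumption cost per unit of capacity used
MONTHLY_CAPACITY_UNITS = 1_000_000

# Utilization swings with demand (for example, a spike during a crisis, then a dip).
utilization = [0.45, 0.50, 0.85, 0.90, 0.60, 0.40]

capex_cash_out = [CAPEX_UPFRONT] + [0] * (len(utilization) - 1)
consumption_cash_out = [
    round(u * MONTHLY_CAPACITY_UNITS * RATE_PER_UNIT) for u in utilization
]

print("month     capex  consumption")
for month, (capex, usage) in enumerate(zip(capex_cash_out, consumption_cash_out), start=1):
    print(f"{month:>5}  {capex:>8,}  {usage:>11,}")

# The consumption column rises and falls with utilization, so spend stays aligned
# with demand instead of being locked in as a single upfront capital outlay.
```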

If we hadn’t been through the cloud era, through pivoting technology to it being consumed as a service, then I don’t think we’d be in a position where we could respond as well in this particular time.

Dixit: What’s also very important is the mode of consumption of the technology. More and more customers are going to look for flexible models, especially in how they think about their hybrid IT model. What is the right mix of that hybrid IT? I think in these as-a-service models, or consumption-based models — where you pay for what you consume, no more, no less, and it allows you to flex up or down — that flexibility is going to drive a lot of the technology choices.

Gardner: As we transition to the new normal and we recognize we have to be thinking strategically as well as tactically at all times, do you have any reassurance that you can provide, Rohit, to people as they endeavor to get to that new normal?

Crisis management and strategic planning going hand-in-hand sounds like a great challenge. Are you seeing success? Are you seeing early signs that people are getting this and that it will be something that will put them in a stronger position having gone through this crisis?

In difficulty lies opportunity 

Dixit: Dana, for me, one of the best things I have seen in my interactions with customers, even internally at HPE, is the level of care and support that the companies are giving to their employees. I think it’s amazing. As a society and as a community, I’m really heartened by how positive the reactions have been and how well the companies are supporting them. That’s just one point, and I think technology does play a part in that, in enabling that.

The point I go back to is to never waste a crisis. The discontinuities we talked about create great opportunities if we approach this with the right mindset. And I see a lot of companies actually doing that, approaching it from an opportunity perspective instead of just playing defense. I think that’s really good to see.

If somebody is looking to design for the future, there are now more technologies, consumption methods, and different means of approaching a problem than ever existed before. You have private cloud, public cloud, and you have consumption models on-premises, off-premises, and via colocation options. You have IoT, AI, and containerization. There is so much innovation out there and so many ways of doing things differently. Take that opportunity-based approach; it is going to be very disruptive and could be the making of a lot of great innovation.

Gardner: Craig, what light at the end of the tunnel do you see based on the work you’re doing with clients right now? What’s encouraging you that this is going to be a path to new levels of innovation and creativity?

Partridge: Over the last few years, I’ve been spending most of my time working with customers through their digital transformation agendas. A big focus has been the pivot toward better experiences: better customer engagement, better citizen engagement.  And a lot of that is enabled through digital engagement models underpinned by technology and driven by software.

What we are seeing now is the proof-positive that those investments made over the last few years were exactly the right investments to make. Those companies now have the capability to reach out very quickly, very rapidly. They can enable new features, new functions, new services, and new capabilities through those software-delivered experiences.

For me, what’s heartwarming is to see how we have embraced technology in our daily lives. It’s those customers who went in early with a customer-experience-focused, technology-enabled, edge-to-cloud approach who are now able to dance very quickly around this axis that we described in the HPE Pointnext Services nine-step model. So it’s a great proof-point.

Gardner: A lot of the detail to the nine-step program, and some great visual graphics, are available at Enterprise.nxt. An article is there about the nine-step process and dealing with the current crisis as well as setting yourself up for a new future.

Where else can people go to learn more about how to approach this as a partnership? Where else can people learn about how to deal with the current situation and try to come out in the best shape they can?

Dixit: There are a lot of great resources that customers and partners can reach out to at HPE, specifically, of course, hpe.com, which has a dedicated page on COVID-19 responses with resources available to our customers and partners.

A lot of the capabilities that underpin some of the technology conversations we have been having are enabled through our Pointnext Services organization. So again, visit hpe.com/services to be able to get access to some of the resources.

And just pick up the phone and speak to HPE counterparts because they are there to help you. Nothing is more important to HPE at the moment than being there for our partners and customers.

Gardner: We are going to be doing more podcast discussions on dealing with the nine-step program as well as crisis management and long-term digital transformation here at BriefingsDirect, so look for more content there.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

International Data Center Day highlights how sustainability and diversity will shape the evolving modern IT landscape

The next BriefingsDirect panel discussion explores how March 25’s International Data Center Day provides an opportunity to both look at where things have been in the evolution of the modern data center and more importantly — where they are going.

Those trends involve a lot more than just technology. Data center challenges and advancements alike will hinge around the next generation of talent operating those data centers and how diversity and equal opportunity best support that.

Our gathered experts also forecast that sustainability improvements — rather than just optimizing the speeds and feeds — will help determine the true long-term efficiency of IT facilities and systems.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To observe International Data Center Day with a look at ways to make the data centers of the future the best-operated and the greenest ever, we are joined by Jaime Leverton, Senior Vice President and Chief Commercial Officer at eStruxture Data Centers in Montreal; Angie McMillin, Vice President and General Manager of IT Systems at Vertiv; and Erin Dowd, Vice President of Global Human Resources at Vertiv. The International Data Center Day observance panel is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Erin, why — based on where we have come from — is there now a need to think differently about the next generation of data center talent?

Dowd: What’s important to us is that we have a diverse population of employees. We traditionally think about diversity from the perspective of ethnicity and gender. But when we consider diversity, we also think about diversity of thought, diversity of behavior, and diverse backgrounds.

That all makes us a much stronger company; a much stronger industry. It’s representative of our customer base, frankly, and it’s representative of the globe. We are ensuring that we have people working within our company from around the world and contributing all of those diverse thoughts and perspectives that make us a much stronger company and much stronger industry.

Gardner: We have often seen that creative and innovative thought comes when you have a group of individuals that come from a variety of backgrounds, and so it’s often a big benefit. Why has it been slow-going? What’s been holding back the diversity of the support talent for data centers?

Diversity for future data centers 

Dowd: It’s a competitive environment, so it’s a struggle to find diverse candidates. It goes beyond our tech type of roles and into sales and marketing. We look at our talent early in their careers, and we are working on growing talent, in terms of nurturing them, helping them to develop, and helping them to grow into leadership roles. It takes a proactive approach, and it’s more than just letting the talent pool evolve naturally. It is about taking proactive and definitive actions around attracting people and growing people.

Gardner: I don’t think I am going out on a limb by observing that over the past 30 years, it’s been a fairly male-dominated category of worker. Tell us why women in science, technology, engineering, and math, or the so-called STEM occupations, are going to be a big part of making that diversity a strength.

Dowd: That is a huge pipeline for us as we benefit from all the initiatives to increase STEM education for women and men. The results help expand the pool, frankly, and they allow candidates across the board who are interested at an early age to best prepare for this type of industry. We know historically that girls have been less likely to pursue STEM types of interest at early ages.

So ensuring that we have people across the continuum, that we have women in these roles, to model and mentor — that’s really important in expanding the pool. There are a lot of things that we can be doing around STEM, and we are looking at all those opportunities.

Gardner: Statistically there are more women in universities than men, so that should translate into a larger share in the IT business. We will be talking about that more.

But we would also like to focus on International Data Center Day issues around sustainability. Jaime, why is sustainability the gift that keeps giving when it comes to improving our modern data centers?

Leverton: International Data Center Day is about the next generation of data center professionals. And we know that for the next generation, they are committed to preserving the environment, which is good news for all of us as citizens. And as one of the world’s biggest consumers of energy, I believe the data center industry has a fundamental duty to elevate its environmental stewardship with energy efficient infrastructure and renewable power resources. I think the conversation really does go well together with diversity.

Gardner: Alright, let’s dive in a little bit more to the issues around talent and finding the best future pool. First, Erin please tell us about your role at Vertiv.

Dowd: I am the Global Business HR Partner at Vertiv. My focus is to help us design, build, and deliver the right people strategy for our teams, which have a global presence. We focus on having super-engaged and productive people in the right places with the right skills, and on developing career opportunities across the continuum, from early-career to senior-level contributors.

Gardner: We have heard a lot about the skills shortage in IT in general terms, but in your experience at Vertiv, what are your observations about the skills shortage? What challenges do you face?

Dowd: We have challenges in terms of a shortage of diverse candidates across the board. This is present in all positions. Increasing the diversity of candidates that we can attract and grow will help us address the shortage first-hand.

Gardner: And in addition to doing this on a purely pragmatic basis, there are other larger benefits. Tell us why diversity is so important to Vertiv over the long term?

We have challenges in terms of a shortage of diverse candidates across the board. This is present in all positions. The diversity of candidates that we can attract will help us.

Dowd: Diversity is the right thing to do, hands down. It has business benefits, and it has cultural benefits. As I mentioned earlier, it reflects not only our global presence but also our customer base. And research shows that companies with more diverse workforces outperform and out-innovate those that don't.

For example, companies in the top quartile for workforce diversity are 33 percent more likely to financially outperform their less diverse counterparts, according to a 2018 study from McKinsey. We have been embracing diversity, which aligns with our core values. It's the right competitive strategy. It's going to allow us to compete in the marketplace and relate to our customers best.

Gardner: Is Vertiv an outlier in this? Or is this the way the whole industry is going?

Dive into competitive talent pool 

Dowd: This is the way the whole industry is going. I come from a line of IT companies prior to my tenure with Vertiv. Even the biggest, most established companies are still wrestling with the competitiveness of attracting candidates who have diversity of thought, diverse backgrounds, diverse behaviors, and diversity of ethnicity and gender as well.

The trend is toward engineering and services, and everywhere we are experiencing turnover because the environment is so competitive. We are competing with brother and sister companies for the same types of talent.

As I mentioned previously, if we attract people who are diverse in terms of thought, ethnicity, and gender, we can expand our candidate pool and enhance our competitiveness. When our talent acquisition team looks at talent, they are expanding and enhancing diversity in our university relations and in our recruiting efforts. They are targeting diverse candidates as we hire interns, as well as folks who are later in their careers.

Gardner: We have been looking at this through the demand side, but on the supply-side, what are the incentives? Why should people from a variety of backgrounds consider and pursue these IT careers? What are the benefits to them?

Dowd: The career opportunities are amazing. This is a field that’s growing and that is not going to go away. We depend on IT infrastructure and data centers across our world, and we’re doing that more and more over time. There’s opportunity in the workplace and there are a lot of things that we are specifically doing at Vertiv to keep people engaged and excited. We think a lot about attracting talent.

But there is another piece, which is about retaining talent. One of the things we are doing at Vertiv is launching programs aligned with diversity.

So recently, and Angie has been involved in this, we launched a women's resource group called Women at Vertiv Excel (WAVE). That group nurtures women and encourages more of them to pursue leadership positions within Vertiv. It focuses on diversity in leadership positions, but it also provides important training that women can apply in their current positions.

Together we are building one Vertiv culture, which is a really important framework for our company. We are creating solutions and resources that make us more competitive and reflect the global market. We find that diversity breeds new and different ideas, more innovation, and a deeper understanding of our customers, partners, employees, and our stakeholders all around the globe. We are a global company, so this is very important to us. It’s going to make us more successful as we grow into the future.

Another thing that we are doing is creating end-to-end management of Vertiv programs. This is new, and we continue to improve it. It integrates behavioral skills and training designed to look at the work that we do through the eyes of others. We utilize experiences and talent effectively to grow stronger and stronger teams. Part of this is about recruiting and hiring, with an emphasis on finding potential employees who bring diverse experiences, thoughts, and perspectives. And diversity of thought comes from field experiences and from different backgrounds, all of which adds to the value each employee brings to our organization.

Together we are building one Vertiv culture, which is a really important framework for our company. We are creating solutions and resources that make us more competitive and reflect the global market. We find that diversity breeds new and different ideas, more innovation, and a deeper understanding of our customers, partners, and employees.

We also are launching the Vertiv Operating System. This is being created, launched, and built with an emphasis on better understanding our differences, bridging gaps where there are differences, and bringing out the best in everybody. It's designed to encourage thought leadership and to help all of us work through change management together.

Finally, another program that we’ve been implementing across the globe is called Intrinsic. And Intrinsic supplies a foundational assessment designed to improve our understanding of ourselves and also of our colleagues. It’s a formal experiential program that’s going to help us all learn more about ourselves, what makes our individual values and styles unique, but then also it allows us to think about the people that we are working with. We can learn more about our colleagues, potentially our customers, and it allows us to grow in terms of our team dynamics and the techniques that we are using to manage conflict, stress, and change.

Collectively, as we look at the full continuum of how we behave at Vertiv in the future we are building for ourselves, all of these efforts work together toward changing the way we think as individuals, how we behave in groups, and ultimately evolving our organizational culture to be more diverse, more inclusive, and more innovative.

Gardner: Jaime at eStruxture, when we look at sustainability, it aligns quite well with these issues around talent and diversity because all the polling shows that the younger generation is much more focused on energy efficiency and consciousness around their impact on the natural world — so sustainability. Tell us why the need for sustainability is key and aligns so well with talent and retaining the best people to work for your organization.

Sustainability inspires next generation 

Leverton: What we know to be true about the next generation is when they look to choose a career path, or take on an assignment, they want to make sure that it aligns with their values. They want to do work that they believe in. So, our industry offers them that opportunity to be value-aligned and to make an impact where it counts.

As you can see all around us, people are working and learning remotely now more than ever, and data centers are what make all of that possible. They are crucial to our society and to our everyday lives. The data center industry is only going to continue to grow, and with our dependence on energy we have to have a focus on sustainability.

It represents a substantial opportunity to make a difference. It’s a fast-paced environment where we truly believe there is a career path for the next generation that will matter to them.

Gardner: Jaime, tell us about eStruxture Data Centers and your role there.

Leverton: eStruxture is a relatively new data center company. It was established just over three years ago, and we have grown rapidly from the acquisition of our first data center in Montreal. We now have three data centers in Montreal, two in Vancouver, and one in Calgary. We are a Canadian pure-play — Canadian-owned, -operated, and -financed. We really believe in the Canadian landscape and the Canadian story, and we are going to continue to focus on growth in this nation.

Gardner: When it comes to efficiency and sustainability, we often look at power usage effectiveness (PUE). Where are we in terms of getting to complete sustainability? Is that so far-fetched?

Leverton: I don't think it is. Huge strides have been made in reducing PUE, especially by us in our most recent construction, which has a PUE below 1.2. Organizations in our industry continue to innovate every day, trying to get as close to that 1.0 as humanly possible.

We are very lucky that we partner with Vertiv. Vertiv solutions are key in driving efficiency in our data centers, and we know that progress can be made continually by improving the efficiency of the IT load itself, which is a savings incremental to PUE as well. PUE is specifically the ratio of total facility power to the power used by the IT equipment it supports. But we look at our data center and our business holistically to drive sustainability even beyond what PUE covers.
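For readers less familiar with the metric Leverton references, here is a minimal sketch of how PUE is calculated; the kilowatt figures are invented for the example and are not eStruxture's actual numbers.

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT power.

    A PUE of 1.0 would mean every watt entering the facility reaches the
    IT equipment; anything above 1.0 is cooling, power conversion, and
    other overhead.
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw


# Illustrative numbers only: a 1,200 kW facility draw supporting a 1,000 kW
# IT load yields a PUE of 1.2, meaning 20 percent overhead on top of IT power.
print(pue(1200, 1000))  # 1.2
```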

Gardner: It sounds like sustainability is essentially your middle name. Tell me more about that. How did you focus the construction and placement of your data centers to be focused so much on sustainability?

Leverton: All of our facilities have been designed with a focus on sustainability. When we have purchased facilities, we have immediately gone in to upgrade them and make them more efficient. We take advantage of free cooling wherever possible. As I mentioned, three of our data centers are in Montreal, so we get about eight months of free cooling per year, and the majority of our data centers run on 99.5 percent hydro power, which is the cleanest possible energy that we can use.

We virtualize our environments as much as possible. We carefully select eco-responsible technologies and suppliers, and we are committed to continually improving our power usage effectiveness without ever sacrificing the performance, scalability, or uptime of our data centers, of course.

Gardner: And more specifically, when you look at that holistic approach to sustainability, how does working with a supplier like Vertiv augment and support that? How does that become a tag-team when it comes to the power source and the underlying infrastructure?

Leverton: Vertiv has just been such a great partner. They were there with us from the very beginning. We work together as a team, trying to make sure that we’re designing the best possible environment for our customers and for our community. One of our favorite solutions from Vertiv is around their thermal management, which is a water-free solution.

Our commitment is to operate as sustainably as possible. Being able to partner with Vertiv and build their solutions into our design right from the beginning has had a huge impact.  

That is absolutely ideal in keeping with our commitment to operate as sustainably as possible. In addition to being water-free, it’s 75 percent more efficient because it has advanced controls and economization. Being able to partner with Vertiv and build their solutions into our design right from the beginning has made a huge, huge impact.

Gardner: And, like I mentioned, sustainability is the gift that keeps giving. This is not just a nice to have. This is a bottom-line benefit. Tell us about the costs and how that reinforces sustainability initiatives.

Leverton: Yes, while there is occasionally a higher cost in the short term, we firmly believe that the long-term total cost of ownership is lower — and the benefits far outweigh any initial incremental costs.

Obviously, it's about our values. It's critical that we do the right thing for the environment, for the community, for our staff, and for our customers. But, as I say, over the long term, we believe the total cost is lower. So, far and above the cost question, sustainability is the right thing to do.

Gardner: Jaime, when it comes to that sustainability formula, what really works? It’s not just benefiting the organization that’s supplying, it’s also benefiting the consumer. Tell us how sustainability is also a big plus when it comes to those people receiving the fruits of what the data centers produce.

Leverton: Sustainability is huge for our customers, and it’s increasingly a key component of their decision-making criteria. In fact, many hyperscale cloud providers and corporations — large corporate enterprises — have declared very ambitious environmental responsibility objectives and are shifting to green energy.

Microsoft, as an example, is targeting over 70 percent renewable energy for its data centers by 2023. Amazon reached a 50 percent renewable energy target in 2018 and is now aiming for 100 percent.

Women and STEM step IT up 

Gardner: Let’s look at the sustainability issue again through the lens of talent and the people who are going to be supporting these great initiatives. Angie, when it comes to bringing more women into the STEM professions, how does the IT industry present itself as an attractive career path, say for someone just graduating from high school?

McMillin: When I look at children today, they're growing up with IT as part of their lives. That's a huge advantage for them. They see firsthand the value and impact it has on everything they do. I look at my nieces and nephews, and even grandkids, and they can flip through phones and tablets and use Xboxes, you name it, all faster than adults.

They're the next generation of IT. And now, with the COVID-19 situation, children are learning how to do schooling collaboratively but also remotely. I believe we can engage children early with the devices they already know and use. The tools that they're now learning for schoolwork are a bridge to learning about what makes it all work: the data center industry. We can all be a part of that as they complete their schooling and go into higher education. They will remember this experience that we're all living through right now forever, so why not build upon it?

Gardner: Jaime, does that align with your personal experience in terms of technology being part of the very fabric of life?

Leverton: Oh, absolutely. I’m really proud of what I’ve seen happening in Canada. I have two young daughters and they have been able to take part in STEM camps, coding clubs, and technology is part of their regular curriculum in elementary school. The best thing we can do for our children is to teach them about technology, teach them how to be responsible with tech, and to keep them engaged with it so that over time they can be comfortable looking toward STEM careers later on.

Gardner: Angie, to get people focused on being part of the next generation of data centers, are there certain degrees, paths, or educational strategies that they should be pursuing?

Education paths lead to STEM careers 

McMillin: Yes. It’s a really interesting time in education. There are countless degrees specifically geared toward the IT industry. So those are good bets, but specifically in networking and computers, there’s coding, there is cyber security, which is becoming even more important, and the list goes on.

We currently see a very large skills gap specifically around the science and technology functions, so these offer huge opportunities for a young person's future. But I also want to highlight that the industry still needs the traditional engineering skills, such as power management, thermal management, and services, and the trade skills are equally important. There's a current gap in that workforce, and the training for it may be different, but it still has a really vital role to play.

And then finally, we'd be remiss if we didn't recognize the support functions: finance, HR, and marketing. People often think that you must be in the science or engineering part of the business to work in a given market, and that really isn't true. We need skill sets across a broad range to make us successful.

Leverton: I am an IT leader and have been in this business for 20 years, and my undergraduate degrees are in political science and psychology. So I really think that it's all about how you think, and the other skills that you can bring to bear. More and more, we see emotional intelligence (EQ) and communication skills as the difference-maker to somebody's career success or career trajectory. We just need to make sure that people aren't afraid of coming out of more generalized degrees.

Gardner: We have heard a lot about the T-shaped skills model, where we need the vertical technology background but also want those with cultural leadership, liberal arts, and collaboration skills.

Angie, you are involved with mentoring young women specifically. What's your take on the potential? What do you see now as diversity wells up and the available pool of talent shifts?

McMillin: I am, and I absolutely love it. One of the things I do is support a women's engineering summer camp, probably much like the ones Jaime's daughters attend, and other events at my alma mater, the University of Dayton. I also mentor interns and other early-career individuals, be they male or female. There is just so much potential in young people. They are absolutely eager to learn and play their part. They want to have relevance in the growing data center market, and in the IT and sustainability topics that we talked about earlier. It's really fun and enjoyable to help them along that journey.

There are two key themes I repeat. One is that success doesn’t happen overnight. So enjoy those small steps on the journey, learn as much as you can, and don’t give up. The second is keep an open mind about your career, try new things, and doors you never imagined will open up.

I get asked for advice, and there are two key themes that I repeat. One is that success doesn't happen overnight. So enjoy the small steps on the journey to much greater things, and the important part is to keep taking those steps, learn as much as you can, and don't give up. The second is to keep an open mind in your career and be willing to try new things and opportunities; sometimes doors are going to open that you didn't even imagine, and that's absolutely okay.

As a prime example, I started my education in the aerospace industry. When that industry was hurting, I switched to mechanical engineering, which is a broader field of study, and I spent a large part of my career in automotive. I then moved to consumer products, and now I am in data centers and IT. I am essentially a space geek and car junkie engineer with experience in engineering, strategy, sales, portfolio transformation, and operations. And now I am a general manager for an IT management portfolio.

If I hadn’t been open to new opportunities and doors along my career path, I wouldn’t be here today. So it’s an example for the younger generation. There are broad possibilities. You don’t have to have it all figured out now, but keep taking those steps and keep trying and keep learning — and the world awaits you, essentially.

Gardner: Angie, what sort of challenges have you faced over the years in your career? And how is that changing?

Women rise, challenges continue 

McMillin: It’s a great question. My experience at Vertiv has been wonderful with a support structure of diversity for women and leadership. We talked about the new WAVE program that Erin mentioned earlier. You can feel that across your organization. It starts at the top. I also had the benefit, as many of us I think had on this podcast, of having good sponsors along the way in our career journeys to help us get to where we are.

But that doesn't mean we haven't faced challenges throughout our careers. And there are challenges that still arise for many in the industry. In all the industries I have worked in, which have all been male-dominated, there is this necessity as a woman to prove yourself, like 10 times over, for your right to be at the table with a voice, regardless of the credentials you have coming in. It gets exhausting, and it's not required of male counterparts. It's a "show me first" and then "I might believe" attitude, and frankly it's BS. That's something that a lot of women in this industry, as well as in other industries, continue to have to overcome.

The other common challenge is that you need to over-prove yourself so that people know the position was earned. I always want people to know I got my position because I earned it and have something to offer, not because of a diversity quota. And that's a lot better today than it's been in years past. But I can tell you, I can still hear those words, the accusations made about female colleagues I have known throughout my career. When one woman gets elevated into a position and fails, it makes it a lot harder for other women to get a chance at an opportunity or promotion.

Now, again, it's getting better. But to give you a real-world example, think about the industries where there are women CEOs. If they don't succeed, boards get very nervous about putting another woman in a CEO position. If a male CEO doesn't succeed, he is often seen as just not the right fit. So we still have a long way to go.

Gardner: Jaime at eStruxture, what’s been your experience as a woman in the technology field?

Leverton: Well, eStruxture has been an incredible experience for me. We have diversity throughout the organization. Actually we are almost at 50 percent of our population identifying as non-white heterosexual male, which is quite different from what I’ve experienced over the rest of my career in technology. From a female perspective, our senior leadership team is 35 percent women; our director population is almost 50 percent women.

So it’s been a real breath of fresh air for me. In fact, I would say it really speaks to the values of our founder when he started this company three years ago and did it with the intention of having a diverse organization. Not only does it better mirror our customers but it absolutely reflects the values of our organization, the culture we wanted to create, and ultimately to drive better returns.

Gardner: Angie, why is the data center industry a particularly attractive career choice right now? What will the future look like in say five years? Why should people be thinking about this as a no-brainer when it comes to their futures?

Wanted: Skilled data center pros 

McMillin: We are in a fascinating time for data center trends. The future is very, very strong. We know now — and the kids of today certainly know — that data isn’t going away. It’s part of our everyday lives and it’s only going to expand — it’s going to get faster with more compute power and capability. Let’s face it, nobody has patience for slow anymore. There are trends in artificial intelligence (AI), 5G, and others that haven’t even been thought of yet that are going to offer enormous potential for careers for those looking to get into the IT space.

We are in a fascinating time for data center trends. The future is very strong. Data isn’t going away. And nobody has patience for slow anymore. There are trends in AI, 5G, and others that haven’t even been thought of yet.

And when we think about that new trend — with the increase of working or schooling remotely as many of us are doing currently — that may permanently alter how people work and learn going forward. There will be a need for different tools, capabilities, and data management. And how this all remains secure and efficient is also very important.

Likewise, more data centers will need to operate independently and be managed remotely. They will need to be more efficient. Sustainability is going to remain very prevalent, especially for edge-of-the-network data centers, which enable connectivity and productivity wherever they are.

Gardner: Now that we are observing International Data Center Day 2020, where do you see the state of the data center in just the next few years? Angie, what's going to be changing that makes this even more important to almost every aspect of our lives and businesses?

McMillin: We know the data center ecosystem is changing dramatically. The hybrid model is enabling a diversification of data workloads, where customers get the best of all the options available: cloud, data center, and edge. As our global survey of data center professionals shows, those deployments are experiencing phenomenal growth. And we also see a lot more remote management to operate and maintain these disparate locations securely.

We need more people with all the skill sets capable of supporting these advancements on the horizon like 5G, the industrial internet of things (IIoT), and AI.

Gardner: Erin, where do you see the trends of technology and human resources going that will together shape the future of the data center?

Dowd: I will piggyback on the technology trends that Angie just referenced and say the future requires more skilled professionals. It will be more competitive in the industry to hire those professionals, and so it’s really a great situation for candidates.

It makes it important for companies like Vertiv to continue creating environments that favor diversity. Diversity should manifest in many different ways, in an environment where we welcome and nurture a broad variety of people. That's the direction of the future and, naturally, the secret to success.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Vertiv.

Business readiness provides an agile key to surviving and thriving in these uncertain times

Just as the nature of risk has been a whirling dervish of late, the counter-forces of business continuity measures have had to turn on a dime as well. What used to mean better batteries for servers and mirrored, distributed datacenters has recently evolved into anywhere, any-circumstance solutions that keep workers working — no matter what.

Out-of-the-blue workplace disruptions — whether natural disasters, political unrest, or the current coronavirus pandemic — have shown how true business continuity means enabling all employees to continue to work in a safe and secure manner.

The next BriefingsDirect business agility panel discussion explores how companies and communities alike are adjusting to a variety of workplace threats using new ways of enabling enterprise-class access and distribution of vital data resources and applications.

And in doing so, these public and private sector innovators are setting themselves up to be more agile, intelligent, and responsive to their workers, customers, and citizens once the disaster inevitably passes.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to share stories on making IT systems and people evolve together to overcome workplace disruptions is Chris McMasters, Chief Information Officer (CIO) at the City of Corona, California; Jordan Catling, Associate Director of Client Technology at The University of Sydney in Australia, and Tim Minahan, Executive Vice President of Strategy and Chief Marketing Officer at Citrix. The panel is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tim, how has business readiness changed over the past few years? It seems to be a moving target.

Minahan: The very nature of business readiness is not about preparing for what's happening today — or responding to a specific incident. It's about having a plan to ensure that your work environment is ready for any situation.

That certainly means having in place the right policies and contingency plans, but it also — with today’s knowledge workforce — goes to enabling a very flexible and dynamic workspace infrastructure that allows you to scale up, scale down, and move your entire workforce on a moment’s notice.

You need to ensure that your employees can continue to work safely and remotely while giving your company the confidence that they’re doing that all in a very secure way, so the company’s information and infrastructure remains secure.

Gardner: Chris McMasters, as a CIO, you surely remember the days when IT systems were brittle, not easily adjusted, and hard to change. Has the nature of work and these business continuity challenges forced IT to be more agile?

McMasters: Yes, absolutely. There’s no better example than in government. Government IT is known for being on-premises and very resistant to change. In the current environment everything has been flipped on its head. We’re having to be flexible, more dynamic in how we deploy services, and in how users get those services.

Gardner: Jordan, higher education hasn’t necessarily been the place where we’d expect business continuity challenges to be overcome. But you’ve been dealing with an aggressive outbreak of the coronavirus in China.

Catling: It’s been a very interesting six months for us, particularly in higher education, with the Australian fires, floods, and now the coronavirus. But generally, as an institution that operates over 22 locations, with teaching hospitals and campuses — our largest campus has its own zip code — this is part of our day, enabling people to work from wherever they are.

The really interesting thing about this situation is we’re having to enable teaching from places that we wouldn’t ordinarily. We’re having to make better use of the tools that we have available to come up with innovative solutions to keep delivering a distinctive education that The University of Sydney is known for.

Gardner: And when you’re trying to anticipate challenges, something like COVID-19, the disease that emanates from the coronavirus, did you ever think that you’d have to virtually overnight provide students stuck in one location with the opportunity to continue to learn from a distance?

Catling: We need to always be preparing for a number of scenarios. We need to be able to rapidly deploy solutions to enable people to work from wherever they are. The flexibility and dynamic toolsets are really important for us to be able to scale up safely and securely.

Gardner: Tim, the idea of business continuity including workers not only working at home but perhaps in far-flung countries where they’ve been stuck because of a quarantine, for example — these haven’t always been what we consider IT business continuity. Why is worker continuity more important than ever?

Minahan: Globally we’re recognizing the importance of the overall employee experience and how it’s becoming a key differentiator for companies and organizations. We have a global shortage of medium- to high-skilled talent. We’re short about 85 million workers.

Companies are battling for the high ground on providing preferred ways to work. One way they do that is ensuring that they can provide flexible work environments that rely on effective workplace technologies that enable employees to do their very best work.

So companies are battling for the high ground on providing preferred ways to work. One way they do that is ensuring that they can provide flexible work environments, ones that rely on effective workplace technologies that enable employees to do their very best work wherever that might be. That might be in an office building. It might be in a remote location, or in certain situations they may need to turn on a dime and move from their office to the home force to keep operations going. Companies are planning to be flexible not just for business readiness but also for competitive advantage.

Gardner: Making this happen with enterprise-caliber, mission-critical reliability isn’t just a matter of renting some new end-devices and throwing up a few hotspots. Why is this about an end-to-end solution, and not just point solutions?

Be proactive not reactive

Minahan: One of the most important things to recognize is companies often first react to a crisis environment. Currently, you’re hearing a lot of, “Hey, we just,” like the school system in Miami, for example, “purchased 250,000 laptops to distribute to students and teachers to maintain their education.”

However, while that may enable and empower students and employees, it may not be accompanied by proper security measures, and it can put the company's, workers', and customers' personal information at risk.

You need to plan from the get-go for having a very flexible, remote workplace infrastructure — one that embeds security. That way — no matter where the work needs to get done, no matter on what device, or even on whatever unfamiliar network — you can be assured that the appropriate security policies are in place to protect the private information of your employees. The critical information of your business, and certainly any kind of customer or constituent information, is also at stake.

Gardner: Let’s hear what you get when you do this right. Jordan at The University of Sydney, you had more than 14,000 students unexpectedly quarantined in China, yet they still needed to somehow do their coursework. Tell us how this came about, and what you’ve done to accommodate them.

Quality IT during quarantine

Catling: Exactly right. As this situation began to develop in late January, we quite quickly began to scenario plan around the possible eventualities. A significant part of our role, as the technologists within the university, is making sure that we’re providing a toolset that can adapt to the needs of the community.

So we looked at various platforms that we were already using — and some that we hadn't — to work out what to do. Within the academic community, we needed the best set of tools for our staff to use in different and innovative ways. We quickly had to develop solutions and lean on our partners to help us with developing those.

Gardner: Did you know where your students were going to be housed? Was this a case where you knew that they were going to be in a certain type of facility with certain types of resources or are they scattered around? How did you deal with that last mile issue, so to speak?

Catling: The last mile issue is a real tricky one. We knew that people were going to be in various locations throughout mainland China, and elsewhere. We needed to quickly build a solution capable of supporting our students — no matter where they were, no matter what device that they were using, and no matter what their local Internet connection was like.

We have had variability in the quality of our connections even within Australia. But now we needed a solution that would cater to as many people as possible and be considerate of quite a number of different scenarios that our students and staff would be facing.

Gardner: How were you able to provide that quality of service across so many applications given that level of variability?

Catling: The biggest focus for us, of course, is the safety and security of our staff and students. It’s paramount. We very quickly tried to work out where our people would be connecting from and tried to make sure that the resources we were providing, the connection to the resources, would be as close to them as possible to minimize the impact of that last mile.

We worked with Citrix to put together a set of application delivery controllers into Hong Kong to make sure that the access to the solutions was nice and fast. We then worked to optimize the connection from Hong Kong to Sydney to maximize the user experience. 

We worked with Citrix to put together a set of application delivery controllers into Hong Kong to make sure that the access to the solution was nice and fast. Then we worked to optimize the connection back from Hong Kong to Sydney to maximize the user experience for our staff and students.
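To illustrate the general pattern Catling describes (placing entry points close to users and steering each user to the nearest one), here is a minimal, hypothetical sketch of latency-based gateway selection. The host names and the client-side probing are assumptions for illustration only; the university's actual deployment relies on Citrix's own application delivery and load-balancing capabilities rather than code like this.

```python
import socket
import time

# Hypothetical points of presence; a real deployment would use the vendor's
# global load balancing rather than this client-side probe.
GATEWAYS = {
    "hong-kong": "gateway-hk.example.edu",
    "sydney": "gateway-syd.example.edu",
}


def measure_rtt(host: str, port: int = 443, timeout: float = 2.0) -> float:
    """Return the TCP connect time to a gateway, or infinity if unreachable."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return float("inf")


def pick_gateway() -> str:
    """Choose the point of presence with the lowest measured round-trip time."""
    return min(GATEWAYS, key=lambda name: measure_rtt(GATEWAYS[name]))


if __name__ == "__main__":
    print(f"Connecting via closest gateway: {pick_gateway()}")
```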

Gardner: So this has very much been a cloud-enabled solution. You couldn’t have really done this eight or 10 years ago.

Catling: Certainly not this quickly. Literally from putting a call into Citrix, we worked from design to a production environment within seven days. For me, that’s unheard of, really. Regardless of whether it’s 10 years ago or 10 weeks ago, it was quite a monumental effort. It’s highlighted the importance of having partners that both seek to understand the business problems you’re facing and coming up with innovative solutions rapidly and are able to deploy those at scale. And cloud is obviously a really important part of that.

We are still delivering on this solution. We have the capabilities now that we didn't have a couple of months ago. We're able to provide applications to students no matter where they are. They're able to continue their studies.

Obviously, the solution needs to remain flexible to the evolving needs. The situation is changing frequently and we are discovering new needs and new requirements. As our academics start to use the technology in different ways, we’re evolving the solution based on their feedback to try and maximize the experience for both our staff and students.

Gardner: Tim, when you hear Jordan describe this solution, does it strike you as a harbinger of more business continuity things to come? How has the coronavirus issue — and not just China but in Europe and in North America — reinforced your idea of what a workplace-enhanced business continuity solution should be?

Business continuity in crisis

Minahan: We continue to field a rising number of inquiries from customers and other companies. They are trying to assess the best ways to ensure continuity of their business operations and switch to a remote workforce in a very short period of time.

Situations like this remind us that we need to be planning today for any kind of business-ready situation. Using these technologies ensures that you can quickly adapt your work models, moving entire employee groups from an office to a remote environment, if needed, whether it’s because of virus, flood, or any other unplanned event.

What’s exciting for me is being able to use such agile work models and digital workspace technology to arm companies with new sources for growth and competitive advantage.

One good example is we recently partnered with the Center for Economics and Business Research to examine the impact remote work models and technologies have on business and economic growth. We found that 69 percent of people who are currently unemployed or economically inactive would be willing to start working if given the opportunity to work flexibly by having the right technology.

They further estimate that activating these, if you will, untapped pools of talent by enabling flexible work-from-home models — especially for parents, workers in rural areas, retirees, part-time workers, and gig workers, folks who are normally outside of the traditional work pool, and reactivating them through digital workspace technologies — could drive upward of an initial $2 trillion in economic gains across the US economy. So, the investment in readiness that folks are making is now being applied to drive ongoing business results even in non-crisis times.

Gardner: The coronavirus has certainly been leading the headlines recently, but it wasn’t that long ago that we had other striking headlines.

Chris McMasters, in California last fall the wildfires proved a recurring problem. Tell us about Corona and how adjusting to a very dangerous environment, while requiring your key employees to continue to work, allowed you to meet a major business continuity challenge.

Fighting fire with cloud

McMasters: Corona is like a lot of local governments within the United States. I came from the private sector and have been in city IT for about four years now. When I first got there, everything was on-premises. Our back-up was literally three miles away on the other side of the freeway.

If there was a disaster and something totaled the city, literally all of our technology assets would be down, which concerned me. I used to work for a national company where we had offices all over and backed up across the country, so this was a much different environment. Yet we were dealing with public safety, with police, fire, and 911 services, and those can never go down. Citizens depend on all of that.

That was a wake-up call for me. At that time, we didn’t really have any virtual desktop infrastructure (VDI) going on. We did have server virtualization, but nothing in the cloud. In the government sector, we have a lot of regulation that revolves around the cloud and its security, especially when we are dealing with police and fire types of information. We have to be very careful. There are requirements both from the State of California and the federal government that we have to comply with.

At first, we used a government cloud, which was a little bit slower in terms of innovation because of all the regulations. But that was a first step to understanding what was ahead for us. We started this process about two years ago. At the time, we felt like we needed to push more of our assets to the cloud to give us more continuity.

At the end of the day, we realized we also needed to get the desktops up there, too: Using VDI and the cloud. And at the time, no one was doing that. We went and talked to Citrix on how that would extend to support our environment for public safety. Citrix has been there since day-one.

At the end of the day, we realized we also needed to get the desktops up there, too: Using VDI and the cloud. And at the time, no one was doing that. But we went and talked to Citrix. We flew out to their headquarters, sat with their people, and discussed our initiative, what we are trying to accomplish, and how that would extend out to support our environment for public safety. And that means all of the people out at the edge who actually touch citizens and provide emergency support services.

That was the beginning of the journey and Citrix has been there since day-one. They develop the products around that particular idea for us right up to today.

In the last two years, we’ve had quite a few fires in the State of California. Corona butts right up against the forest line and so we have had a lot of damage done by fires, both in our city and in the surrounding county. And there have been the impacts that occur after fires, too, which include mudslides. We get the whole gamut of that stuff.

But now we find that those first responders have the data to take action. We get the data into their hands quickly, make sure it's secure on the way there, and we make it continuous so that it never fails. Those are the last people that we want to have fail.

We’ve been able to utilize this type of a platform where our data currently resides in two different datacenters in two different states. It’s on encrypted arrays at rest.

We are operating on a software-defined network so we can look at security from a completely different perspective. The old way was, “Let’s build a moat around it and a big wall, and hopefully no one gets in.” Now, instead we look at it quite differently. Our assets are protected outside of our facilities.

Those personnel riding in fire engines, in police cars, right up at the edge — they have to be secure right up to that edge. We have to maintain and understand the identity of that person. We need to know what applications they are accessing, or should not be accessing, and be secure all along that path.
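The identity- and application-aware controls McMasters describes amount to a contextual access policy. Below is a minimal, hypothetical sketch of that kind of check; the roles, applications, and device criteria are invented for illustration and do not reflect Corona's or Citrix's actual policy engine.

```python
from dataclasses import dataclass

# Hypothetical role-to-application policy, for illustration only.
ALLOWED_APPS = {
    "firefighter": {"dispatch", "incident-maps"},
    "police": {"dispatch", "records"},
    "permit-clerk": {"permits"},
}


@dataclass
class AccessRequest:
    user: str
    role: str
    app: str
    device_managed: bool  # e.g., an enrolled city device vs. unknown hardware


def is_allowed(req: AccessRequest) -> bool:
    """Grant access only when the role permits the app and the device is trusted."""
    return req.device_managed and req.app in ALLOWED_APPS.get(req.role, set())


# A first responder on a managed device reaches dispatch; an unknown device does not.
print(is_allowed(AccessRequest("j.doe", "firefighter", "dispatch", True)))   # True
print(is_allowed(AccessRequest("j.doe", "firefighter", "dispatch", False)))  # False
```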

This has all changed our outlook on how we deal with things and what a modern-day work environment looks like. The money we use comes from the taxes people pay, and we provide services for our citizens. The interesting thing about that is we're now driving toward the idea of government on-demand.

Before, when you would come home, right after a hard day’s work, city hall would be closed. Government was open 8 to 5, when people are normally working. So, when you want to conduct business at city hall, you have to take some time off of work. You try to find one day of the week, or a time when you might sneak in there to get your permits for something and proceed with your business.

But our new idea is different. Most of our services can be provided online for people. If we can do that, that’s fantastic, right? So, you can come home and say, “Hey, you know what? I was thinking about building an addition to my house.” So you go online, file your permits, and submit all of your documents electronically to us.

The difference that VDI provides for our employees is that I can now tap into a workforce of let’s say, a single mother who has a special needs child who can’t work normal hours, but she can work at night. So that person can take that permit, look at that permit at 6 or 7 pm, process the permit, and then at 5 am the next day, that process is done. You wake up in the morning, your permit has been processed by the city and completed. That type of flexibility is integral for us to make government more effective for people.

It's not necessarily just the public safety support that we are concerned about. It's also about generally providing flexible services for people and making sure government continues to operate.

Gardner:  Tim, it’s interesting that by addressing business continuity issues and disasters we are able to move very rapidly to a government on-demand or higher education on-demand. So, what are some of the larger goals when it comes to workforce agility?

Flexibility extends the business

Minahan: The examples that Chris and Jordan just gave are what excites me about flexible work models, empowered by digital workplace technologies, and the ability to embrace entirely new business models.

I used the example from the Center for Economics and Business Research and how to tap into untapped talent pools. Another example of a company using similar technology is eBay. eBay, like many of its competitors, would build a big call center, hire a bunch of people, and train them up, and then one of the competitors would build a call center down the street and steal them away. They had rapid turnover. They finally said, "Enough is enough, we have to think of a different model."

eBay used the same approach of providing a secure digital workspace to reach into new talent pools outside of big cities. They could now hire gig workers and re-engage them in the workforce by using a workplace platform to arm them at the edge.

Well, they used the same approach of providing a secure digital workspace to reach into new talent pools outside of big cities. They could now hire gig workers, stay-at-home parents, etc., and re-engage them in the workforce by using the workplace platform to arm them at the edge and provide a service that was formally only provided in a big work hub, a big call center.

They went from having zero home force workers to 600 by the end of last year, and they are on a path to 4,000 by the end of this year. eBay solved a big problem, which is providing support for customers. How do I have a call center in a very competitive market? Well, I turn the tables and create new pools of talent, using technology in an entirely different way.

Gardner: Jordan, now that you’ve had help from organizations like Citrix to deal with your tough issue of students stuck in China, or other areas where there’s a quarantine, are you going to take that innovation and use it in other ways? Is this a gift that keeps giving?

Catling: It's a really interesting question. What it's demonstrated to me is that, as technologists, we need to be working with all of our people across the organization to understand their needs and to provide the right tools, but not necessarily to be prescriptive in how they are used. This current coronavirus situation has demonstrated to us that a combination of just a few tools — for example, the Citrix platform, Zoom, Echo, and Canvas — means a very different thing to one person than to another person.

There’s such large variability in the way that education is delivered across the university, across so many disciplines, that it becomes about providing a flexible set of tools that all of our people can use in different and exciting ways. That extends not only to the current situation but to more normal times.

If we can provide the right toolset that’s flexible and meets the users where they are, and also make sure that the solutions provide a natural experience, that’s when you are really geared up well for success. The technology kind of fades into the background and becomes a true enabler of the bright minds across the institution.

Gardner: Chris, now that you’re able to do more with virtual desktops and delivering data regardless of the circumstances to your critical workers as well as to your citizens, what’s the next step?

Can you add a layer of intelligence rather than just being about better feeds and speeds? What comes next, and how would Citrix help you with that?

Intelligence improves government

McMasters: We’re neck deep in data analytics and in trying to understand how we can make impacts correctly by analyzing data. So adding artificial intelligence (AI) on top of those layers, understanding utilization of our resources, is the amazing part of where we’re going.

There's so much unused hardware and processing power tied up in our normal desktop machines. Being able to disrupt that and flip it up on its end is a fundamental change in how government operates. This is literally turning it on-end. I mean, AI can impact all the way down to how we do helpdesk, how it minimizes our response times and turnaround times, to increased productivity, and in how we service 160,000 people in my city. All of that changes.

Already I’m saving hundreds of thousands of dollars by using the cloud and VDI models and at the same time increasing all my service levels across the board. And now we can add this layer of business continuity to it, and that’s before we start benefitting from predictive AI and using data to determine asset utilization.

Moving from a CAPEX model to this OPEX model is something very new for government. It's something the private sector has definitely capitalized on, and I think the public sector is ripe for doing the same. So for us, it's changing everything, including our budget, how we deliver services, how we do helpdesk support, and the ways that we're assessing our assets and using citizens' tax dollars correctly.

Gardner: Tim, organizations, both public and private sector, get involved with these intelligent workspaces in a variety of ways. Sometimes it might be a critical issue such as business continuity or a pandemic.

But ultimately, as Chris just mentioned, this is about digital business transformation. How are you able to take whatever on-ramp organizations are getting into an intelligent workspace and then give them more reasons to see ongoing productivity? How is this something that has a snowball effect on productivity?

AI, ML works with you

Minahan: Chris hit the nail on the head. Certainly, the initial on-ramp to a digital workspace provides employees with unified and secure access to everything they need to be productive, in one experience. That means all of their apps and all of their content, regardless of where it's stored, what device they're accessing it from, and where they're accessing it from.

However, it gets really exciting when you go beyond that foundation of a unified experience in a secure environment toward infusing things like machine learning (ML), digital assistants, and bots to change the way that people work. These can extract the key insights and tasks employees need to act on and offer them up in real time in a very personalized way. Employees can then quickly take care of those tasks, remove that noise from their day, and even be guided toward the right next steps to be more productive, more engaged, and to do much more innovative and creative work.

So, absolutely, AI and ML and the rise of bots are the next phase of all of this, where it’s not just a place you go to launch apps and work securely, but a place where you go to get your very best work done.

Gardner: Jordan, you were very impressively able to get more than 14,000 students to continue their education regardless of what Mother Nature threw at them. And you were able to do it in seven days. For those organizations that don’t want to be caught under such circumstances, that want to become proactive and prepared, what lessons have you have learned in your most recent journey that you can share with them? How can they be better positioned to combat any unfortunate circumstances they might face?

Prioritize when and how you work

Catling: It’s almost becoming cliché to say, but work is something that you do — it’s not a place anymore. So when we’re looking at and assessing tools for how we support the university, we’re focusing on taking a cloud-first approach where it doesn’t matter where a student or staff member is. They have access to all the resources they need on-demand. That’s one of the real guiding principles we should be using in our decision-making process.

Scalability is also a very important thing to us. The nature of the way that education is delivered today with an on-campus model is that demand is very peaky. We need to be mindful of how scalable and rapidly scalable a solution can be. That’s important to consider, particularly in the higher education context. How quickly can you scale up and down your environments to meet varying demands?
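As a rough illustration of planning for that peaky demand, here is a minimal sketch of turning a concurrency forecast into a capacity target for a pool of virtual desktop hosts. The sessions-per-host ratio and headroom figures are assumptions for the example, not the university's actual sizing.

```python
import math


def hosts_needed(peak_sessions: int, sessions_per_host: int = 50,
                 headroom: float = 0.2) -> int:
    """Size a virtual desktop pool for a forecast peak plus safety headroom."""
    if peak_sessions <= 0:
        return 0
    target = peak_sessions * (1 + headroom)
    return math.ceil(target / sessions_per_host)


# Exam week vs. mid-semester: the pool scales up for the peak and back down after.
print(hosts_needed(14000))  # 336 hosts at the forecast peak
print(hosts_needed(3000))   # 72 hosts off-peak
```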

We can use the Citrix platform in many different ways. It’s not only for us to provide applications out to students to complete coursework. It can also be used for providing secure access to data and workspaces. 

Also, it’s important to consider the number of flexible ways that each of the technology products you choose can be used. For example, with the Citrix platform we can use it in many different ways. It’s not only for us to provide applications out to students to complete their coursework. It can also be used for providing secure access to data and to workspaces. There are so many different ways it can be extended, and that’s a real important thing when deciding which platform to use.

The final really important takeaway for us has been the establishment of true partnerships. We’ve had extremely good support from our partners, such as Citrix and Zoom, where they very rapidly sought to understand and work with us to solve the unique business problems that we’re facing. The real, true partnership is not one of just providing products, but of really sitting down shoulder-to-shoulder, trying to understand, but also suggesting ways to use a technology we may not be thinking of — or maybe it’s never been done before.

As Chris mentioned earlier, virtual desktops in the cloud weren't a big thing that many years ago. About a decade ago, we began working with Citrix to provide streamed desktops to physical devices across campus.

At the time, that was a very unusual use of the technology. So I think the partnership is very important and something that organizations should develop and be ready to use. It goes in both directions at all times.

Gardner: Chris, now that you have, unfortunately, dealt with these last few harsh wildfire seasons in Southern California, what lessons have you learned? How do you make yourselves more like local government on demand?

Public-private partnerships

McMasters: That’s a big question. For us, we looked at breaking some of the paradigms that exist in government. Governments don’t have the same impetus to change as the private sector, and they are less willing to take risks. However, there are ways to work with vendors and partners to mitigate a lot of that risk, ways to pilot and test cutting-edge technologies that don’t put you at risk as you push these things out.

There are very few vendors that I would consider such partners. I probably can count them on one hand in total, and the interesting thing is that when we were selecting a vendor for this particular project, we were looking for a true partner. In our case, it was Citrix and Microsoft who came to the table. And when I look back at what’s happened in our relationship with those two in particular, I couldn’t ask for anything better.

We have literally had technicians, engineers, everyone on-site, on the phone every step of the way as we have been developing this. They took a lot of the risk out for us, because we are dealing with public dollars and we need to make sure these projects work. To have that level of comfort and stability in the background and knowing that I can rely on these people was huge. It’s what allowed us to develop to where we are today, which is far advanced in the government world.

That’s where things have to change. This kind of public-private partnership is what the public sector needs to start maturing. It’s bidirectional; it goes both ways. There is a lot of information that we offer to them, and there are a lot of things they do for us. And so it goes back and forth as we develop this through the product cycle. It’s advantageous for both of us to be in it.

That’s where, especially in the public sector, we sometimes lose focus. The public sector doesn’t always understand what the private sector wants and where it is moving. It’s about being aligned on both sides of that equation — and it benefits both parties.

Technology is going to change, and it just keeps driving faster. There’s always another thing around the corner, but building these types of partnerships with vendors and understanding what they want helps them understand what you want, and then be able to deliver.

Gardner: Tim, how should businesses better work with vendor organizations to prepare themselves and their workers for a flexible future?

Minahan: First off, I would echo Chris’s comments. We all want government on-demand, and you need a solution like that. But how should they work together? There are two great examples here in The University of Sydney and the City of Corona.

It really starts by listening. What are the problems we are trying to solve in planning for the future? How do we create a digitally agile organization and infrastructure that allows us to pursue new business opportunities, and just as easily ensure business continuity? So start by listening, map out a joint roadmap together and innovate toward that.

We are collectively as an industry constantly looking to innovate, constantly looking to leverage new technologies to drive business outcomes — whether those are for our citizens, students, or clientele. Start by listening, doing joint and co-development work, and constantly sharing that innovation with the rest of the market. It raises all boats.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Citrix.


As containers go mainstream, IT culture should pivot to end-to-end DevSecOps

Container-based deployment models have rapidly gained popularity from cloud models to corporate data centers. IT operators are now looking to extend the benefits of containers to more use cases, including the computing edge.

Yet in order to push containers further into the mainstream, security concerns need to be addressed across this new end-to-end container deployment spectrum — and that means addressing security during development under the rubric of DevSecOps best practices.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Stay with us now as the next BriefingsDirect Voice of Innovation discussion examines the escalating benefits that come from secure and robust container use with Simon Leech, of the Worldwide Security and Risk Management Practice at Hewlett Packard Enterprise (HPE) Pointnext Services. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Simon, are we at an inflection point where we’re going to see containers take off in the mainstream? Why is this the next level of virtualization?

Leech: We are certainly seeing a lot of interest from our customers when we speak to them about the best practices they want to follow in terms of rapid application development.

One of the things that always held people back a little bit with virtualization was that you are always reliant on an operating system (OS) to manage the applications that sit on top of it and the application code that you deploy to that environment.

But what we have seen with containers is that, as everything starts to follow a cloud-native approach, we start to deal with our applications as lots of individual microservices that all communicate with one another to provide the application experience to the user. It makes a lot more sense from a development perspective to be able to work in these small, microservice-based or module-based development approaches.

So, while we are not seeing a massive influx of container-based projects going into mainstream production at the moment, there are certainly a lot of customers dipping their toes in the water to identify the best opportunities to adopt and address container use within their own application development environments.

Gardner: And because we saw developers grok the benefits of containers early and often, we have also seen them operate within a closed environment — not necessarily thinking about deployment. Is now the time to get developers thinking differently about containers — as not just perhaps a proof of concept (POC) or test environment, but as ready for the production mainstream?

Leech: Yes. One of the challenges I have seen with what you just described is that a lot of container projects start as a developer’s project on his laptop. So the developer goes out there, identifies a container-based technology as something interesting to play around with, and, as time goes by, realizes he can actually make a lot of progress by developing his applications using a container-based architecture.

What that means from an organizational perspective is that this is often done under the radar of management. One of the things we are discussing with our customers as we go and talk about addressing DevSecOps and DevOps initiatives is to make sure that you do get that buy-in from the executive team and so you can start to enable some top-down integration.

Don’t just see containers as a developer’s laptop project but look at it broadly and understand how you can integrate that into the overall IT processes that your organization is operating with. And that does require a good level of buy-in from the top.

Gardner: I imagine this requires a lifecycle approach to containers thinking — not just about the development, but in how they are going to be used over time and in different places.

Now, 451 Research recently predicted that the market for containers will hit $2.7 billion this year. Why do you think that the IT operators — the people who will be inheriting these applications and microservices — will also take advantage of containers? What does it bring to their needs and requirements beyond what the developers get out of it?

Quick-change code artists

Leech: One of the biggest advantages from an operational perspective is the ability to make fast changes to the code you are using. Whereas in a traditional application development environment a developer would need to make a change to some code, and it would involve requesting downtime to update the complete application, with a container-based architecture you only have to update parts of the architecture.

So, it allows you to make many more changes than you previously would have been able to deliver to the organization — and it allows you to address those changes very rapidly.

Gardner: How does this allow for a more common environment to extend across hybrid IT — from on-premises to cloud to hybrid cloud and then ultimately to the edge?

Leech: Well, applications developed in containers and within a cloud-native approach are typically very portable. So you aren’t restricted to a particular OS version, for example. The container itself runs on top of any OS of the same genre. Obviously, you can’t run a Windows container on top of a Linux OS, or vice versa.

But within the general Linux space there is pretty broad compatibility. So it makes it very easy for containers to be developed in one environment and then released into different environments.

Gardner: And that portability extends to the hyperscale cloud environments, the public cloud, so is there a multi-cloud extensibility benefit?

Leech: Yes, definitely. You see a lot of developers developing their applications in an on-premises environment with the intention that they are going to be provisioned into a cloud. If they are done properly, it shouldn’t matter if that’s a Google Cloud Platform instance, a Microsoft Azure instance, or Amazon Web Services (AWS).

Gardner: We have quite an opportunity in front of us with containers across the spectrum of continuous development and deployment and for multiple deployment scenarios. What challenges do we need to think about to embrace this as a lifecycle approach?

What are the challenges to providing security specifically, making sure that the containers are not going to add risk – and, in fact, improve the deployment productivity of organizations?

Make security a business priority 

Leech: When I address the security challenges with customers, I always focus on two areas. The first is the business challenge of adopting containers, and the security concerns and constraints that come along with that. And the second one is much more around the technology or technical challenges.

If you begin by looking at the business challenges of how to adopt containers securely, this requires a cultural shift, as I already mentioned. If we are going to adopt containers, we need to make sure we get the appropriate executive support and move past the concept that the developer is doing everything on his laptop. We also need to train our coders on the need for secure coding.

A lot of developers have as their main goal to produce high-quality software fast, and they are not trained as security specialists. It makes a lot of sense to put an education program into place that allows you to train those internal coders so that they understand the need to think a little bit more about security — especially in a container environment, where you have fast release cycles and sometimes the security checks get missed or don’t get properly instigated. It’s good to start with a very secure baseline.

And once you have addressed the cultural shift, the next thing is to think about the role of the security team in your container development team, your DevOps development teams. And I always like to try and discuss with my customers the value of getting a security guy into the product development team from day one.

Often, we see in a traditional IT space that the application gets built, the infrastructure gets designed, and then the day before it’s all going to go into production someone calls security. Security comes along and says, “Hey, have you done risk assessments on this?” And that ends up delaying the project.

If you introduce the security person into the small, agile team as you build it to deliver your container development strategy, then they can think together with the developers. They can start doing risk assessments and threat modeling right from the very beginning of the project. It allows us to reduce delays that you might have with security testing.

At the same time, it also allows us to shift our testing left. In a traditional waterfall model, testing happens right before the product goes live; in a DevOps or DevSecOps model, it’s much better to embed the security best practices and proper tooling right into the continuous integration/continuous delivery (CI/CD) pipeline.

The last point around the business view is that, going back to the comment I made earlier, developers often are not aware of secure coding and how to make things secure. Providing a secure-by-default approach — or even a security self-service approach — gives developers a secure registry, for example, that provides known-good instances of container images, or provides infrastructure and compliance code, so that they can follow a much more template-based approach to security. That also pays a lot of dividends in the quality of the software as it goes out the door.

Gardner: Are we talking about the same security precautions that traditional IT people might be accustomed to but now extending to containers? Or is there something different about how containers need to be secured?

Updates, the container way 

Leech: A lot of the principles are the same. So, there’s obviously still a need for network security tools. There’s still a need to do vulnerability assessments. There is still a need for encryption capabilities. But the difference with the way you would go about using technical controls to protect a container environment is all around this concept of the shared kernel.

An interesting white paper has been released by the National Institute of Standards and Technology (NIST) in the US, SP 800-190, which is their Application Container Security Guide. And this paper identifies five container security challenges around risks with the images, registry, orchestrator, the containers themselves, and the host OS.

So, when we’re looking at defining a security architecture for our customers, we always look at the risks within those five areas and try to define a security model that protects those best of all.

One of the important things to understand when we’re talking about securing containers is that we have a different approach to the way we do updates. In a traditional environment, we take a gold image for a virtual machine (VM). We deploy it to the hypervisor. Then we realize that if there is a missing patch, or a required update, that we roll that update out using whatever patch management tools we use.

In a container environment, we take a completely different approach. We never update running containers. The source of your known-good image is your registry. The registry is where we update containers and keep updated versions of them, and we use the container orchestration platform to make sure that the next time somebody launches a new container, it’s launched from the new container image.

It’s important to remember we don’t update things in the running environment. We always use the container lifecycle and involve the orchestration platform to make those updates. And that’s really a change in the mindset for a lot of security professionals, because they think, “Okay, I need to do a vulnerability assessment or risk assessment. Let me get out my Qualys and my Rapid7,” or whatever, and, “I’m going to scan the environment. I’m going to find out what’s missing, and then I’m going to deploy patches to plug in the risk.”
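To make that registry-driven update flow concrete, here is a minimal sketch, assuming a Kubernetes cluster and the official Python client; the deployment name, namespace, container name, and image tag are hypothetical, not anything prescribed in this discussion. The orchestrator rolls new pods from the updated image rather than anyone patching running containers.

```python
# Sketch: roll out a patched image by changing the desired state in the
# Deployment spec; the orchestrator replaces running containers with new
# ones launched from the updated registry image. All names are hypothetical.
from kubernetes import client, config

def roll_out_updated_image(deployment: str, namespace: str, new_image: str) -> None:
    config.load_kube_config()  # use load_incluster_config() when running inside the cluster
    apps = client.AppsV1Api()

    patch = {
        "spec": {
            "template": {
                "spec": {
                    "containers": [
                        {"name": "web", "image": new_image}  # container name must match the spec
                    ]
                }
            }
        }
    }
    # Kubernetes performs a rolling update: old pods are terminated and new
    # pods are started from the new image; nothing is patched in place.
    apps.patch_namespaced_deployment(name=deployment, namespace=namespace, body=patch)

if __name__ == "__main__":
    roll_out_updated_image("webshop", "digital-retail", "registry.example.com/webshop:1.4.2")
```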

So we need to make sure that our vulnerability assessment process gets built right into the CI/CD pipeline and into the container orchestration tools we use to address that needed change in behavior.
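As one hedged illustration of what building that assessment into the pipeline can look like, the snippet below shells out to an image scanner before an image is pushed; the use of Trivy, its flags, and the image name are assumptions for the sketch, not tooling named in the interview.

```python
# Hypothetical CI gate: scan a freshly built image and fail the pipeline if
# high-severity vulnerabilities are found, so an unscanned or vulnerable
# image never reaches the registry. Assumes the Trivy CLI is on the build agent.
import subprocess
import sys

IMAGE = "registry.example.com/webshop:candidate"  # image under test (placeholder)

def scan_or_fail(image: str) -> None:
    # --exit-code 1 makes the scanner return non-zero when findings match
    # the severity filter, which is what lets the pipeline gate on it.
    result = subprocess.run(
        ["trivy", "image", "--severity", "HIGH,CRITICAL", "--exit-code", "1", image],
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    if result.returncode != 0:
        print("Blocking release: high-severity vulnerabilities found.", file=sys.stderr)
        sys.exit(result.returncode)

if __name__ == "__main__":
    scan_or_fail(IMAGE)
```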

Gardner: It certainly sounds like the orchestration tools are playing a larger role in container security management. Do those in charge of the container orchestration need to be thinking more about security and risk?

Simplify app separation 

Leech: Yes and no. I think the orchestration platform definitely plays a role and the individuals that use it will need to be controlled in terms of making sure there is good privileged account management and integration into the enterprise authentication services. But there are a lot of capabilities built into the orchestration platforms today that make the job easier.

One of the challenges we’ve seen for a long time in software development, for example, is that developers take shortcuts by hard-coding clear-text passwords into the code, because it’s easier. And, yeah, that’s understandable. You don’t need to worry about managing or remembering passwords.

But what you see a lot of orchestration platforms offering is the capability to deliver secrets management. So rather than storing the password within the code, you can now request the secret from the secrets management platform that the orchestration platform offers to you.
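The difference between that anti-pattern and the platform-provided secret looks roughly like this; the variable name and mount path below are hypothetical and simply stand in for whatever the orchestrator injects at runtime.

```python
# Sketch: instead of hard-coding credentials in the source, read them from
# wherever the platform's secrets management injects them at runtime.
# The environment variable name and file path are placeholders.
import os
from pathlib import Path

# Anti-pattern described above -- a clear-text password in the code:
# DB_PASSWORD = "s3cr3t"   # never do this

def get_db_password() -> str:
    # Option 1: secret injected as an environment variable by the platform.
    password = os.environ.get("DB_PASSWORD")
    if password:
        return password

    # Option 2: secret mounted as a read-only file (common with Kubernetes Secrets).
    secret_file = Path("/var/run/secrets/app/db-password")
    if secret_file.exists():
        return secret_file.read_text().strip()

    raise RuntimeError("No database credential provided by the platform.")

if __name__ == "__main__":
    try:
        print("Credential loaded:", bool(get_db_password()))
    except RuntimeError as err:
        print(err)
```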

These orchestration tools also give you the capability to separate container workloads for differing sensitivity levels within your organization. For example, you would not want to run containers that operate your web applications on the same physical host as containers that operate your financial applications. Why? Because although you have the capability with the container environment using separate namespaces to separate the individual container architectures from one another, it’s still a good security best practice to run those on completely different physical hosts or in a virtualized container environment on top of different VMs. This provides physical separation between the applications. Very often the orchestrators will allow you to provide that functionality within the environment without having to think too much about it.
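In practice that separation often shows up as scheduling constraints in the workload specification. The sketch below expresses a pod spec as the dictionary a Kubernetes API client would accept, pinning a sensitive workload to dedicated, labeled hosts; the labels, names, and image are hypothetical rather than a recommended layout.

```python
# Sketch of a pod specification that pins a sensitive workload onto hosts
# carrying a specific label, so it never shares a physical host with
# web-facing containers. All names, labels, and images are placeholders.
finance_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": "ledger-service",
        "namespace": "finance",          # logical separation via namespace
    },
    "spec": {
        "nodeSelector": {
            "workload-tier": "finance",  # schedule only on hosts labeled for this tier
        },
        "containers": [
            {"name": "ledger", "image": "registry.example.com/ledger:2.1"},
        ],
    },
}

# A web-facing workload would carry a different selector (for example
# "workload-tier": "web"), so the two classes of containers run on
# physically separate hosts even though both live on the same cluster.
```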

Gardner: There is another burgeoning new area where containers are being used. Not just in applications and runtime environments, but also for data and persistent data. HPE has been leading the charge on making containers appropriate for use with data in addition to applications.

How should the all-important security around data caches and different data sources enter into our thinking?

Save a slice for security 

Leech: Because containers are temporary instances, it’s important that you’re not actually storing any data within the container itself. At the same time, as importantly, you’re not storing any of that data on the host OS either.

It’s important to provide persistent storage on an external storage array. Looking at storage arrays from HPE, for example, we have Nimble Storage and Primera. They have the capability, through plug-ins, to interact with the container environment and provide you with persistent storage that remains even as the containers are being provisioned and de-provisioned.

So your container itself, as I said, doesn’t store any of the data, but a well-architected application infrastructure will allow you to store that on a third-party storage array.

Gardner: Simon, I’ve had an opportunity to read some of your blogs and one of your statements jumped out … “The organizational culture still lags behind when it comes to security.” What did you mean by that? And how does that organizational culture need to be examined, particularly with an increased use of containers?

Leech: It’s about getting the security guys involved in the DevSecOps projects early on in the lifecycle of that project. Don’t bring them to the table toward the end of the project. Make them a valuable member of that team. There was a comment made about the idea of a two-pizza team.

A two-pizza team means a meeting should never have more people in it than can be fed by two pizzas, and I think that applies equally to development teams when you’re working on container projects. They don’t need to be big; they don’t need to be massive.

It’s important to make sure there’s enough pizza saved for the security guy! You need to have that security guy in the room from the beginning to understand what the risks are. That’s a lot of where this cultural shift needs to change. And as I said, executive support plays a strong role in making sure that that happens.

Gardner: We’ve talked about people and process. There is also, of course, that third leg of the stool — the technology. Are the people building container platforms like HPE thinking along these lines as well? What does the technology, and the way it’s being designed, bring to the table to help organizations be DevSecOps-oriented?

Select specific, secure solutions 

Leech: There are a couple of ways that technology solutions are going to help. The first are the pre-production commercial solutions. These are the things that tend to get integrated into the orchestration platform itself, like image scanning, secure registry services, and secrets management.

A lot of those are going to be built into any container orchestration platform that you choose to adopt. There are also commercial solutions that support similar functions. It’s always up to an organization to do a thorough assessment of whether their needs can be met by the standard functions in the orchestration platform or if they need to look at some of the third-party vendors in that space, like Aqua Security or Twistlock, which was recently acquired by Palo Alto Networks, I believe.

And then there are the solutions that I would gather up as post-production commercial solutions. These are for things such as runtime protection of the container environment, container forensic capabilities, and network overlay products that allow you to separate your workloads at the network level and provide container-based firewalls and that sort of stuff.

Very few of these capabilities are actually built into the orchestration platforms. They tend to come from third parties such as Sysdig, Guardicore, and NeuVector. And then there’s another bucket of solutions, which are more open-source solutions. These typically focus on a single function in a very cost-effective way and are typically open source community-led. These are solutions such as SonarQube, Platform as a Service (PaaS) offerings, and Falco, which is the open source project that Sysdig runs. You also have Docker Bench and Calico, a network security tool.

But no single solution covers all of an enterprise customer’s requirements. It remains a bit of a task to assess where you have security shortcomings, what products you need, and who’s going to be the best partner to deliver those products with those technology solutions for you.

Gardner: And how are you designing Pointnext Services to fill that need to provide guidance across this still dynamic ecosystem of different solutions? How does the services part of the equation shake out?

Leech: We obviously have the technology solutions that we have built. For example, the HPE Container Platform, which is based around technology that we acquired as part of the BlueData acquisition. But at the end of the day, these are products. Companies need to understand how they can best use those products within their own specific enterprise environments.

I’m part of Pointnext Services, within the advisory and professional services team. A lot of the work that we do is around advising customers on the best approaches they can take. On one hand, we’d like them to purchase our HPE technology solutions, but on the other hand, a container-based engagement needs to be a services-led engagement, especially in the early phases where a lot of customers aren’t necessarily aware of all of the changes they’re going to have to make to their IT model.

At Pointnext, we deliver a number of container-oriented services, both in the general container implementation area as well as more specifically around container security. For example, I have developed and delivered transformation workshops around DevSecOps.

We also have container security planning workshops where we can help customers to understand the security requirements of containers in the context of their specific environments. A lot of this work is based around some discovery we’ve done to build our own container security solution reference architecture.

Gardner: Do you have any examples of organizations that have worked toward a DevSecOps perspective on continuous delivery and cloud native development? How are people putting this to work on the ground?

Edge elevates container benefits 

Leech: A lot of the customers we deal with today are still in the early phases of adopting containers. We see a lot of POC engagement where a particular customer may be wanting to understand how they could take traditional applications and modernize or architect those into cloud-native or container-based applications.

There’s a lot of experimentation going on. A lot of the implementations we see start off small, so the customer may buy a single technology stack for the purpose of testing and playing around with containers in their environment. But they have intentions within 12 to 18 months of being able to take that into a production setting and reaping the benefits of container-based deployments.

Gardner: And over the past few years, we’ve heard an awful lot of the benefits for moving closer to the computing edge, bringing more compute and even data and analytics processing to the edge. This could be in a number of vertical industries, from autonomous vehicles to manufacturing and healthcare.

But one of the concerns, if we move more compute to the edge, is will security risks go up? Is there something about doing container security properly that will make that edge more robust and more secure?

Leech: Yes, a container project done properly can actually be more secure than a traditional VM environment. This begins from the way you manage the code in the environment. And when you’re talking about edge deployments, that rings very true.

In terms of the resources it has to use, something like autonomous driving is going to be a lot lighter with a shared kernel rather than with lots of instances of a VM running, for example.

From a strictly security perspective, if you deal with container lifecycle management effectively, involve the security guys early, have a process around releasing, updating, and retiring container images into your registry, and have a process around introducing security controls and code scanning in your software development lifecycle — making sure that every container that gets released is signed with an appropriate enterprise signing key — then you have something that is very repeatable, compared with a traditional virtualized approach to application and delivery.

That’s one of the big benefits of containers. It’s very much a declarative environment. It’s something that you prescribe … This is how it’s going to look. And it’s going to be repeatable every time you deploy that. Whereas with a VM environment, you have a lot of VM sprawl. And there are a lot of changes across the different platforms as different people have connected and changed things along the way for their own purposes.

There are many benefits with the tighter control in a container environment. That can give you some very good security benefits.

Gardner: What comes next? How do organizations get started? How should they set themselves up to take advantage of containers in the right way, a secure way?

Begin with risk evaluation 

Leech: The first step is to do the appropriate due diligence. Containers are not going to be for every application. There are going to be certain things that you just can’t modernize, and they’re going to remain in your traditional data center for a number of years.

I suggest looking for the projects that are going to give you the quickest wins and use those POCs to demonstrate the value that containers can deliver for your organization. Make sure that you build appropriate risk awareness, and work with the services organizations that can help you. The advantage of a services organization is that they have probably been there with another customer previously, so they can use the best practices and experience they have already gained to help your organization adopt containers.

Just make sure that you approach it using a DevSecOps model. There is a lot of discussion in the market at the moment about it. Should we be calling it DevOps, or should we call it SecDevOps or DevOpsSec? My personal opinion is call it DevSecOps, because security in a DevSecOps model sits right in the middle of development and operations — and that’s really where it belongs.

In terms of assets, there is plenty of information out there; a Google search will find you a lot. But as I mentioned earlier, the NIST White Paper SP 800-190 is a great starting point, not only to understand container security challenges but also to get a good understanding of what containers can deliver for you.

At the same time, at HPE we are also committed to delivering relevant information to our customers. If you look on our website and also our enterprise.nxt blog site, you will see a lot of articles about best practices on container deployments, case studies, and architectures for running container orchestration platforms on our hardware. All of this is available for people to download and to consume.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


AI-first approach to infrastructure design extends analytics to more high-value use cases

The next BriefingsDirect Voice of artificial intelligence (AI) Innovation discussion explores the latest strategies and use cases that simplify the use of analytics to solve more tough problems.

Access to advanced algorithms, more cloud options, high-performance compute (HPC) resources, and an unprecedented data asset collection have all come together to make AI more attainable — and more powerful — than ever.

Major trends in AI and advanced analytics are now coalescing into top competitive differentiators for most businesses. Stay with us as we examine how AI is indispensable for digital transformation through deep-dive interviews on prominent AI use cases and their escalating benefits.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about analytic infrastructure approaches that support real-life solutions, we’re joined by two experts, Andy Longworth, Senior Solution Architect in the AI and Data Practice at Hewlett Packard Enterprise (HPE) Pointnext Services, and Iveta Lohovska, Data Scientist in the Pointnext Global Practice for AI and Data at HPE. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Andy, what are the top drivers for making AI more prominent in business use cases?

Longworth: We have three main things driving AI at the moment for businesses. First of all, we know about the data explosion. These AI algorithms require huge amounts of data. So we’re generating that, especially in the industrial setting with machine data.

Also, the relative price of computing is coming down, giving the capability to process all of that data at accelerating speeds as well. You know, the graphics processing units (GPUs) and tensor processing units (TPUs) are becoming more available, enabling us to get through that vast volume of data.

And thirdly, the algorithms. If we look to organizations like Facebook, Google, and academic institutions, they’re making algorithms available as open source. So organizations don’t have to go and employ somebody to build an algorithm from the ground up. They can begin to use these pre-trained, pre-created models to give them a kick-start in AI and quickly understand whether there’s value in it for them or not.
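As a small illustration of that kick-start, the sketch below takes an openly available, ImageNet-trained ResNet from torchvision and swaps in a new classification head for a two-class problem; the class count and frozen-backbone choice are placeholders, not a recipe from this discussion.

```python
# Minimal transfer-learning sketch: start from a model pre-trained on open
# data rather than building and training an architecture from scratch.
# Requires torch and torchvision; the class count is a placeholder.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 2  # e.g. "good part" vs. "defective part"

def build_model() -> nn.Module:
    # Download a ResNet-18 pre-trained on ImageNet (newer torchvision
    # versions prefer the weights= argument) and freeze its backbone, so
    # only the small replacement head needs training on local data.
    model = models.resnet18(pretrained=True)
    for param in model.parameters():
        param.requires_grad = False

    model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new, trainable head
    return model

if __name__ == "__main__":
    model = build_model()
    dummy_batch = torch.randn(4, 3, 224, 224)  # four placeholder RGB images
    print(model(dummy_batch).shape)  # torch.Size([4, 2])
```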

Gardner: And how do those come together to impact what’s referred to as digital transformation? Why are these actually business benefits?

Longworth: They allow organizations to become what we call data-driven. They can use the masses of data that they’ve previously generated but never tapped into to improve business decisions, impacting the way they drive the business through AI. It’s transforming the way they work.

AI data boost to business 

Across several types of industry, data is now driving the decisions. Industrial organizations, for example, improve the way they manufacture. Without the processing of that data, these things wouldn’t be possible.

Gardner: Iveta, how do the trends Andy has described make AI different now from a data science perspective? What’s different now than, say, two or three years ago?

Lohovska: Most of the previous AI algorithms were 30, 40, and even 50 years old in terms of the linear algebra and their mathematical foundations. The higher levels of computing power enable newer computations and larger amounts of data to train those algorithms.

Those two components are fundamentally changing the picture, along with the improved taxonomies and the way people now think of AI as differentiated between classical statistics and deep learning algorithms. Now it’s not just technical people who can interact with these technologies and analytic models. Semi-technical people can, with simple drag-and-drop interactions based on the new products in the market, adopt and fail fast — or succeed faster — in the AI space. The models are also getting better and better in their performance based on the amount of data they get trained on and their digital footprint.

Gardner: Andy, it sounds like AI has evolved to the point where it is mimicking human-like skills. How is that different and how does such machine learning (ML) and deep learning change the very nature of work?

Let simple tasks go to machines 

Longworth: It allows organizations and people to move some of the jobs that were previously very tedious for people over to machines, and to repurpose people’s skills for more complex jobs. For example, in computer vision and applying that in quality control. If you’re creating the same product again and again and paying somebody to look at that product to say whether there’s a defect on it, it’s probably not the best use of their skills. And, they become fatigued.

If you look at the same thing again and again, you start to miss features of that and miss the things that have gone wrong. A computer doesn’t get that same fatigue. You can train a model to perform that quality-control step and it won’t become tired over time. It can keep going for longer than, for example, an eight-hour shift that a typical person might work. So, you’re seeing these practical applications, which then allows the workforce to concentrate on other things.

Gardner: Iveta, it wasn’t that long ago that big data was captured and analyzed mostly for the sake of compliance and business continuity. But data has become so much more strategic. How are businesses changing the way they view their data?

Lohovska: They are paying more attention to the quality of the data and the variety of the data collection that they are focused on. From a data science perspective, even if I want to say that the performance of models is extremely important, and that my data science skills are a critical component to the AI space and ecosystem, it’s ultimately about the quality of the data and the way it’s pipelined and handled.

This process of data manipulation, getting to the so-called last mile of the data science contribution, is extremely important. I believe it’s the critical step and foundation. Organizations will realize that being more selective and paying more attention to the foundations of how they handle big data — or small data – will get them to the data science part of the process.

You can already see the maturity as many customers, partners, and organizations pay more attention to the fundamental layers of AI. Then they can get better performance at the last mile of the process.

Gardner: Why are the traditional IT approaches not enough? How do cloud models help?

Cloud control and compliance 

Longworth: The cloud brings opportunities for organizations insomuch as they can try before they buy. So if you go back to the idea of processing all of that data, before an organization spends real money on purchasing GPUs, they can try them in the cloud to understand whether they work and deliver value. Then they can look at the delivery model. Does it make sense with my use case to make a capital investment, or do I go for a pay-per-use model using the cloud?

You also have the data management piece, which is understanding where your data is. From that sense, cloud doesn’t necessarily make life any less complicated. You still need to know where the data resides, control that data, and put in the necessary protections in line with the value of the data type. That becomes particularly important with legislation like the General Data Protection Regulation (GDPR) and the use of personally identifiable information (PII).

If you don’t have your data management under control and understand where all those copies of that data are, then you can’t be compliant with GDPR, which says you may need to delete all of that data.

So, you need to be aware of what you’re putting in the cloud versus what you have on-premises and where the data resides across your entire ecosystem.

Gardner: Another element of the past IT approaches has to do with particulars vs. standards. We talk about the difference between managing a cow and managing a herd.

How do we attain a better IT infrastructure model to attain digital business transformation and fully take advantage of AI? How do we balance between a standardized approach, but also something that’s appropriate for specific use cases? And why is the architecture of today very much involved with that sort of a balance, Andy?

Longworth: The first thing to understand is the specific use case and how quickly you need insights. We can process, for example, data in near real-time or we can use batch processing like we did in days of old. That use case defines the kind of processing.

If, for example, you think about an autonomous vehicle, you can’t batch-process the sensor data coming from that car as it’s driving on the road. You need to be able to do that in near real-time — and that comes at a cost. You not only need to manage the flow of data; you need the compute power to process all of that data in near real-time.

So, understand the criticality of the data and how quickly you need to process it. Then we can build solutions to process the data within that framework and within the right time that it needs to be processed. Otherwise, you’re putting additional cost into a use case that doesn’t necessarily need to be there.

When we build those use cases we typically use cloud-like technologies — be that containers or scalar technologies. That allows us portability of the use case, even if we’re not necessarily going to deploy it in the cloud. It allows us to move the use case as close to the data as possible.

For example, if we’re talking about a computer vision use case on a production line, we don’t want to be sending images to the cloud and have the high latency and processing of the data. We need a very quick answer to control the production process. So you would want to move the inference engine as close to the production line as possible. And, if we use things like HPE Edgeline computing and containers, we can place those systems right there on the production line to get the answers as quickly as we need.
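A hedged sketch of what running the inference engine next to the line can look like in code: score frames from a local camera and emit only a compact result, so raw images never have to travel to a distant cloud. The camera index, exported model file, output shape, and threshold are all assumptions.

```python
# Edge-inference sketch: classify camera frames locally so only small
# pass/fail events leave the production line, not the raw images.
# Requires opencv-python and torch; file names and threshold are placeholders.
import cv2
import torch

model = torch.jit.load("defect_classifier.pt")  # model trained and exported elsewhere
model.eval()
DEFECT_THRESHOLD = 0.8  # placeholder decision threshold

def frame_to_tensor(frame) -> torch.Tensor:
    frame = cv2.resize(frame, (224, 224))
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(frame).float().permute(2, 0, 1) / 255.0  # HWC -> CHW
    return tensor.unsqueeze(0)  # add a batch dimension

def main() -> None:
    camera = cv2.VideoCapture(0)  # camera mounted over the line (index is a placeholder)
    try:
        while True:
            ok, frame = camera.read()
            if not ok:
                break
            with torch.no_grad():
                # Assumes the exported model returns a single defect logit.
                score = torch.sigmoid(model(frame_to_tensor(frame)))[0, 0].item()
            if score > DEFECT_THRESHOLD:
                # In practice this would raise an event to the line controller;
                # the image itself never needs to leave the edge.
                print(f"Possible defect, confidence {score:.2f}")
    finally:
        camera.release()

if __name__ == "__main__":
    main()
```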

So being able to move the use case where it needs to reside is probably one of the biggest things that we need to consider.

Gardner: Iveta, why is the so-called explore, experiment, and evolve approach using such a holistic ecosystem of support the right way to go?

Scientific methods and solutions

Lohovska: Because AI is not easy. If it were easy, then everyone would be doing it and we would not be having this conversation. It’s not a simple statistical use case or a program or business intelligence app where you already have the answer or even an idea of the questions you are asking.

The whole process is in the data science title. You have the word “science,” so there is a moment of research and uncertainty. It’s about the way you explore the data, the way you understand the use cases, starting from the fact that you have to define your business case, and you have to define the scope.

My advice is to start small, not exhaust your resources or the trust of the different stakeholders. Also define the correct use case and the desired return on investment (ROI). HPE is even working on the definitions and the business case when approaching an AI use case, trying to understand the level of complexity and the required level of prediction needed to achieve the use case’s success.

Such an exploration phase is extremely important so that everyone is aligned and finds a right path to minimize failure and get to the success of monetizing data and AI. Once you have the fundamentals, once you have experimented with some use cases, and you see them up and running in your production environment, then it is the moment to scale them.

I think we are doing a great job bringing all of those complicated environments together, with their data complexity, model complexity, and networking and security regulations into one environment that’s in production and can quickly bring value to many use cases.

This flow is extremely important, of experimenting and not approaching things like you have a fixed answer or fixed approach. It’s extremely important, and this is the way we at HPE are approaching AI.

Gardner: It sounds as if we are approaching some sort of a unified reference architecture that’s inclusive of systems, cloud models, data management, and AI services. Is that what’s going to be required? Andy, do we need a grand unifying theory of AI and data management to make this happen?

Longworth: I don’t think we do. Maybe one day we will get to that point, but what we are reaching now is a clear understanding of what architectures work for which use cases and business requirements. We are then able to apply them without having to experiment every time we go into this, and that complements what Iveta said.

When we start to look at these use cases, when we engage with customers, what’s key is making sure there is business value for the organization. We know AI can work, but the question is, does it work in the customer’s business context?

If we can take out a good deal of that experimentation and come in with a fairly good answer to the use case in a specific industry, then we have a good jump start on that.

As time goes on and AI develops, we will see more generic AI solutions that can be used for many different things. But at the moment, it’s really still about point solutions.

Gardner: Let’s find out where AI is making an impact. Let’s look first, Andy, at digital prescriptive maintenance and quality control. You mentioned manufacturing a little earlier. What’s the problem, context, and how are we getting better business outcomes?

Monitor maintenance with AI

Longworth: The problem is the way we do maintenance schedules today. If you look back in history, we had reactive maintenance that was basically … something breaks and then we fix it.

Now, most organizations are in a preventative mode so a manufacturer gives a service window and says, “Okay, you need to service this machinery every 1,000 hours of running.” And that happens whether it’s needed or not.

Read the White Paper on Digital Prescriptive Maintenance and Quality Control

When we get into prescriptive and predictive maintenance, we only service those assets as they actually need it, which means having the data, understanding the trends, recognizing if problems are forthcoming, and then fixing them before they impact the business.

The data from that machinery may include temperature, vibration, and speed, giving you a condition-based monitoring view and an understanding in real time of what’s happening with the machinery. You can then also use past history to predict what is going to happen with that machine in the future.

We can get to a point where we know in real time what’s happening with the machinery and have the capability to predict the failures before they happen.
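To ground the condition-monitoring idea, here is a minimal sketch that trains a classifier on historical sensor readings labeled with whether a failure followed, and then scores a fresh reading; the feature names and synthetic data are placeholders for real machine telemetry.

```python
# Predictive-maintenance sketch: learn which past operating conditions
# preceded a failure, then score a live reading. The data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Columns: temperature (deg C), vibration (mm/s), rotational speed (rpm)
X = rng.normal(loc=[70.0, 2.0, 1500.0], scale=[10.0, 0.8, 120.0], size=(2000, 3))
# Synthetic label: in this toy data, failures follow hot, high-vibration running.
y = ((X[:, 0] > 80) & (X[:, 1] > 2.5)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("Hold-out accuracy:", model.score(X_test, y_test))

# Score a fresh reading from the shop floor (values are illustrative).
fresh_reading = np.array([[86.0, 3.1, 1480.0]])
print("Failure probability:", model.predict_proba(fresh_reading)[0, 1])
```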

The prescriptive piece comes in when we understand the business criticality or the business impact of an asset. If you have a production line and you have two pieces of machinery on that production line, both may have the identical probability of failure. But one is on your critical manufacturing path, and the other is some production buffer.

As a business, the way that you are going to deal with those two pieces of machinery is different. You will treat the one on the critical path differently than the one where you have a product buffer. And so the prescriptive piece goes beyond the prediction to understanding the business context of that machinery and applying that to how you are behaving, and then how you react when something happens with that machine.
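The prescriptive step can be reduced to a simple expected-impact calculation: combine each asset’s predicted failure probability with its business criticality, and act on the highest-impact items first. The numbers below are purely illustrative.

```python
# Prescriptive-maintenance sketch: two machines with the same failure
# probability are prioritized differently once business impact (for example,
# lost revenue per hour of downtime) is taken into account. Illustrative numbers.
assets = [
    # (asset name, predicted failure probability, downtime cost per hour)
    ("press-on-critical-path", 0.15, 40_000),
    ("press-with-buffer",      0.15,  4_000),
    ("conveyor-02",            0.05, 25_000),
]

def expected_impact(prob: float, cost_per_hour: float, outage_hours: float = 8.0) -> float:
    # Expected cost of doing nothing: probability of failure times the cost
    # of an assumed outage window.
    return prob * cost_per_hour * outage_hours

ranked = sorted(assets, key=lambda a: expected_impact(a[1], a[2]), reverse=True)
for name, prob, cost in ranked:
    print(f"{name:24s} expected impact = {expected_impact(prob, cost):>10,.0f}")
```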

That’s the idea of the solution when we build digital prescriptive maintenance. The side benefit that we see is the quality control piece. If you have a large piece of machinery that you can verify is running perfectly during a production run, for example, then you can say with some certainty what the quality of the product coming out of that machine will be.

Gardner: So we have AI overlooking manufacturing and processing. It’s probably something that would make you sleep a little bit better at night, knowing that you have such a powerful tool constantly observing and reporting.

Let’s move on to our next use case. Iveta, video analytics and surveillance. What’s the problem we need to solve? Why is AI important to solving it?

Scrutinize surveillance with AI 

Lohovska: For video surveillance and video analytics in general, the overarching field is computer vision. This is the most mature and currently the trendiest AI field, simply because the amount of data is there, the diversity is there, and the algorithms are getting better and better. It’s no longer so state-of-the-art that it’s difficult to grasp, adopt, and bring into production. So, now the main goal is moving into production and monetizing these types of data sources.

Read the White Paper on Video Analytics and Surveillance

When you talk about video analytics or surveillance, or any kind of quality assurance, the main problem is improving on or detecting human errors, behaviors, and environments. Telemetry plays a huge role here, and there are many components and constraints to consider in this environment.

That makes it hardware-dependent and also requires AI at the edge, where most of the algorithms and decisions need to happen. If you want to detect fire, detect fraud or prevent certain types of failure, such as quality failure or human failure — time is extremely important.

As HPE Pointnext Services, we have been working on our own solution and reference architectures to approach those problems because of the complexity of the environment, the different cameras, and the hardware handling the data acquisition process. Even at the beginning it’s enormous and very diverse. There is no one-size-fits-all. There is no one provider or one solution that can handle surveillance use cases or broad analytical use cases at the manufacturing plant or the oil and gas rig where you are trying to detect fire or oil and gas spills in different environments. So being able to approach it holistically, to choose the right solution for the right component, and to design the architecture is key.

Also, it’s essential to have the right hardware and edge devices to acquire the data and handle the telemetry. Let’s say when you are positioning cameras in an outside environment and you have different temperatures, vibrations, and heat. This will reflect on the quality of the acquired information going through the pipeline.

Some of the benefits of use cases using computer vision and video surveillance include real-time information coming from manufacturing plants and knowing that all of the safety and security standards there are met. Knowing that the people operating are following the instructions and have the safeguards required for a specific manufacturing plant is also extremely important.

When you have a quality assurance use case, video analytics is one source of information to tackle the problem. For example, improving the quality of your products or batches is just one source in the computer vision field. Having the right architecture, being agile and flexible, and finding the right solution for the problem and the right models deployed at the right edge device — or at the right camera — is something we are doing right now. We have several partners working to solve the challenges of video analytics use cases.

Gardner: When you have a high-scaling, high-speed AI to analyze video, it’s no longer a gating factor that you need to have humans reviewing the processes. It allows video to be used in so many more applications, even augmented reality, so that you are using video on both ends of the equation, as it were. Are we seeing an explosion of applications and use cases for video analytics and AI, Iveta?

Lohovska: Yes, absolutely. The impact of algorithms in this space is enormous. Also, all of the open source datasets and pre-trained models, such as ImageNet and ResNet, give you a huge amount of data to train any kind of algorithm on. You can adjust them and pre-train them for your own use cases, whether it’s healthcare, manufacturing, or video surveillance. It’s very enabling.

You can see the diversity of the solutions people are developing and the different programs they are tackling using computer vision capabilities, not only from the algorithms, but also from the hardware side, because the cameras are getting more and more powerful.

Currently, we are working on several projects in the non-visible human spectrum. This is enabled by the further development of the hardware acquiring those images that we can’t see.

Gardner: If we can view and analyze machines and processes, perhaps we can also listen and talk to them. Tell us about speech and natural language processing (NLP), Iveta. How is AI enabling those businesses and how they transform themselves?

Speech-to-text to protect

Lohovska: This is another strong field where AI is used and still improving. It’s not as mature as computer vision, simply because of the complexity of human language and speech, and the way speech gets recorded and transferred. It’s a bit more complex, so it’s not only a problem of technologies and people writing algorithms, but also of linguists being able to frame the grammar problems and write the right equations to solve them.

Read the White Paper on Speech and Natural Language Processing

But one very interesting field in the speech and NLP area is speech-to-text, which is basically being able to transcribe speech into text. It’s very helpful for organizations handling emergency calls or for fraud detection, where you need to detect fraud or danger in real time, such as when someone is in danger. It’s a very common use case for law enforcement and security organizations, or simply for improving the quality of service in call centers.

This example is industry- or vertical-independent. You can have finance, manufacturing, retail — but all of them have some kind of customer support. This is the most common use case, being able to record and improve the quality of your services, based on the analysis you can apply. Similar to the video analytics use case, the problem here, too, is handling the complexity of different algorithms, different languages, and the varying quality of the recordings.
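As a hedged illustration of the speech-to-text building block, the sketch below transcribes a recorded call with an off-the-shelf, pre-trained automatic speech recognition pipeline and runs a trivial keyword check on the transcript; the library choice, default model, file name, and keyword list are assumptions, not the architecture discussed here.

```python
# Speech-to-text sketch: transcribe a recorded call with an open pre-trained
# ASR model, then flag transcripts containing watch-list keywords.
# Requires the transformers library (plus its audio dependencies);
# the audio file and keywords are placeholders.
from transformers import pipeline

ALERT_KEYWORDS = {"fire", "emergency", "fraud"}

def transcribe_and_flag(audio_path: str) -> None:
    asr = pipeline("automatic-speech-recognition")  # downloads a default ASR model
    transcript = asr(audio_path)["text"]
    print("Transcript:", transcript)

    hits = {word for word in ALERT_KEYWORDS if word in transcript.lower()}
    if hits:
        print("Escalate this call, keywords detected:", ", ".join(sorted(hits)))

if __name__ == "__main__":
    transcribe_and_flag("recorded_call.wav")
```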

A reference architecture, where you have the different components designed around exactly this holistic approach, allows the user to explore, evolve, and experiment in this space. We choose the right component for the right problem and decide how to approach it.

And in this case, if we combine the right data science tool with the right processing tool and the right algorithms on top of it, then you can simply design the solution and solve the specific problem.

Gardner: Our next and last use case for AI is one people are probably very familiar with, and that’s the autonomous driving technology (ADT).

Andy, how are we developing highly automated-driving infrastructures that leverage AI and help us get to that potential nirvana of truly self-driving and autonomous vehicles?

Data processing drives vehicles 

Longworth: There are several problems around highly autonomous driving as we have seen. It’s taking years to get to the point where we have fully autonomous cars and there are clear advantages to it.

If you look at, for example, what the World Health Organization (WHO) says, there are more than 1 million deaths per year in road traffic accidents. One of the primary drivers for ADT is that we can reduce the human error in cars on the road — and reduce the number of fatalities and accidents. But to get to that point we need to train these immensely complex AI algorithms that take massive amounts of data from the car.

Just purely from the sensor point of view, we have high-definition cameras giving 360-degree views around the car. You have radar, GPS, audio, and vision systems. Some manufacturers use light detection and ranging (LIDAR), some not. But you have all of these sensors giving massive amounts of data. And to develop those autonomous cars, you need to be able to process all of that raw data.

Read the White Paper on Development of Self-Driving Infrastructure

Typically, in an eight-hour shift, an ADT car generates somewhere between 70 and 100 terabytes of data. If you have an entire fleet of cars, then you need to be able to very quickly get that data off of the car so that you can get them back out on the road as quickly as possible. Then you need to get that data from where you offload it into the data center so that the developers, data scientists, analysts, and engineers can build to the next iteration of the autonomous driving strategy.
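A quick back-of-the-envelope calculation, using the 70 to 100 TB per eight-hour shift figure above, shows why the offload step is such a bottleneck; the two-hour turnaround window assumed here is hypothetical.

```python
# Back-of-the-envelope: sustained throughput needed to offload one test car's
# data so it can get back on the road. The offload window is an assumption;
# the per-shift volumes come from the discussion above.
TB = 1e12  # bytes

OFFLOAD_WINDOW_HOURS = 2.0  # assumed turnaround target
for shift_volume_tb in (70, 100):
    data_bits = shift_volume_tb * TB * 8
    gbits_per_second = data_bits / (OFFLOAD_WINDOW_HOURS * 3600) / 1e9
    print(f"{shift_volume_tb} TB in {OFFLOAD_WINDOW_HOURS:.0f} h needs "
          f"about {gbits_per_second:,.0f} Gbit/s sustained")
```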

When you have built that, tested it, and done all the good things that you need to do, you need to next be able to get those models and that strategy from the developers back into the cars again. It’s like the other AI problems that we have been talking about, but on steroids because of the sheer volume of data and because of the impact of what happens if something should go wrong.

At HPE Pointnext Services, we have developed a set of solutions that address several of the pain points in the ADT development process. First is the ingest: How can we use HPE Edgeline processing in the car to pre-process data and reduce the amount of data that has to be sent back to the data center? You also want to send back the most important data from the eight-hour drive first, and then send the run-of-the-mill, backup data later.
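
As a simple illustration of that "most important data first" ordering, here is a sketch that queues a shift's recordings by priority; the tags, segment names, and sizes are invented for the example.

```python
# Sketch: upload tagged, high-value segments (disengagements, near misses)
# before routine footage. Priorities and segment metadata are illustrative.

import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Segment:
    priority: int                       # lower number = upload sooner
    name: str = field(compare=False)
    size_gb: float = field(compare=False)

def upload_order(segments):
    heap = list(segments)
    heapq.heapify(heap)
    while heap:
        yield heapq.heappop(heap)

shift = [
    Segment(2, "highway_cruise_0630", 900.0),
    Segment(0, "disengagement_0712", 4.5),
    Segment(1, "pedestrian_near_miss_0815", 12.0),
    Segment(2, "parking_lot_idle_0900", 300.0),
]

for seg in upload_order(shift):
    print(f"upload {seg.name} ({seg.size_gb} GB)")
```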

The second piece is the data platform itself, building a massive data platform that is extensible to store all the data coming from the autonomous driving test fleet. That needs to also expand as the fleet grows as well as to support different use cases.

The data platform and the development platform are not only massive in terms of the amount of data they need to hold and process, but also in terms of the required tooling. We have been developing reference architectures to enable automotive manufacturers, along with the suppliers of those automotive systems, to build their data platforms and provide all the processing that they need so their data scientists can continuously develop autonomous driving strategies and be able to test them in a highly automated way, while also giving access to the data to the additional suppliers.

For example, the sensor suppliers need to see what’s happening to their sensors while they are on the car. The platform that we have been putting together is really concerned with having the flexibility for those different use cases, the scalability to be able to support the data volumes of today, but also to grow — to be able to have the data volumes of the not-too-distant future.

The platform also supports the speed and data locality, so being able to provide high-speed parallel file systems, for example, to feed those ML development systems and help them train the models that they have.
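
One small illustration of feeding model development from a fast shared file system is reading sensor shards in parallel; the mount point, shard format, and training step below are placeholders, not a specific reference architecture.

```python
# Sketch: read sensor-data shards concurrently from a shared file system mount
# and hand them to a training step. Paths and train_step() are placeholders.

from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

DATA_ROOT = Path("/mnt/parallel_fs/drive_logs")      # assumed mount point

def load_shard(path):
    return path.read_bytes()

def train_step(batch):
    print(f"training on {len(batch)} shards, {sum(map(len, batch))} bytes")

def feed_training(batch_size=8, workers=8):
    if not DATA_ROOT.is_dir():
        print(f"no data found under {DATA_ROOT}")
        return
    shards = sorted(DATA_ROOT.glob("*.bin"))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for start in range(0, len(shards), batch_size):
            batch = list(pool.map(load_shard, shards[start:start + batch_size]))
            train_step(batch)

if __name__ == "__main__":
    feed_training()
```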

So all of this pulls together the different components we have talked about with the different use cases, but at a scale that is much larger than several of the other use cases, probably put together.

Gardner: It strikes me that the ADT problem, if solved, enables so many other major opportunities. We are talking about micro-data centers that provide high-performance compute (HPC) at the edge. We are talking about the right hybrid approach to the data management problem — what to move, what to keep local, and how to take a lifecycle approach to it all. So, ADT is really a key use-case scenario.

Why is HPE uniquely positioned to solve ADT that will then lead to so many enabling technologies for other applications?

Longworth: Like you said, the micro-data center — every autonomous driving car essentially becomes a data center on wheels. So it's about being able to provide that compute at the edge to enable the processing of all that sensor data.

If you look at the HPE portfolio of products, there are very few organizations that have edge compute solutions and the required processing power in such small packages. But it's also about being able to wrap it up with not only the hardware, but the solution on top, the support, and a flexible delivery model.

Lots of organizations want to have a cloud-like experience, not just in the way they consume the technology, but also in the way they pay for it. So, by providing everything as-a-service, HPE allows you to pay for your autonomous driving platform as you use it. Again, there are very few organizations in the world that can offer that end-to-end value proposition.

Collaborate and corroborate 

Gardner: Iveta, why does it take a team-sport and solution-approach from the data science perspective to tackle these major use cases?

Lohovska: I agree with Andy. What matters is the way we approach those complex use cases and the fact that you can have them as a service — not only infrastructure-as-a-service (IaaS) or data-as-a-service (DaaS), but AI and modeling-as-a-service (MaaS). You can have a marketplace for models, and being able to plug-and-play different technologies, experiment, and rapidly deploy them allows you to rapidly get value out of those technologies. That is something we are doing on a daily basis with amazing experts and people with knowledge of the different layers. They can attack the complexity of those use cases from each side, because it requires not just data science and the hardware, but a lot of domain-specific expertise to solve those problems. This is something we are looking at and doing in-house.

And I am extremely happy to say that I have the pleasure to work with all of those amazing people and experts within HPE.

Gardner: And there is a great deal more information available on each of these use cases for AI. There are white papers on the HPE website in Pointnext Services.

What else can people do, Andy, to get ready for these high-level AI use cases that lead to digital business transformation? How should organizations be setting themselves up on a people, process, and technology basis to become adept at AI as a core competency?

Longworth: It is about people, technology, process, and all these things combined. You don’t go and buy AI in a box. You need a structured approach. You need to understand what the use cases are that give value to your organization and to be able to quickly prototype those, quickly experiment with them, and prove the value to your stakeholders.

Where a lot of organizations get stuck is moving from that prototyping, proof of concept (POC), and proof of value (POV) phase into full production. It is tough getting the processes and pipelines that enable you to transition from that small POV phase into a full production environment. If you can crack that nut, then the next use-cases that you implement, and the next business problems that you want to solve with AI, become infinitely easier. It is a hard step to go from POV through to the full production because there are so many bits involved.

You have that whole value chain from grabbing hold of the data at the point of creation, processing that data, and making sure you have the right people and process around that. And when you come out with an AI solution that gives some form of inference, some form of answer, you need to be able to act upon that answer.

You can have the best AI solution in the world that will give you the best predictions, but if you don't build those predictions into your business processes, you might as well have never made them in the first place.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

Automation and connectivity will enable the modern data center to extend to many more remote locations

Enterprise IT strategists are adapting to new demands from the industrial edge, 5G networks, and hybrid deployment models that will lead to more diverse data centers across more business settings.

That’s the message from a broad new survey of 150 senior IT executives and data center managers on the future of the data center. IT leaders and engineers say they must transform their data centers to leverage the explosive growth of data coming from nearly every direction.

Yet, according to the Forbes-conducted survey, only a small percentage of businesses are ready for the decentralized and often small data centers that are needed to process and analyze data close to its source.

The next BriefingsDirect discussion on the latest data center strategies unpacks how more self-healing and automation will be increasingly required to manage such dispersed IT infrastructure and support increasingly hybrid deployment scenarios.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Joining us to help learn more about how modern data centers will efficiently extend to the computing edge is Martin Olsen, Vice President of Global Edge and Integrated Solutions at Vertiv™. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Martin, what’s driving this movement away from mostly centralized IT infrastructure to a much more diverse topology and architecture?

Olsen: It’s an interesting question. The way I look at it is it’s about the cloud coming to you. It certainly seems that we are moving away from centralized IT or centralized locations where we process data. It’s now more about the cloud moving beyond that model.

We are on the front steps of a profound re-architecting of the Internet. Interestingly, there’s no finish line or prescribed recipe at this point. But we need to look at processing data very, very differently.

Over the past decade or more, IT has become an integral part of our businesses. And it’s more than just back-end applications like customer relationship management (CRM), enterprise resource planning (ERP), and material requirements planning (MRP) systems that service the organization. It’s also become an integrated fabric to how we conduct our businesses.

Meeting at the edge 

Gardner: Martin, Cisco predicts there will be 28.5 billion connected devices by 2022, and KPMG says 5G networks will carry 10,000 times more traffic than current 4G networks. We’re looking at an “unknown unknown” here when it comes to what to expect from the edge.

Olsen: Yes, that’s right. The starting point goes well beyond just content distribution networks (CDNs); it’s also about home automation, such as accessing your home security cameras, adjusting the temperature, and other things around the home.

That’s now moving to business automation, where we use compute and generate data to develop, design, manufacture, deploy, and operate our offerings to customers in a much better and differentiated fashion.

We’re also trying to improve the customer experience and how we interact with consumers. So billions of devices out there generating an unimaginable amount of data is what has become known as edge computing, which means more computing done at or near the source of the data.

In the past, we pushed that data out for consumption, but now it’s much more about data meeting people, data interacting with people in a distributed IT environment. And then, going beyond that, there is 5G.

We see a paradigm shift in the way we use IT. Take, for example, the amount of tech that goes into a manufacturing facility, especially high-tech manufacturing. It’s exploding, with tens of thousands of sensors deployed in just one facility to help dramatically improve productivity, differentiate, and drive efficiency into the business.

Retail operations, from a compute standpoint, now require location services to offer a personalized experience in both the pre-shop phase as well as when you go into the store, and potentially in the post-shop, or follow-up experience.

We need to deliver these services quickly, and that requires lower latency and higher levels of bandwidth. It’s increasingly about pushing out from a central standpoint to a distributed fashion. We need to be rethinking how we deploy data centers. We need to think about the future and where these data centers are going to go. Where are we going to be processing all of this data?

Where does the data go?

Gardner: The complexity of the past 10 years around factoring in cloud, hybrid cloud, private cloud, and multi-cloud is now expanding back down into the organization — whether it’s an environment for retail, home and consumer, or industrial and business-to-business. How are IT leaders and engineers going to update their data centers to exploit 5G and edge computing opportunities despite this complexity?

Olsen: You have to think about it differently around your physical infrastructure. You have the data aspect of where data moves and how you process it. That’s going to sit on physical infrastructure somewhere, and it’s going to need to be managed somehow.

Learn How Self-Healing and Automation 

Help Manage Dispersed IT Infrastructure 

You should, therefore, think differently about redesigning and deploying the physical infrastructure. How do you operate and manage it? The concept of a data center has to transform and evolve. It’s no longer just a big building. It could be 100, 1,000, or 10,000 smaller micro data centers. These small data centers are going to be located in places we had previously never imagined you would put in IT infrastructure.

And so, the reliance on onsite technical and operational expertise has to evolve, too. You won’t necessarily have that technical support, a data center engineer walking the halls of a massive data center all day, for example. You are going to be in places like some backroom of a retail store, a manufacturing facility, or the base of a cell tower. It could be highly inaccessible.

You’ll need solutions that offer predictive operations, that have self-healing capabilities within them where they can fail in place but still operate as a function of built-in redundancy. You want to deploy solutions that have zero-touch provisioning, so you don’t have to go to every site to set it up and configure it. It needs to be done remotely and with automation built-in.
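
As a highly simplified sketch of zero-touch provisioning with a self-healing retry, a node on first boot might pull its configuration from a central service, apply it, and check its own health before asking for help; the endpoint URL, config fields, and probe below are placeholders rather than any vendor's actual API.

```python
# Sketch of zero-touch provisioning with a simple self-healing retry loop.
# The endpoint, config handling, and health probe are illustrative placeholders.

import json
import time
import urllib.request

CONFIG_URL = "https://provisioning.example.com/node-config"   # assumed endpoint

def fetch_config(node_id):
    with urllib.request.urlopen(f"{CONFIG_URL}?node={node_id}", timeout=10) as resp:
        return json.load(resp)

def apply_config(config):
    print("applying", config)            # placeholder for real configuration work

def healthy():
    return True                          # placeholder health probe

def provision(node_id, max_attempts=5, backoff_s=30.0):
    for attempt in range(1, max_attempts + 1):
        try:
            apply_config(fetch_config(node_id))
            if healthy():
                return True
        except Exception as err:
            print(f"attempt {attempt} failed: {err}")
        time.sleep(backoff_s)
    return False                          # escalate to a remote operator
```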

You should also consider where the applications are going to be hosted, and that’s not clear now. How much bandwidth is needed? It’s not clear. The demand is not clear at this point. As I said in the beginning, there is no finish line. There’s nothing that we can draw up and say, “This is what it’s going to be.” There is a version of it out there that’s currently focused around home automation and content distribution, and that’s just now moving to business automation, but again, not in any prescribed way yet.

So it’s hard to know which are the “right” technologies to adopt now. And that becomes a real concern for your ability to compete over time, because you can outdate yourself really, really quickly if you don’t make the right choices.

Gardner: When you face such change in your architecture and potential decentralization of micro data centers, you still need to focus on security, backup and recovery, and contingency plans for emergencies. We still need to be mission-critical, even though we are distributed. And, as you point out, many of these systems are going to be self-healing and self-configuring, which requires a different set of skills.

We have a people, process, and technology sea change coming. You at Vertiv wanted to find out what people in the field are thinking and how they are reacting to such change. Tell us about the Vertiv-Forbes survey, what you wanted to accomplish, and the top-line findings.

Survey says seek strategic change

Olsen: We wanted to gauge the thinking and gain a sense of what the C-suite, the data center engineers, and the data center community were thinking as we face this new world of edge computing, 5G, and Internet of things (IoT). The top findings show a need for fundamental strategic change. We face a new mixture of architectures that is far more decentralized and with much more modularity, and that will mean a new way to manage and operate these data centers, too.

Based on the survey, only 11 percent of C-suite executives believe their data centers are currently ahead of current needs, and they certainly don’t have the infrastructure ready for what’s needed in the future. It’s even fewer among the data center engineers we polled, with only 1 percent of them believing they are ready. That means the vast majority, 99 percent, don’t believe they have the right infrastructure.

There is also broad agreement that security and bandwidth need to be updated. Concern about security is a big thing. We know from experience that security concerns have stunted remote monitoring adoption. But the sheer quantity of disparate sites required for edge computing makes it a necessity to access, assess, and potentially reconfigure and remotely fix problems through remote monitoring and access.

Vertiv is driving a high level of configurability of instruments so you can take our components and products and put them together in a multitude of different ways to provide the utmost flexibility when you deploy. We are driving modularized solutions in terms of both modular data center and modularity in terms of how it all goes together onsite. And we are adding much more intelligence into our offerings for the remote sites, as well as the connectivity to be able to access, assess, and optimize these systems remotely.

Gardner: Martin, did the survey indicate whether the IT leaders in the field are anticipating or demanding such self-configuration technologies?

Olsen: Some 24 percent of the executives reported that they expect more than 50 percent of data centers will be self-configuring or have zero-touch provisioning by 2025. And about one-third of them say that more than 50 percent of their data centers will be self-healing by then, too.

That’s not to say that they have all of the answers. That’s their prediction and their responses to what’s going to be needed to solve their needs. So, 29 percent of engineers say they don’t know what percentage of the data centers will be self-configuring and self-healing, but there is an overwhelming agreement that it is a capability they need to be thinking about. Vertiv will develop and engineer our offerings going forward based on what’s going to be put in place out there.

Gardner: So there may be more potential points of failure, but there is going to be a whole new set of technologies designed to ameliorate problems, automate, and allow the remote capability to fix things as needed. Tell us about the proper balance between automation and remote servicing. How might they work together?

Make intelligent choices before you act 

Olsen: First of all, it’s not just a physical infrastructure problem. It has everything to do with the data and workloads as well. They go hand-in-hand; it certainly requires a partnership, a team of people and organizations that come together and help.

Driving intelligence into our products and taking that data off of our systems as they operate provides actionable data. You can then offer that analysis up to non-technical people on how to rectify situations and to make changes.

Learn How Self-Healing and Automation 

Help Manage Dispersed IT Infrastructure 

These solutions also need to communicate with the hypervisor platforms — whether that’s via traditional virtualization or containerization. Fundamentally, you need to be able to decide how and when to move your applications and workloads to the optimal points on the network.
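
A toy sketch of that "how and when to move workloads" decision might score candidate sites on measured latency and spare capacity; the site list, figures, and selection rule are invented purely for illustration.

```python
# Toy placement decision: pick the site that meets a workload's latency target
# and has the most headroom left. All site data here is illustrative.

from dataclasses import dataclass

@dataclass
class Site:
    name: str
    latency_ms: float        # measured from the workload's users
    free_kw: float           # remaining power/cooling headroom

def place(sites, latency_budget_ms, needed_kw):
    candidates = [s for s in sites
                  if s.latency_ms <= latency_budget_ms and s.free_kw >= needed_kw]
    if not candidates:
        return None          # nothing fits: leave the workload where it is
    return max(candidates, key=lambda s: s.free_kw)

sites = [
    Site("core-cloud-region", 48.0, 400.0),
    Site("metro-colo", 12.0, 60.0),
    Site("cell-tower-micro-dc", 3.0, 8.0),
]

print(place(sites, latency_budget_ms=10, needed_kw=5).name)   # cell-tower-micro-dc
print(place(sites, latency_budget_ms=60, needed_kw=50).name)  # core-cloud-region
```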

We are trying to alleviate that challenge by making our offerings more intelligent and offering up actionable alarms, warnings, and recommendations to weigh choices across an overall platform. Again, it takes a partnership with the other vendors and services companies. It’s not just from a physical infrastructure standpoint.

Gardner: And when that ecosystem comes together, you can provide a constellation of data centers working in harmony to deliver services from the edge to the consumer and back to the data centers. And when you can do that around and around, like a circuit, great things can happen.

So let’s ground this, if we can, to the business reality. We are going to enable entirely new business models, with entirely new capabilities. Are there examples of how this might work across different verticals? Can you illustrate — when you have constructed decentralized data centers properly — the business payoffs?

Improving remote results 

Olsen: As you point out, it’s all about the business outcomes we can deliver in the field. Take healthcare. There is a shortage of healthcare expertise in rural areas. Being able to offer specialized doctors and advanced healthcare in places that you wouldn’t imagine today requires a new level of compute and network that delivers low latency all the way to the endpoints.

Imagine a truck fitted with a medical imaging suite. That’s going to have to operate somewhat autonomously. The 5G connectivity becomes essential as you process those images. They have to be loaded into a central repository to be accessed by specialists around the world who read the images.

That requires two-way connectivity. A huge amount of data from these images needs to move to provide that higher level of healthcare and a better patient experience in places where we couldn’t do it before.

So 5G plays into that, but it also means being able to process and analyze some of the data locally. There need to be aggregation points throughout the network. You will need compute to reside at multiple levels of the infrastructure. Places like the base of a cell tower could become a focal point for this.

You can imagine having four, five, six times as much compute power sitting in these places along a remote highway that is not easily accessible. So, having technical staff be able to troubleshoot those becomes vital.

There are also use cases that will use augmented reality (AR). Think of technicians in the field being able to use AR when a field engineer is dispatched to troubleshoot a system somewhere. We can make them as effective as possible and access expertise from around the world to help troubleshoot these sites. AR becomes a massive part of this because you can overlay what the onsite people are seeing through 3D glasses or virtual reality glasses and help them through troubleshooting, fixing, and optimizing whatever system they might be working on.

Again, that requires compute right at the endpoint device. It requires aggregation points and connectivity all the way back to the cloud. So, it requires a complex network working together. The more advanced these use cases become, and the more remote the locations we have to think through, the more infrastructure we are going to have to deploy and access as well.

Gardner: Martin, when I listen to you describe these different types of data centers with increased complexity and capabilities in the networks, it sounds expensive. But are there efficiencies you gain when you have a comprehensive design across all of the parts of the ecosystem? Are there mitigating factors that help with the total cost?

Olsen: Yes, as the net footprint of compute increases, I don’t think the cost is linear with that. We have proven that with the Vertiv technologies we have developed and already deployed. As the compute footprint increases, there is a fundamental need for driving energy efficiency into the infrastructure. That comes in the form of using more efficient ways of cooling the IT infrastructure, and we have several options around that.

It’s also from new battery technologies. You start thinking about lithium-ion batteries, which Vertiv has solutions around. Lithium-ion batteries make the solution far more resilient and more compact, and they need much less maintenance when they sit out there.

Learn How Self-Healing and Automation 

Help Manage Dispersed IT Infrastructure 

So, the amount of infrastructure that’s going to go out there will certainly increase. We don’t think it’s necessarily going to be linear in terms of the cost when you pay close attention to how, as an organization, you deploy edge computing. By considering these new technologies, that’s going to help drive energy efficiency, for example.

Gardner: Were there any insights from the Forbes survey that went to the cost equation? How do the IT executives expect this to shake out?

Energy efficiency partnerships 

Olsen: We found that 71 percent of the C-suite executives said that future data centers will reduce costs. That speaks to both the fact that there will be more infrastructure out there, but that it will be more energy efficient in how it’s run.

It’s also going to reduce the cost of the overall business. Going back to the original discussion around the business outcomes, deploying infrastructure in all these different places will help drive down the overall cost of doing business.

It’s an energy efficiency play both from a very fundamental standpoint in the way you simply power and cool the equipment, and overall, as a business, in the way you deliver improved customer experience and how you deliver products and services for your customers.

Gardner: How do organizations prepare themselves to get out in front of this? As we indicated from the survey findings, not that many say they are prepared. What should they be doing now to change that?

Olsen: Yes, most organizations are unprepared for the future — and not necessarily even in agreement on the challenges. A very small percentage of the respondents, only 11 percent of executives, believe that their data centers are ahead of current needs, and even fewer of the data center engineers do. Only 44 percent of them say that their data centers are updated regularly. Only 29 percent say their data centers even meet current needs.

To prepare going forward, they should seek partnerships. Get the data centers upgraded, but also think through and understand how organizations like Vertiv have decades of experience in designing, deploying, and operating large data centers from a physical infrastructure standpoint. We use that experience and knowledge base for the data center of tomorrow. It can be a single IT rack or two going to any location.

We take all of that learning and experience and drive it into what becomes the smallest common denominator data center, which could just be a rack. So it’s about working with someone who has that experience, already has the data, and offers configurable, modular solutions that are intelligent and provide the ability to access, assess, and optimize remotely. And it’s about managing the data that comes off these systems and extracting the value out of it, the way we do with some of our offerings around Vertiv LIFE Services, with very prescriptive, actionable alarms and alerts that we send from our systems.

Very few organizations can do this on their own. It’s about the ecosystem, working with companies like Vertiv, working closely with our strategic partners on the IT side, storage networks, and all the way through to the applications that make it all work in unison.

Think through how to efficiently add compute capacity across all of these new locations, what those new locations should look like, and what the requirements are from a security standpoint.

There is a resiliency aspect to it as well. In harsh environments such as high-tech manufacturing, you need to ensure the infrastructure is scalable and minimizes capital expenditure spending. The modular approach allows building for a future that may be somewhat unknown at this point. Deploying modular systems that you can easily augment and add capacity or redundancy to over time — and that operate via robust remote management platforms — is one of the things you want to be thinking about.

Gardner: This is one of the very few empirical edge computing research assets that I have come across, the Vertiv and Forbes collaboration survey. Where can people find out more information about it if they want more details? How is this going to be available?

Learn How Self-Healing and Automation 

Help Manage Dispersed IT Infrastructure 

Olsen: We want to make this available to everybody to review. In the interest of sharing the knowledge about this new frontier, the new world of edge computing, we will absolutely be making this research and study available. I want to encourage people to go visit vertiv.com to find more information and download the research results.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Vertiv.

How Intility uses HPE Primera intelligent storage to move to 100 percent data uptime

The next BriefingsDirect intelligent storage innovation discussion explores how Norway-based Intility sought and found the cutting edge of intelligent storage.

Stay with us as we learn how this leading managed platform services provider improved uptime — on the road to 100 percent — and reduced complexity for its end users.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To hear more about the latest in intelligent storage strategies that lead to better business outcomes, please welcome Knut Erik Raanæs, Chief Infrastructure Officer at Intility in Oslo, Norway. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Knut, what trends and business requirements have been driving your need for Intility to be an early adopter of intelligent storage technology?

Raanæs: For us, it is important to have good storage systems that are easy to operate, to lower our management costs. At the same time, they give great uptime for our customers.

Gardner: You are dealing not only with quality of service requirements; you also have very rapid growth. How does intelligent storage help you manage such rapid growth?

Raanæs: By easily having performance trends shown, we can spot when we are running full and react before we run out of capacity.
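
A rough illustration of spotting "running full" from a trend is to extrapolate recent capacity samples; the figures below are invented, and a simple straight-line fit stands in for whatever the storage tooling actually uses.

```python
# Rough capacity forecast: fit a straight line through recent used-capacity
# samples and estimate when the pool hits its limit. Sample data is invented.

def days_until_full(used_tb_per_day, capacity_tb):
    n = len(used_tb_per_day)
    mean_x, mean_y = (n - 1) / 2, sum(used_tb_per_day) / n
    slope = (sum((x - mean_x) * (y - mean_y)
                 for x, y in enumerate(used_tb_per_day))
             / sum((x - mean_x) ** 2 for x in range(n)))
    if slope <= 0:
        return None                      # usage flat or shrinking
    return (capacity_tb - used_tb_per_day[-1]) / slope

history = [412, 415, 419, 424, 428, 433, 437]    # TB used, one sample per day
print(f"~{days_until_full(history, capacity_tb=500):.0f} days of headroom left")
# prints roughly 15 days with this invented trend
```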

Gardner: As a managed cloud service provider, it’s important for you to have strict service level agreements (SLAs) met. Why are the requirements of cloud services particularly important when it comes to the quality of storage services?

Intelligent, worry-free storage 

Raanæs: It’s very important to have good quality-of-service separation because we have lots of different kinds of customers. We don’t want the noisy-neighbor problem, where one customer affects another customer — or even the virtual machine (VM) of one customer affects another VM. The applications should work independently of each other.

That’s why we have been using Hewlett Packard Enterprise (HPE) Nimble Storage. Without it, our quality of service would be much worse at the VM disk level. It’s very good technology.
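
For readers unfamiliar with that noisy-neighbor separation, a toy per-VM IOPS cap using a token bucket shows the general idea; this is only a concept sketch and not how Nimble implements quality of service.

```python
# Toy per-VM IOPS limiting with a token bucket, so one tenant's burst cannot
# starve another. Concept sketch only; limits and names are illustrative.

import time

class IopsLimiter:
    def __init__(self, iops_limit, burst=None):
        self.rate = iops_limit
        self.capacity = burst or iops_limit
        self.tokens = float(self.capacity)
        self.last = time.monotonic()

    def allow(self, ops=1):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= ops:
            self.tokens -= ops
            return True
        return False                     # defer or queue the I/O instead

vm_limits = {"tenant-a-db": IopsLimiter(5000), "tenant-b-web": IopsLimiter(1000)}
print(vm_limits["tenant-b-web"].allow(200))    # True: within its own budget
print(vm_limits["tenant-b-web"].allow(2000))   # False: capped, tenant A unaffected
```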

Gardner: Tell us about Intility, your size, scope, how long you have been around, and some of the major services you provide.

Raanæs: Intility was founded in 2000. We have always been focused on being a managed cloud service provider. From the start, there have been central shared services, a central platform, where we on-boarded customers and they shared email systems, and Microsoft Active Directory, along with all the application backup systems.

Over the last few years, the public cloud has made our customers more open to cloud solutions in general, and to not having servers in the local on-premises room at the office. We have now grown to more than 35,000 users, spread over 2,000 locations across 43 countries. We have 11 shared services datacenters, and we also have customers with edge location deployments due to high latency or unstable Internet connections. They need to have the data close to them.

Gardner: What is required when it comes to solving those edge storage needs?

Raanæs: Those customers often want inexpensive solutions. So we have to look at different solutions and pick the one that gives the best stability but that also doesn’t cost too much. We also need easy remote management of the solution, without being physically present.

Gardner: At Intility, even though you’re providing infrastructure-as-a-services (IaaS), you are also providing a digital transformation benefit. You’re helping your customers mature and better manage their complexity as well as difficulty in finding skills. How does intelligent IaaS translate into digital transformation?

Raanæs: When we meet with potential customers, we focus on taking away concerns about infrastructure. They are just going to leave that part to us. The IT people can then just move up in [creating value] and focus on digitalizing the business for their customers.

Gardner: Of course, cloud-based services require overcoming challenges with security, integration, user access management, and single sign on. How are those higher-level services impacted by the need for intelligent storage?

Smart storage security 

Raanæs: With intelligent storage, we can focus on having our security operations center (SOC) monitor and respond the instant they see something on our platforms. We keep a keen eye on our storage systems to make sure nothing unusual is happening on the storage, because that can be an early signal of something happening.

Gardner: Please describe the journey you have been on when it comes to storage. What systems you have been using? Why have intelligence, insights, and analysis capabilities been part of your adoption?

Raanæs: We started back in 2013 with HPE 3PAR arrays. Before that we used IBM storage. We had multiple single-RAID (Redundant Array of Inexpensive Disks) sets and had to manage hotspots ourselves, moving VMs one at a time to try to balance things out manually.

In 2013, when we went with the first 3PAR array, we had huge benefits. That 3PAR array used less space and at the same time we didn’t have to manage or even out the hotspots. 3PAR and its active controllers were a great plus for us for many years.

But about one-and-a-half years ago, we started using HPE Nimble arrays, primarily due to the needs of VMware vCenter and quality of service requirements. Also, with the Nimble arrays, the InfoSight technology was quite nice.

Gardner: Right. And, of course, HPE is moving that InfoSight technology into more areas of their infrastructure. How important has InfoSight been for you?

Raanæs: It’s been quite useful. We had some systems that required us to use other third-party applications to give an expansive view of the performance of the environment. But those applications were quite expensive and had functionality that we really didn’t need. So at first we pulled data from the vCenter database and visualized the data. That was a huge start for us. But when InfoSight came along later it gave us even more information about the environment.

Gardner: I understand you are now also a beta customer for HPE Primera storage. Tell us about your experience with Primera. How does that move the needle forward for you?

For 100 percent uptime 

Raanæs: Yes, we have been beta testing Primera, and it has been quite interesting. It was easy to set up. I think maybe 20 minutes from getting it into the rack and just clicking through the setup. It was then operational and we could start provisioning storage to the whole system.

And with Primera, HPE is going in with 100 percent uptime guarantee. Of course, I still expect to deal with some rare incidences or outages, but it’s nice to see a company that’s willing to put their money where their mouth is, and say, “Okay, if there is any downtime or an outage happens, we are going to give you something back for it.”

Gardner: Do you expect to put HPE Primera into production soon? How would you use it first?

Raanæs: We are currently waiting for the next software upgrade for HPE Primera. Then we are going to look at putting it into production. The use case is going to be general storage, because we have so much more storage demand and need to try to keep it consistent, to make it easier to manage.

Gardner: And do you expect to be able to pass along these benefits of speed of deployment and 100 percent uptime to your end users? How do you think this will improve your ability to deliver SLAs and better business outcomes?

Raanæs: Yes, our end users are going to be quite happy with 100 percent uptime. No one likes downtime — not us, not our customers. And HPE Primera’s speed of deployment means that we have more time to manage other parts of the platform and to get better service out to the customers.

Gardner: I know it’s still early and you are still in the proof of concept stage, but how about the economics? Do you expect that having such high levels of advanced intelligence across storage will translate into your ability to do more for less, and perhaps pass some of those savings on?

Raanæs: Yes, I expect that’s going to be quite beneficial for us. Because we are based in Norway, one of our largest expenses is people. So, the more we can automate by using the systems, the better. I am really looking forward to seeing this improve, making systems easier to manage and letting us analyze performance within a few hours.

Gardner: On that issue of management, have you been able to use HPE Primera to the degree where you have been able to evaluate its ease of management? How beneficial is that?

Work smarter, not harder 

Raanæs: Yes, the ease of management was quite nice. With Primera you can do the service upgrade more easily. So with 3PAR, we had to schedule an upgrade with the upgrade team at HPE and had to wait a few weeks. Now we can just do the upgrade ourselves.

And hardware replacements are easier, too. We can just get a nice PDF showing how to replace the parts. So it’s also quite nice.

I also like that the separate service processor we had with 3PAR is now gone with Primera; it’s integrated with the array. So, that’s one less thing to worry about managing.

Gardner: Knut, as we look to the future, other technologies are evolving across the infrastructure scene. When combined with something like HPE Primera, is there a whole greater than the sum of the parts? How will you will be able to use more intelligence broadly and leverage more of this opportunity for simplicity and passing that onto your end users?

Raanæs: I’m hoping that more will come in the future. We are also looking at non-volatile memory express (NVMe). That’s a caching solution and it’s ready to be built into HPE Primera, too. So that’s also quite interesting to see what the future will bring there.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

A new status quo for data centers–seamless communication from core to cloud to edge

As 2020 ushers in a new decade, the forces shaping data center decisions are extending compute resources to new places.

With the challenging goals of speed, agility, and efficiency, enterprises and service providers alike will be seeking new balance between the need for low latency and optimal utilization of workload placement. Hybrid models will therefore include more distributed, confined, and modular data centers at or near the edge.

These are but a few of the top-line predictions on the future state of modern data center design. The next BriefingsDirect data center strategies discussion with two leading IT and critical infrastructure executives examines how these new data center variations must nonetheless interoperate seamlessly from core to cloud to edge.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to help us learn more about the new state of extensible data centers is Peter Panfil, Vice President of Global Power at Vertiv™, and Steve Madara, Vice President of Global Thermal at Vertiv. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: The world is rapidly changing in 2020. Organizations are moving past the debate around hybrid deployments, from on-premises to public clouds. Why do we need to also think about IT architectures and hybrid computing differently?

Panfil: We noticed a trend at Vertiv in our customer base. That trend is toward a new generation of data centers. We have been living with distributed IT, client-server data centers moving to cloud, either a public cloud or a private cloud.

But what we are seeing is the evolution of an edge-to-core, near-real-time data center generation. And it’s being driven by devices everywhere, the “connected-all-the-time” model that all of us seem to be going to.

And so, when you are in a near-real-time world, you have to have infrastructure that supports your near-real-time applications. And that is what the technology folks are facing. I refer to it as a pack of dogs chasing them — the amount of data that’s being generated, the applications running remotely, and the demand for availability, low latency, and driving cost down as much as you possibly can. This is what’s changing how they approach their critical infrastructure space.

Gardner: And so, a new equilibrium is emerging. How is this different from the past?

Madara: If we go back 20 years, everything was centralized at enterprise data centers. Then we decided to move to decentralized, and then back to centralized. We saw a move to colocation as people decided that’s where they could get lower cost to run their apps. And then things went to the cloud, as Peter said earlier.

And now, we have a huge number of devices connected locally. Cisco says by late 2020 that it’s going to have 23 billion connected devices, and over half of those are going to be machine-to-machine communications, which, as Peter mentioned earlier, the latency is going to be very, very critical.

An interesting read is Michael Lewis’s book Flash Boys about the arbitrage that’s taking place with the low latency that you have in stock market trading. I think we are going to see more of that moving to the edge. The edge is more like a smart rack or smart row deployment in an existing facility. It’s going to be multi-tenant, because it’s going to be able to be throughout large cities. There could be 20 or 30 of these edge data center sites hosting different applications for customers.

This move to the edge is also going to provide IT resources in a lot of underserved markets that don’t yet have pervasive compute, especially in emerging countries.

Gardner: Why is speed so important? We have been talking about this now for years, but it seems like the need for speed to market and speed to value continues to ramp up. What’s driving that?

Moving to the edge, with momentum 

Panfil: There is more than one kind of speed. There is speed of response of the application, that’s something that all of us demand — speed of response of the applications. I have to have low latency in the transactions I am performing with my data or with my applications. So there is the speed of the actual data being transmitted.

There is also speed of deployment. When Steve talked earlier about centralized cloud deployments in these core data centers, your data might be going over a significant distance, hopping along the way. Well, if you can’t live with that latency that gets inserted, then you have to take the IT application and put it closer to the source and consumer of the data. So there is a speed of deployment, from core to edge, that happens.

And the third type of speed is you have to have low-first-cost, high-asset-utilization, and rapid-scalability. So that’s a speed of infrastructure adaptation to what the demands for the IT applications are.

So when we mean speed, I often say it’s speed, speed, and speed. First, it’s the speed of the data and the IT. Once I have that speed, how did I achieve it? I did it by deploying fast, at the scale needed for the applications, and lastly at a cost and reliability that makes it tolerable for the business.

Gardner: So I guess it’s speed-cubed, right?

Panfil: At least, speed-cubed. Steve, if we had a nickel for every time one of our customers said “speed,” we wouldn’t have to work anymore. They are consumed with the different speeds that they have to deal with — and it’s really the demands of their customers.

Gardner: Vertiv for years has been looking at the data center of the future and making some predictions around what to expect. You have been rather prescient. To continue, you have now identified several areas for 2020, too. Let’s go through those trends.

Steve, Vertiv predicts that “hybrid architectures will go mainstream.” Why did you identify that, and what do you mean?

The future goes hybrid 

Madara: If we look at the history of going from centralized to decentralized, and going to colocation and cloud applications, it shows the ongoing evolution of Internet of Things (IoT) sensors, 5G networks, smart cities, autonomous cars, and how more and more of that data is generated and will need to be processed locally. A lot of that is from machine-to-machine applications.

So when we now talk about hybrid, we have to get very, very close to the source, as far as the processing is concerned. That’s going to be a large-scale evolution that’s going to drive the need for hybrid applications. There is going to be processing at the edge as well as centralized applications — whether it’s in a cloud or hosted in colocation-based applications.

Panfil: Steve, you and I both came up through the ranks. I remember when the data closet down the hall was basically a communications matrix. Its intent was to get communications from wherever we were to wherever our core data center was.

Well, the cloud is not going away. Number two, enterprise IT is not going away. What the enterprise is saying is, “Okay, I am going to take my secret sauce and I am going to put it in an edge data center. I am going to put the compute power as close to my consumer of that data and that application as I possibly can. And then I am going to figure out where the rest of it’s going to go.”

If I can live with the latency I get out of a core data center, I am going to stay in the cloud. If I can’t, I might even break up my enterprise data center into small or micro data centers that give me even better responses.

Dana, it’s interesting, there was a recent wholesale market summary published that said the difference between the smaller and the larger wholesale deals widened. So what that says is the large wholesale deals are getting bigger, the small wholesale deals are getting smaller, and that the enterprise-based demand, in deployments under 600 kilowatts, is focused on low-latency and multi-cloud access.

That tells us that our customers, the users of that critical space, are trying to place their IT appliances as close as they can to their customers, eliminating the latency, responding with speed, and then figuring out how to mesh that edge deployment with their core strategy.

Gardner: Our second trend gets back to the speed-cubed notion. I have heard people describe this as a new arms race, because while it might be difficult to differentiate yourself when everyone is using the same public cloud services, you can really differentiate yourself on how well you can conduct yourself at speed.

What kinds of capabilities across your technologies will make differentiation around speed work to an advantage as a company?

The need for speed 

Panfil: Well, I was with an analyst recently, and I said the new reality is not that the big will eat the small — it’s that the fast will eat the slow. And any advantage that you can get in speed of applications, speed of deployment, deploying those IT assets — or morphing the data center infrastructure or critical space infrastructure – helps improve capital efficiency. What many customers tell us is that they have to shorten the period of time between deciding to spend money on IT assets and the time that those asset start creating revenue.

They want help being creative in lowering their first-cost, in increasing asset utilization, and in maintaining reliability. If, holy cow, my application goes down, I am out of business. And then they want to figure out how to manage things like supply chains and forecasting, which is difficult to do in this market, and to help them be as responsive as they can to their customers.

Madara: Forecasting and understanding the new applications — whether it’s artificial intelligence (AI) or 5G — the CIOs need to decide where they need to put those applications, whether they should be in the cloud or at the edge. Technology is changing so fast that nobody can predict far out into the future as far as where I will need that capacity and what type of capacity I will need.

So, it comes down to being able to put that capacity in the place where I need it, right when I need it, and not too far in advance. Again, I don’t want to spend the capital, because I may put it in the wrong place. So it’s got to be about tying the demand with the supply, and that’s what’s key as far as the infrastructure.

And the other element I see is technology is changing fast, even on the infrastructure side. For our equipment, we are constantly making improvements every day, making it more efficient, lower cost, and with more capability. And if you put capacity in today that you don’t need for a year or two down the road, you are not taking advantage of the latest, greatest technology. So really it’s coupling the demand to the actual supply of the infrastructure — and that’s what’s key.

Another consideration is that many of these large companies, especially in the colocation market, have their financial structure as a real estate investment trust (REIT). As a result, they need to tie revenue with expenses tighter and tighter, along with capital spending.

Panfil: That’s a good point, Steve. We redesigned our entire large power portfolio at Vertiv specifically to be able to address this demand.

In previous generations, for example, the uninterruptible power supply (UPS) was built as a complete UPS. The new generation is built as a power converter, plus an I/O section, plus an interface section that can be rapidly configured to the customer, or, in some cases, put into a vendor-managed inventory program. This approach allows us to respond to the market and customers quicker.

We were forced to change our business model in such a way that we can respond in real time to these kinds of capacity-demand changes.

Madara: And to add to that, we have to put together more and more modules and solutions where we are bundling the equipment to deliver it faster, so that you don’t have to do testing on site or assembly on site. Again, we are putting together solutions that help the end-user address the speed of the construction of the infrastructure.

I also think that this ties into the relationship that the person who owns the infrastructure has with their supplier base. Those relationships have to build in, as Peter mentioned earlier, the ability to do stocking of inventory, of having parts available on-site to go fast.

Gardner: In summary so far, we have this need for speed across multiple dimensions. We are looking at more hybrid architectures, up and down the scale — from edge to core, on-premises to the cloud. And we are also looking at crunching more data and making real-time analytics part of that speed advantage. That means being able to have intelligence brought to bear on our business decisions and making that as fast as possible.

So what’s going on now with the analytics efficiency trend? Even if average rack density remains static due to a lack of space, how will such IT developments as high performance computing (HPC) help make this analysis equation work to the business outcome’s advantage?

High-performance, high-density pods 

Madara: The development of AI applications, machine learning (ML), and what could be called deep learning are evolving. Many applications are requiring these HPC systems. We see this in the areas of defense, gaming, the banking industry, and people doing advanced analytics and tying it to a lot of the sensor data we talked about for manufacturing.

It’s not yet widespread, it’s not across the whole enterprise or the entire data center, and these are often unique applications. What I hear in large data centers, especially from the banks, is that they will need to put these AI applications up on 30-, 40-, 50- or 60-kW racks — but they only have three or four of these racks in the whole data center.

The end-user will need to decide how to tune or adjust facilities to accommodate these small but growing pods of high-density compute. And if they are in their own facility, if it’s an enterprise that has its own data center, they will need to decide how they are going to facilitize for that type of equipment.

A lot of the colocation hosting facilities have customers saying, “Hey, in the future I am going to be bringing in a couple of racks that are very high density.” A lot of these multi-tenant data centers are then asking, “How do I provision for these, because my data center was laid out for an average of maybe 8 kW per rack? How do I manage that, especially in data centers that didn’t previously have chilled water to provide liquid to the rack?”

We are now seeing a need to provide chilled water cooling that would go to a rear door heat exchanger on the back of the rack. It could be chilled water that would go to a rack for chip cooling applications. And again, it’s not the whole data center; it’s a small segment of the data center. But it raises questions of how I do that without overkill on the infrastructure needed.
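
To put rough numbers on why liquid comes into play at those densities, a basic heat-removal calculation from first principles follows; the rack powers and the 10 K temperature rise are example values only.

```python
# Back-of-the-envelope: chilled-water flow needed to remove a rack's heat load.
# Q = m_dot * c_p * dT; rack powers and temperature rise are example values.

C_P_WATER = 4.186        # kJ/(kg*K)
DENSITY = 1.0            # kg/L, close enough for chilled water

def flow_l_per_min(rack_kw, delta_t_k):
    kg_per_s = rack_kw / (C_P_WATER * delta_t_k)
    return kg_per_s / DENSITY * 60

for kw in (8, 30, 60):
    print(f"{kw:>2} kW rack, 10 K rise: ~{flow_l_per_min(kw, 10):.0f} L/min of water")
# roughly 11, 43, and 86 L/min respectively
```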

Gardner: Steve, do you expect those small pods of HPC in the data center to make their way out to the edge when people do more data crunching for the low-latency requirements, where you can’t move the data to a data center? Do you expect to have this trend grow more distributed?

Madara: Yes, I expect this will be for more than the enterprise data center and cloud data centers. I think you are going to see analytics applications developed that are going to be out at the edge because of the requirements for latency.

When you think about the autonomous car; none of us know what’s going to be required there for that high-performance processing, but I would expect there is going to be a need for that down at the edge.

Gardner: Peter, looking at the power side of things when we look at the batteries that help UPS and systems remain mission-critical regardless of external factors, what’s going on with battery technology? How will we be using batteries differently in the modern data center?

Battery-powered savings 

Panfil: That’s a great question. Battery technology has been evolving at an incredibly fast rate. It’s being driven by the electric vehicles. That growth is bringing to the market batteries that have a size and weight advantage. You can’t put a big, heavy pack of batteries in a car and hope to have it perform well.

It also gives a long life expectation. Data centers used to have to decide between long-life, high-maintenance wet cells and shorter-life, high-maintenance, valve-regulated lead-acid (VRLA) batteries. With lithium-ion batteries (LIBs) and thin plate pure lead (TPPL) batteries, the total cost of ownership (TCO) has started to become very advantageous.

Our sales leadership sent me the most recent TCO comparison of TPPL and LIBs versus traditional VRLA batteries, and the TCO is a clear winner for the LIBs and the TPPL batteries. In some cases, over a 10-year period, the TCO is a factor of two lower for LIB and TPPL.
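
To see how that kind of factor-of-two result can arise, here is a minimal back-of-the-envelope sketch. The purchase prices, service lives, and maintenance costs below are hypothetical placeholders rather than Vertiv figures; the point is only that replacement cycles and maintenance tend to dominate a 10-year battery TCO.

```python
# Hypothetical 10-year battery TCO sketch; all dollar figures are illustrative only.
HORIZON_YEARS = 10

def battery_tco(unit_cost, service_life_years, annual_maintenance, horizon=HORIZON_YEARS):
    """Total cost over the horizon: purchases (initial plus replacements) and maintenance."""
    purchases = -(-horizon // service_life_years)  # ceiling division
    return unit_cost * purchases + annual_maintenance * horizon

vrla = battery_tco(unit_cost=10_000, service_life_years=4, annual_maintenance=1_200)
lib = battery_tco(unit_cost=18_000, service_life_years=10, annual_maintenance=300)

print(f"VRLA 10-year TCO: ${vrla:,}")  # three purchases plus heavier maintenance
print(f"LIB 10-year TCO:  ${lib:,}")   # one purchase plus lighter maintenance
```

With these assumed numbers the VRLA string costs roughly twice as much over the decade, which is the shape of result the TCO comparisons describe.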

Where the cloud generation of data centers was all about lowest first cost, in this edge-to-core generation of data centers it’s about TCO. And there are other levers that they can start to play with, too.

So, for example, they have life cycle and operating temperature variables. That used to be a real limitation. Nobody in the data center wanted their systems to go on batteries. They tried everything they could to not have their systems go on the battery because of the potential for shortening the life of their batteries or causing an outage.

Today we are developing IT systems infrastructure that takes advantage of not only LIBs, but also pure lead batteries that can increase the number of [discharge/recharge] cycles. Once you increase the number of cycles, you can think about deploying smart power configurations. That means using batteries not only in the critical infrastructure for a very short period of time when the power grid utility fails, but to use that in critical infrastructure to help offset cost.

If I can reduce utility use at peak demand periods, for example, or I can reduce stress on the grid at specified times, then batteries are not only a reliability play – they are also a revenue-offset play. And so, we’re seeing more folks talking to us about how they can apply these new energy storage technologies to change the way they think about using their critical space.

Also, folks used to think that the longer the battery time, the better off they were because it gave more time to react to issues. Now, folks who know what they are doing are going with runtimes that are tuned to their operations team’s capabilities. So, if my operations team can do a hot swap over an IT application — either to a backup critical space application or to a redundant data center — then all of a sudden, I don’t need 5 to 12 minutes of runtime; I just need the bridge time. I might only need 60 to 120 seconds.
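
As a rough illustration of why that bridge-time thinking matters, the sketch below compares the usable energy a battery string must deliver for a 10-minute runtime versus a 90-second bridge at the same critical load. The load figure is an assumption made for the example, and real UPS sizing also has to account for inverter efficiency, aging, and end-of-discharge limits.

```python
# Hypothetical comparison of stored energy needed for a long runtime vs. a short bridge.
CRITICAL_LOAD_KW = 500  # assumed IT load carried by the UPS

def required_energy_kwh(load_kw, runtime_seconds):
    """Usable energy the battery string must deliver to carry the load for the runtime."""
    return load_kw * runtime_seconds / 3600.0

long_runtime = required_energy_kwh(CRITICAL_LOAD_KW, runtime_seconds=10 * 60)
bridge_only = required_energy_kwh(CRITICAL_LOAD_KW, runtime_seconds=90)

print(f"10-minute runtime: {long_runtime:.1f} kWh")
print(f"90-second bridge:  {bridge_only:.1f} kWh")
print(f"Reduction factor:  {long_runtime / bridge_only:.1f}x")
```

Tuning runtime to what the operations team can actually execute is what lets that reduction translate into less battery and a lower TCO.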

Now, if I can have these battery times tuned to the operations’ capabilities — and I can use the batteries more often or in higher temperature applications — then I can really start to impact my TCO and make it very, very cost-effective.

Gardner: It’s interesting; there is almost a power analog to hybrid computing. We can either go to the cloud or the grid, or we can go to on-premises or the battery. Then we can start to mix and match intelligently. That’s really exciting. How does lessening dependence on the grid impact issues such as sustainability and conserving energy?

Sustainability surges forward 

Panfil: We are having such conversations with our key accounts virtually every day. What they are saying is, “I am eventually not going to make smoke and steam. I want to limit the number of times my system goes on a generator. So, I might put in more batteries, more LIBs or TPPL batteries, in certain applications because if my TCO is half the amount of the old way, I could potentially put in twice as much, and have the same cost basis and get that economic benefit.”

And so from a sustainability perspective, they are saying, “Okay, I might need at some point in the useful life of that critical space to not draw what I think I need to draw from my utility. I can limit the amount of power I draw from that utility.”

This is not a criticism; I love all of you out there in data center design, but most data centers are designed for peak usage. What these changes allow is a design closer to the norm of the requirements. That means they can put in less infrastructure and potentially less battery. They have the potential to right-size their generators; same thing on the cooling side, to right-size the cooling to what they need and not for the extremes of what that data center is going to see.

From a sustainability perspective, we used to talk about the glass as half-full or half-empty. Now, we say there is too much of a glass. Let’s right-size the glass itself, and then all of the other things you have to do in support of that infrastructure are reduced.

Madara: As we look at the edge applications, many will not have backup generators. We will have alternate energy sources, and we will probably be taking more hits to the batteries. Is the LIB the better solution for that?

Panfil: Yes, Steve, it sure is. We will see customers with an expectation of sustainability, a path to an energy source that is not fossil fuel-based. That could be a renewable energy source. We might not be able to deploy that today, but they can now deploy what I call foundational technologies that allow them to take advantage of it. If I can have a LIB, for example, that stores excess energy and allows me to absorb energy when I’m creating more than I need — then I can consume that energy on the other side. It’s better for everybody.

Gardner: We are entering an era where we have the agility to optimize utilization and reduce our total costs. The thing is that it varies from region to region. There are some areas where compliance is a top requirement. There are others where energy issues are a top requirement because of cost.

What’s going on in terms of global cross-pollination? Are we seeing different markets react to their power and thermal needs in different ways? How can we learn from that?

Global differences, normalized 

Madara: If you look at the size of data centers around the world, the data centers in the U.S. are generally much larger than those in Europe, and what’s in Europe is much larger than what we have in other developed countries. So there are a couple of factors, as you mentioned: energy availability, the cost of energy, the size of the market, and the users it serves. We may also be looking at more edge data centers in very underserved markets in underdeveloped countries.

So, you are going to see the size of the data center and the technology used vary to better fit the needs of the specific markets and applications. Across the globe, certain regions will have different requirements with regard to security and sustainability.

Even though we have these potential differences, we can meet the end-user needs to right-size the IT resources in that region. We are all more common than we are different in many respects. We all have needs for security and for efficiency; it may just be to different degrees.

Panfil: There are different regional agency requirements, different governmental regulations that companies have to comply with. And so what we find, Dana, is that our customers are trying to normalize their designs. I won’t say they are standardizing their designs, because standardization says I am going to deploy exactly the same way everywhere in the world. I am a fan of Kit Kats, and Kit Kats are not the same globally; they vary by region. The same is true for data centers.

So, when you look at how customers deal with the regional and agency differences they have to live with, they find themselves trying to normalize their designs as much as they possibly can globally, realizing that they might not be able to use exactly the same power configuration or exactly the same thermal configuration everywhere. But we also see pockets where different technologies are moving to the forefront. For example, China has data centers running at high-voltage DC, 240 volts DC, and we have always had 48-volt DC IT applications in the Americas and in Europe. Customers are looking at three things: speed, speed, and speed.

And so when we look at the application, for example, of DC, there used to be a debate, is it AC or DC? Well, it’s not an “or” it’s an “and.” Most of the customers we talk to, for example, in Asia are deploying high-voltage DC and have some form of hybrid AC plus DC deployment. They are doing it so that they can speed their applications deployments.

In the Americas, the Open Compute Project (OCP) deploys either 12 or 48 volts to the rack. I look at it very simply: we have been seeing a move from 2N architecture to N+1 architecture in the power world for a decade, and this is nothing more than adopting the N+1 architecture at the rack level instead of the 2N architecture at the rack level.

And so what we see is that when folks are trying to, number one, increase speed; number two, increase their utilization; and number three, lower their total cost, they are going to deploy the infrastructures that are most advantageous for the IT appliances they are deploying or for the IT applications they are running. And it’s not the same for everybody, right, Steve?

You and I have been around the planet way too many times, you are a million miler, so am I. It’s amazing how a city might be completely different in a different time zone, but once you walk into that data center, you see how very consistent they have gotten, even though they have done it completely independently from anybody else.

Madara: Correct!

Consistency lowers costs and risks 

Gardner: A lot of what we have talked about boils down to a need to preserve speed-to-value while managing total cost of utilization. What is there about these multiple trends that people can consider when it comes to getting the right balance, the right equilibrium, between TCO and that all important speed-to-value?

Madara: Everybody strives to drive cost down. The more you can drive down the cost of the infrastructure, the more you can do to develop more edge applications.

I think we are seeing a very large rate of change of driving cost down. Yet we still have a lot of stranded capacity out there in the marketplace. And people are making decisions to take that down without impacting risk, but I think they can do it faster.

Peter mentioned standardization. Standardization helps drive speed, whether it’s normalization or similarity. What allows people to move fast is to repeat what they are doing instead of snowflake data centers, where every new one is different.

Repeating allows you to build a supply base ecosystem where everybody has the same goal, knows what to do, and can be partners in driving out cost and in driving speed. Those are some of the key elements as we go forward.

Gardner: Peter, when we look to that standardization, you also allow for more seamless communication from core to cloud to edge. Why is that important, and how can we better add intelligence and seamless communication among and between all these different distributed data centers?

Panfil: When we normalize designs globally, we take a look at the regional differences, sort out what the regional differences have to be, and then put in a proof-of-concept deployment. And out of that comes a consistent method of procedure.

When we talk about managing the data center effectively and efficiently, first of all, you have to know what you have. And second, you have to know what it’s doing. So we are seeing more folks normalizing their designs and getting consistency. They can then look at how much of their available design capacity they are actually using, both on a normal basis and on a peak basis, and then determine how much of that they are willing to use.

We have some customers who are very risk-averse. They stay in the 2N world, which is a 50 percent maximum utilization. We applaud them for it because they are not going to miss a transaction.

There are others who will say, “I can live with the availability that an N+1 architecture gives me. I know I am going to have to be prepared for more failures. I am going to have to figure out how to mitigate those failures.”
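
For readers who want the arithmetic behind those utilization ceilings, here is a minimal sketch. The module counts and sizes are assumptions for illustration; the point is that a 2N design caps safe utilization at 50 percent of installed capacity, while N+1 lets you run closer to the design limit in exchange for less redundancy.

```python
# Hypothetical comparison of safe utilization under 2N vs. N+1 power architectures.
MODULE_KW = 250  # assumed capacity of one UPS module

def safe_utilization(total_modules, redundant_modules):
    """Fraction of installed capacity usable while still tolerating the planned failures."""
    usable = (total_modules - redundant_modules) * MODULE_KW
    installed = total_modules * MODULE_KW
    return usable / installed

print(f"2N  (4 needed, 4 redundant): {safe_utilization(8, 4):.0%} maximum utilization")
print(f"N+1 (4 needed, 1 redundant): {safe_utilization(5, 1):.0%} maximum utilization")
```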

So they are working constantly at figuring out how to monitor what they have and figure out what the equipment is doing, and how they can best optimize the performance. We talked earlier about battery runtimes, for example. Sometimes they might get short or sometimes they might be long.

As these companies get into this step and repeat function, they are going to get consistency of their methods of procedure. They’re going to get consistency of how their operations teams run their physical infrastructure. They are going to think about running their equipment in ways that are nontraditional today but will become the norm in the next generation of data centers. And then they are going to look at us and say, “Okay, now that I have normalized my design, can I use rapid deployment configuration? Can I put it on a skid, in a container? Can I drop it in place as the complete data center?”

Well, we build it one piece of equipment at a time and stitch it all together. The question that you asked about monitoring, it’s interesting because we talked to a major company just last month. Steve and I were visiting them at their site. And they said, “You know what? We spend an awful lot of time figuring out how our building management system and our data exchange happens at the site. Could Vertiv do some of that in the factory? Could you configure our data acquisition systems? Could you test them there in the factory? Could we know that when the stuff shows up on site that it’s doing the things that it’s supposed to be doing instead of us playing hunt and peck to figure out what the issues are?”

We said, “Of course.” So we are adding that capability now into our factory testing environment. What we see is a move up the evolutionary scale. Instead of buying separate boxes, we are seeing them buying solutions — and those solutions include both monitoring and controls.

Steve didn’t even get a chance to mention the industry-leading Vertiv Liebert® iCOM™ control for thermal. These controls and monitoring systems allow them to increase their utilization rates because they know what they have and what it’s doing.

Gardner: It certainly seems to me, with all that we have said today, that the data center status quo just can’t stand. Change and improvement is inevitable. Let’s close out with your thoughts on why people shouldn’t be standing still; why it’s just not acceptable.

Innovation is inevitable 

Madara: At the end of the day, the IT world is changing rapidly every day. Whether in the cloud or down at the edge, the IT world needs to adjust to those needs. They need to be able to cut enough out of the cost structure. There is always a demand to drive cost down.

If we don’t change with the world around us, if we don’t meet the requirements of our customers, things aren’t going to work out – and somebody else is going to take it and go for it.

Panfil: Remember, it’s not the big that eats the small, it’s the fast that eats the slow.

Madara: Yes, right.

Panfil: And so, what I have been telling folks is, you’ve got to go. The technology is there. The technology is there for you to cut your cost, improve your speed, and increase utilization. Let’s do it. Otherwise, somebody else is going to do it for you.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Vertiv.

Intelligent spend management supports better decision-making across modern business functions

The next BriefingsDirect discussion on attaining intelligent spend management explores the findings of a recent IDC survey on paths to holistic business processes improvement.

We’ll now learn how a long history of legacy systems and outdated methods holds companies back from their potential around new total spend management optimization. The payoffs of gaining such a full and data-rich view of spend patterns across services, hiring, and goods include reduced risk, new business models, and better strategic decisions.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To help us chart the future of intelligent spend management, and to better understand how the market views these issues, we are joined by Drew Hofler, Vice President of Portfolio Marketing at SAP Ariba and SAP Fieldglass. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What trends or competitive pressures are prompting companies to seek better ways to get a total spend landscape view? Why are they incentivized to seek broader insights?

Hofler: After years of grabbing best-of-breed or niche solutions for various parts of the source-to-pay process, companies are reaching the limits of this siloed approach. Companies are now being asked to look at their vendor spend as a whole. Whereas before they would look just at travel and expense vendors, or services procurement, or indirect or direct spend vendors, chief procurement and financial officers now want to understand what’s going on with spend holistically.

And, in fact, from the IDC report you mentioned, we found that 53 percent of respondents use different applications for each type of vendor spend that they have. Sometimes they even use multiple applications within a process for specific types of vendor spend. In fact, we find that a lot of folks have cobbled together a number of different things — from in-house billing to niche vendors – to keep track of all of that.

Managing all of that when there is an upgrade to one particular system — and having to test across the whole thing — is very difficult. They also have trouble being able to reconcile data back and forth.

One of our competitors, for example — to show how this Frankenmonster approach has taken root — tried to build a platform of every source and category of spend across the entire source-to-pay process by acquiring 14 different companies in six years. That creates a patchwork of applications where there is a skin of user interfaces across the top for people to enter data, but the data is disconnected. The processes are disconnected. You have to manage all of the different code bases. It’s untenable.

Gardner: There is a big technology component to such a patchwork, but there’s a people level to this as well. More and more we hear about the employee experience and trying to give people intelligent tools to make higher-level decisions and not get bogged down in swivel-ware and cutting and pasting between apps. What do the survey results tell us about the people, process, and technology elements of total spend management?

Unified data reconciliation

Hofler: It really is a combination of people, process, and technology that drives intelligent spend. It’s the idea of bringing together every source, every category, every buying channel for all of your different types of vendor spend so that you can reconcile on the technology side; you can reconcile the data.

For example, one of the things that we are building is master vendor unification across the different types of spend. A vendor that you see — IBM, for example — in one system is going to be the same as in another system. The data about that vendor is going to be enriched by the data from all of the other systems into a unified platform. But to do that you have to build upon a platform that uses the same micro-services and the same data that reconciles across all of the records so that you’re looking at a consistent view of the data. And then that has to be built with the user in mind.
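
As a purely illustrative aside, here is a minimal sketch of what unifying vendor records across spend systems can look like in principle. The field names, sample data, and matching key (a shared tax ID) are assumptions made up for the example, not a description of SAP’s actual implementation, which enriches records across its platform in far more sophisticated ways.

```python
# Minimal sketch of unifying vendor records from separate spend systems.
# Field names, sample data, and the matching key are illustrative assumptions only.
from collections import defaultdict

records = [
    {"system": "travel", "vendor": "IBM Corp.", "tax_id": "00-0000001"},
    {"system": "indirect", "vendor": "International Business Machines", "tax_id": "00-0000001"},
    {"system": "services", "vendor": "Acme Staffing LLC", "tax_id": "00-0000002"},
]

def unify(records):
    """Group vendor records by a shared key so each supplier has one master entry."""
    master = defaultdict(lambda: {"names": set(), "systems": set()})
    for rec in records:
        entry = master[rec["tax_id"]]
        entry["names"].add(rec["vendor"])
        entry["systems"].add(rec["system"])
    return dict(master)

for tax_id, entry in unify(records).items():
    print(tax_id, sorted(entry["names"]), "seen in:", sorted(entry["systems"]))
```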

So when we talk about every source, category, and channel of spend being unified under a holistic intelligent spend management strategy, we are not talking about a monolithic user experience. In fact, it’s very important that the experience of the user be tailored to their particular role and to what they do. For example, if I want to do my expenses and travel, I don’t want to go into a deep, sourcing-type of system that’s very complex and based on my laptop. I want to go into a mobile app. I want to take care of that really quickly.

On the other hand, if I’m sourcing some strategic suppliers I certainly can’t do that on just a mobile app. I need data, details, and analysis. And that’s why we have built the platform underneath it all to tie this together, even while the user interfaces and the experience of the user are exactly what they need.

When we did our spend management survey with IDC, we had more than 800 respondents across four regions. The survey showed a high amount of dissatisfaction because of the wide-ranging nature of how expense management systems interact. Some 48 percent of procurement executives said they are dissatisfied with spend management today. It’s kind of funny to me because the survey showed that procurement itself had the highest level of dissatisfaction. They are talking about their own processes. I think that’s because they know how the sausages are being made.

Gardner: Drew, this dissatisfaction has been pervasive for quite a while. As we examine what people want, how did the survey show what is working? What gives them the data they need, and where does it go next?

Let go of patchwork 

Hofler: What came out of the survey is that part of the reason for that dissatisfaction is the multiple technologies cobbled together, with lots of different workflows. There are too many of those, too much data duplication, too many discrepancies between systems, and it doesn’t allow companies to analyze the data, to really understand in a holistic view what’s going on.

In fact, 47 percent of the procurement leaders said they still rely on spreadsheets for spend analysis, which is shocking to me, having been in this business for a long time. But we are much further along the path in helping that out by reconciling master data around suppliers so they are not duplicating data.

It’s also about tying together, in an integrated and seamless way, the entire process across different systems. That allows workflow to not be based on the application or the technology but on the required processes. For example, when it comes to installing some parts to fix a particular machine, you need to be able to order the right parts from the right suppliers but also to coordinate that with the right skilled labor needed to install the parts.

If you have separate systems for your services, skilled labor, and goods, you may be very disconnected. There may be parts available but no skilled labor at the time you need in the area you need. Or there may be the skilled labor but the parts are not available from a particular vendor where that skilled labor is.

What we’ve built at SAP is the ability to tie those together so that the system can intelligently see the needs, assess the risks such as fluctuations in the labor market, and plan and time that all together. You just can’t do that with cobbled together systems. You have to be able to have a fully and seamlessly integrated platform underneath that can allow that to happen.
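
As a toy illustration of that kind of joint check, the sketch below only schedules a repair when both the part and the skilled labor are available in the same region. The data model, region names, and rule are assumptions made up for the example; a real intelligent spend platform would draw on live supplier, labor-market, and risk data.

```python
# Toy sketch: jointly checking parts and skilled-labor availability before scheduling
# a repair job. Data, region names, and the rule are illustrative assumptions only.
parts_stock = {
    ("pump-seal", "region-east"): 4,
    ("pump-seal", "region-west"): 0,
}
labor_slots = {
    ("pump-fitter", "region-east"): ["2020-03-12"],
    ("pump-fitter", "region-west"): ["2020-03-10"],
}

def schedulable(part, skill, region):
    """A job can be scheduled only when the part and the skill both exist in the region."""
    have_part = parts_stock.get((part, region), 0) > 0
    dates = labor_slots.get((skill, region), [])
    return (have_part and bool(dates)), dates

for region in ("region-east", "region-west"):
    ok, dates = schedulable("pump-seal", "pump-fitter", region)
    print(region, "schedulable on" if ok else "blocked", dates if ok else "")
```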

Gardner: Drew, as I listen to you describe where this is going, it dovetails with what we hear about digital transformation of businesses. You’re talking not just about goods and services, you are talking about contingent labor, about all the elements that come together from modern business processes, and they are definitely distributed with a lifecycle of their own. Managing all that is the key.

Now that we have many different moving parts and the technology to evaluate and manage them, how does holistic spend management elevate what used to be a series of back-office functions into a digital business transformation value?

Hofler: Intelligent spend management makes it possible for all of the insights that come from these various data points — by applying algorithms, machine learning (ML), and artificial intelligence (AI) — to look at the data holistically. It can then pull out patterns of spend across the entire company, across every category, and it allows the procurement function to be at the nexus of those insights.

If you think of all the spend in a company, it’s a huge part of their business when you combine direct, indirect, services, and travel and expenses. You are now able to apply those insights to where there are price fluctuations, peaks and valleys in purchasing, versus what the suppliers and their suppliers can provide at a certain time.

It’s an almost infinite amount of data and insights that you can gain. The procurement function is being asked to bring to the table not just the back-office operational efficiency but the insights that feed into a business strategy and the business direction. It’s hard to do that if you have disconnected or cobbled-together systems and a siloed approach to data and processes. It’s very difficult to see those patterns and make those connections.

But when you have a common platform such as SAP provides, then you’re able to get your arms around the entire process. The Chief Procurement Officer (CPO) can bring to the table quite a lot of data and insights that show the company what it needs to know in order to make the best decisions.

Gardner: Drew, what are the benefits you get along the way? Are there short-, medium-, and long-term benefits? Were there any findings in the IDC survey that alluded to those various success measurements?

Common platform benefits 

Hofler: We found that 80 percent of today’s spend managers’ time is spent on low-level tasks like invoice matching, purchase requisitioning, and vendor management. That came out of the survey. With the tying together of the systems and the intelligence technologies infused throughout, those things can be automated. In some cases, they can become autonomous, freeing up time for more valuable pursuits for the employees.

New technologies can also help, like APIs for ecosystem solutions. This is one of the great short-term benefits if you are on an intelligent spend management platform such as SAP’s. You become part of a network of partners and suppliers. You can tap into that ecosystem of partners for solutions aligned with core spend management functions.

Celonis, for example, looks at all of your workflows across the entire process because they are all integrated. It can see it holistically and show duplication and how to make those processes far more efficient. That’s something that can be accessed very quickly.

Longer-term, companies gain insights into the ebbs and flows of spending, cost, and risk. They can begin to make better decisions on who to buy from based on many criteria, and they can start to understand the risks involved across entire supply chains.

One of the great things about having an intelligent spend platform is the ability to tie in through that network to other datasets, to other providers, who can provide risk information on your suppliers and on their suppliers. It can see deep into the supply chain and provide risk analytics to allow you to manage that in a much better way. That’s becoming a big deal today because there is so much information, and social media allows information to pass along so quickly.

When a company has a problem with their supply chain — whether that’s reputational or something that their suppliers’ suppliers are doing — that will damage their brand. If there is a disruption in services, that comes out very quickly and can very quickly hit the bottom line of a company. And so the ability to moderate those risks, to understand them better, and to put strategies together longer term and short-term makes a huge difference. An intelligent spend platform allows that to happen.

Gardner: Right, and you can also start to develop new business models or see where you can build out the top line and business development. It makes procurement not just about optimization, but with intelligence to see where future business opportunities lie.

Comprehend, comply, control 

Hofler: That’s right, you absolutely can. Again, it’s all about finding patterns, understanding what’s happening, and getting deeper understanding. We have so much data now. We have been talking about this forever, the amount of data that keeps piling up. But having an ability to see that holistically, have that data harmonized, and the technological capability to dive into the details and patterns of that data is really important.

And that data network has, in our case, more than 20 years’ worth of spend data, with more than $13 trillion in lifetime spend data and more than $3 trillion a year of transactions moving through our network – the Ariba Network. So not only do companies have the technologies that we provide in our intelligent spend management platform to understand their own data, but there is also the capability to take advantage of rationalized data across multiple industries, benchmarks, and other things, too, that affect them outside of their four walls.

So that’s a big part of what’s happening right now. If you don’t have access into those kinds of insights, you are operating in the dark these days.

Gardner: Are there any examples that illustrate some of the major findings from the IDC survey and show the benefits of what you have described?

Hofler: Danfoss, a Danish company, is a customer of ours that produces heating and cooling drives, and power solutions; they are a large company. They needed to standardize disparate enterprise resource planning (ERP) systems across 72 factories and implement services for indirect spend control and travel across 100 countries. So they have a very large challenge where there is a very high probability for data to become disconnected and broken down.

That’s really the key. They were looking for the ability to see one version of truth across all the businesses, and one of the things that really drives that need is the need for compliance. If you look at the IDC survey findings, close to half of executive officers are particularly concerned with compliance and auditing in spend management policy. Why? Because it allows both more control and deeper trust in budgeting and forecasting, but also because if there are quality issues they can make sure they are getting the right parts from the right suppliers.

The capability for Danfoss to pull all of that together into a single version of truth — as well as with their travel and expenses — gives them the ability to make sure that they are complying with what they need to, holistically across the business without it being spotty. So that was one of the key examples.

Another one of our customers, Swisscom, a telecommunications company in Switzerland, a large company also, needed intelligent spend management to manage their indirect spend and their contingent workforce.

They have 16,000 contingent workers, with 23,000 emails and a couple of thousand phone calls from suppliers on a regular basis. Within that supply chain they needed to determine supplier selection and rates on receipt of purchase requisitions. There were questions about supplier suitability in the subsequent procurement stages. They wanted a proactive, self-service approach to procurement to achieve visibility into that, as well as into its suppliers and the external labor that often use and install the things that they procure.

So, by moving from a disconnected system to the SAP intelligent spend offering, they were able to gain cohesive information and a clear view of their processes, which includes those around consumer, supplier, procurement, and end user services. They said that using this user-friendly platform allowed them to quickly reach compliance and usability by all of their employees across the company. It made it very easy for them to do that. They simplified the user experience.

And they were able to link suppliers and catalogs very closely to achieve a vision of total intelligent spend management using SAP Fieldglass and SAP Ariba. They said they transformed procurement from a reactive processing role to one of proactively controlling and guiding, thanks to uniform and transparent data, which is really fundamental to intelligent spend.

Gardner: Before we close out, let’s look to the future. It sounds like you can do so much with what’s available now, but we are not standing still in this business. What comes next technologically, and how does that combine with process efficiencies and people power — giving people more intelligence to work with? What are we looking for next when it comes to how to further extend the value around intelligent spend management?

Harmony and integration ahead 

Hofler: Extending the value into the future begins with the harmonization of data and the integration of processes seamlessly. It’s process-driven, and it doesn’t really matter what’s below the surface in terms of the technology because it’s all integrated and applied to a process seamlessly and holistically.

What’s coming in the future on top of that, as companies start to take advantage of this, is that more intelligent technologies are being infused into different parts of the process. For example, chatbots and the ability for users to interact with the system in a natural language way. Automation of processes is another example, with the capability to turn some processes into being fully autonomous, where the decisions are based on the learning of the machines.

The user interaction can then become one of oversight and exception management, where the autonomous processes take over and manage when everything fits inside of the learned parameters. It then brings in the human elements to manage and change the parameters and to manage exceptions and the things that fall outside of that.

There is never going to be removal of the human, but the human is now able with these technologies to become far more strategic, to focus more on analytics and managing the issues that need management and not on repetitive processes that can be handled by the machine. When you have that connected across your entire processes, that becomes even more efficient and allows for more analysis. So that’s where it’s going.
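
Below is a minimal sketch of that oversight-and-exception pattern: an automated step handles items that fall inside learned parameters and routes everything else to a person. The invoice fields and the tolerance value are assumptions made up for illustration, not part of any specific SAP workflow.

```python
# Minimal sketch of autonomous processing with human exception management.
# The invoice fields and tolerance value are illustrative assumptions only.
LEARNED_TOLERANCE = 0.05  # auto-approve when the invoice is within 5% of the purchase order

def route(invoice_amount, po_amount):
    """Auto-approve inside the learned band; escalate everything else for human review."""
    deviation = abs(invoice_amount - po_amount) / po_amount
    return "auto-approved" if deviation <= LEARNED_TOLERANCE else "escalated for review"

print(route(invoice_amount=1_020, po_amount=1_000))  # within tolerance
print(route(invoice_amount=1_400, po_amount=1_000))  # outside tolerance, goes to a human
```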

Plus, we’re adding more ecosystem partners. When you have a networked ecosystem on intelligent spend, that allows for very easy connections to providers who can augment the core intelligent spend functions with data. For example, for attaining global tax, compliance, risk, and VAT rules through partners like American Express and Thomson Reuters. All of these things can be added. You will see that ecosystem growing to continue to add exponential value to being a part of an intelligent spend management platform.

Gardner: There are upcoming opportunities for people to dig into this and understand it and find the ways that it makes sense for them to implement, because it varies from company to company. What are some ways that people can learn details?

Hofler: There is a lot coming up. Of course, you can always go to ariba.com, fieldglass.com or sap.com and find out about our intelligent spend management offerings. We will be having our SAP Ariba Live conference in Las Vegas in March, and so tons and tons of content there, and lots of opportunity to interact with other folks who are in the same situation and implementing these similar things. You can learn a lot.

We are also doing a webinar with IDC to dig into the details of the survey. You can find information about that on ariba.com, and certainly if you are listening to this after the fact, you can hear the recording of that on ariba.com and download the report.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: SAP Ariba.

How an MSP brings comprehensive security services to diverse clients

As businesses move more of their IT services to the cloud, reducing complexity and making sure that security needs are met throughout the migration process are now top of mind.

For a UK managed services provider (MSP), finding the right mix of security strength and ease-of-use for its many types of customers became a top priority.

The next managed services security management edition of BriefingsDirect explores how Northstar Services, Ltd. in Bristol-area England adopted Bitdefender Cloud Security for Managed Service Providers (MSPs) to both improve security for their end users and to make managing that security at scale and from the cloud easier than ever.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to discuss the role of the latest Bitdefender security technology — and making MSPs more like security services providers — is John Williams, Managing Director at Northstar Services, Ltd. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What are some of the top trends driving the need for an MSP such as Northstar to provide even better security services?

Williams: We used to get lots of questions regarding stability for computers. They would break fairly regularly and we’d have to do hardware changes. People were interested in what software we were going to load — what the next version of this, that, and the other was — but those discussions have changed a great deal. Now everybody is talking about security in one form or another.

Gardner: Whenever you change something — whether it’s configurations, the software, or the service provider, like a cloud — it leaves gaps that can create security problems. How can you be doubly sure when you make changes that the security follows through?

The value of visibility, 24-7 

Williams: We used to install a lot of antivirus software on centralized servers. That was very common. We would set up a big database and install security software on there, for example. And then we would deploy it to the endpoints from those servers, and it worked fairly well. Yet it was quite a lot of work to maintain it.

But now we are supporting people who are so much more mobile. Some customers are all out and about on the road. They don’t go to the office. They are servicing their customers, and they have their laptop. But they want the same level of security as they would have on a big corporate network.

So we have defined the security products that give us visibility into what’s happening. We have to know that they are up to date, and we have to manage those clients wherever they are, on whatever device they have — all from one place.

Gardner: Even though these customers are on the frontline, you’re the one as the MSP they are going to call up when things don’t go right.

Williams: Yes, absolutely. We have lots of customers who don’t have on-site IT resources. They are not experts. They often have small businesses with hundreds of users. They just want to call us, find out what’s going on when they see a problem on their computers, and we have got to know whether that’s a security issue or an application that’s broken.

But they are very concerned that we have that visibility all of the time. Our engineers need to be able to access that easily and address it as soon as a call comes in.

Gardner: Before we learn more about your journey to solving those issues, tell us about Northstar. How long have you been around and what’s the mainstay of your business?

Williams: I have been running Northstar for more than 20 years now, since January 1999. I had been working in IT as an IT support engineer in large organizations for a few years, but I really wanted to get involved in looking after small businesses.

I like that because you get direct feedback. People appreciate it when you make an effort. They want to tell you that you did a good job, and they want to know that someone is paying attention to them.

So it was a joy to be able to get that up and going. We have a great team here now and that’s what gets me out of bed in the morning — working with our team to look after our customers.

Gardner: Smaller businesses moving to the cloud has become more the norm lately. How are your smaller organizations managing that? Sometimes with the crossover — the hybrid period between having both on-premises devices as well as cloud services — can be daunting. Is that something you are helping them with?

Moving to cloud step-by-step 

Williams: Yes, absolutely. We often see circumstances where they want to move one set of systems to the cloud before they want to move everything to the cloud. So they generally are on a trend where they want to get rid of in-house services, especially at the smaller end of the market. But they often have legacy systems that they can’t easily port the services off of. They might have been custom written, or they are older versions that they can’t afford to upgrade at this point. So we end up supporting partly in the cloud and partly on-premises.

And some customers, that’s their strategy. They take a particular workload, a database, for example, or some graphics software that they use, that runs brilliantly on servers in their offices. But they want to outsource other applications.

So, when we look at security, we need software that’s going to be able to work across those different scenarios. It can’t just be one or the other. It’s no good if it’s just on-premises, and no good if it’s just in the cloud. It has to be able to do all of that, all from one console because that’s what we are supporting.

Gardner: John, what were your requirements when you were looking for the best security to accomplish this set of requirements? What did you look for and how did your journey end?

Williams: Well, you can talk about the things being easy to manage, things being visible and with good reporting. All those things are important, and we assessed all of those. But the bottom line is, does it pick up infections? Is it able to keep those units secure and safe? And when an infection has happened, does it clean them up or stop them in their tracks quickly?

That has to be the number one thing, because whatever other savings you might make in looking after security, the fact that something that’s trying to do something bad is blocked — that has to be number one; stopping it in its tracks and getting it off that unit as quickly as possible. The sooner it’s stopped, the less damage and the less time the engineers have to spend rebuilding the units that have been killed by viruses or malware.

And we used to do quite a lot of that. With the previous antivirus security software we used, there was a constant stream of cleaning up after infections. Although it would detect and alert us, very often the damage was already done. So, we had a long period of repairing that, often rebuilding the whole operating system (OS), which is really inconvenient for customers.

And again, coming back to the small businesses, they don’t have spare PCs hanging around that they can just get out of the cupboard and carry on. Very often that’s the most vital kit that they own. Every moment it’s out of action, that’s directly affecting their bottom line. So detecting infections and stopping them in their tracks was our number-one criteria when we were looking.

Gardner: In the best of all worlds, the end user is not even aware that they were infected, not aware it was remediated, not having to go through the process of rebuilding. That’s a win-win for everyone.

Automation around security is therefore top of mind these days. What have you been able to do with Bitdefender Cloud Security for MSPs that accomplishes that invisibility to the end user — and also helps you with automation behind the scenes?

Stop malware in its tracks 

Williams: Yes, the stuff was easy to deploy. But what it boils down to is that we just don’t get as many issues to have to automate the resolution for. So automation is important, and the things it does are useful. But the number of issues that we have to deal with is so small now that even if we were to 100 percent automate, it wouldn’t make a massive savings, because it’s not interrupting us very much.

It’s stopping malware in its tracks and cleaning it up. Most of the time we are seeing that it has done it, rather than us having to automate a script to do some removal or some changes or that kind of thing. It has already done it. I suppose that is automated, if you think about it, yes.

Gardner: You said it’s been a dramatic difference between the past and now with the number of issues to deal with. Can you qualify that?

Williams: In the three or four years we have used Bitdefender, when we look at the number of tickets that we used to get in for antivirus problems on people’s laptops and PCs, they have just dropped to such a low level now, it’s a tiny proportion. I don’t think it’s even coming up on a graph.

You record the type of ticket that comes in, and it’s a printer issue, a hardware issue. The virus removal tickets are not featuring high enough to even appear on the graph because Bitdefender is just dealing with those infections and fixing them without having to get to them and rebuild PCs.

Gardner: When you defend a PC, Mac or mobile device, that can have a degradation effect. Users will complain about slow apps, especially when the antivirus software is running. Has there been an improvement in terms of the impact of the safety net when it comes to your use of Bitdefender Cloud Security for MSPs?

Williams: Yes, it’s much lighter on the OS than the previous software that we were using. We were often getting calls from customers to say that their units were running slowly because of the heavy load it was having to do in order to run the security software. That’s the exact opposite of what you want. You are putting this software on there so that they get a better experience; in other words, they are not getting infected as often.

But then you’re slowing down their work every day, I mean, that’s not a great trade-off. Security is vital but if it has such a big impact on them that they are losing time by just having it on there — then that’s not working out very well.

Now [with Bitdefender Cloud Security for MSPs] it’s light enough that it just isn’t an issue. We don’t get customers saying, “Since you put the antivirus on my laptops, it seems to be slower.” In fact, it’s usually the opposite.

Gardner: I’d like to return to the issue of cloud migration. It’s such a big deal when people move across a continuum of on-premises, hybrid, and cloud, and need to maintain security as they move. It’s like changing the wings on an airplane and keeping it flying at the same time.

What is it about the way that Bitdefender has architected its solution that helps you, as a service provider, guide people through that transition but not lose a sense of security?

Don’t worry, be happy 

Williams: It’s because we are managing all of the antivirus licenses in the cloud, whether they are on-premises, inside an office where they are using those endpoints,  or whether they are out and about; whether it’s a client-server running in cloud services or running on-premises, we are putting the same software on there and managing it in the same console. It means we don’t worry about that security piece. We know that whatever they change to, whatever they are coming from, we can put the same software on and manage it in the same place — and we are happy.

Gardner: As a service provider I’m sure that the amount of man hours you have to apply to different solutions directly affects your bottom line. Is there something about the administration of all of this across your many users that’s been an improvement? The GravityZone Cloud Management console, for example, has that allowed you to do more with less when it comes to your internal resources?

Williams: Yes, and the way that I gauge that is the amount of time. Engineers want to do an efficient job, that’s what they like, they want to get to the root of problems and fix them quickly. So any piece of software or tool that doesn’t work efficiently for them, I get a long list of complaints about on a regular basis. All engineers want to fix things fast because that’s what the customer wants, and they are working on their behalf.

Before, I would have constant complaints about how difficult it was to manage and deploy software on the units if they needed to be decommissioned. It was just troublesome. But now I don’t get any complaints over it. The staff is nothing but complimentary about the software. That just makes me happy because I know that they are able to work with it, which means that they are doing the job that they want to do, which is helping our customers and keeping them happy. So yes, it’s much better.

Gardner: Looking to the future, is there something that you are interested in seeing more of? Perhaps around encryption or the use of machine learning (ML) to give you more analytics as to what’s going on? What would you like to see out of your security infrastructure and services from the cloud in the next couple of years?

The devil’s in the data detail 

Williams: One thing that customers are talking to us about quite a bit now is data security. So they are thinking more about the time when they are going to have to report the fact that they’ve been attacked. And no software on earth is perfect. The whole point of security is that the threat continually evolves.

At the point where you’ve had a breach of some kind, you want to understand what’s happened. And so, having information back from the security software that helps you to understand how the breach happened — and the extent of it — that’s becoming really important to customers. When they submit those reports, as legally they have to do, they want to have accurate information to say, “We had an infection, and that’s it.” If they don’t know exactly what the extent of it was – or whether any data was accessed or infected or encrypted without having that detail — that’s a problem.

So the more information we can gain from the security software about the extent of a breach, the more important that is going to be going forward.

Gardner: Anything else come to mind about what you’d like to see from the technology side?

Williams: Automation is important, and so is the artificial intelligence (AI) side of it, where the software itself learns about what’s happening and can give you an idea when it spots something that’s out of the ordinary. That will be more useful as time goes on.

Gardner: John, what advice do you have for other MSPs when it comes to attaining a better security posture?

Williams: Don’t be afraid of defining the security services. You have to lead that conversation, I think. That’s what customers want to know. They want to know that you have thought about it, and that it’s at the very forefront of your mind.

We go meet our customers regularly and we usually have a standard agenda that we use. The first item on the agenda is security. And that journey for each customer is different. They are starting from different places. So we like to talk about where they are, what’s the next thing that they can do to make sure they are doing everything they can to protect the data they have gathered from their customers, and to look after their data about their staff, too, and to keep their services running.

We put that at the top of the agenda for every meeting. That’s a great way of behaving as a service provider. But, of course, in order to do that, to deliver on that, you have to have the right tools. You have to say, “Okay, if I am going to be in that role to help people with a security, I have to have those tools in place.”

If they are complicated, difficult to use, and hard to implement — then that’s going to make it horrible. But if they are simple and give you great visibility, then you are going to be able to deliver a service that customers will really want to buy.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Bitdefender.

Better IT security comes with ease in overhead for rural Virginia county government

The next public sector security management edition of BriefingsDirect explores how managing IT for a rural Virginia county government means doing more with less — even as the types and sophistication of cybersecurity threats grow.

For County of Caroline, a small team of IT administrators has built a technically advanced security posture that blends the right amounts of automation with flexible, cloud-based administration.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to share their story on improving security in a local government organization are Bryan Farmer, System Technician, and David Sadler, Director of Information Technology, both for County of Caroline in Bowling Green, Virginia. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Dave, tell us about County of Caroline and your security requirements. What makes security particularly challenging for a public sector organization like yours?

Sadler: As everyone knows, small governments in the State of Virginia — and all across the United States and around the world — are being targeted by a lot of bad guys. For that reason, we have the responsibility to safeguard the data of the citizens of this county — and also of the customers and other people that we interact with on a daily basis. It’s a paramount concern for us to maintain the security and integrity of that data so that we have the trust of the people we work with.

Gardner: Do you find that you are under attack more often than you used to be?

Sadler: The headlines of nearly any major newspaper you see, or news broadcasts that you watch, show what happens when the bad guys win and the local governments lose. Ransomware, for example, happens every day. We have seen a major increase in these attacks, or attempted attacks, over the past few years.

Gardner: Bryan, tell us a bit about your IT organization. How many do you have on the frontlines to help combat this increase in threats?

Farmer: You have the pleasure today of speaking with the entire IT staff in our little neck of the woods. It’s just the two of us. For the last several years it was a one-man operation, and they brought me on board a little over a year-and-a-half ago to lend a hand. As the county grew, and as the number of users and data grew, it just became too much for one person to handle.

Gardner: You are supporting how many people and devices with your organization?

Small-town support, high-tech security

Farmer: We are mainly a Microsoft Windows environment. We have somewhere in the neighborhood of 250 to 300 users. If you wrap up all of the devices, Internet of Things (IoT) stuff, printers, and things of that nature, it’s 3,000 to 4,000 devices in total.

Sadler: But the number of devices that actually touch our private network is in the neighborhood of around 750.

Farmer: We are a rural area, so we don’t have the luxury of having fiber in between all of our locations and sites. So we have to resort to virtual private networks (VPNs) to get traffic back and forth. There are airFiber connections, and we are doing some stuff over the air. We are a mixed batch. There is a little bit of everything here.

Gardner: Just as with any business, you have to put your best face forward to your citizens, voters, and taxpayers. They are coming for public services, going online for important information. How large is your county, and what sort of applications and services are you providing to your citizens?

Farmer: Our population is 30,000?

Sadler: Probably 28,000 to 30,000 people, yes.

Farmer: A large portion of our county is covered by a U.S. Army training base, so a lot of it is uninhabited, so to speak. The population is condensed into a couple of small areas.

We host a web site and forum. It’s not as robust as what you would find in a big city or a major metropolitan area, but people can look up their taxes, permit prices, things of that nature; basic information that the average citizen will need such as utility information.

Gardner: With a potential of 30,000 end users — and just two folks to help protect all of the infrastructure, applications, and data — automation and easy-to-use management must be super important. Tell us where you were in your security posture before and how you have recently improved on that.

Finding a detection solution

Sadler: Initially when I started here, coming over from the private sector, we were running a product from one of the big-name companies, but it was basically not giving us the level of protection we needed, you could say.

So we switched to a second company, Kaspersky, and immediately we started finding detections of existing malware and different anomalies in the network that had existed for years undetected by Symantec. So we settled on Kaspersky. And anytime you go to an enterprise-level antivirus (AV) endpoint solution, the setup, adjustment, and on-boarding process takes longer than what a lot of people would lead you to believe.

It took about six months with Kaspersky. I was by myself, so it took me that long to get everything set up and running like it should, and it performed extremely well. I had a lot of granularity as far as control of firewalls and that type of thing.

The granularity is what we like because we have users that have a broad range of needs. We have to be able to address all of those broad ranges under one umbrella.

Unfortunately, when the US Department of Homeland Security decided at first to recommend against using [Kaspersky] and then later banned the product from use, we were forced to look for a replacement solution, and we evaluated multiple different products.

Again, what we were looking for was granularity because we wanted to be able to address the needs of everyone under the umbrella with one particular product. Many of the different AV endpoint solutions we evaluated lacked that granularity. It was, more or less, another version of the software that we started with. They didn’t give a real high level of protection or did not allow for adjustment.

When we started evaluating a replacement, we were finding things that we could not do with a particular product. We spent probably about six months evaluating different products — and then we landed on Bitdefender.

Now, coming from the private sector and dealing with a lot of home users, my feelings for Bitdefender were based on the reputation of their consumer-grade product. They had an extremely good reputation in the consumer market. Right off the bat, they had a higher score when we started evaluating. It doesn’t matter how easy a product is to use or adjust; if its basic detection level is low, then everything else is a waste of time.

Bitdefender right off the bat has had a reputation for having a high level of detection and protection as well as a low impact on the systems. Being a small, rural county government, we use machines that are unfortunately a little bit older than what would be recommended, five to six years old. We are using some older machines that have lower processing power, so we could not take a product that would kill the performance of the machine and make it unusable.

During our evaluations we found that Bitdefender performed well. It did not have a lot of system overhead and it gave us a high level of protection. What’s really encouraging is when you switch to a different product and you start scanning your network and find threats that had existed there for years undetected. Now you know at least you are getting something for your money, and that’s what we found with Bitdefender.

Gardner: I have heard that many times. It has to, at the core, be really good at detecting. All the other bells and whistles don’t count if that’s not the case. Once you have established that you are detecting what’s been there, and what’s coming down the wire every day, the administration does become important.

Bryan, what is the administration like? How have you improved in terms of operations? Tell us about the ongoing day-to-day life using Bitdefender.

Managing mission-critical tech

Farmer: We are Bitdefender GravityZone users. We host everything in the cloud. We don’t have any on-premises Bitdefender machines, servers, or anything like that, and it’s nice. Like Dave said, we have a wide range of users and those users have a wide range of needs, especially with regards to Internet access, web page access, stuff like that.

For example, a police officer or an investigator needs to be able to access web sites that a clerk in the treasurer’s office just doesn’t need to be able to access. Being able to sit at my desk, or take my laptop anywhere I have an Internet connection, and make an adjustment when someone cannot get to something they need is invaluable. It saves so much time.

We don’t have to travel to different sites. We don’t have to log-in to a server. I can make adjustments from my phone. It’s wonderful to be able to set up these different profiles and to have granular control over what a group of people can do.

We can adjust which programs they can run. We can remove printing from a network. There are so many different ways that we can do it, from anywhere as long as we have a computer and Internet access. Being able to do that is wonderful.
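
As an illustration of that kind of granularity, the sketch below models per-group endpoint policies (one for investigators, one for office staff) and evaluates a web request against them. The group names and policy fields are hypothetical stand-ins, not GravityZone’s actual policy schema.

```python
# Hedged sketch of per-group policy granularity: different device groups get
# different web-access and device-control rules. Fields are illustrative only.
from dataclasses import dataclass, field

@dataclass
class EndpointPolicy:
    name: str
    allowed_web_categories: set = field(default_factory=set)
    allow_printing: bool = True
    allow_usb_storage: bool = False

POLICIES = {
    "investigations": EndpointPolicy(
        name="investigations",
        allowed_web_categories={"general", "social-media", "file-sharing"},
        allow_usb_storage=True,
    ),
    "treasurer-office": EndpointPolicy(
        name="treasurer-office",
        allowed_web_categories={"general", "government"},
        allow_printing=True,
    ),
}

def is_request_allowed(group: str, web_category: str) -> bool:
    """Evaluate a web request against the policy assigned to the device group."""
    policy = POLICIES.get(group)
    return bool(policy) and web_category in policy.allowed_web_categories

print(is_request_allowed("investigations", "social-media"))    # True
print(is_request_allowed("treasurer-office", "social-media"))  # False
```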

Gardner: Dave, there is nothing more mission-critical than a public safety officer and their technology. And that technology is so important to everybody today, including a police officer, a firefighter, and an emergency medical technician (EMT). Any feedback when it comes to the protection and the performance, particularly in those mission-critical use cases?

Sadler: Bitdefender has allowed us the granularity to be able to adjust so that we don’t interfere with those mission-critical activities that the police officer or the firefighter are trying to perform.

So initially there was an adjustment period. Thank goodness everybody was patient during that process, and now, about a year into it, a little over a year, we have gotten things set pretty well. The adjustments that we are having to make now are minor. Like Bryan said, we don’t have an on-premises security server here. Our service is hosted in the cloud, and we have found that to be an actual benefit. Before, with a security server and the software hosted on-premises, there were machines that never touched the network. We are looking at probably 40 to 50 percent of our machines that we would have had to manage and protect [manually] because they never touch our network.

The Bitdefender GravityZone cloud-based security product offers us the capability to be able to monitor for detections, as well as adjust firewalls, etc., on machines that we never touch or never see on our network. It’s been a really nice product for us and we are extremely happy with its performance.

Gardner: Any other metrics of success for a public sector organization like yours with a small support organization? In a public sector environment you have to justify your budget. When you tell the people overseeing your budget why this is a good investment, what do you usually tell them?

Sadler: The benefit we have here is that our bosses are aware of the need to secure the network. We have cooperation from them. Because we are diligent in our evaluation of different products, they pretty much trust our decisions.

Justifying or proving the need for a security product has not been a problem. And again, the day-to-day announcements that you see in the newspaper and on web sites about data breaches or malware infections — all that makes justifying such a product easier.

Gardner: Any examples come to mind that have demonstrated the way that you like to use these products and these services? Anything come to mind that illustrates why this works well, particularly for your organization?

Stop, evaluate, and reverse infections

Farmer: Going back to the cloud hosting, all a machine has to do is touch the Internet. We have a machine in our office right now that one of our safety officials had, and we received an email notification that something was going on: that machine needed to be disinfected, and we needed to take a look at it.

The end-user didn’t have to notice it. We didn’t have to wait until it was a huge problem or a ransomware thing or whatever the case may be. We were notified automatically in advance. We were able to contact the user and get to the machine. Thankfully, we don’t think it was anything super-critical, but it could have been.

That automation was fantastic, and it meant not having to react so aggressively, so to speak. The proactivity that a solution like Bitdefender offers is outstanding.

Gardner: Dave, anything come to mind that illustrates some of the features or functions or qualitative measurements that you like?

Sadler: Yes, with Bitdefender GravityZone, it will sandbox suspicious activity, watch its actions, and then roll back if something bad is going on.

We actually had a situation where a vendor that we use on a regular basis, from a large, well-respected company, called in to support a machine that they had in one of our offices. We were immediately notified via email that a ransomware attack was being attempted.

So this vendor was using a remote desktop application. Somehow the end-user got directed to a bad site, and when it failed the first time on their end, all they could tell was, “Hey, my remote desktop software is not working.” They stopped and tried it again.

We were notified on our end that a ransomware attack had been stopped, evaluated, and reversed by Bitdefender. Not once, but twice in a row. So we were immediately able to contact that office and say, “Hey, stop what you are doing.”

Then we followed up by disconnecting that computer from the network and evaluating them for infection, to make sure that everything had been reversed. Thank goodness, Bitdefender was able to stop that ransomware attack and actually reverse the activity. We were able to get a clean scan and return that computer back to service fairly quickly.

Gardner: How about looking to the future? What would you like to see next? How would you improve your situation, and how could a vendor help you do that?

Meeting government requirements

Sadler: The State of Virginia just passed a huge bill dealing with election security, and everybody knows that’s a huge, hot topic when it comes to security right now. And because most of the localities in Virginia are independent localities, the state passed a bill that allows the state Department of Elections and the US Department of Homeland Security to step in a little bit more with the local governments and monitor or control their security, which in the end is going to be a good thing.

But a lot of the capabilities we are now being required to have are already answered by the Bitdefender product, for example, automated patch management and notification of security issues.

So, Bitdefender right now is already answering a lot of the new requirements. The one thing that I would like to see … from what I understand the cloud-based version of Bitdefender does not allow you to do mobile device management. And that’s going to be required by some of these regulations that are coming down. So it would be really nice if we could have one product that would do the mobile device management as well as the cloud-based security protection for a network.

Gardner: I imagine they hear you loud and clear on that. When it comes to compliance like you are describing, from a state down to a county, for example, many times there are reports and audits that are required. Is that something that you feel is supported well? Are you able to rise to that occasion already with what you have installed?

Farmer: Yes, Bitdefender is a big part of us being able to remain compliant. The Criminal Justice Information Services (CJIS) audit is one we have to go through on a regular basis. Bitdefender helps us address a lot of the requirements of those audits as well as some of the upcoming audits that we haven’t seen yet that are going to be required by this new regulation that was just passed this past year in the Commonwealth of Virginia.

But from the previews that we are getting on the requirements of those newly passed regulations, it does appear that Bitdefender is going to be able to help us address some of those needs, which is good. By far, its capability to answer some of those needs is superior to the products that we have been using in the past.

Gardner: Given that many other localities, cities, towns, municipalities, counties are going to be facing similar requirements, particularly around election security, for example, what advice would you give them, now that you have been through this process? What have you learned that you would share with them so that they can perhaps have an easier go at it?

Research reaps benefits in time, costs 

Farmer: I have seen in the past a lot of places that look at the first line item, so to speak, and then make a decision on that. Then, when they get down the page a little bit and see some of the other requirements, they end up in situations where they have two, three, or four pieces of software, and a couple of different pieces of hardware, working together to accomplish one goal. Certainly, in our situation, Bitdefender checks a lot of different boxes for us. If we had not taken the time to research everything properly and get into the full breadth of what it is capable of, we could have spent a lot more money and created a lot more work and headaches for ourselves.

A lot of people in IT will already know this, but you have to do your homework. You have to see exactly what you need and get a wide-angle view of it and try to choose something that helps do all of those things. Then automate off-site and automate as much as you can to try to use your time wisely and efficiently.

Gardner: Dave, any advice for those listening? What have you learned that you would share with them to help them out?

Sadler: The breadth of the protection that we are getting from Bitdefender has been a major plus. So again, like Bryan said, find the product that you can put together under one big umbrella — so that you have one point of adjustment. For example, we are able to adjust firewalls, virus protection, and off-site USB protection — all this from one single control panel instead of having to manage four or five different control panels for different products.

It’s been a positive move for us. We look forward to continuing to work with the product, and we are watching the new features still under development. We see new features coming out constantly. So if anyone from Bitdefender is listening, keep up the good work. We will hang in there with you and keep working.

But the main thing for IT operators is to evaluate your possibilities, evaluate whatever possible changes you are going to make before you do it. It can be an investment of money and time that goes wasted if you are not sure of the direction you are going in. Use a product that has a good reputation and one that checks off all the boxes like Bitdefender.

Farmer: In a lot of these situations, when you are working with a county government or a school you are not buying something for 30, 60, or 90 days – instead you are buying a year at a time. If you make an uninformed decision, you could be putting yourself in a jam time- and labor-wise for the next year. That stuff has lasting effects. In most counties, we get our budgets and that’s what we have. There are no do-overs on stuff like this. So, it speaks back to making a well-informed decision the first time.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Bitdefender.

Delivering a new breed of patient access best practices requires an alignment of people, process, and technology

The next BriefingsDirect healthcare finance insights discussion explores the rapidly changing ways that caregiver organizations on-board and manage patients.

How patients access their healthcare is transitioning to the digital world — but often in fits and starts. This key process nonetheless plays a major role in how patients perceive their overall experiences and determines how well providers manage both care and finances.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

Stay with us to unpack the people, process, and technology elements behind modern patient access best practices. To learn more, we are joined by an expert panel: Jennifer Farmer, Manager of Patient Access and Admissions at Massachusetts Eye and Ear Infirmary in Boston; Sandra Beach, Manager of the Central Registration Office, Patient Access, and Services and Pre-Services at Cooley Dickinson Healthcare in Northampton, Mass., and Julie Gerdeman, CEO of HealthPay24 in Mechanicsburg, Penn. The panel is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Jennifer, for you and your organization, how has the act of bringing a patient into a healthcare environment — into a care situation — changed in the past five years?

Farmer: The technology has exploded and it’s at everyone’s fingertips. So five years ago, patients would come to us, from referrals, and they would use the old-fashioned way of calling to schedule an appointment. Today it is much easier for them. They can simply go online to schedule their appointments.

They can still do walk-ins as they did in the past, but it’s much easier access now because we have the ways and means for the patients to be triaged and given the appropriate information so they can make an appointment right then and there, versus waiting for their provider to call to say, “Hey, we can schedule your appointment.” Patients just have it a lot easier than they did in the past.

Gardner: Is that due to technology? It seems to me that when I used to go to a healthcare organization they would be greeting me by handing me a clipboard, but now they are always sitting at a computer. How has the digital experience changed this?

Farmer: It has changed it drastically. Patients can now complete their accounts online and so the person sitting at the desk already has that patient’s information. So the clipboard is gone. That’s definitely something patients like. We get a lot of compliments on that.

It’s easier to have everything submitted to us electronically, whether it’s medical records or health insurance. It’s also easier for us to communicate with the patient through the electronic health record (EHR). If they have a question for us or we have a question for them, the health record is used to go back and forth.

There are not as many phone calls as there used to be, and not as many loose ends. There is also the advent of telemedicine these days, so doctors can have a discussion or a meeting with the patient on their cell phones. Technology has definitely changed how medicine is being delivered, as well as improving the patient experience.

Gardner: Sandra, how important is it to get this right? It seems to me that first impressions are important. Is that the case with this first interception between a patient and this larger, complex healthcare organization and even ecosystem?

Beach: Oh, absolutely. I agree with Jennifer that so many things have changed over the last five years. It’s a benefit for patients because they can do a lot more online; they can electronically check in now, for example. That’s a new function that’s going to be coming with [our healthcare application] Epic so that patients can do that all online.

The patient portal experience is really important too because patients can go in there and communicate with the providers. It’s really important for our patients as telemedicine has come a huge distance over the years.

Gardner: Julie, we know how important getting that digital trail of a patient from the start can be; the more data the better. How have patient access best practices been helped or hindered by technology? Are the patients perceiving this as a benefit?

Gerdeman: They are. There has been a huge improvement in patient experience from the advent and increase of technology. A patient is also a consumer. We are all just people, and in our daily lives we do more research.

So, for patient access, even before they book an appointment, either online or on the phone, they pull out their phones and do a ton of research about the provider institution. That’s just like folks do for anything personal, such as a local service like dry cleaning or a haircut. Just as you would for anything in your neighborhood or community, you do the same for your healthcare, because you are a consumer.

The same level of consumer support that’s expected in our modern daily lives has now come to be expected with our healthcare experiences. Leveraging technology for access, and as Jennifer and Sandra mentioned, the actual clinical experience — via telemedicine and digital transformation — is just getting into and will continue to impact healthcare.

Gardner: We have looked at this through the lens of the experience and initial impressions — but what about economics? When you do this right, is there a benefit to the provider organization? Is there a benefit to the patient in terms of getting all those digital bits and bytes and information in the right place at the right time? What are the economic implications, Jennifer?

Technology saves time and money

Farmer: They are two-fold. One, the economic implication for a patient is that they don’t necessarily have to take a day off from work or leave work early. They are able to continue via telemedicine, which can be done in the evening. When providers offer evening and weekend appointments, that satisfies the patient, so they don’t have to spend as much time trying to rearrange things, get daycare, or pay for parking.

For the provider organization, the economic implications are that we can provide services to more patients, even as we streamline certain services so that it’s all more efficient for the hospital and the various providers. Their time is just as valuable as anyone else’s. They also want to reduce the wait times for someone to see a patient.

The advent of using technology across different avenues of care reduces that wait time for available services. The doctors and technicians are able to see more patients, which obviously is an economic positive for the hospital’s bottom line.

Gardner: Sandra, patients are often not just having one point of intersection, if you will, with these provider organizations. They probably go to a clinic, then a specialist, perhaps rehabilitation, and then use pharmaceutical services. How do we make this more of a common experience for how patients intercept such an ecosystem of healthcare providers?

Beach: I go back to the EHRs that Jennifer talked about. With us being in the Partners system, no matter where you go — you could go to a rehab appointment, a specialist, to the cancer center in Boston — all your records are accessible for the physicians, and for the patients. That’s a huge step in the right direction because, no matter where the patient goes, you can access the records, at least within our system.

Gardner: Julie, to your point that the consumer experience is dictating people’s expectations now, this digital trail and having that common view of a patient across all these different parts of the organization is crucial. How far along are we with that? It seems to me that we are not really fully baked across that digital experience.

Gerdeman: You’re right, Dana. I think the Partners approach is an amazing exception to the rule because they are able to see and share data across their own network.

Throughout the rest of the country, it’s a bit more fractured and splintered. There remains a lot of friction in accessing records as you move — even in some cases within the same healthcare system — from a clinic or the emergency department (ED) into the facility or to a specialist.

The challenge is one of interoperability of data and integration of that data. Hospitals continue to go through a lot of mergers and acquisitions, and every acquisition creates a new challenge.

From the consumer perspective, they want that to be invisible. It should be invisible: the right data should be on their phones regardless of what the encounter was, or what the financial obligation for the encounter was, all of it. So that’s the expectation, and yet there is still a way to go in terms of interoperability and integration on the healthcare side.

Gardner: We have addressed the process and the technology, but the third leg on the stool here is the people. How can the people who interact with patients at the outset foster a better environment? Has the role and importance of who is at that initial intercept with the patient been elevated? So much rides on getting the information up front. Jennifer, what about the people in the role of accessing and on-boarding patients, what’s changed with them?

Get off to a great start

Farmer: That is the crux of the difference between a good patient experience and a terrible patient experience, that first interaction. So folks who are scheduling appointments and maybe doing registration — they may be at the information desk — they are all the drivers to making sure that that patient starts off with a great experience.

Most healthcare organizations are delving into different facets of customer service in order to ensure that the patient feels great and like they belong when they come into an organization. Here at Mass. Eye and Ear, we practice something called Eye Care. Essentially, we think about how you would want yourself and your family members to be treated, to make sure that we all treat patients who walk in the door like they are our family members.

When you lead with such a positive approach it downstreams into that patient’s feelings of, “I am in the right place. I expect my care to be fantastic. I know that I’m going to receive great care.” Their positive initial outlook generally reflects the positive outcome of their overall visit.

This has changed dramatically even within the past two to three years. Most providers were siloed, with different areas or departments. That meant patients would hear, “Oh, sorry, we can’t help you. That’s not our area.” To make it a more inclusive experience, everyone in the organization is now a brand ambassador.

We have to make sure that people understand that, to make it more inclusive for the patient and less hectic for the patient, no matter where you are within a particular organization. I’m sure Sandra can speak to this as well. We are all important to that patient, so if you don’t know the answer, you don’t have to say, “I don’t know.” You can say, “Let me get someone who can assist you. I’ll find some information for you.”

It shouldn’t be work for them when patients walk in the door. They should be treated as a guest, welcomed and treated as a family member. Three or four years ago, it was definitely the mindset of, “Not my job.” At other organizations that I visit, I do see more of a helpful environment, which has changed the patient perception of hospitals as well.

Beach: I couldn’t agree more, Jennifer. We have the same thing here as with your Eye Care. I ask our staff every day, “How would you feel if you were the patient walking in our door? Are we greeting patients with a nice, warm, friendly smile? Are we asking, ‘How can I help you today?’ Or, ‘Good morning, what can I do for you today?’”

We keep that at the forefront for our staff so they are thinking about this every time that they greet a patient, every day they come to work, because patients have choices, patients can go to other facilities, they can go to other providers.

We want to keep our patients within our healthcare system. So it’s really important that we have a really good patient experience on the front end because, as Jennifer said, it has a positive outcome on the back end. If they start off in the very beginning with a scheduler, a registrar, or an ED check-in person and they are not greeted in a friendly, warm atmosphere, that typically sets the tone for their whole visit. That first interaction is really what they remember.

Gardner: Julie, this reflects back on what’s been happening in the consumer world around the user experience. It seems obvious.

So I’m curious about this notion of competition between healthcare providers. That might be something new as well. Why do healthcare provider organizations need to be thinking about this perception issue? Is it because people could pick up and choose to go somewhere else? How has competition changed the landscape when it comes to healthcare?

Competing for consumers’ care 

Gerdeman: Patients have choices. Sandra described that well. Patients, particularly in metropolitan or suburban areas, have lots of options for primary care, specialty care, and elective procedures. So healthcare providers are trying to respond to that.

In the last few years you have seen not just consumerism from the patient experience, but consumerism in terms of advertising, marketing, and positioning of healthcare services — like we have never seen before. That competition will continue and become even more fierce over time.

Providers should put the patient at the center of everything that they do. Just as Jennifer and Sandra talked about, putting the patient at the heart and then showing empathy from the very first interaction. The digital interaction needs to show empathy, too. And there are ways to do that with technology and, of course, the human interaction when you are in the facility.

Patients don’t want to be patients most of the time. They want to be humans and live their lives. So, the technology supporting all of that becomes really crucial. It has to become part of that experience. It has to arm the patient access team and put the data and information at their fingertips so they can look away from a computer or a kiosk and interact with that patient on a different level. It should arm them to have better, empathic interactions and build trust with the patient, with the consumer.

Gardner: I have seen that building competition where I live in New Hampshire. We have had two different nationally branded critical-care clinics open up — pop-up like mushrooms in the spring rain — in our neighborhood.

Let’s talk about the experience not just for the patient but for that person who is in the position of managing the patient access. The technology has extended data across the partner organization. But still technology is often not integrated in the back end for the poor people who are jumping between four and five different applications — often multiple systems — to on-board patients.

What’s the challenge from the technology for the health provider organization, Jennifer?

One system, one entry point, it’s Epic

Farmer: That used to be our issue until we gained the Epic system in 2016. People going into multiple applications was part of the issue with having a positive patient experience. Every entry point that someone would go to, they would need to repeat their name and date of birth. It looked one way in one system and it looked another way in a different system. That went away with Epic.

Epic is one system, the registration or the patient access side. It is also the coding side, it’s billing, it’s medical records, it’s clinical care, medications, it’s everything.

So for us here at Mass. Eye and Ear, no matter where you go within the organization, and as Sandra mentioned earlier, we are part of the same Partners HealthCare system. You can actually go to any Partners facility and that person who accesses your account can see everything. From a patient access standpoint, they can see your address and phone number, your insurance information, and who you have as an emergency contact.

There isn’t that anger that patients had been feeling before, because now they are literally giving their name and date of birth only as a verification point. It does make it a lot easier for our patients to come through the door, go to different departments for testing, for their appointment, for whatever reason that they are here, and not have to show their insurance card 10 times.

If they get a bill in the mail and they are calling our billing department, they can see the notes that our financial coordinators, our patient access folks, put on the account when they were here two or three months ago and help explain why they might have gotten a bill. That’s also a verification point, because we document everything.

So, a financial coordinator can tell a patient they will get a bill for a co-pay or for co-insurance and then they get that bill, they call our billing team, they say, “I was never told that,” but we have documentation that they were told. So, it’s really one-stop shopping for the folks who are working within Epic. For the patient, nine times out of 10 they just can go from floor to floor, doctor to doctor, and they don’t have to show ID again, because everything is already stored in Epic.

Beach: I agree because we are on Epic as well. Prior to that, three years ago, it would be nothing for my registrars to have six, seven systems up at the same time and have to toggle back and forth. You run a risk by doing that, because you have so many systems up and you might have different patients in the system, so that was a real concern.

If a patient came in and didn’t have an order from the provider, we would have to call their office. The patient would have to wait. We might call two or three times.

Now, we have one system. If the patient doesn’t have the order, it’s in the computer system. We just have to bring it up, validate it, patient gets checked in, patient has their exam, and there is no wait. It’s been a huge win for us for sure — and for our patients.

Gardner: Privacy and compliance regulations play a more important role in the healthcare industry than perhaps anywhere else. We have to not only be mindful of the patient experience, but also address these very important technical issues around compliance and security. How are you able to both accomplish caring for the patient and addressing these hefty requirements?

It’s healthy to set limits on account access

Farmer: Within Epic, access is granted by your role. Staff may be working in admitting, or the ED, or anywhere within patient access, but they don’t have access to someone’s medication list or their orders. However, another role may have access.

Compliance is extremely important. Access is definitely something that is taken very seriously. We want to make sure that staff are accessing accounts appropriately and that there are guardrails built in place to prevent someone from accessing accounts if they should not be.

For instance, within the Partners HealthCare system, we do tend to get people of a certain status; we get politicians, celebrities, heads of state, public figures who go to various hospitals, even outside of Partners, to receive care. So we have locks on those particular accounts; for employees, those accounts are locked.

So if you try to access the account, you get a hard stop. You have to document why you are accessing the account, and then it is reviewed immediately. If it’s determined that your role has nothing to do with it, that you should not have been accessing this particular account, then the organization does take the necessary steps to investigate and either say yes, they had a reason to be in the account, or no, they did not, and the potential for termination is there.

But we do take privacy very seriously, within the system and outside of it. We make sure we are providing a safe space for people to provide us with their information. It is at the forefront, it drives us, and folks are definitely aware because it is part of their training.
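
The “hard stop” pattern Jennifer describes can be sketched simply: a restricted record requires a stated reason before it opens, and every such access lands in a review queue. The role names, flags, and audit sink below are illustrative assumptions, not Epic’s actual implementation.

```python
# Minimal sketch of break-the-glass access to flagged, high-profile records.
# Identifiers and the audit sink are invented for illustration only.
import datetime

RESTRICTED_ACCOUNTS = {"patient-0042"}   # flagged high-profile records
AUDIT_LOG = []                           # stand-in for an immediate-review queue

class AccessDenied(Exception):
    pass

def open_record(user, role, patient_id, reason=None):
    if patient_id in RESTRICTED_ACCOUNTS:
        if not reason:
            # Hard stop: the user must justify the access before proceeding.
            raise AccessDenied("Restricted record: a documented reason is required.")
        AUDIT_LOG.append({
            "user": user,
            "role": role,
            "patient": patient_id,
            "reason": reason,
            "at": datetime.datetime.utcnow().isoformat(),
        })
    return f"record {patient_id} opened for role {role}"

print(open_record("bsmith", "registration", "patient-0042",
                  reason="Scheduled pre-admission registration"))
print(AUDIT_LOG[0]["reason"])
```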

Beach: You said it perfectly, Jennifer. Because we do have a lot of people who are high profile and who come through our healthcare system, the security, I have to say, is extremely tight on records. And so it should be. If you are in a record and you shouldn’t be there, then there are consequences.

Gardner: Julie, in addition to security and privacy we have also had to deal with a significant increase in the complexity around finances and payments given how insurers and the payers work. Now there are more copays, more kinds of deductibles. There are so many different plans: platinum, gold, silver, bronze.

In order to keep the goal of a positive patient experience, how are we addressing this new level of complexity when it comes to the finances and payments? Do they go hand-in-hand, the patient experience, the access, and the economics?

A clean bill of health for payment

Gerdeman: They do, and they should, and they will continue to. There will remain complexity in healthcare. It will improve certainly over time, but with all of the changes we have seen complexity is a given. It will be there. So how to handle the complexity, with technology, with efficient process, and with the right people becomes more and more important.

There are ways to make the complex simple with the right technology. On the back end, behind that amazing patient experience — both the clinical experience and also the financial experience — we try to shield the patient. At HealthPay24 we are focused on the financial experience and taking all of the data that’s behind there and presenting it very simply to a patient.

That means one small screen on the phone, with different encounters and different back ends behind it, presenting everything very simply so our patients can meet their financial obligations. They are not concerned that the ED had a different electronic medical record (EMR) than the specialist. That’s really not the concern of the patient, nor should it be. The concern is how the providers can use technology on the back end to make it simple and change that experience.

We talked about loyalty, and that’s what drives loyalty. You are going to keep coming back to a great experience, with great care, and ease of use. So for me, that’s all crucial as we go forward with healthcare – the technology and the role it plays.

Gardner: And Jennifer and Sandra, how do you see the relationship between the proper on-boarding, access, and experience and this higher complexity around the economics and finance? Do you see more of the patient experience addressing the economics?

Farmer: We have done an overhaul of our system, where it concerns patients, for paying bills or for not having health insurance. Our financial coordinators are there to assist our patients, whether by phone, email, in person. There are lots of different programs we can introduce patients to.

We are certified counselors for the Commonwealth of Massachusetts. That means we are able to help the patient apply for health insurance through the Health Connector for Massachusetts as well as for the state Medicaid program called MassHealth. And so we are here to help those patients go through that process.

We also have an internal program that can assist patients with paying their bills. We talk to patients about different credit cards that are available for those who may qualify. And essentially, the bottom line may simply be getting somebody set up on a payment plan. So we take many factors into account, and we try to make it work as best as we can for the patient.

At the end of the day, it’s about that patient receiving care and making sure that they are feeling good about it. We definitely try to meet their needs and introduce them to different things. We are here to support them, and at the end of the day it’s again about their care. If they can’t pay anything right now, but they obviously need immediate medical services, then we assure them, let’s focus on your care. We can talk about the back end or we can talk about your bills at a different point.

We do provide them with different avenues, and we are pretty proud of that because I like to believe that we are successful with it and so it helps the patient overall.

Gerdeman: It really does come down to the fact that patients want to meet their obligations, but they need options to be able to do that. Those options, whether it’s a loan program, a payment plan, or applying for financial assistance, become really important, and technology can enable all of them.

For HealthPay24, we enable an eligibility check right in the platform so you don’t have to worry about others knowing. You can literally check for eligibility by clicking a button and entering a few fields to know if you should be talking to financial counseling at a provider.

You can apply for payment plans, if the providers opt for that. It will be proactively offered based on demographic data to a patient through the platform. You can also apply for loans, for revolving credit, through the platform. Much of what patients want and need financially is now available and enabled by technology.
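
A rough sketch of how such options might be surfaced proactively appears below; the thresholds and patient attributes are invented for illustration and are not HealthPay24’s actual rules.

```python
# Hedged sketch: a balance and a few simple attributes drive which financial
# options are presented. All thresholds and fields are illustrative only.
def payment_options(balance_usd, household_income_usd, insured):
    options = ["pay in full"]
    if balance_usd >= 200:
        options.append("monthly payment plan")
    if balance_usd >= 1000:
        options.append("patient loan / revolving credit")
    if not insured or household_income_usd < 40000:
        options.append("screen for financial assistance eligibility")
    return options

print(payment_options(balance_usd=1500, household_income_usd=32000, insured=False))
```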

Gardner: Sandra, such unification across the financial, economic, and care giving roles strikes me as something that’s fairly new.

Beach: Yes, absolutely it is. We have a program in our ED, for example, that we instituted a year ago. We offer an ED discharge service so when the patient is discharged, they stop at our desk and we offer these patients a wide variety of payment options. Or maybe they are homeless and they are going through a tough time. We can tell them where they can go to get a free meal or spend the night. There are a whole bunch of programs available.

That’s important because we will never turn a patient away. And when patients come through our ED, they need care. So when they leave, we want to be able to help them as much as we can by supporting them and giving them these options.

We have also made phone calls for our patients as well. If they need to get someplace just to spend the night, we will call and we will make that arrangement for those patients. So when they leave, they know they have a place to go. That’s really important because people go through hard times.

Gardner: Sandra, do you have any other examples of processes or approaches to people and technology that you have put in place recently? What have been some of the outcomes?

Check-in at home, spend less time waiting 

Beach: Well, the ED discharge service has made a huge impact. We saw probably 7,000-8,000 patients through that desk over the last year. We really have helped a lot of patients. But we are also there just to lend an ear. Maybe they have questions about what the doctor just said to them, but they really weren’t sure what he said. So it’s just made a huge impact for our patients here.

Gardner: Jennifer, same question, any processes you have put in place, examples of things that have worked and what are the metrics of success?

Farmer: We just rolled out e-check-in. So I don’t have any metrics on it just yet, but this is a process where the patient can go to their MyChart or their EHR and check in for an appointment prior to the day. They can also pay their copay. They can provide us with updates to their insurance information, address or phone number, so when they actually come to their appointment, they are not stopping at the desk to sign in or do check in.

That seems to be a popular option for the office that is currently piloting this, and we are hoping for a big success. It will be rolled out to other entities, but right now that is something we are working on. It ties together the technology, the patient care, and patient access. It ties the ease of check-in to that patient. And so again, we are hoping for some really positive metrics on that.

Gardner: What sort of timeframe are we talking about here in terms of start to finish from getting that patient into their care?

Farmer: If they are walking in the door having already done e-check-in, they go immediately in for their appointment, because they are showing up on time and they are expected. The time the patient spends waiting in line or sitting in the waiting area is reduced, and the time they have to spend talking to someone about any changes, or confirming everything we have on their account, is reduced as well.

And then we are hoping to test this in a pilot program for the next month to six weeks to see what kind of data we can get. Hopefully that will, across the board, help with the check-in process for patients and reduce that time for the folks who are at the desk so they can focus on other tasks as well. So we are giving them back their time.
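
For a concrete picture of the e-check-in flow described above, here is a minimal sketch of what a pre-visit submission might carry and how it could be triaged on arrival. The field names are hypothetical and are not MyChart’s or Epic’s actual schema.

```python
# Hedged sketch of an e-check-in submission: confirm demographics, update
# insurance, pay the copay before the visit. Fields are illustrative only.
from dataclasses import dataclass

@dataclass
class ECheckIn:
    appointment_id: str
    demographics_confirmed: bool
    insurance_member_id: str
    copay_paid_usd: float

def process_echeckin(submission: ECheckIn) -> str:
    """Decide whether the front-desk stop can be skipped on arrival."""
    if not submission.demographics_confirmed:
        return "needs front-desk review"
    if submission.copay_paid_usd <= 0:
        return "collect copay at visit"
    return "checked in: go straight to the appointment"

print(process_echeckin(ECheckIn("appt-123", True, "BCBS-998877", 25.0)))
```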

Gardner: Julie, this strikes me in the parlance of other industries as just-in-time healthcare, and it’s a good move. I know you deal with a national group of providers and payers. Any examples, Julie, that demonstrate and illustrate the positive direction we are going with patient access and why technology is an important part of that?

Just-in-time wellness

Gerdeman: I refer to Christopher Penn’s model of People, Process, and Technology here, Dana, because when people touch process, there is scale, and when process and technology intersect, there is automation. But most importantly, when people intersect with technology, there is innovation, and what we are seeing is not just incremental innovation — but huge leaps in innovation.

What Jen just described, that experience of just-in-time healthcare, is literally a huge leap, right? We have come to expect it when we reserve a table via OpenTable, or when we e-check-in for a hair appointment. I go back to that consumer experience, but that innovation is happening all across healthcare.

One of the things that we just launched, which we are really excited about, is predictive analytics tied to the payment platform. If you know and can anticipate the behaviors and the patterns of a demographic of patients, financially speaking, then it will help ease the patient experience in what they owe, how they pay, and what’s offered to them. It boosts the bottom line of providers, because they are going to get increased revenue collection.

So where predictive analytics is going in healthcare, and tying that to the patient experience and the financial systems, will become more and more important. And that leads to even more: there is so much emerging technology on the clinical side, and we will continue to see more emerging technology in the back-end systems and on the financial side as well.
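
Here is a hedged sketch of the kind of propensity-to-pay scoring being described, where simple features of past payment behavior drive which financial option is offered first. The features, weights, and cutoffs are illustrative only, not HealthPay24’s model.

```python
# Hedged sketch of propensity-to-pay scoring driving proactive outreach.
# Weights and cutoffs are invented for illustration only.
def propensity_to_pay(prior_on_time_ratio, balance_usd, has_payment_plan_history):
    score = 0.6 * prior_on_time_ratio
    score += 0.2 if has_payment_plan_history else 0.0
    score -= 0.2 * min(balance_usd / 5000.0, 1.0)   # larger balances lower the score
    return max(0.0, min(1.0, score))

def suggested_outreach(score):
    if score >= 0.6:
        return "simple e-statement with pay-in-full link"
    if score >= 0.3:
        return "proactively offer a payment plan"
    return "route to financial counseling / assistance screening"

s = propensity_to_pay(prior_on_time_ratio=0.4, balance_usd=2400,
                      has_payment_plan_history=True)
print(round(s, 2), "->", suggested_outreach(s))
```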

Gardner: Before we close out, perhaps a look to the future, and maybe even a wish list. Jennifer, if you had a wish list for how this will improve in the next few years, what’s missing, what’s yet to come, what would you like to see available with people, process, and technology?

Farmer: I go back to patient care. While we are in a very good spot right now, it can always improve. We need more providers, more technicians, and more patient access folks, and the capacity to take care of people, because the population is growing and, whether you know it or not, you are going to need a doctor at some point.

So I think we should continue on the path we are on: providing excellent customer service, listening to patients, and being empathetic. Also providing them with options, such as different appointment times, different finance options, and different providers. It can only get better.

Beach: I absolutely agree. We have a really good computer system, we have the EMRs, but I would have to agree with Jennifer as well that we really need more providers. We need more nurses to take care of our patients.

Gardner: So it comes down to human resources. How about those front-line people who are doing the patient access intercept? Should they have an elevated status, role, and elevated pay schedule?

Farmer: It’s really tough for the patient access people because on the front line — every minute of every day, eight to 10 hours a day — they are working on that front line, so sometimes that’s tough.

It’s really important that we keep training with them. We give them options of going to customer service classes, because their role has changed from basically checking in a patient to now making sure their insurance is correct. We have so many different insurance plans these days. Knowing each of those elevates that registrar to be almost an expert in the field, in order to help the patient, get them through the registration process, and, the bottom line, get the services reimbursed. So it’s really come a long way.

Gardner: Julie, on this future perspective, what do you think will be coming down the pike for provider organizations like Jennifer and Sandra’s in terms of technology and process efficiency? How will the technology become even more beneficial?

Gerdeman: It’s going to be a big balancing act. What I mean by that is we are now officially more of an older country than a younger country in terms of age. People are living longer, they need more care than ever before, and we need the systems to be able to support that. So, everything that was just described is critical to support our aging population.

But what I mean by the balancing act is we have a whole other generation entering into healthcare as patients, as providers, and as technologists. This new generation has a completely different expectation of what that experience should and will be. They might have an expectation that their wearable device should give all of that data to a provider. That they wouldn’t need to explain it, that it should all be there all day, not just that they walk in and have just-in-time, but all the health data is communicated ahead of time, before they are walking in and then having a meaningful conversation about what to do.

This new generation is going to shift us to wellness care, not just care when we are sick or injured. I think that’s all changing. We are starting to see the beginnings of that focus on wellness. Providers are going to have to juggle wearables and devices, and how they are used, with the aging population and traditional services, as well as the new. Technology is going to be a key, core part of that going forward.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HealthPay24.


How security designed with cloud migrations in mind improves an enterprise’s risk posture top to bottom

The next BriefingsDirect data security insights discussion explores how cloud deployment planners need to be ever-vigilant for all types of cybersecurity attack vectors. Stay with us as we examine how those moving to and adapting to cloud deployments can make their data and processes safer and easier to recover from security incidents.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about taking the right precautions for cloud and distributed data safety we welcome two experts in this field, Mark McIntyre, Senior Director of Cybersecurity Solutions Group at Microsoft, and Sudhir Mehta, Global Vice President of Product Management and Strategy at Unisys. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Mark, what’s changed in how data is being targeted for those using cloud models like Microsoft Azure? How is that different from two or three years ago?

McIntyre: First of all, the good news is that we see more and more organizations around the world, including the US government, but broadly more global, pursuing cloud adoption. I think that’s great. Organizations around the world recognize the business value and I think increasingly the security value.

The challenge I see is one of expectations. Who owns what, as you go to the cloud? And so we need to be crisper and clearer with our partners and customers as to who owns what responsibility in terms of monitoring and managing in a team environment as you transition from a traditional on-premises environment all the way up into a software-as-a-service (SaaS) environment.

Gardner: Sudhir, what’s changed from your perspective at Unisys as to what the cloud adoption era security requirements are?

Mehta: When organizations move data and workloads to the cloud, many of them underestimate the complexities of securing hybrid, on-premises, and cloud ecosystems. A lot of the failures, or what we would call security breaches or intrusions, you can attribute to inadequate security practices, policies, procedures, and misconfiguration errors.

As a result, cloud security breach reports have been on the rise. Container technology adds flexibility and speed-to-market, but it is also introducing a lot of vulnerability and complexity.

A lot of customers have legacy, on-premises security methodologies and technologies, which obviously they can no longer use or leverage in the new, dynamic, elastic nature of today’s cloud environments.

Gartner estimates that through 2022 at least 95 percent of cloud security failures will be the customers’ fault. So the net effect is cloud security exposure, the attack surface, is on the rise. The exposure is growing.
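
To make the misconfiguration point concrete, here is a minimal, hypothetical sketch of the kind of policy check a security team might run against its own cloud resource inventory. The inventory fields and the rules are illustrative assumptions, not any vendor's actual tooling or API.

    # Illustrative policy check over a cloud resource inventory.
    # The inventory format and the rules are hypothetical examples; a real
    # environment would pull this data from its cloud provider's APIs.
    inventory = [
        {"name": "billing-exports", "type": "storage",  "public_access": True,  "encrypted": False},
        {"name": "orders-db",       "type": "database", "public_access": False, "encrypted": True},
    ]

    RULES = [
        ("public_access", False, "resource is exposed to the public internet"),
        ("encrypted",     True,  "data at rest is not encrypted"),
    ]

    def audit(resources):
        findings = []
        for res in resources:
            for field, expected, message in RULES:
                if res.get(field) != expected:
                    findings.append(f"{res['name']}: {message}")
        return findings

    for finding in audit(inventory):
        print("MISCONFIGURATION:", finding)

Running this against the sample inventory flags the publicly exposed, unencrypted storage resource, which is exactly the class of customer-side error the Gartner figure refers to.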

Change in cloud worldwide 

Gardner: People, process, and technology all change as organizations move to the cloud. And so security best practices can fall through the cracks. What are you seeing, Mark, in how a comprehensive cloud security approach can be brought to this transition so that cloud retains its largely sterling reputation for security?

McIntyre: I completely agree with what my colleague from Unisys said. Not to crack a joke — this is a serious topic — but my colleagues and I meet a lot with both US government and commercial counterparts. And they ask us, “Microsoft, as a large cloud provider, what keeps you awake at night? What are you afraid of?”

It’s always a delicate conversation because we need to tactfully turn it around and say, “Well, you, the customer, you keep us awake at night. When you come into our cloud, we inherit your adversaries. We inherit your vulnerabilities and your configuration challenges.”

As our customers plan a cloud migration, it will invariably include a variety of resources being left on-premises, in a traditional IT infrastructure. We need to make sure that we help them understand the benefits already built into the cloud, whether they are seeking infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS), or SaaS. We need to be really clear with our customers — through our partners, in many cases – about the technologies that they need to make themselves more secure. We need to give them awareness into their posture so that it is built right into the fabric of the cloud service.

Gardner: Sudhir, it sounds as if organizations who haven’t been doing things quite as well as they should on-premises need to be even more mindful of improving on their security posture as they move to the cloud, so that they don’t take their vulnerabilities with them.

From Unisys’s perspective, how should organizations get their housecleaning in order before they move to the cloud?

Don’t bring unsafe baggage to the cloud 

Mehta: We always recommend that customers should absolutely first look at putting their house in order. Security hygiene is extremely important, whether you look at data protection, information protection, or your overall access exposure. That can be from employees working at home or from vendors and third parties — wherever they have access to a lot of your information and data.

First and foremost, make sure you have the appropriate framework established. Then compliance and policy management are extremely important when you move to the cloud and to virtual and containerized frameworks. Today, many companies do their application development in the cloud because it’s a lot more dynamic. We recommend that our customers make sure they have the appropriate policy management, assessments, and compliance checks in place for both on-premises and then for your journey to the cloud.

Learn More About Cyber Recovery With Unisys Stealth

The net of it is, if you are appropriately managed when you are on-premises, chances are as you move from hybrid to more of a cloud-native deployment and/or cloud-native services, you are more likely to get it right. If you don’t have it all in place when you are on-premises, you have an uphill battle in making sure you are secured in the cloud.

Gardner: Mark, are there any related issues around identity and authentication as organizations move from on-premises to outside of their firewall into cloud deployment? What should organizations be thinking about specifically around identity and authentication?

Avoid an identity crisis

McIntyre: This is a huge area of focus right now. Even within our own company, at Microsoft, we as employees operate in essentially an identity-driven security model. And so it’s proper that you call this out on this podcast.

The idea that you can monitor and filter all traffic, and that you are going to make meaningful conclusions from that in real time — while still running your business and pursuing your mission — is not the best use of your time and your resources. It’s much better to switch to a more modern, identity-based model where you can actually incorporate newer concepts.

Within Microsoft, we have a term called Modern Workplace. It’s a reflection of the fact that government organizations and enterprises around the world are having to anticipate and hopefully provide a collaborative work environment where people can work in a way that reflects their personal preferences around devices and working at home or on the road at a coffee shop or restaurant — or whatever. The concept of work has changed across the enterprise and is definitely forcing this opportunity to look at creating a more modern identity framework.

If you look at some of the initiatives in the US government right now, we hear the term Zero Trust. That includes Zero Trust networking and micro-segmentation. Initiatives like these recognize that we know people need to keep working and doing their jobs wherever they are. The idea is to accept the fact that people will always cause some level of risk to the organization.

We are curious, reasonably smart, well-intentioned people, and we make mistakes, just like anybody else. Let’s create an identity-driven model that allows the organization to get better insight and control over authentications, requests for resources, end-to-end, and throughout a lifecycle.

Gardner: Sudhir, Unisys has been working with a number of public-sector organizations on technologies that support a stronger posture around authentication and other technologies. Tell us about what you have found over the past few years and how that can be applied to these challenges of moving to a cloud like Microsoft Azure.

Mehta: Dana, going back in time, one of the requests we had from the US Department of Defense (DoD) on the networking side, was a concern around access to sensitive information and data. Unisys was requested by the DoD to develop a framework and implement a solution. They were looking at more of a micro-segmentation solution, very similar to what Mark just described.

So, fast forward, since then we have deployed and released a military-grade capability called Unisys Stealth®, wherein we are able to manage micro-segmentation, what we classify as key-based, encrypted micro-segmentation, that controls access to different hosts or endpoints based on the identity of the user. It permits only authorized users to communicate with approved endpoints and denies unauthorized communications, and so prevents the spread of east-west, lateral attacks.

Gardner: Mark, for those in our audience who aren’t that technology savvy, what does micro-segmentation mean? Why has it become an important foundational capability for security across a cloud-use environment?

Need-to-know access 

McIntyre: First of all, I want to call out Unisys’s great work here and their leadership over the last several years. Micro-segmentation means a Zero Trust environment can essentially gauge or control east-west, lateral behavior or activity in a distributed environment.

For example, in a traditional IT environment, devices are not really well-managed when they are centralized, corporate-issued devices. You can’t take them out of the facility, of course. You don’t authenticate once you are on a network because you are already in a physical campus environment. But it’s different in a modern, collaborative environment. Enterprises are generally ahead on this change, but it’s now coming into government requirements, too.

And so now, you essentially can parse out your subjects and your objects, your subjects trying to access objects. You can split them out and say, “We are going to create all user accounts with a certain set of parameters.” It amounts to a privileged, need-to-know model. You can enforce strong controls with a certain set of least-privilege rights. And, of course, in an ideal world, you could go a step further and start implementing biometrics [to authenticate] to get off of password dependencies.

Learn How Unisys Stealth Security Simplifies Zero Trust Networks

But number one, you want to verify the identity. Is this a person? Is this the subject who we think they are? Are they that subject based on a corroborating variety of different attributes, behaviors, and activities? Things like that. And then you can also apply the same controls to a device and say, “Okay, this user is using a certain device. Is this device healthy? Is it built to today’s image? Is it patched, clean, and approved to be used in this environment? And if so, to what level?”

And then you can even go a step further and say, “In this model, now that we can verify the access, should this person be able to use our resources through the public Internet and access certain corporate resources? Should we allow an unmanaged device to have a level of access to confidential documents within the company? Maybe that should only be on a managed device.”

So you can create these flexible authentication scenarios based on what you know about the subjects at hand, about the objects, and about the files that they want to access. It’s a much more flexible, modern way to interact.
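
As a rough illustration of the kind of identity- and device-aware decision being described, here is a minimal sketch in Python. The attribute names, rules, and access levels are assumptions made for illustration; this is not Microsoft’s or Unisys’s actual policy engine.

    # Illustrative Zero Trust-style access decision: combine identity
    # verification, device health, and resource sensitivity into an outcome.
    from dataclasses import dataclass

    @dataclass
    class AccessRequest:
        user_verified: bool        # e.g., MFA or biometric check passed
        device_managed: bool       # corporate-managed, patched, approved image
        resource_sensitivity: str  # "public", "internal", or "confidential"

    def decide(req: AccessRequest) -> str:
        if not req.user_verified:
            return "deny"
        if req.resource_sensitivity == "confidential" and not req.device_managed:
            return "deny"            # confidential documents only on managed devices
        if not req.device_managed:
            return "allow-limited"   # e.g., web-only access, no downloads
        return "allow"

    # An unmanaged device asking for confidential material is refused outright.
    print(decide(AccessRequest(user_verified=True, device_managed=False,
                               resource_sensitivity="confidential")))

The point is simply that each request is evaluated against what is known about the subject, the device, and the object at that moment, rather than being trusted because it originates inside a network perimeter.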

Within Azure cloud, Microsoft Azure Active Directory services offer those capabilities – they are just built into the service. So micro-segmentation might sound like a lot of work for your security or identity team, but it’s a great example of a cloud service that runs in the background to help you set up the right rules and then let the service work for you.

Gardner: Sudhir, just to be clear, the Unisys Stealth(cloud) Extended Data Center for Microsoft Azure is a service that you get from the cloud? Or is that something that you would implement on-premises? Are there different models for how you would implement and deploy this?

A stealthy, healthy cloud journey 

Mehta: We have been working with Microsoft over the years on Stealth, and we have a fantastic relationship with Microsoft. If you are a customer going through a cloud journey, we deploy what we call a hybrid Stealth deployment. In other words, we help customers do isolation with the help of communities of interest that we create, which are basically groupings of hosts, users, and resources based on like interests.

Then, when there is a request to communicate, you create the appropriate Stealth-encrypted tunnels. If you have a scenario where you are doing the appropriate communication between an on-premises host and a cloud-based host, you do that through a secure, encrypted tunnel.

We have also implemented what we call cloaking. With cloaking, if someone is not authorized to communicate with a certain host or a certain member of a community of interest, you basically do not give a response back. So cloaking is also part of the Stealth implementation.
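
For readers unfamiliar with the community-of-interest idea, the following Python sketch shows the basic logic: hosts are grouped into communities, communication is allowed only when both parties share a community, and unauthorized requests simply get no reply at all, which is the cloaking behavior just described. The data structures and function names are illustrative assumptions, not the Stealth product’s actual interfaces.

    # Illustrative community-of-interest check with cloaking semantics.
    COMMUNITIES = {
        "finance": {"laptop-ann", "erp-host", "reporting-db"},
        "devops":  {"laptop-raj", "build-server", "erp-host"},
    }

    def shares_community(src: str, dst: str) -> bool:
        return any(src in members and dst in members
                   for members in COMMUNITIES.values())

    def handle_request(src: str, dst: str):
        if shares_community(src, dst):
            return f"establish encrypted tunnel {src} -> {dst}"
        return None  # cloaking: no response of any kind to unauthorized peers

    print(handle_request("laptop-ann", "reporting-db"))  # tunnel is set up
    print(handle_request("laptop-raj", "reporting-db"))  # None: silently ignored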

And in working closely with Microsoft, we have further established an automated capability through a discovery API. So when Microsoft releases new Azure services, we are able to update the overall Stealth protocol and framework with the updated Azure services. For customers who have Azure workloads protected by Stealth, there is no disruption from a productivity standpoint. They can always securely leverage whatever applications they are running on Azure cloud.

The net of it is being able to establish the appropriate secure journey for customers, from on-premises to the cloud, the hybrid journey. For customers leveraging Azure cloud with different workloads, we maintain the appropriate level of secure communications just as they would have in an on-premises deployment.

Gardner: Mark, when does this become readily available? What’s the timeline on how these technologies come together to make a whole greater than the sum of the parts when it comes to hybrid security and authentication?

McIntyre: Microsoft is already offering Zero Trust, identity-based security capabilities through our services. We haven’t traditionally named them as such, although we definitely are working along that path right now.

Microsoft Chief Digital Officer and Executive Vice President Kurt DelBene is on the US Defense Innovation Board and is playing a leadership role in establishing essentially a DoD or US government priority on Zero Trust. In the next several months, we will be putting more clarity around how our partners and customers can better map capabilities that they already own against emerging priorities and requirements like these. So definitely look for that.

In fact, Ignite DC is February 6 and 7, in downtown Washington, DC, and Zero Trust is certainly on the agenda there, so there will be updates at that conference.

But generally speaking, any customer can take the underlying services that we are offering and implement this now. What’s even better, we have companies that are already out there doing this. And we rely greatly on our partners like Unisys to go out and really have those deep architecture conversations with their stakeholders.

Gardner: Sudhir, when people use the combined solution of Microsoft Azure and Stealth for cloud, how can they react to attacks that may get through to prevent damage from spreading?

Contain contagion quickly 

Mehta: Good question! Internally within Unisys’s own IT organization, we have already moved on this cloud journey. Stealth is already securing our Azure cloud deployments and we are 95 percent deployed on Azure in terms of internal Unisys applications. So we like to eat our own dog food.

If there is a situation where there is an incident of compromise, we have a capability called dynamic isolation, where if you are looking at a managed security operations center (SOC) situation, we have empowered the SOC to contain a risk very quickly.

We are able to isolate a user and their device within 10 seconds. If you have a situation where someone turns nefarious, intentionally or unintentionally, we are able to isolate the user and then implement different thresholds of isolation. If a high threshold level is breached, say 8 out of 10, we completely isolate that user.

Learn More About Cyber Recovery With Unisys Stealth

If the threshold level is 5 or 6, we may still give the user certain levels of access. So within a certain group they would still be able to communicate.

Dynamic isolation isolates a user and their device at different threshold levels while a managed SOC goes through its cycles of identifying what really happened, as part of what we would call an advanced response. Unisys is the only solution that can actually isolate a user or the device within a span of seconds. We can do it now within 10 seconds.
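
As a rough sketch of that threshold-driven behavior, consider the following; the risk scores, cutoffs, and actions are illustrative assumptions only, not the actual Stealth implementation.

    # Illustrative threshold-based dynamic isolation. A risk score from 0 to 10
    # determines how much of a user's communication gets cut off.
    def isolation_action(risk_score: int) -> str:
        if risk_score >= 8:
            return "full-isolation"     # cut all communication for the user and device
        if risk_score >= 5:
            return "partial-isolation"  # allow traffic only within the user's own community
        return "monitor"                # no isolation yet; keep watching

    for score in (9, 6, 2):
        print(score, "->", isolation_action(score))

The SOC can then investigate while the isolation level holds, loosening or tightening it as the incident is understood.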

McIntyre: Getting back to your question about Microsoft’s plans, I’m very happy to share how we’ve managed Zero Trust. Essentially it relies on Intune for device management and Azure Active Directory for identity. It’s the way that we right now internally manage our own employees.

My access to corporate resources can come via my personal device and work-issued device. I’m very happy with what Unisys already has available and what we have out there. It’s a really strong reference architecture that’s already generally available.

Gardner: Our discussion began with security for the US DoD, among the largest enterprises you could conceive of. But I’m wondering if this is something that goes down market as well, to small- to medium-sized businesses (SMBs) that are using Azure and/or are moving from an on-premises model.

Do Zero Trust and your services apply to the mom and pop shops, SMBs, and the largest enterprises?

All sizes of businesses

McIntyre: Yes, this is something that is ideally suited to an SMB because they likely do not have large logistical or infrastructure dependencies. They are probably more flexible in how they can implement solutions. It’s a great way to go into the cloud and a great way for them to save money upfront over traditional IT infrastructure. So SMBs should have a really good chance to natively take an idea like this and implement it.

Gardner: Sudhir, anything to offer on that in terms of the technology and how it’s applicable both up and down market?

Mehta: Mark is spot on. Unisys Stealth resonates really well for SMBs and the enterprise. SMBs benefit, as Mark mentioned, in their capability to move quickly. And with Stealth, we have an innovative capability that can discover and visualize your users. Thereafter, you can very quickly and automatically virtualize any network into the communities of interest I mentioned earlier. SMBs can get going within a day or two.

If you’re a large enterprise, you can define your journey — whether it’s from on-premises to cloud — depending on what you’re actually trying to migrate or run in the cloud. So I would say absolutely both. And it would also depend on what you’re really looking at managing and deploying, but the opportunities are there for both SMBs and enterprises.

Gardner: As companies large and small are evaluating this and trying to discern their interest, let’s look at some of the benefits. As you pointed out, Sudhir, you’re eating your own dog food at Unisys. And Mark has described how this is also being used internally at Microsoft as well.

Do you have ways that you can look at before and after, measure quantitatively, qualitative, maybe anecdotally, why this has been beneficial? It’s always hard in security to prove something that didn’t happen and why it didn’t happen. But what do you get when you do Stealth well?

Proof is in the protection 

Mehta: There are a couple of things, Dana. One is that there is certainly a reduction in cost. When we deploy for 20,000 Unisys employees, our Chief Information Security Officer (CISO) obviously has to be a big supporter of Stealth. His read, from a cost perspective, is that we have seen significant reductions in costs.

Prior to having Stealth implemented, we had a certain approach as it relates to network segmentation. From a network equipment perspective, we’ve seen a reduction of over 70 percent. If you look at server infrastructure, there has been a reduction of more than 50 percent. Maintenance and labor costs have seen a reduction north of 60 percent, and ongoing support labor cost has also seen a significant reduction. So that’s one lens you could look at.

The other lens that has been interesting is the virtual private network (VPN) exposure. As many of us know, VPNs are perhaps the best breach route for hackers today. When we’ve implemented Stealth internally within Unisys, for a lot of our applications we have done away with the requirement for logging into a VPN application. That has made for easier access to a lot of applications – mainly for folks logging in from home or from a Starbucks. Now when they communicate, it is through an encrypted tunnel and it’s very secure. The VPN exposure completely goes away.

Those are the best two lenses I could give to the value proposition. Obviously there is cost reduction. And the other is the VPN exposure goes away, at least for Unisys that’s what we’ve found with implementing internally.

Gardner: For those using VPNs, should they move to something like Stealth? Does the way in which VPNs add value change when you bring something like Stealth in? How much do you reevaluate your use of VPNs in general?

Mehta: I would be remiss to say you can completely do away with VPNs. If you go back in time and see why VPNs were created, the overall framework was created for secure access for certain applications. Since then, for whatever reasons, VPNs became the only way people communicate from working at home, for example. So the way we look at this is, for applications that are not extremely limited to a few people, you should look at options wherein you don’t necessarily need a VPN. You could therefore look at a solution like Unisys Stealth.

And then if there are certain applications that are extremely sensitive, limited to only a few folks for whatever reason, that’s where potentially you could consider using an application like a VPN.
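
As a rough way to summarize that guidance, here is a small decision sketch; the categories and cutoffs are illustrative assumptions, not a Unisys or Microsoft recommendation engine.

    # Illustrative rule of thumb for choosing an access path per application.
    def access_path(user_count: int, highly_sensitive: bool) -> str:
        if highly_sensitive and user_count <= 10:
            return "vpn"                   # tightly restricted apps: keep the dedicated tunnel
        return "identity-based access"     # broadly used apps: micro-segmented, no VPN required

    print(access_path(user_count=5,    highly_sensitive=True))   # -> vpn
    print(access_path(user_count=2000, highly_sensitive=False))  # -> identity-based access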

Gardner: Let’s look to the future. When you put these Zero Trust services into practice, into a hybrid cloud, then ultimately a fully cloud-native environment, what’s the next shoe to fall? Are there some things you gain when you enter into this level of micro-segmentation, by exploiting these newer technologies?

Can this value be extended to the edge, for example? Does it have a role in Internet of things (IoT)? A role in data transfers from organization to organization? What does this put us in a position to do in the future that we couldn’t have done previously?

Machining the future securely 

McIntyre: You hit on two really important points. Obviously devices, IoT devices, for example, and data. On data, increasingly you see T-shirts and slogans out there, “Data is the new oil,” and such. From a security point of view there is no question this is becoming the case, when there’s something like 44 to 45 zettabytes of data projected to be out there over the next few years.

You can employ traditional security monitoring practices, for example label-free detection, things like that. But it’s just not going to allow you to work quickly, especially in an environment where we’re already challenged with having enough security workforce. There are not enough people out there, it’s a global talent shortage.

It’s a fantastic opportunity forced on us to rely more on modern authentication frameworks and on machine learning (ML) and artificial intelligence (AI) technologies to take a lot of that lower-level analysis, the log analysis work, out of human hands and have machines free people up for the higher-level work.

For example, we have a really interesting situation within Microsoft, and it goes across the industry as well. We have many organizations going into the cloud, but of course, as we mentioned earlier, the roles and responsibilities are still unclear. We’re also seeing big gaps between the use of cloud resources and the use of the security tools built into those resources.

And so we’re really trying to make sure that as we deliver new services to the marketplace, for example, IoT, those are built in a way that you can configure and monitor them like any other device in the company. With Azure, for example, we have IoT Hub. We can literally, as you build an IoT device, make sure that it is being monitored in the same way as your traditional infrastructure.

There should not be a gap there. You can still apply the same types of logical access controls around them. There shouldn’t be any tradeoffs on security for how you do security — whether it’s IT or IoT.

Gardner: Sudhir, same question, what does the use of Stealth in conjunction with cloud activities get you in the future?

Mehta: Tagging on to what Mark said, AI and ML are becoming interesting. We obviously have a very big digital workplace solutions organization; we are a market leader for services, including helpdesk services. We are looking at introducing a lot of what you would call AIOps and automation, as it leads to robotic process automation (RPA) and voice assistance.

So one of the things we are observing is that, as you go down this AI-ML path, there is a larger exposure, because you are focusing more on operationalizing automation and AI-ML, and there are certain areas you may not be able to manage, for instance, the way you get the training done for your bots.

So that’s where Stealth is a capability we are implementing right now with digital workplace solutions, as part of the journey to AIOps automation, as an example. The other area we are working on very closely with some of our other partners, as well as Microsoft, is application security and hardening in the cloud.

How do you make sure that when you deploy certain applications in the cloud they are secure and not being breached, and that there are no intrusions when you make changes to your applications?

Those are two areas we are currently working on, the AIOps and MLOps automation and then the application security and hardening in the cloud, working with Microsoft as well.

Gardner: If I want to be as secure as I can, and I know that I’m going to be doing more in the cloud, what should I be doing now in order to make myself in the best position to take advantage of things like micro-segmentation and the technologies behind Stealth and how they apply to a cloud like Azure? How should I get myself ready to take advantage of these things?

Plan ahead to secure success 

McIntyre: The first thing is to remember that how you plan and roll out your security estate should be no different from your larger IT planning anyway; it’s all digital transformation. So first close the gap between the security teams and everyone else. All the teams — business and IT — should be working together.

Learn How Unisys Stealth Security Simplifies Zero Trust Networks

We want to make sure that our customers go to the cloud in a secure way, without losing the ability to access their data. We continue to put more effort into very proactive services — architecture guidance, recommendations, things that can help people get started in the cloud. One example is Azure Blueprints, configuration guidance and predefined templates that can help an organization launch a resource in the cloud that’s already compliant with FedRAMP, NIST, ISO, or HIPAA standards.

We’ll continue to invest in the technologies that help customers securely deploy cloud resources from the get-go, so that we close those configuration gaps and close the gaps in reporting and telemetry as well. And we can’t do it without great partners that provide those customized solutions for each sector.

Gardner: Sudhir, last word to you. What’s your advice for people to prepare themselves to be ready to take advantage of things like Stealth?

Mehta: Look at a couple of things. One, focus on trusted identity in terms of who you work with and who you give access to. Even within your organization you obviously need to make sure you establish that trusted identity, and the way you do it is by keeping it simple. Second, look at an overlay, network-agnostic framework, which is where Stealth can help you. Make sure identity is unique: one individual has one identity. Third, make sure it is irrefutable, so it’s undeniable in terms of how you implement it. And fourth, make sure it’s got the highest level of efficacy, both in how you deploy and in how you architect your solution.

So, the net of it is, a) trust no one, b) assume a breach can occur, and then c) respond really fast to limit damage. If you do these three things, you can get to Zero Trust for your organization.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsors: Unisys and Microsoft.


SambaSafety’s mission to reduce risk begins in its own datacenter security partnerships

Security and privacy protection increasingly go hand in hand, especially in sensitive industries like finance and public safety.

For driver risk management software provider SambaSafety, protecting business customers from risk is core to the mission — and that begins with protecting its own IT assets and workers.

Stay with us now as BriefingsDirect explores how SambaSafety adopted Bitdefender GravityZone Advanced Business Security and Full Disk Encryption to improve the end-to-end security of their operations and business processes.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To share their story, please welcome Randy Whitten, Director of IT and Operations at SambaSafety in Albuquerque, New Mexico. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Randy, tell us about SambaSafety, how big it is, and your unique business approach.

Whitten: SambaSafety currently employs approximately 280 employees across the United States. We have four locations. Corporate headquarters is in Denver, Colorado. Albuquerque, New Mexico is another one of our locations. There’s Rancho Cordova just outside of Sacramento, California, and Portland, Oregon is where our transportation division is.

We also have a handful of remote workers from coast to coast and from border to border.

Gardner: And you are all about making communities safer. Tell us how you do that.

Whitten: We work with departments of motor vehicles (DMVs) across the United States, monitoring the drivers for companies. We have partnerships with state governments, and third-party information is provided to allow us to process reporting on critical driver information.

We seek to transform that data into action to protect the businesses and our customers from driver and mobility risk. We work to incorporate top-of-the-line security software to ensure that all of our data is protected while we are doing that.

Data-driven driver safety 

Gardner: So, it’s all about getting access to data, recognizing where risks might emerge with certain drivers, and then alerting those people who are looking to hire those drivers to make sure that the right drivers are in the right positions. Is that correct?

Whitten: That is correct. Since 1998, SambaSafety has been the pioneer and leading provider of driver risk management software in North America. SambaSafety has led the charge to protect businesses and improve driver safety, ultimately making communities safer on the road.

Our mission is to guide our customers, including employers, fleet managers, and insurance providers, to make the right decisions at the right time by collecting, correlating, and analyzing motor vehicle records (MVRs) and other data resources. We identify driver risk and enable our customers to modify their drivers’ behaviors, reduce accidents, ensure compliance, and lower costs, ultimately improving driver and community safety once again.

Gardner: Is this for a cross-section of different customers? You do this for public sector and private sector? Who are the people that need this information most?

Whitten: We do it across both sectors, public and private. We do it across transportation. We do it across drivers such as Lyft drivers, Uber drivers, and transportation drivers — our delivery carriers, FedEx, UPS, etc. — those types of customers.

These transportation drivers are delivering our commodities every day — the food we consume, the clothes we wear, the parts that fix our vehicles, all what’s essential to our everyday living.

Gardner: This is such an essential service, because so much of our economy is on four wheels, whether it’s a truck delivering goods and services, transportation directly for people, or public safety vehicles. A huge portion of our economy is behind the wheel, so I think this is a hugely important service you are providing.

Whitten: That’s a good point, Dana. Yes, it is very much. Transportation drivers are delivering our commodities every day — the food that we consume, the clothes that we wear, the parts that fix our vehicles, plus being able to get those Christmas packages via UPS or FedEx — the essential items for our everyday living.

Gardner: So, this is mission-critical on a macro scale. Now, you also are dealing, of course, with sensitive information. You have to protect the privacy. People are entitled to information that’s regulated, monitored, and provided accordingly. So you have to be across-the-board reducing risk, doing it the right way, and you also have to make your own systems protected because you have that sensitive information going back and forth. Security and privacy are probably among your topmost mission-critical requirements.

Securing the sectors everywhere

Whitten: That is correct. SambaSafety has SOC 2 Type II certification, and that is just the top layer of the security we use within our company, both for our endpoints and for our external customers.

Gardner: Randy, you described your organization as distributed. You have multiple offices, remote workers, and you are dealing with sensitive private and public sector information. Tell us what your top line thinking, your philosophy, about security is and then how you execute on that.

Whitten: Our top line essentially is to make sure that our endpoints are protected and that we are taking care of our employees internally, setting them up for success so they don’t have to worry about security. All of our laptops are encrypted. We have different levels of security within our organization, and that gives all of our employees peace of mind so they can concentrate on taking care of our end customer.

Gardner: That’s right, security isn’t just a matter of being very aggressive, it also means employee experience. You have to give your people the opportunity to get their work done without hindrance — and the performance of their machine, of course, is a big part of that.

Tell us about the pain points, what were the problems you were having in the past that led you into a new provider when it comes to security software?

Whitten: Some of the things that we have had to deal with within the IT department here at SambaSafety, when we saw our tickets come in, were typically about memory usage, as applications were locking up the computers and it took a lot of resources just to launch the application.

We also were seeing threats getting through the previous antivirus solution, and then just the cost, the cost of that solution was increasing month over month. Every time we would add a new license it would seem like the price point would jump.

Gardner: I imagine you weren’t seeing them as a partner as much as a hindrance.

Whitten: Yes, that is correct. It started to seem like it was a monthly call, then it turned into a weekly call to their support center just to be able to see if we could get additional support and help from them. So that brought up, “Okay, what do we do next and what is our next solution going to look like?”

Gardner: Tell me about that process. What did you look at, and how did you make your choices?

Whitten: We did an overall scoping session and brought in three different antivirus solution providers. Any of them could have measured up to be the next vendor we worked with, but Bitdefender came out on top. It was a solution we could put into our cloud-hosted environment, it was something we could run on our endpoints, and it ensured that all of our employees are protected.

Gardner: So you are using GravityZone Advanced Business Security, Full Disk Encryption, and the Cloud Management Console, all from Bitdefender, is that correct?

Whitten: That is correct. The previous disk encryption solution is just about phased out. Currently about 90 percent of our endpoints are on Bitdefender Full Disk Encryption, and we have had zero issues with it.

Gardner: I have to imagine you are not just protecting your endpoints, but you have servers and networks, and other infrastructure to protect. What does that consist of and how has that been going?

Whitten: That is correct. We have approximately 280 employees, which equals 280 laptops to be protected. We also have a fair amount of additional hardware that has to be protected; those endpoints have to be secured. And then about 30 percent additional hardware, the Macs within our organization, is also part of that Bitdefender protection.

Gardner: And everyone knows, of course, that management of operations is essential for making sure that nothing falls between the cracks — and that includes patch management, making sure that you see what’s going on with machines and getting alerts as to what might be your vulnerability.

So tell us about the management, the Cloud Console, particularly as you are trying to do this across a hybrid environment with multiple sites?

See what’s secure to ensure success 

Whitten: It’s been vital to the success of Bitdefender that we have their console, where we can log on and see what’s happening. That has been very key to the success. I can’t say that enough.

And it goes as far as information gathering, dashboard, data analytics, network scanning, and the vulnerability management – just being able to ensure our assets are protected has been key.

Also, we can watch the alerting that happens, backed by machine intelligence and machine learning (ML), to ensure that behavior is not changing, so that our systems do not get infected in any way.

Gardner: And the more administration and automation you get, the more you are able to devote your IT operations people to other assets, other functions. Have you been able to recognize, not only an improvement in security, but perhaps an easing up on the man hours and labor requirements?

Whitten: Sure. Within the first 60 days of our implementation we were able to show a return on investment (ROI). We were able to free up additional team resources to focus on other tickets and other items that came into our department’s work scope.

Bitdefender was already out there, and it was managing itself, it was doing what we were paying for it to do — and it was actually a really good choice for us. The partnership with them is very solid, we are very pleased with it, it is a win-win situation for both of our companies.

Gardner: Randy, I have had people ask me, “Why do I need Full Disk Encryption? What does that provide for me? I am having a hard time deciding whether it’s the right thing for our organization.”

What were your requirements for widespread encryption and why do you think that’s a good idea for other organizations?

Whitten: The most common reason to have Full Disk Encryption is the scenario where you are at the store, someone breaks into your car, they steal your laptop bag or they see your computer lying out and take it. As the Director of IT and Operations for SambaSafety, my goal is to ensure that our assets are protected. So having Full Disk Encryption on that laptop gives me a chance to sleep a little easier at night.

Gardner: You are not worried about that data leaving the organization because you know it’s got that encryption wrapper.

Whitten: That is correct. It’s protected all the way around.

Gardner: As we start to close out, let’s look to the future. What’s most important for you going forward? What would you like to see improve in terms of security, intelligence and being able to monitor your privacy and your security requirements?

Scope out security needs

Whitten: The big trend right now is to ensure that we are staying up to date, and that Bitdefender is staying up to date, on the latest intrusions, so that our software stays current and we are pushing that out to our machines.

Also just continue to be right on top of the security game. We have enjoyed our partnership with Bitdefender to date and we can’t complain, and for sure it has been a win-win situation all the way around.

Gardner: Any advice for folks that are out there, IT operators like yourself that are grappling with increased requirements? More people are seeing compliance issues, audit issues, paperwork and bureaucracy. Any advice for them in terms of getting the best of all worlds, which is better security and better operations oversight management?

Whitten: Definitely have a good scope of what you are looking for, for your organization. Every organization is different. What tends to happen is that you go in looking for a solution and you don’t have all of the details that would meet the needs of your organization.

Secondly, get the buy-in from your leadership team. Pitch the case to ensure that you are doing the right thing, that you are bringing the right vendor to the table, so that once that solution is implemented, then they can rest easy as well.

For every company executive across the world right now who has any responsibility for data, security is definitely at the top of their mind. Security is at the top of my mind every single day: protecting our customers, protecting our employees, making sure that our data stays protected and secured so that the bad guys can’t have it.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Bitdefender.


Why flexible work and the right technology may just close the talent gap

Companies struggle to find qualified workers in the mature phase of any business cycle. Yet as we enter a new decade in 2020, they have more than a hyper-low unemployment rate to grapple with.

Businesses face a gaping qualitative chasm between the jobs they need to fill and the interest of workers in filling them. As a result, employees have more leverage than ever to insist that jobs cater to their lives, locations, and demands to be creatively challenged.

Accordingly, IDC predicts that by 2021, 60 percent of Global 2000 companies will have adopted a future workspace model — flexible, intelligent, collaborative, virtual, and physical work environments.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Stay with us now as BriefingsDirect explores how businesses must adapt to this new talent landscape and find the innovative means to bring future work and workers together. Our flexible work solutions panel consists of Stephane Kasriel, the former Chief Executive Officer and a member of the board at Upwork, and Tim Minahan, Executive Vice President of Strategy and Chief Marketing Officer at Citrix. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: If flexible work is the next big thing, that means we have been working for the past decade or two in an inflexible manner. What’s wrong with the cubicle-laced office building and the big parking lot next to the freeway model?

Minahan: Dana, the problem dates back a little further. We fundamentally haven’t changed the world of work since Henry Ford. That was the model where we built big work hubs, big office buildings, call centers, manufacturing facilities — and then did our best to hire as much talent around that.

This model just isn’t working anymore against the backdrop of a global talent shortage, which is fast approaching more than 85 million medium- to high-skilled workers. We are in dire need of more modern skill sets that aren’t always located near the work hubs. And to your earlier point, employees are now in the driver’s seat. They want to work in an environment that gives them flexible work and allows them to do their very best work wherever and whenever they want to get it done.

Gardner: Stephane, when it comes to flexible work, are remote work and freelance work the same? How wide is this spectrum of options when it comes to flexible work?

Kasriel: Almost by definition, most freelance work is done remotely. At this stage, freelancing is growing faster than traditional work, about three times faster, in fact. About 35 percent of US workers are doing some amount of freelancing. And the vast majority of it is skilled work, which is typically done remotely.

Increasingly what we see is that freelancers become full-time freelancers; meaning it’s their primary source of income. Usually, as a result of that, they tend to move. And when they move it is out of big cities like San Francisco and New York. They tend to move to smaller cities where the cost of living is more affordable. And so that’s true for the freelance workforce, if you will, and that’s pulling the rest of the workforce with it.

What we see increasingly is that companies are struggling to find talent in the top cities where the jobs have been created. Because they already use freelancers anyway, they are also allowing their full-time employees to relocate to other parts of the country, as well as to hire people away from their headquarters, people who essentially work from home as full-time employees, remotely.

Gardner: Tim, it sounds like Upwork and its focus on freelance might be a harbinger of what’s required to be a full-fledged, flexible work support organization. How do you view freelancing? Is this the tip of the arrow for where we are headed?

Minahan: Against the backdrop of a global talent shortage and outdated hub-and-spoke work models, the more innovative companies — the ones securing the best talent — go to where the talent is, whether using contingent or full-time workers.

They are also shifting from the idea of having a full-time employee staff to having pools of talent. These are groups that have the skills and capabilities to address a specific business challenge. They will staff up on a given project.

Read the Report: The Potential Economic Impacts of a Flexible Working Culture

So, work is becoming much more dynamic. The leading organizations are tapping into that expertise and talent on an as-needed basis, providing them an environment to collaborate around that project, and then dissolving those teams or moving that talent on to other projects once the mission is accomplished.

Gardner: So, it’s about agility and innovation, being able to adapt to whatever happens. That sounds a lot like what digital business transformation is about. Do you see flexible work as supporting the whole digital transformation drive, too?

Minahan: Yes, I certainly do. In fact, what’s interesting is the first move to digital transformation was a shift to transforming customer experience, of creating new ways and new digital channels to engage with customers. It meant looking at existing product lines and digitizing them.

And along the way, companies realized two things. Number one, they needed different skills than they had internally. So the idea of the contingent worker or freelance worker who has that specific expertise becomes increasingly vital.

They also realized they had been asking employees to drive this digital transformation while anchoring them to archaic or legacy technology and a lot of bureaucracy that often comes with traditional work models.

And so there is now an increased focus at the executive C-suite level on driving employee experience and giving employees the right tools, the right work environment, and the flexible work models they need to ensure that they not only secure the best talent, but they can arm them to do their very best work.

Gardner: Stephane, for the freelance workforce, how have they been at adapting to the technologies required to do what corporations need for digital transformation? How does the technology factor into how a freelancer works and how a company can best take advantage of them?

Kasriel: Fundamentally, a talent strategy is a critical part of digital transformation. If you think about digital transformation, it is the what, and the talent strategy is the how. And increasingly, as Tim was saying, as businesses need to move faster, they realize that they don’t have all the skills internally that they need to do digital transformation.

They have to tap into a pool of workers outside of the corporation. And doing this in the traditional way, using staffing firms or trying to find local people that can come in part-time, is extremely inefficient, incredibly slow, and incompatible with the level of agility that companies need to have.

So just as there was a digital transformation of the business firm, there is now also a digital transformation of the talent strategy for the firm. Essentially work is moving from an offline model to an online model. The technology helps with security, collaboration, and matching supply and demand for labor online in real-time, particularly for niche skills in short-duration projects.

Increasingly companies are reassembling themselves away from the traditional Taylorism model of silos, org charts, and people doing the same work every single day. They are changing to much more self-assembled, cross-functional, agile, and team-based work. In that environment, the teams are empowered to figure out what it is that they need to do and what type of talent they need in order to achieve it. That’s when they pull in freelancers through platforms such as Upwork to add skills they don’t have internally — because nobody has those internally.

And on the freelancer side, freelancers are entrepreneurs. They are usually very good at understanding what skills are in demand and acquiring those skills. They tend to train themselves much more frequently than traditional full-time employees because there is a very logical return on investment (ROI) for them to do so.

If I learn the latest Java framework in a few weeks, for example, I can then bill at a much higher rate than I otherwise could if I didn’t have those skills.

Gardner: Stephane, how does Upwork help solve this problem? What is your value-add?

Upwork secures hiring, builds trust 

Kasriel: We essentially provide three pieces of value-add. One is a very large database of freelancers on one side and a very large database of clients and jobs on the other side. With that scale comes the ability to have high liquidity. The median time to fill a job on Upwork right now is less than 24 hours, compared to multiple weeks in the offline world. That’s one big piece of it.

The second is around an end-to-end workflow and processes to make it easy for large companies to engage with independent contractors, freelancers, and consultants. Companies want to make sure that these workers don’t get misclassified, that they only have access to IT systems they are supposed to, that they have signed the right level of agreements with the company, and that they have been background checked or whatever other processes that the company needs.

Read the Report: The Potential Economic Impacts of a Flexible Working Culture

The third big piece is around trust and safety. Fundamentally, freelancers want to know that they are going to be working with reputable clients and that they are going to get paid. Conversely, companies are engaging with freelancers for things that might be highly strategic, have intellectual property associated with them, and so they want to make sure that the work is going to be done properly and that the freelancer is not going to be selling information from the company, as an example.

So, the three pieces around matching, collaboration and security software, and trust and safety are the things that large companies are using Upwork for to meet the needs of their hiring managers.

Fundamentally, we want to be invisible. We want the platform to look simple so that people can get things done with freelancers — and not have to think about all of the complexities of complying with the various rules that large companies have for engaging with people in general, and with independent contractors in particular.

Mind the gap in talent, skills 

Gardner: Tim, a new study has been conducted by the Center for Business and Economic Research on these subjects. What are some of the findings?

Minahan: At Citrix, we are committed to helping companies drive higher levels of employee experience, using technology to create environments that allow much more flexible work models and empower employees to get their very best work done. So we are always examining the direction of overall work models in the market, and we partnered on this study to better understand how to solve this massive talent crisis.

Consider that there is a gap of close to 90 million medium- to high-skilled workers around the globe, all of these unfilled jobs. There are a couple of ways to solve this. The best way is to expand the talent pool. So, as Stephane said, that can be through tapping into freelance marketplaces, such as Upwork, to find a curated path to the top talent, those who have the finest skills to help drive digital transformation.

But we can couple that with digital workspaces that allow flexible work models by giving the talent access to the tools and information they need to be productive and to collaborate. They can do that in a secure environment that leaves the company confident their information and systems remain secure.

The key findings of the study are that we have an untapped market. Some 69 percent of people who currently are unemployed or economically inactive indicate that they would start working if given more flexible work models and the technology to enable them to work remotely.

Think about the massive shifts in the demographics of the workplace. We talk about millennials coming into the workforce, and new work models, and all of that’s interesting and important. But we have a massive other group of workers at the other end of the spectrum — the baby boomers — who have massive amounts of talent and knowledge and who are beginning to retire.

What if we could re-employ them on their own terms? Maybe a few days a week or a few hours a day, to contribute some of their expertise that is much needed to fill some of the skills gaps that companies have?

We are in a unique position right now and have an incredible opportunity to embrace these new work models, these new freelance marketplaces, and the technology to solve the talent gap.

Kasriel: We run a study every year called Freelancing in America; we have been running it for six years now. One of the highlights of the study is that 46 percent,