Better IT security comes with less overhead for rural Virginia county government

The next public sector security management edition of BriefingsDirect explores how managing IT for a rural Virginia county government means doing more with less — even as the types and sophistication of cybersecurity threats grow.

For County of Caroline, a small team of IT administrators has built a technically advanced security posture that blends the right amounts of automation with flexible, cloud-based administration.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to share their story on improving security in a local government organization are Bryan Farmer, System Technician, and David Sadler, Director of Information Technology, both for County of Caroline in Bowling Green, Virginia. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Dave, tell us about County of Caroline and your security requirements. What makes security particularly challenging for a public sector organization like yours?

Sadler: As everyone knows, small governments in the State of Virginia — and all across the United States and around the world — are being targeted by a lot of bad guys. For that reason, we have the responsibility to safeguard the data of the citizens of this county — and also of the customers and other people that we interact with on a daily basis. It’s a paramount concern for us to maintain the security and integrity of that data so that we have the trust of the people we work with.

Gardner: Do you find that you are under attack more often than you used to be?

Sadler: The headlines of nearly any major newspaper you see, or news broadcasts that you watch, show what happens when the bad guys win and the local governments lose. Ransomware, for example, happens every day. We have seen a major increase in these attacks, or attempted attacks, over the past few years.

Gardner: Bryan, tell us a bit about your IT organization. How many do you have on the frontlines to help combat this increase in threats?

Farmer: You have the pleasure today of speaking with the entire IT staff in our little neck of the woods. It’s just the two of us. For the last several years it was a one-man operation, and they brought me on board a little over a year-and-a-half ago to lend a hand. As the county grew, and as the number of users and data grew, it just became too much for one person to handle.

Gardner: You are supporting how many people and devices with your organization?

Small-town support, high-tech security

Farmer: We are mainly a Microsoft Windows environment. We have somewhere in the neighborhood of 250 to 300 users. If you wrap up all of the devices, Internet of Things (IoT) stuff, printers, and things of that nature, it’s 3,000 to 4,000 devices in total.

Sadler: But the number of devices that actually touch our private network is in the neighborhood of around 750.

Farmer: We are a rural area, so we don’t have the luxury of having fiber between all of our locations and sites. So we have to resort to virtual private networks (VPNs) to get traffic back and forth. There are airFiber connections, and we are doing some things over the air. We are a mixed batch; there is a little bit of everything here.

Gardner: Just as any business, you have to put your best face forward to your citizens, voters, and taxpayers. They are coming for public services, going online for important information. How large is your county, and what sort of applications and services are you providing to your citizens?

Farmer: Our population is 30,000?

Sadler: Probably 28,000 to 30,000 people, yes.

Farmer: A large portion of our county is covered by a U.S. Army training base, so there is a lot of unpopulated area, so to speak. The population is condensed into a couple of small areas.

We host a web site and forum. It’s not as robust as what you would find in a big city or a major metropolitan area, but people can look up their taxes, permit prices, things of that nature; basic information that the average citizen will need such as utility information.

Gardner: With a potential of 30,000 end users — and just two folks to help protect all of the infrastructure, applications, and data — automation and easy-to-use management must be super important. Tell us where you were in your security posture before and how you have recently improved on that.

Finding a detection solution

Sadler: Initially when I started here, having come over from the private sector, we were running a product from one of the big-name companies, but it was basically not giving us the right level of protection, you could say.

So we switched to a second company, Kaspersky, and immediately we started finding detections of existing malware and different anomalies in the network that had gone undetected by Symantec for years. So we settled on Kaspersky. And anytime you go to an enterprise-level antivirus (AV) endpoint solution, the setup, adjustment, and on-boarding process takes longer than what a lot of people would lead you to believe.

It took us about six months with Kaspersky. I was by myself, so it took me that long to get everything set up and running like it should, and it performed extremely well. I had a lot of granularity as far as control of firewalls and that type of thing.

The granularity is what we like because we have users that have a broad range of needs. We have to be able to address all of those broad ranges under one umbrella.

Unfortunately, when the US Department of Homeland Security decided to at first recommend that you not use [Kaspersky] and then later banned that product from use, we were forced to look for a replacement solution, and we evaluated multiple different products.

Again, what we were looking for was granularity, because we wanted to be able to address the needs of everyone under the umbrella with one particular product. Many of the different AV endpoint solutions we evaluated lacked that granularity. They were, more or less, another version of the software that we started with: they didn’t give a real high level of protection or didn’t allow for adjustment.

When we started evaluating a replacement, we were finding things that we could not do with a particular product. We spent probably about six months evaluating different products — and then we landed on Bitdefender.

Now, coming from the private sector and dealing with a lot of home users, my feelings for Bitdefender were based on the reputation of their consumer-grade product. They had an extremely good reputation in the consumer market, so right off the bat they had a higher score when we started evaluating. It doesn’t matter how easy a product is to use or adjust; if its basic detection level is low, then everything else is a waste of time.

Bitdefender right off the bat has had a reputation for having a high level of detection and protection as well as a low impact on the systems. Being a small, rural county government, we use machines that are unfortunately a little bit older than what would be recommended, five to six years old. We are using some older machines that have lower processing power, so we could not take a product that would kill the performance of the machine and make it unusable.

During our evaluations we found that Bitdefender performed well. It did not have a lot of system overhead and it gave us a high level of protection. What’s really encouraging is when you switch to a different product, start scanning your network, and find threats that had existed there undetected for years. Now you know at least you are getting something for your money, and that’s what we found with Bitdefender.

Gardner: I have heard that many times. It has to, at the core, be really good at detecting. All the other bells and whistles don’t count if that’s not the case. Once you have established that you are detecting what’s been there, and what’s coming down the wire every day, the administration does become important.

Bryan, what is the administration like? How have you improved in terms of operations? Tell us about the ongoing day-to-day life using Bitdefender.

Managing mission-critical tech

Farmer: We are Bitdefender GravityZone users. We host everything in the cloud. We don’t have any on-premises Bitdefender machines, servers, or anything like that, and it’s nice. Like Dave said, we have a wide range of users and those users have a wide range of needs, especially with regards to Internet access, web page access, stuff like that.

For example, a police officer or an investigator needs to be able to access web sites that a clerk in the treasurer’s office just doesn’t need to be able to access. To be able to sit at my desk or take my laptop out anywhere that I have an Internet connection and make an adjustment if someone cannot get to somewhere that they need is invaluable. It saves so much time.

We don’t have to travel to different sites. We don’t have to log in to a server. I can make adjustments from my phone. It’s wonderful to be able to set up these different profiles and to have granular control over what a group of people can do.

We can adjust which programs they can run. We can remove printing from a network. There are so many different ways that we can do it, from anywhere as long as we have a computer and Internet access. Being able to do that is wonderful.
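
For readers who want to script this kind of remote policy review rather than click through the console, here is a minimal sketch against the GravityZone Control Center JSON-RPC API. The endpoint path, method name, and response fields shown are assumptions based on the public API pattern, not taken from the county's setup; check Bitdefender's GravityZone API documentation for your console before relying on them.

```python
# A minimal sketch, not the county's setup: listing GravityZone policies over
# the Control Center JSON-RPC API so per-group profiles can be audited from a
# script. The endpoint path, method name, and response fields below are
# assumptions based on the public API pattern -- verify them against
# Bitdefender's GravityZone API documentation for your console.
import base64
import requests

API_KEY = "your-gravityzone-api-key"  # placeholder, generated under My Account in the console
BASE_URL = "https://cloud.gravityzone.bitdefender.com/api/v1.0/jsonrpc"  # assumed cloud console URL


def call(service: str, method: str, params: dict) -> dict:
    """Send one JSON-RPC 2.0 request; the API key is passed as a Basic auth username."""
    token = base64.b64encode(f"{API_KEY}:".encode()).decode()
    response = requests.post(
        f"{BASE_URL}/{service}",
        json={"jsonrpc": "2.0", "id": "1", "method": method, "params": params},
        headers={"Authorization": f"Basic {token}", "Content-Type": "application/json"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()


# List the first page of policies so the team can see which profile
# (public safety, treasurer's office, and so on) exists and is assignable.
policies = call("policies", "getPoliciesList", {"page": 1, "perPage": 30})
for item in policies.get("result", {}).get("items", []):
    print(item.get("id"), item.get("name"))
```

The same call pattern is what makes a cloud-hosted console scriptable for a two-person shop: one HTTPS endpoint, one credential, and no on-premises management server to reach first.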

Gardner: Dave, there is nothing more mission-critical than a public safety officer and their technology. And that technology is so important to everybody today, including a police officer, a firefighter, and an emergency medical technician (EMT). Any feedback when it comes to the protection and the performance, particularly in those mission-critical use cases?

Sadler: Bitdefender has allowed us the granularity to be able to adjust so that we don’t interfere with those mission-critical activities that the police officer or the firefighter are trying to perform.

So initially there was an adjustment period. Thank goodness everybody was patient during that process, and I think now, a little over a year in, we have gotten things set pretty well. The adjustments we have to make now are minor. Like Bryan said, we don’t have an on-premises security server here. Our service is hosted in the cloud, and we have found that to be an actual benefit. Before, with a security server and the software hosted on-premises, there were machines that didn’t touch the network. We are looking at probably 40 to 50 percent of our machines that we would have had to manage and protect [manually] because they never touch our network.

The Bitdefender GravityZone cloud-based security product offers us the capability to be able to monitor for detections, as well as adjust firewalls, etc., on machines that we never touch or never see on our network. It’s been a really nice product for us and we are extremely happy with its performance.

Gardner: Any other metrics of success for a public sector organization like yours with a small support organization? In a public sector environment you have to justify your budget. When you tell the people overseeing your budget why this is a good investment, what do you usually tell them?

Sadler: The benefit we have here is that our bosses are aware of the need to secure the network. We have cooperation from them. Because we are diligent in our evaluation of different products, they pretty much trust our decisions.

Justifying or proving the need for a security product has not been a problem. And again, the day-to-day announcements that you see in the newspaper and on web sites about data breaches or malware infections — all that makes justifying such a product easier.

Gardner: Any examples come to mind that have demonstrated the way that you like to use these products and these services? Anything come to mind that illustrates why this works well, particularly for your organization?

Stop, evaluate, and reverse infections

Farmer: Going back to the cloud hosting, all a machine has to do is touch the Internet. We have a machine in our office right now that one of our safety officials had, and we received an email notification that something was going on: that machine needed to be disinfected and we needed to take a look at it.

The end-user didn’t have to notice it. We didn’t have to wait until it was a huge problem or a ransomware thing or whatever the case may be. We were notified automatically in advance. We were able to contact the user and get to the machine. Thankfully, we don’t think it was anything super-critical, but it could have been.

That automation was fantastic, and not having to react so aggressively, so to speak. So the proactivity that a solution like Bitdefender offers is outstanding.
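
Alerts like the one Farmer describes can also be wired into whatever paging or ticketing a small team already uses. Below is a rough sketch of one generic way to do that: a small webhook receiver that forwards malware events to the IT inbox. GravityZone does offer an event push service, but the JSON envelope and field names used here are assumptions for illustration, not its documented schema, and the hosts and addresses are placeholders.

```python
# A rough sketch of a webhook receiver for detection events pushed from a
# cloud security console. The envelope and field names used here ("events",
# "module", "computer_name") are assumptions for illustration -- adapt them
# to the payloads you actually receive. Hosts and addresses are placeholders.
import smtplib
from email.message import EmailMessage

from flask import Flask, request

app = Flask(__name__)


def notify(subject: str, body: str) -> None:
    """Email the IT team about a detection; SMTP host and addresses are placeholders."""
    msg = EmailMessage()
    msg["Subject"] = subject
    msg["From"] = "alerts@example.gov"
    msg["To"] = "it-team@example.gov"
    msg.set_content(body)
    with smtplib.SMTP("smtp.example.gov") as smtp:
        smtp.send_message(msg)


@app.route("/security-events", methods=["POST"])
def security_events():
    payload = request.get_json(force=True, silent=True) or {}
    for event in payload.get("events", []):  # assumed envelope shape
        if event.get("module") in ("av", "ransomware"):  # assumed event type names
            host = event.get("computer_name", "unknown host")
            notify(subject=f"Malware detection on {host}", body=str(event))
    return {"status": "ok"}


if __name__ == "__main__":
    app.run(port=8443)
```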

Gardner: Dave, anything come to mind that illustrates some of the features or functions or qualitative measurements that you like?

Sadler: Yes, Bitdefender GravityZone will sandbox suspicious activity, watch its actions, and then roll back if something bad is going on.

We actually had a situation where a vendor that we use on a regular basis, from a large, well-respected company, called in to support a machine that they had in one of our offices. We were immediately notified via email that a ransomware attack was being attempted.

So this vendor was using a remote desktop application. Somehow the end-user got directed to a bad site, and when it failed the first time on their end, all they could tell was, “Hey, my remote desktop software is not working.” They stopped and tried it again.

We were notified on our end that a ransomware attack had been stopped, evaluated, and reversed by Bitdefender. Not once, but twice in a row. So we were immediately able to contact that office and say, “Hey, stop what you are doing.”

Then we followed up by disconnecting that computer from the network and evaluating it for infection, to make sure that everything had been reversed. Thank goodness, Bitdefender was able to stop that ransomware attack and actually reverse the activity. We were able to get a clean scan and return that computer to service fairly quickly.

Gardner: How about looking to the future? What would you like to see next? How would you improve your situation, and how could a vendor help you do that?

Meeting government requirements

Sadler: The State of Virginia just passed a huge bill dealing with election security, and everybody knows that’s a hot topic in security right now. Because most of the localities in Virginia are independent, the bill allows the state Department of Elections and the US Department of Homeland Security to step in a little bit more with local governments and monitor or control their security, which in the end is going to be a good thing.

But a lot of the requirements we are now being asked to answer for are already addressed by the Bitdefender product; for example, automated patch management and notification of security issues.

So, Bitdefender right now is already answering a lot of the new requirements. The one thing that I would like to see … from what I understand the cloud-based version of Bitdefender does not allow you to do mobile device management. And that’s going to be required by some of these regulations that are coming down. So it would be really nice if we could have one product that would do the mobile device management as well as the cloud-based security protection for a network.

Gardner: I imagine they hear you loud and clear on that. When it comes to compliance like you are describing from a state down to a county, for example, many times there are reports and audits that are required. Is that something that you feel is supported well? Are you able to rise to that occasion already with what you have installed?

Farmer: Yes, Bitdefender is a big part of us being able to remain compliant. The Criminal Justice Information Services (CJIS) audit is one we have to go through on a regular basis. Bitdefender helps us address a lot of the requirements of those audits as well as some of the upcoming audits that we haven’t seen yet that are going to be required by this new regulation that was just passed this past year in the Commonwealth of Virginia.

But from the previews that we are getting on the requirements of those newly passed regulations, it does appear that Bitdefender is going to be able to help us address some of those needs, which is good. By far, its capability to answer those needs is superior to the products that we have been using in the past.

Gardner: Given that many other localities, cities, towns, municipalities, counties are going to be facing similar requirements, particularly around election security, for example, what advice would you give them, now that you have been through this process? What have you learned that you would share with them so that they can perhaps have an easier go at it?

Research reaps benefits in time, costs 

Farmer: I have seen in the past a lot of places that look at the first line item, so to speak, and then make a decision on that. Then, when they get down the page a little bit and see some of the other requirements, they end up in situations where they have two, three, or four pieces of software, and a couple of different pieces of hardware, working together to accomplish one goal. Certainly, in our situation, Bitdefender checks a lot of different boxes for us. If we had not taken the time to research everything properly and get into the full breadth of what it’s capable of, we could have spent a lot more money and created a lot more work and headaches for ourselves.

A lot of people in IT will already know this, but you have to do your homework. You have to see exactly what you need and get a wide-angle view of it and try to choose something that helps do all of those things. Then automate off-site and automate as much as you can to try to use your time wisely and efficiently.

Gardner: Dave, any advice for those listening? What have you learned that you would share with them to help them out?

Sadler: The breadth of the protection that we are getting from Bitdefender has been a major plus. So again, like Bryan said, find the product that you can put together under one big umbrella — so that you have one point of adjustment. For example, we are able to adjust firewalls, virus protection, and off-site USB protection — all this from one single control panel instead of having to manage four or five different control panels for different products.

It’s been a positive move for us. We look forward to continuing to work with the product, and we are watching as it continues to develop; we see new features coming out constantly. So if anyone from Bitdefender is listening, keep up the good work. We will hang in there with you and keep working.

But the main thing for IT operators is to evaluate your possibilities, evaluate whatever possible changes you are going to make before you do it. It can be an investment of money and time that goes wasted if you are not sure of the direction you are going in. Use a product that has a good reputation and one that checks off all the boxes like Bitdefender.

Farmer: In a lot of these situations, when you are working with a county government or a school you are not buying something for 30, 60, or 90 days – instead you are buying a year at a time. If you make an uninformed decision, you could be putting yourself in a jam time- and labor-wise for the next year. That stuff has lasting effects. In most counties, we get our budgets and that’s what we have. There are no do-overs on stuff like this. So, it speaks back to making a well-informed decision the first time.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Bitdefender.

Delivering a new breed of patient access best practices requires an alignment of people, process, and technology

The next BriefingsDirect healthcare finance insights discussion explores the rapidly changing ways that caregiver organizations on-board and manage patients.

How patients access their healthcare is transitioning to the digital world — but often in fits and starts. This key process nonetheless plays a major role in how patients perceive their overall experiences and determines how well providers manage both care and finances.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

Stay with us to unpack the people, process, and technology elements behind modern patient access best practices. To learn more, we are joined by an expert panel: Jennifer Farmer, Manager of Patient Access and Admissions at Massachusetts Eye and Ear Infirmary in Boston; Sandra Beach, Manager of the Central Registration Office, Patient Access, and Services and Pre-Services at Cooley Dickinson Healthcare in Northampton, Mass., and Julie Gerdeman, CEO of HealthPay24 in Mechanicsburg, Penn. The panel is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Jennifer, for you and your organization, how has the act of bringing a patient into a healthcare environment — into a care situation — changed in the past five years?

Farmer: The technology has exploded and it’s at everyone’s fingertips. So five years ago, patients would come to us, from referrals, and they would use the old-fashioned way of calling to schedule an appointment. Today it is much easier for them. They can simply go online to schedule their appointments.

They can still do walk-ins as they did in the past, but it’s much easier access now because we have the ways and means for the patients to be triaged and given the appropriate information so they can make an appointment right then and there, versus waiting for their provider to call to say, “Hey, we can schedule your appointment.” Patients just have it a lot easier than they did in the past.

Gardner: Is that due to technology? It seems to me that when I used to go to a healthcare organization they would be greeting me by handing me a clipboard, but now they are always sitting at a computer. How has the digital experience changed this?

Farmer: It has changed it drastically. Patients can now complete their accounts online and so the person sitting at the desk already has that patient’s information. So the clipboard is gone. That’s definitely something patients like. We get a lot of compliments on that.

It’s easier to have everything submitted to us electronically, whether it’s medical records or health insurance. It’s also easier for us to communicate with the patient through the electronic health record (EHR). If they have a question for us or we have a question for them, the health record is used to go back and forth.

There are not as many phone calls as there used to be, not as many loose ends. There is also the advent of telemedicine these days, so doctors can have a discussion or a meeting with the patient on their cell phones. Technology has definitely changed how medicine is being delivered as well as improving the patient experience.

Gardner: Sandra, how important is it to get this right? It seems to me that first impressions are important. Is that the case with this first interception between a patient and this larger, complex healthcare organization and even ecosystem?

Beach: Oh, absolutely. I agree with Jennifer that so many things have changed over the last five years. It’s a benefit for patients because they can do a lot more online; they can electronically check in now, for example, which is a new function. That’s going to be coming with [our healthcare application] Epic so that patients can do it all online.

The patient portal experience is really important too, because patients can go in there and communicate with the providers. It’s really important for our patients, as telemedicine has come a long way over the years.

Gardner: Julie, we know how important getting that digital trail of a patient from the start can be; the more data the better. How have patient access best practices been helped or hindered by technology? Are the patients perceiving this as a benefit?

Gerdeman: They are. There has been a huge improvement in the patient experience from the advent and increase of technology. A patient is also a consumer. We are all just people, and in our daily lives we do more research.

So, for patient access, even before they book an appointment, either online or on the phone, they pull out their phones and do a ton of research about the provider institution. That’s just like folks do for anything personal, such as a local service like dry cleaning or a haircut. For anything in your neighborhood or community, you do the same for your healthcare because you are a consumer.

The same level of consumer support that’s expected in our modern daily lives has now come to be expected with our healthcare experiences. Leveraging technology for access — and, as Jennifer and Sandra mentioned, for the actual clinical experience via telemedicine and digital transformation — is just getting started and will continue to impact healthcare.

Gardner: We have looked at this through the lens of the experience and initial impressions — but what about economics? When you do this right, is there a benefit to the provider organization? Is there a benefit to the patient in terms of getting all those digital bits and bytes and information in the right place at the right time? What are the economic implications, Jennifer?

Technology saves time and money

Farmer: They are two-fold. One, the economic implication for a patient is that they don’t necessarily have to take a day off from work or leave work early. They are able to continue via telemedicine, which can be done through the evening. When providers offer evening and weekend appointments, that’s to satisfy the patient so they don’t have to spend as much time trying to rearrange things, get daycare, or pay for parking.

For the provider organization, the economic implications are that we can provide services to more patients, even as we streamline certain services so that it’s all more efficient for the hospital and the various providers. Their time is just as valuable as anyone else’s. They also want to reduce the wait times for someone to see a patient.

The advent of using technology across different avenues of care reduces that wait time for available services. The doctors and technicians are able to see more patients, which obviously is an economic positive for the hospital’s bottom line.

Gardner: Sandra, patients are often not just having one point of intersection, if you will, with these provider organizations. They probably go to a clinic, then a specialist, perhaps rehabilitation, and then use pharmaceutical services. How do we make this more of a common experience for how patients intercept such an ecosystem of healthcare providers?

Beach: I go back to the EHRs that Jennifer talked about. With us being in the Partners system, no matter where you go — you could go to a rehab appointment, a specialist, or the cancer center in Boston — all your records are accessible for the physicians, and for the patients. That’s a huge step in the right direction because, no matter where the patient goes, you can access the records, at least within our system.

Gardner: Julie, to your point that the consumer experience is dictating people’s expectations now, this digital trail and having that common view of a patient across all these different parts of the organization is crucial. How far along are we with that? It seems to me that we are not really fully baked across that digital experience.

Gerdeman: You’re right, Dana. I think the Partners approach is an amazing exception to the rule, because they are able to see and share data across their own network.

Throughout the rest of the country, it’s a bit more fractured and splintered. There remains a lot of friction in accessing records as you move — even in some cases within the same healthcare system — from a clinic or the emergency department (ED) into the facility or to a specialist.

The challenge is one of interoperability of data and integration of that data. Hospitals continue to go through a lot of mergers and acquisitions, and every acquisition creates a new challenge.

From the consumer perspective, they want that to be invisible. It should be invisible; the right data should be on their phones regardless of what the encounter was, what the financial obligation for the encounter was — all of it. So that’s the expectation versus what’s still happening. There is a way to go in terms of interoperability and integration on the healthcare side.

Gardner: We have addressed the process and the technology, but the third leg on the stool here is the people. How can the people who interact with patients at the outset foster a better environment? Has the role and importance of who is at that initial intercept with the patient been elevated? So much rides on getting the information up front. Jennifer, what about the people in the role of accessing and on-boarding patients, what’s changed with them?

Get off to a great start

Farmer: That is the crux of the difference between a good patient experience and a terrible patient experience, that first interaction. So folks who are scheduling appointments and maybe doing registration — they may be at the information desk — they are all the drivers to making sure that that patient starts off with a great experience.

Most healthcare organizations are delving into different facets of customer service in order to ensure that the patient feels great and like they belong when they come into an organization. Here at Mass. Eye and Ear, we practice something called Eye Care. Essentially, we think about how you would want yourself and your family members to be treated, to make sure that we all treat patients who walk in the door like they are our family members.

When you lead with such a positive approach, it carries downstream into that patient’s feeling of, “I am in the right place. I expect my care to be fantastic. I know that I’m going to receive great care.” Their positive initial outlook generally reflects the positive outcome of their overall visit.

This has changed dramatically even within the past two to three years. Most providers are siloed, with different areas or departments. That means patients would hear, “Oh, sorry, we can’t help you. That’s not our area.” To make it a more inclusive experience, everyone in the organization is a brand ambassador.

We have to make sure that people understand that, to make it more inclusive for the patient and less hectic for the patient, no matter where you are within a particular organization. I’m sure Sandra can speak to this as well. We are all important to that patient, so if you don’t know the answer, you don’t have to say, “I don’t know.” You can say, “Let me get someone who can assist you. I’ll find some information for you.”

It shouldn’t be work for patients when they walk in the door. They should be treated as guests, welcomed and treated as family members. Three or four years ago, it was definitely the mindset of, “Not my job.” At other organizations that I visit, I now see more of a helpful environment, which has changed the patient perception of hospitals as well.

Beach: I couldn’t agree more, Jennifer. We have the same thing here as with your Eye Care. I ask our staff every day, “How would you feel if you were the patient walking in our door? Are we greeting patients with a nice, warm, friendly smile? Are we asking, ‘How can I help you today?’ Or, ‘Good morning, what can I do for you today?’”

We keep that at the forefront for our staff so they are thinking about this every time that they greet a patient, every day they come to work, because patients have choices, patients can go to other facilities, they can go to other providers.

We want to keep our patients within our healthcare system. So it’s really important that we have a really good patient experience on the front end because, as Jennifer said, it has a positive outcome on the back end. If they start off in the very beginning with a scheduler, a registrar, or an ED check-in person and are not greeted in a friendly, warm atmosphere, then typically that sets the tone for their total visit. That first interaction is really what they remember.

Gardner: Julie, this reflects back on what’s been happening in the consumer world around the user experience. It seems obvious.

So I’m curious about this notion of competition between healthcare providers. That might be something new as well. Why do healthcare provider organizations need to be thinking about this perception issue? Is it because people could pick up and choose to go somewhere else? How has competition changed the landscape when it comes to healthcare?

Competing for consumers’ care 

Gerdeman: Patients have choices. Sandra described that well. Patients, particularly in metropolitan or suburban areas, have lots of options for primary care, specialty care, and elective procedures. So healthcare providers are trying to respond to that.

In the last few years you have seen not just consumerism from the patient experience, but consumerism in terms of advertising, marketing, and positioning of healthcare services — like we have never seen before. That competition will continue and become even more fierce over time.

Providers should put the patient at the center of everything that they do. Just as Jennifer and Sandra talked about, putting the patient at the heart and then showing empathy from the very first interaction. The digital interaction needs to show empathy, too. And there are ways to do that with technology and, of course, the human interaction when you are in the facility.

Patients don’t want to be patients most of the time. They want to be humans and live their lives. So, the technology supporting all of that becomes really crucial. It has to become part of that experience. It has to arm the patient access team and put the data and information at their fingertips so they can look away from a computer or a kiosk and interact with that patient on a different level. It should arm them to have better, empathic interactions and build trust with the patient, with the consumer.

Gardner: I have seen that building competition where I live in New Hampshire. We have had two different nationally branded critical-care clinics open up — popping up like mushrooms in the spring rain — in our neighborhood.

Let’s talk about the experience not just for the patient but for that person who is in the position of managing the patient access. The technology has extended data across the partner organization. But still technology is often not integrated in the back end for the poor people who are jumping between four and five different applications — often multiple systems — to on-board patients.

What’s the challenge from the technology for the health provider organization, Jennifer?

One system, one entry point, it’s Epic

Farmer: That used to be our issue until we adopted the Epic system in 2016. Having people go into multiple applications was part of the problem with delivering a positive patient experience. At every entry point someone went to, they would need to repeat their name and date of birth. It looked one way in one system and another way in a different system. That went away with Epic.

Epic is one system, the registration or the patient access side. It is also the coding side, it’s billing, it’s medical records, it’s clinical care, medications, it’s everything.

So for us here at Mass. Eye and Ear, no matter where you go within the organization, and as Sandra mentioned earlier, we are part of the same Partners HealthCare system. You can actually go to any Partners facility and that person who accesses your account can see everything. From a patient access standpoint, they can see your address and phone number, your insurance information, and who you have as an emergency contact.

There isn’t that anger that patients had been feeling before, because now they are literally giving their name and date of birth only as a verification point. It does make it a lot easier for our patients to come through the door, go to different departments for testing, for their appointment, for whatever reason that they are here, and not have to show their insurance card 10 times.

If they get a bill in the mail and they are calling our billing department, they can see the notes that our financial coordinators, our patient access folks, put on the account when they were here two or three months ago and help explain why they might have gotten a bill. That’s also a verification point, because we document everything.

So, a financial coordinator can tell a patient they will get a bill for a co-pay or for co-insurance and then they get that bill, they call our billing team, they say, “I was never told that,” but we have documentation that they were told. So, it’s really one-stop shopping for the folks who are working within Epic. For the patient, nine times out of 10 they just can go from floor to floor, doctor to doctor, and they don’t have to show ID again, because everything is already stored in Epic.

Beach: I agree because we are on Epic as well. Prior to that, three years ago, it would be nothing for my registrars to have six, seven systems up at the same time and have to toggle back and forth. You run a risk by doing that, because you have so many systems up and you might have different patients in the system, so that was a real concern.

If a patient came in and didn’t have an order from the provider, we would have to call their office. The patient would have to wait. We might call two or three times.

Now, we have one system. If the patient doesn’t have the order, it’s in the computer system. We just have to bring it up, validate it, patient gets checked in, patient has their exam, and there is no wait. It’s been a huge win for us for sure — and for our patients.

Gardner: Privacy and compliance regulations play a more important role in the healthcare industry than perhaps anywhere else. We have to not only be mindful of the patient experience, but also address these very important technical issues around compliance and security. How are you able to both accomplish caring for the patient and addressing these hefty requirements?

It’s healthy to set limits on account access

Farmer: Within Epic, access is granted by your role. Staff may be working in admitting, the ED, or anywhere within patient access, but they don’t have access to someone’s medication list or their orders. However, another role may have that access.

Compliance is extremely important. Access is definitely something that is taken very seriously. We want to make sure that staff are accessing accounts appropriately and that there are guardrails built in place to prevent someone from accessing accounts if they should not be.

For instance, within the Partners HealthCare system, we do tend to get people of a certain status; we get politicians, celebrities, heads of state, and other public figures who go to various hospitals, even outside of Partners, to receive care. So we have locks on those particular accounts for employees. Their accounts are locked.

So if you try to access the account, you get a hard stop. You have to state why you are accessing this account, and then it is reviewed immediately. If it’s determined that your role has nothing to do with it and you should not be accessing this particular account, then the organization does take the necessary steps to investigate and determine either yes, you had a reason to be in this account, or no, you did not, and the potential of termination is there.

But we do take privacy very seriously, within the system and outside of the system. We make sure we are providing a safe space for people to provide us with their information. It is at the forefront, it drives us, and folks are definitely aware because it is part of their training.

Beach: You said it perfectly, Jennifer. Because we do have a lot of high-profile people who come through our healthcare systems, the security on records, I have to say, is extremely tight. And so it should be. If you are in a record and you shouldn’t be there, then there are consequences.
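
The "hard stop" Farmer and Beach describe is a classic break-the-glass pattern: role-based permissions, a required reason for restricted charts, and an audit trail that is reviewed immediately. The sketch below illustrates only that general pattern; it is not Epic's implementation, and the role names, section names, and fields are hypothetical.

```python
# An illustrative sketch of the break-the-glass pattern described above:
# role-based access, a required reason for restricted (locked) charts, and an
# audit trail flagged for immediate review. This is not Epic's implementation;
# role names, section names, and fields are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone

ROLE_PERMISSIONS = {
    "patient_access": {"demographics", "insurance", "emergency_contact"},
    "clinician": {"demographics", "insurance", "orders", "medications"},
}


@dataclass
class Chart:
    patient_id: str
    restricted: bool = False  # locked account, e.g., a public figure
    audit_log: list = field(default_factory=list)


def open_section(chart: Chart, user_role: str, section: str, reason: str = "") -> bool:
    """Allow access only if the role permits the section; restricted charts also
    require a documented reason, and each granted access is written to the audit log."""
    if section not in ROLE_PERMISSIONS.get(user_role, set()):
        return False  # this role never sees this section
    if chart.restricted and not reason:
        return False  # hard stop: no documented reason, no access
    chart.audit_log.append({
        "when": datetime.now(timezone.utc).isoformat(),
        "role": user_role,
        "section": section,
        "reason": reason,
        "flag_for_review": chart.restricted,  # restricted access is reviewed immediately
    })
    return True


chart = Chart(patient_id="example-123", restricted=True)
print(open_section(chart, "patient_access", "insurance"))                    # False: needs a reason
print(open_section(chart, "patient_access", "insurance", "copay question"))  # True, and logged
```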

Gardner: Julie, in addition to security and privacy we have also had to deal with a significant increase in the complexity around finances and payments given how insurers and the payers work. Now there are more copays, more kinds of deductibles. There are so many different plans: platinum, gold, silver, bronze.

In order to keep the goal of a positive patient experience, how are we addressing this new level of complexity when it comes to the finances and payments? Do they go hand-in-hand, the patient experience, the access, and the economics?

A clean bill of health for payment

Gerdeman: They do, and they should, and they will continue to. There will remain complexity in healthcare. It will improve certainly over time, but with all of the changes we have seen complexity is a given. It will be there. So how to handle the complexity, with technology, with efficient process, and with the right people becomes more and more important.

There are ways to make the complex simple with the right technology. On the back end, behind that amazing patient experience — both the clinical experience and also the financial experience — we try to shield the patient. At HealthPay24 we are focused on the financial experience, taking all of the data that’s behind there and presenting it very simply to a patient.

That means one small screen on the phone — with different encounters and different back ends behind it — presenting everything very simply so our patients can meet their financial obligations. They are not concerned that the ED had a different electronic medical record (EMR) than the specialist. That’s really not the concern of the patient, nor should it be. The concern is how the providers can use technology on the back end to make it simple and change that experience.

We talked about loyalty, and that’s what drives loyalty. You are going to keep coming back to a great experience, with great care, and ease of use. So for me, that’s all crucial as we go forward with healthcare – the technology and the role it plays.

Gardner: And Jennifer and Sandra, how do you see the relationship between the proper on-boarding, access, and experience and this higher complexity around the economics and finance? Do you see more of the patient experience addressing the economics?

Farmer: We have done an overhaul of our system where it concerns patients paying bills or not having health insurance. Our financial coordinators are there to assist our patients, whether by phone, email, or in person. There are lots of different programs we can introduce patients to.

We are certified counselors for the Commonwealth of Massachusetts. That means we are able to help the patient apply for health insurance through the Health Connector for Massachusetts as well as for the state Medicaid program called MassHealth. And so we are here to help those patients go through that process.

We also have an internal program that can assist patients with paying their bills. We talk to patients about different credit cards that are available for those who may qualify. And essentially, the bottom line may just be putting somebody on a payment plan once again. So we take many factors into account, and we try to make it work as best as we can for the patient.

At the end of the day, it’s about that patient receiving care and making sure that they are feeling good about it. We definitely try to meet their needs and introduce them to different things. We are here to support them, and at the end of the day it’s again about their care. If they can’t pay anything right now, but they obviously need immediate medical services, then we assure them, let’s focus on your care. We can talk about the back end or we can talk about your bills at a different point.

We do provide them with different avenues, and we are pretty proud of that because I like to believe that we are successful with it and so it helps the patient overall.

Gerdeman: It really does come down to the fact that patients want to meet their obligations, but they need options to be able to do that. Those options become really important — whether it’s a loan program, a payment plan, or applying for financial assistance — and technology can enable all of these things.

For HealthPay24, we enable an eligibility check right in the platform so you don’t have to worry about others knowing. You can literally check for eligibility by clicking a button and entering a few fields to know if you should be talking to financial counseling at a provider.

You can apply for payment plans, if the providers opt for that. It will be proactively offered based on demographic data to a patient through the platform. You can also apply for loans, for revolving credit, through the platform. Much of what patients want and need financially is now available and enabled by technology.

Gardner: Sandra, such unification across the financial, economic, and care giving roles strikes me as something that’s fairly new.

Beach: Yes, absolutely it is. We have a program in our ED, for example, that we instituted a year ago. We offer an ED discharge service so when the patient is discharged, they stop at our desk and we offer these patients a wide variety of payment options. Or maybe they are homeless and they are going through a tough time. We can tell them where they can go to get a free meal or spend the night. There are a whole bunch of programs available.

That’s important because we will never turn a patient away. And when patients come through our ED, they need care. So when they leave, we want to be able to help them as much as we can by supporting them and giving them these options.

We have also made phone calls for our patients as well. If they need to get someplace just to spend the night, we will call and we will make that arrangement for those patients. So when they leave, they know they have a place to go. That’s really important because people go through hard times.

Gardner: Sandra, do you have any other examples of processes or approaches to people and technology that you have put in place recently? What have been some of the outcomes?

Check-in at home, spend less time waiting 

Beach: Well, the ED discharge service has made a huge impact. We saw probably 7,000-8,000 patients through that desk over the last year. We really have helped a lot of patients. But we are also there just to lend an ear. Maybe they have questions about what the doctor just said to them, but they really weren’t sure what he said. So it’s just made a huge impact for our patients here.

Gardner: Jennifer, same question, any processes you have put in place, examples of things that have worked and what are the metrics of success?

Farmer: We just rolled out e-check-in. I don’t have any metrics on it just yet, but this is a process where the patient can go to their MyChart or their EHR and check in for an appointment prior to the day. They can also pay their copay. They can provide us with updates to their insurance information, address, or phone number, so when they actually come to their appointment, they are not stopping at the desk to sign in or check in.

That seems to be a popular option for the office currently piloting this, and we are hoping for a big success. It will be rolled out to other entities, but right now that is something we are working on. It ties together the technology and the patient care for patient access, and it ties the ease of check-in to that patient. And so again, we are hoping that we have some really positive metrics on that.

Gardner: What sort of timeframe are we talking about here in terms of start to finish from getting that patient into their care?

Farmer: If they are walking in the door having already done e-check-in, they go immediately in for their appointment, because they are showing up on time and they are expected. So the time the patient spends waiting in line or sitting in the waiting area is reduced, and the time they have to spend talking to someone about changes or confirming everything we have on their account is reduced.

And then we are hoping to test this in a pilot program for the next month to six weeks to see what kind of data we can get. Hopefully it will help across the board with the check-in process for patients and reduce that time for the folks at the desk so they can focus on other tasks as well. So we are giving them back their time.
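
E-check-in workflows like the MyChart pilot Farmer describes are proprietary, but the data they ride on is increasingly standard FHIR. As a hedged illustration of that underlying piece only, the sketch below queries a FHIR R4 endpoint for a patient's booked appointments, the kind of lookup a check-in app performs before the visit. The base URL, token, and patient ID are placeholders, not Mass. Eye and Ear's systems.

```python
# A hedged illustration of the data layer under e-check-in: querying a FHIR R4
# endpoint for a patient's booked appointments so a check-in app can confirm
# the visit ahead of time. The base URL, bearer token, and patient ID are
# placeholders; this is the standard FHIR Appointment search, not Epic's
# proprietary e-check-in workflow.
import requests

FHIR_BASE = "https://fhir.example-hospital.org/api/FHIR/R4"  # placeholder endpoint
ACCESS_TOKEN = "token-obtained-via-smart-on-fhir-oauth2"     # placeholder credential


def booked_appointments(patient_id: str) -> list[dict]:
    """Return the patient's booked Appointment resources via a standard FHIR search."""
    response = requests.get(
        f"{FHIR_BASE}/Appointment",
        params={"patient": patient_id, "status": "booked"},
        headers={
            "Authorization": f"Bearer {ACCESS_TOKEN}",
            "Accept": "application/fhir+json",
        },
        timeout=30,
    )
    response.raise_for_status()
    bundle = response.json()  # a FHIR Bundle of search results
    return [entry["resource"] for entry in bundle.get("entry", [])]


for appt in booked_appointments("example-patient-id"):
    print(appt.get("start"), appt.get("description", ""))
```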

Gardner: Julie, this strikes me in the parlance of other industries as just-in-time healthcare, and it’s a good move. I know you deal with a national group of providers and payers. Any examples, Julie, that demonstrate and illustrate the positive direction we are going with patient access and why technology is an important part of that?

Just-in-time wellness

Gerdeman: I refer to Christopher Penn’s model of People, Process, and Technology here, Dana, because when people touch process, there is scale, and when process and technology intersect, there is automation. But most importantly, when people intersect with technology, there is innovation, and what we are seeing is not just incremental innovation — but huge leaps in innovation.

What Jen just described, that experience of just-in-time healthcare, is a huge need and a real leap, right? We have come to expect it when we reserve a table via OpenTable or e-check-in for a hair appointment. I go back to that consumer experience, but that innovation is happening all across healthcare.

One of the things that we just launched, which we are really excited about, is predictive analytics tied to the payment platform. If you know and can anticipate the behaviors and the patterns of a demographic of patients, financially speaking, then it will help ease the patient experience in what they owe, how they pay, and what’s offered to them. It boosts the bottom line of providers, because they are going to get increased revenue collection.

So where predictive analytics is going in healthcare and tying that to the patient experience and to the financial systems, I think will become more and more important. And that leads to even more — there is so much emerging technology on the clinical side and we will continue to see more emerging technology on the back-end systems and the financial side as well.

Gardner: Before we close out, perhaps a look to the future, and maybe even a wish list. Jennifer, if you had a wish list for how this will improve in the next few years, what’s missing, what’s yet to come, what would you like to see available with people, process, and technology?

Farmer: I go back to patient care, and while we are in a very good spot right now, it can always improve. We need more providers, more technicians, more patient access folks, and more capacity to take care of people, because the population is growing and, whether you know it or not, you are going to need a doctor at some point.

So I think it’s about continuing on the path that we are on: providing excellent customer service, listening to patients, and being empathetic. Also providing them with options: different appointment times, different finance options, different providers. It can only get better.

Beach: I absolutely agree. We have a really good computer system, we have the EMRs, but I would have to agree with Jennifer as well that we really need more providers. We need more nurses to take care of our patients.

Gardner: So it comes down to human resources. How about those front-line people who are doing the patient access intercept? Should they have an elevated status, role, and elevated pay schedule?

Farmer: It’s really tough for the patient access people because they are working on that front line every minute of every day, eight to 10 hours a day. Sometimes that’s tough.

It’s really important that we keep training them. We give them options of going to customer service classes, because their role has changed from basically checking in a patient to now making sure their insurance is correct. We have so many different insurance plans these days. Knowing each of those elevates that registrar to be almost an expert in the field, in order to help the patient, get them through the registration process and, the bottom line, get reimbursed for those services. So it’s really come a long way.

Gardner: Julie, on this future perspective, what do you think will be coming down the pike for provider organizations like Jennifer and Sandra’s in terms of technology and process efficiency? How will the technology become even more beneficial?

Gerdeman: It’s going to be a big balancing act. What I mean by that is we are now officially more of an older country than a younger country in terms of age. People are living longer, they need more care than ever before, and we need the systems to be able to support that. So, everything that was just described is critical to support our aging population.

But what I mean by the balancing act is we have a whole other generation entering into healthcare as patients, as providers, and as technologists. This new generation has a completely different expectation of what that experience should and will be. They might have an expectation that their wearable device should give all of that data to a provider. That they wouldn’t need to explain it, that it should all be there all day, not just that they walk in and have just-in-time, but all the health data is communicated ahead of time, before they are walking in and then having a meaningful conversation about what to do.

This new generation is going to shift us to wellness care, not just care when we are sick or injured. I think that's all changing, and we are starting to see the beginnings of that focus on wellness. Providers are going to have to juggle wearables and devices, and how they are used, with the aging population and traditional services as well as the new. Technology is going to be a key, core part of that going forward.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HealthPay24.

How security designed with cloud migrations in mind improves an enterprise’s risk posture top to bottom

The next BriefingsDirect data security insights discussion explores how cloud deployment planners need to be ever-vigilant for all types of cybersecurity attack vectors. Stay with us as we examine how those moving to and adapting to cloud deployments can make their data and processes safer and easier to recover from security incidents.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about taking the right precautions for cloud and distributed data safety we welcome two experts in this field, Mark McIntyre, Senior Director of Cybersecurity Solutions Group at Microsoft, and Sudhir Mehta, Global Vice President of Product Management and Strategy at Unisys. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Mark, what’s changed in how data is being targeted for those using cloud models like Microsoft Azure? How is that different from two or three years ago?

Mark McIntyre

McIntyre

McIntyre: First of all, the good news is that we see more and more organizations around the world, including the US government, but broadly more global, pursuing cloud adoption. I think that’s great. Organizations around the world recognize the business value and I think increasingly the security value.

The challenge I see is one of expectations. Who owns what as you go to the cloud? We need to be crisper and clearer with our partners and customers as to who owns which responsibilities in terms of monitoring and managing in a team environment as you transition from a traditional on-premises environment all the way up into a software-as-a-service (SaaS) environment.

Gardner: Sudhir, what’s changed from your perspective at Unisys as to what the cloud adoption era security requirements are?

Sudhir Mehta

Mehta

Mehta: When organizations move data and workloads to the cloud, many of them underestimate the complexities of securing hybrid, on-premises, and cloud ecosystems. A lot of the failures, or what we would call security breaches or intrusions, you can attribute to inadequate security practices, policies, procedures, and misconfiguration errors.

As a result, cloud security breach reports have been on the rise. Container technology adds flexibility and speed-to-market, but it is also introducing a lot of vulnerability and complexity.

A lot of customers have legacy, on-premises security methodologies and technologies, which obviously they can no longer use or leverage in the new, dynamic, elastic nature of today’s cloud environments.

Gartner estimates that through 2022 at least 95 percent of cloud security failures will be the customers’ fault. So the net effect is cloud security exposure, the attack surface, is on the rise. The exposure is growing.

Change in cloud worldwide 

Gardner: People, process, and technology all change as organizations move to the cloud. And so security best practices can fall through the cracks. What are you seeing, Mark, in how a comprehensive cloud security approach can be brought to this transition so that cloud retains its largely sterling reputation for security?

McIntyre: I completely agree with what my colleague from Unisys said. Not to crack a joke — this is a serious topic — but my colleagues and I meet a lot with both US government and commercial counterparts. And they ask us, “Microsoft, as a large cloud provider, what keeps you awake at night? What are you afraid of?”

It’s always a delicate conversation because we need to tactfully turn it around and say, “Well, you, the customer, you keep us awake at night. When you come into our cloud, we inherit your adversaries. We inherit your vulnerabilities and your configuration challenges.”

As our customers plan a cloud migration, it will invariably include a variety of resources being left on-premises, in a traditional IT infrastructure. We need to make sure that we help them understand the benefits already built into the cloud, whether they are seeking infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS), or SaaS. We need to be really clear with our customers — through our partners, in many cases – about the technologies that they need to make themselves more secure. We need to give them awareness into their posture so that it is built right into the fabric of the cloud service.

Gardner: Sudhir, it sounds as if organizations who haven’t been doing things quite as well as they should on-premises need to be even more mindful of improving on their security posture as they move to the cloud, so that they don’t take their vulnerabilities with them.

From Unisys's perspective, how should organizations get their house in order before they move to the cloud?

Don’t bring unsafe baggage to the cloud 

Mehta: We always recommend that customers first look at putting their house in order. Security hygiene is extremely important, whether you look at data protection, information protection, or your overall access exposure. That exposure can come from employees working at home, or from vendors and third parties, wherever they have access to a lot of your information and data.

First and foremost, make sure you have the appropriate framework established. Then compliance and policy management are extremely important when you move to the cloud and to virtual and containerized frameworks. Today, many companies do their application development in the cloud because it's a lot more dynamic. We recommend that our customers make sure they have the appropriate policy management, assessments, and compliance checks in place for both on-premises and then for your journey to the cloud.

Learn More About Cyber Recovery With Unisys Stealth

The net of it is, if you are appropriately managed when you are on-premises, chances are you will get it right as you move from hybrid to more cloud-native deployments and services. If you don't have it all in place when you are on-premises, you have an uphill battle in making sure you are secured in the cloud.

Gardner: Mark, are there any related issues around identity and authentication as organizations move from on-premises to outside of their firewall into cloud deployment? What should organizations be thinking about specifically around identity and authentication?

Avoid an identity crisis

McIntyre: This is a huge area of focus right now. Even within our own company, at Microsoft, we as employees operate in essentially an identity-driven security model. And so it’s proper that you call this out on this podcast.

The idea that you can monitor and filter all traffic, and that you are going to make meaningful conclusions from that in real time — while still running your business and pursuing your mission — is not the best use of your time and your resources. It's much better to switch to a more modern, identity-based model where you can actually incorporate newer concepts.

Within Microsoft, we have a term called Modern Workplace. It's a reflection of the fact that government organizations and enterprises around the world have to anticipate and provide a collaborative work environment where people can work in a way that reflects their personal preferences around devices and working at home, on the road, or at a coffee shop or restaurant. The concept of work has changed across the enterprise, and that is definitely creating the opportunity to build a more modern identity framework.

If you look at some of the initiatives in the US government right now, we hear the term Zero Trust. That includes Zero Trust networking and micro-segmentation. Initiatives like these recognize that we know people need to keep working and doing their jobs wherever they are. The idea is to accept the fact that people will always cause some level of risk to the organization.

We are curious, reasonably smart, well-intentioned people, and we make mistakes, just like anybody else. Let’s create an identity-driven model that allows the organization to get better insight and control over authentications, requests for resources, end-to-end, and throughout a lifecycle.

Gardner: Sudhir, Unisys has been working with a number of public-sector organizations on technologies that support a stronger posture around authentication and other technologies. Tell us about what you have found over the past few years and how that can be applied to these challenges of moving to a cloud like Microsoft Azure.

Mehta: Dana, going back in time, one of the requests we had from the US Department of Defense (DoD) on the networking side grew out of a concern about access to sensitive information and data. The DoD asked Unisys to develop a framework and implement a solution. They were looking for more of a micro-segmentation solution, very similar to what Mark just described.

Fast forward, and since then we have deployed and released a military-grade capability called Unisys Stealth®, which manages what we classify as key-based, encrypted micro-segmentation that controls access to different hosts or endpoints based on the identity of the user. It permits only authorized users to communicate with approved endpoints and denies unauthorized communications, and so prevents east-west, lateral attacks from spreading.
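
To make the identity-driven model concrete, here is a minimal Python sketch of the allow-or-deny logic behind identity-based micro-segmentation. It is illustrative pseudologic, not the Stealth protocol or its key exchange; the identities, endpoint names, and policy table are hypothetical.

```python
# Conceptual sketch of identity-based micro-segmentation (not Unisys Stealth itself):
# a flow is allowed only when the authenticated identity maps to a policy entry
# approving the destination endpoint. Everything else is denied, which is what
# blocks east-west lateral movement.
from dataclasses import dataclass

# Hypothetical policy: identity -> endpoints that identity may reach
POLICY = {
    "alice@corp.example": {"hr-db.internal", "payroll-app.internal"},
    "bob@corp.example": {"gis-server.internal"},
}

@dataclass
class Flow:
    user: str          # authenticated identity attached to the connection
    dst_endpoint: str  # host the connection is trying to reach

def allow(flow: Flow) -> bool:
    """Permit only authorized identity-to-endpoint communication."""
    return flow.dst_endpoint in POLICY.get(flow.user, set())

# Even a compromised account can reach only what its identity is approved for;
# it cannot sweep sideways across the rest of the network.
print(allow(Flow("alice@corp.example", "payroll-app.internal")))  # True
print(allow(Flow("alice@corp.example", "gis-server.internal")))   # False
```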

Gardner: Mark, for those in our audience who aren’t that technology savvy, what does micro-segmentation mean? Why has it become an important foundational capability for security across a cloud-use environment?

Need-to-know access 

McIntyre: First of all, I want to call out Unisys’s great work here and their leadership in the last several years. It means a Zero-Trust environment can essentially gauge or control east-to-west behavior or activity in a distributed environment.

For example, in a traditional IT environment, devices are centrally managed, corporate-issued devices. You can't take them out of the facility, of course, and you don't re-authenticate once you are on the network because you are already inside a physical campus environment. But it's different in a modern, collaborative environment. Enterprises are generally ahead on this change, but it's now coming into government requirements, too.

And so now you can essentially parse out your subjects and your objects, the subjects trying to access objects. You can split them out and say, "We are going to create all user accounts with a certain set of parameters." It amounts to a privileged, need-to-know model. You can enforce strong controls with a set of least-privilege rights. And, of course, in an ideal world, you could go a step further and start implementing biometrics [to authenticate] to get off of password dependencies.

Learn How Unisys Stealth Security Simplifies Zero Trust Networks

But number one, you want to verify the identity. Is this a person? Is this the subject who we think they are? Are they that subject based on a corroborating variety of different attributes, behaviors, and activities? Things like that. And then you can also apply the same controls to a device and say, “Okay, this user is using a certain device. Is this device healthy? Is it built to today’s image? Is it patched, clean, and approved to be used in this environment? And if so, to what level?”

And then you can even go a step further and say, “In this model, now that we can verify the access, should this person be able to use our resources through the public Internet and access certain corporate resources? Should we allow an unmanaged device to have a level of access to confidential documents within the company? Maybe that should only be on a managed device.”

So you can create these flexible authentication scenarios based on what you know about the subjects at hand, about the objects, and about the files that they want to access. It’s a much more flexible, modern way to interact.
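
Here is a small, hedged Python sketch of what such flexible authentication rules can look like in code: a stand-in for the kind of conditional-access evaluation described above, not Azure Active Directory itself. The attribute names and access tiers are assumptions.

```python
# Illustrative conditional-access style decision; attributes and tiers are hypothetical.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    identity_verified: bool    # MFA or biometric check passed
    device_managed: bool       # corporate-enrolled device
    device_healthy: bool       # built to the current image, patched, clean
    resource_sensitivity: str  # "public", "internal", or "confidential"

def decide(req: AccessRequest) -> str:
    """Return an access tier based on subject identity and device posture."""
    if not req.identity_verified:
        return "deny"
    if req.resource_sensitivity == "confidential":
        # Confidential documents only from healthy, managed devices.
        return "allow" if (req.device_managed and req.device_healthy) else "deny"
    if req.resource_sensitivity == "internal":
        # Unmanaged devices get a reduced, browser-only level of access.
        return "allow" if req.device_managed else "allow-limited"
    return "allow"

print(decide(AccessRequest(True, False, False, "confidential")))  # deny
print(decide(AccessRequest(True, False, False, "internal")))      # allow-limited
```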

Within Azure cloud, Microsoft Azure Active Directory services offer those capabilities – they are just built into the service. So micro-segmentation might sound like a lot of work for your security or identity team, but it’s a great example of a cloud service that runs in the background to help you set up the right rules and then let the service work for you.

Gardner: Sudhir, just to be clear, the Unisys Stealth(cloud) Extended Data Center for Microsoft Azure is a service that you get from the cloud? Or is that something that you would implement on-premises? Are there different models for how you would implement and deploy this?

A stealthy, healthy cloud journey 

Mehta: We have been working with Microsoft over the years on Stealth, and we have a fantastic relationship with Microsoft. If you are a customer going through a cloud journey, we deploy what we call a hybrid Stealth deployment. In other words, we help customers achieve isolation with the help of communities of interest, which are basically groupings of hosts, users, and resources based on like interests.

Then, when there is a request to communicate, you create the appropriate Stealth-encrypted tunnels. If you have a scenario where you are doing the appropriate communication between an on-premises host and a cloud-based host, you do that through a secure, encrypted tunnel.

We have also implemented what we call cloaking. With cloaking, if someone is not authorized to communicate with a certain host or a certain member of a community of interest, you basically do not give a response back. So cloaking is also part of the Stealth implementation.
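
A minimal Python sketch of the communities-of-interest and cloaking behavior just described follows. It is illustrative pseudologic rather than the actual Stealth implementation; the community names and hosts are hypothetical.

```python
# Hosts grouped into the same community of interest may establish an encrypted
# tunnel; anyone else gets no response at all, so the protected host stays invisible.
COMMUNITIES = {
    "finance": {"onprem-erp.corp", "azure-reporting-vm.cloud"},
    "devops":  {"build-server.corp", "azure-k8s-node.cloud"},
}

def same_community(host_a: str, host_b: str) -> bool:
    return any(host_a in members and host_b in members
               for members in COMMUNITIES.values())

def handle_connection(src: str, dst: str):
    if same_community(src, dst):
        return f"establish encrypted tunnel {src} <-> {dst}"
    # Cloaking: unauthorized requests are silently dropped, with no reject and no reply.
    return None

print(handle_connection("onprem-erp.corp", "azure-reporting-vm.cloud"))
print(handle_connection("build-server.corp", "azure-reporting-vm.cloud"))  # None
```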

And in working closely with Microsoft, we have further established an automated capability through a discovery API. So when Microsoft releases new Azure services, we are able to update the overall Stealth protocol and framework with the updated Azure services. For customers who have Azure workloads protected by Stealth, there is no disruption from a productivity standpoint. They can always securely leverage whatever applications they are running on Azure cloud.

The net of it is being able to establish the appropriate secure journey for customers, from on-premises to the cloud, the hybrid journey. For customers leveraging Azure cloud with different workloads, we maintain the appropriate level of secure communications just as they would have in an on-premises deployment.

Gardner: Mark, when does this become readily available? What’s the timeline on how these technologies come together to make a whole greater than the sum of the parts when it comes to hybrid security and authentication?

McIntyre: Microsoft is already offering Zero Trust, identity-based security capabilities through our services. We haven’t traditionally named them as such, although we definitely are working along that path right now.

Microsoft Chief Digital Officer and Executive Vice President Kurt DelBene is on the US Defense Innovation Board and is playing a leadership role in establishing essentially a DoD or US government priority on Zero Trust. In the next several months, we will be putting more clarity around how our partners and customers can better map capabilities that they already own against emerging priorities and requirements like these. So definitely look for that.

In fact, Ignite DC is February 6 and 7, in downtown Washington, DC, and Zero Trust is certainly on the agenda there, so there will be updates at that conference.

But generally speaking, any customer can take the underlying services that we are offering and implement this now. What’s even better, we have companies that are already out there doing this. And we rely greatly on our partners like Unisys to go out and really have those deep architecture conversations with their stakeholders.

Gardner: Sudhir, when people use the combined solution of Microsoft Azure and Stealth for cloud, how can they react to attacks that may get through to prevent damage from spreading?

Contain contagion quickly 

Mehta: Good question! Internally within Unisys’s own IT organization, we have already moved on this cloud journey. Stealth is already securing our Azure cloud deployments and we are 95 percent deployed on Azure in terms of internal Unisys applications. So we like to eat our own dog food.

If there is a situation where there is an incident of compromise, we have a capability called dynamic isolation, where if you are looking at a managed security operations center (SOC) situation, we have empowered the SOC to contain a risk very quickly.

We are able to isolate a user and their device within 10 seconds. If someone turns nefarious, intentionally or accidentally, we can isolate the user and then apply different thresholds of isolation. If a high threshold level is breached, say 8 out of 10, we completely isolate that user.

Learn More About Cyber Recovery With Unisys Stealth

If the threshold level is 5 or 6, we may still give the user certain levels of access, so within a certain group they can continue to access resources and communicate.

Dynamic isolation isolates a user and their device at different threshold levels while the managed SOC goes through its cycles of trying to identify what really happened, as part of what we would call an advanced response. Unisys is the only solution that can actually isolate a user or a device within a span of seconds; we can do it now within 10 seconds.
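
A rough Python sketch of that threshold-driven response is below, for illustration only; the 0-to-10 scoring and the specific actions are assumptions, not the actual Stealth or SOC scoring model.

```python
# Map an incident risk score to an isolation action, mirroring the tiers described above.
def isolation_action(risk_score: int) -> str:
    """Return an isolation response for a 0-10 incident risk score (hypothetical scale)."""
    if risk_score >= 8:
        # High threshold breached: cut the user and device off completely.
        return "full-isolation"
    if risk_score >= 5:
        # Medium risk: restrict the user to their own community of interest
        # while the SOC investigates.
        return "partial-isolation"
    return "monitor-only"

for score in (9, 6, 3):
    print(score, isolation_action(score))
```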

McIntyre: Getting back to your question about Microsoft’s plans, I’m very happy to share how we’ve managed Zero Trust. Essentially it relies on Intune for device management and Azure Active Directory for identity. It’s the way that we right now internally manage our own employees.

My access to corporate resources can come via my personal device and work-issued device. I’m very happy with what Unisys already has available and what we have out there. It’s a really strong reference architecture that’s already generally available.

Gardner: Our discussion began with security for the US DoD, among the largest enterprises you could conceive of. But I’m wondering if this is something that goes down market as well, to small- to medium-sized businesses (SMBs) that are using Azure and/or are moving from an on-premises model.

Do Zero Trust and your services apply to the mom and pop shops, SMBs, and the largest enterprises?

All sizes of businesses

McIntyre: Yes, this is something that would be ideally available for an SMB because they likely do not have large logistical or infrastructure dependencies. They are probably more flexible in how they can implement solutions. It’s a great way to go into the cloud and a great way for them to save money upfront over traditional IT infrastructure. So SMBs should have a really good chance to literally, natively take an idea like this and implement it.

Gardner: Sudhir, anything to offer on that in terms of the technology and how it’s applicable both up and down market?

Mehta: Mark is spot on. Unisys Stealth resonates really well for SMBs and the enterprise. SMBs benefit, as Mark mentioned, in their capability to move quickly. And with Stealth, we have an innovative capability that can discover and visualize your users. Thereafter, you can very quickly and automatically virtualize any network into the communities of interest I mentioned earlier. SMBs can get going within a day or two.

If you’re a large enterprise, you can define your journey — whether it’s from on-premises to cloud — depending on what you’re actually trying to migrate or run in the cloud. So I would say absolutely both. And it would also depend on what you’re really looking at managing and deploying, but the opportunities are there for both SMBs and enterprises.

Gardner: As companies large and small are evaluating this and trying to discern their interest, let’s look at some of the benefits. As you pointed out, Sudhir, you’re eating your own dog food at Unisys. And Mark has described how this is also being used internally at Microsoft as well.

Do you have ways to look at before and after, and to measure quantitatively, qualitatively, or maybe anecdotally why this has been beneficial? It's always hard in security to prove something that didn't happen and why it didn't happen. But what do you get when you do Stealth well?

Proof is in the protection 

Mehta: There are a couple of things, Dana. One is that there is certainly a reduction in cost. When we deploy for 20,000 Unisys employees, our Chief Information Security Officer (CISO) obviously has to be a big supporter of Stealth, and his read, from a cost perspective, is that we have seen significant reductions in costs.

Prior to implementing Stealth, we had a certain approach as it relates to network segmentation. From a network equipment perspective, we've seen a reduction of over 70 percent. If you look at server infrastructure, there has been a reduction of more than 50 percent. Maintenance and labor costs have come down north of 60 percent, and ongoing support labor cost has seen a significant reduction as well. So that's one lens you could look through.

The other lens that has been interesting is the virtual private network (VPN) exposure. As many of us know, VPNs are perhaps the best breach route for hackers today. When we’ve implemented Stealth internally within Unisys, for a lot of our applications we have done away with the requirement for logging into a VPN application. That has made for easier access to a lot of applications – mainly for folks logging in from home or from a Starbucks. Now when they communicate, it is through an encrypted tunnel and it’s very secure. The VPN exposure completely goes away.

Those are the best two lenses I could give to the value proposition. Obviously there is cost reduction. And the other is the VPN exposure goes away, at least for Unisys that’s what we’ve found with implementing internally.

Gardner: For those using VPNs, should they move to something like Stealth? Does the way in which VPNs add value change when you bring something like Stealth in? How much do you reevaluate your use of VPNs in general?

Mehta: It would be remiss of me to say you can completely do away with VPNs. If you go back in time and look at why VPNs were created, the overall framework was built for secure access to certain applications. Since then, for whatever reasons, VPNs became the default way people connect when working from home, for example. So the way we look at this is, for applications that are not limited to just a few people, you should look at options where you don't necessarily need a VPN. You could therefore look at a solution like Unisys Stealth.

And then if there are certain applications that are extremely sensitive, limited to only a few folks for whatever reason, that’s where potentially you could consider using an application like a VPN.

Gardner: Let’s look to the future. When you put these Zero Trust services into practice, into a hybrid cloud, then ultimately a fully cloud-native environment, what’s the next shoe to fall? Are there some things you gain when you enter into this level of micro-segmentation, by exploiting these newer technologies?

Can this value be extended to the edge, for example? Does it have a role in Internet of things (IoT)? A role in data transfers from organization to organization? What does this put us in a position to do in the future that we couldn’t have done previously?

Machining the future securely 

McIntyre: You hit on two really important points: devices, IoT devices for example, and data. On data, you see the T-shirts and slogans, "Data is the new oil," and such. From a security point of view there is no question this is becoming the case, with something like 44 to 45 zettabytes of data projected to be out there within the next few years.

You can employ traditional security monitoring practices, for example label-free detection, things like that. But that's just not going to allow you to work quickly, especially in an environment where we're already challenged to field enough of a security workforce. There are not enough people out there; it's a global talent shortage.

It's a fantastic opportunity, forced on us, to rely more on modern authentication frameworks and on machine learning (ML) and artificial intelligence (AI) technologies to take a lot of that lower-level analysis, the log analysis work, out of human hands and have machines free people up for the higher-level work.

For example, we have a really interesting situation within Microsoft, and it applies across the industry as well. We see many organizations go into the cloud while, as we mentioned earlier, the roles and responsibilities are still unclear. We're also seeing big gaps between the use of cloud resources and the use of the security tools built into those resources.

And so we're really trying to make sure that as we deliver new services to the marketplace, for example for IoT, those are built in a way that you can configure and monitor them like any other device in the company. With Azure, for example, we have IoT Hub. As you build an IoT device, we can make sure that it is being monitored in the same way as your traditional infrastructure.

There should not be a gap there. You can still apply the same types of logical access controls around them. There shouldn't be any tradeoffs in how you do security — whether it's IT or IoT.

Gardner: Sudhir, same question: what does the use of Stealth in conjunction with cloud activities get you in the future?

Mehta: Tagging on to what Mark said, AI and ML are becoming interesting. We obviously have a very big digital workplace solutions organization, and we are a market leader in services, including helpdesk services. We are looking at introducing a lot of what you would call AIOps in automation, as it leads to robotic process automation (RPA) and voice assistance.

One of the things we are observing is that, as you go down this AI-ML path, there is a larger exposure, because you are focused on operationalizing the automation and AI-ML, and there are certain areas you may not be able to manage, for instance, the way training gets done for your bots.

That's where Stealth is a capability we are implementing right now with digital workplace solutions, as part of the AIOps automation journey, for example. The other area where we are working very closely with some of our other partners, as well as Microsoft, is application security and hardening in the cloud.

How do you make sure that when you deploy certain applications in the cloud they are secure and not being breached, and that there are no intrusions when you make changes to those applications?

Those are two areas we are currently working on, the AIOps and MLOps automation and then the application security and hardening in the cloud, working with Microsoft as well.

Gardner: If I want to be as secure as I can, and I know that I’m going to be doing more in the cloud, what should I be doing now in order to make myself in the best position to take advantage of things like micro-segmentation and the technologies behind Stealth and how they apply to a cloud like Azure? How should I get myself ready to take advantage of these things?

Plan ahead to secure success 

McIntyre: The first thing is to remember how you plan and roll out your security estate. It should be no different from what you're doing with your larger IT planning anyway; it's all digital transformation. Start by closing the gap between the security teams and the rest of the organization. All the teams, business and IT, should be working together.

Learn How Unisys Stealth Security Simplifies Zero Trust Networks

We want to make sure that our customers go to the cloud in a secure way, without losing the ability to access their data. We continue to put more effort into very proactive services — architecture guidance, recommendations, things that can help people get started in the cloud. One example is Azure Blueprints, which provides configuration guidance and predefined templates that help an organization launch a resource in the cloud that's already compliant with FedRAMP, NIST, ISO, or HIPAA standards.
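
To illustrate the idea of being compliant from the get-go, here is a small, hedged Python sketch of a pre-deployment check that compares a planned resource configuration against a baseline of required controls. It mimics the intent of templates like Azure Blueprints but does not use any actual Azure API; the control names and values are assumptions.

```python
# Hypothetical compliance baseline: controls every new cloud resource must satisfy.
REQUIRED_CONTROLS = {
    "encryption_at_rest": True,
    "public_network_access": False,
    "diagnostic_logging": True,
}

def compliance_gaps(planned_config):
    """Return the controls the planned resource would violate."""
    return [name for name, required in REQUIRED_CONTROLS.items()
            if planned_config.get(name) != required]

plan = {"encryption_at_rest": True, "public_network_access": True}
gaps = compliance_gaps(plan)
print("blocked, fix first:" if gaps else "safe to deploy", gaps)
```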

We'll continue to invest in technologies that help customers securely deploy cloud resources from the get-go, so that we close those gaps in configuration and close the gaps in reporting and telemetry as well. And we can't do it without great partners that provide those customized solutions for each sector.

Gardner: Sudhir, last word to you. What’s your advice for people to prepare themselves to be ready to take advantage of things like Stealth?

Mehta: Look at a few things. One, focus on trusted identity in terms of who you work with and who you give access to. Even within your organization you obviously need to establish that trusted identity, and the way you do it is to keep it simple. Second, look at an overlay, network-agnostic framework, which is where Stealth can help you, and make sure identity is unique: one individual has one identity. Third, make sure it is irrefutable, so it's undeniable in terms of how you implement it. And fourth, make sure it has the highest level of efficacy, both in how you deploy it and in the way you architect your solution.

So, the net of it is, a) trust no one, b) assume a breach can occur, and then c) respond really fast to limit damage. If you do these three things, you can get to Zero Trust for your organization.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsors: Unisys and Microsoft.

SambaSafety’s mission to reduce risk begins in its own datacenter security partnerships

Security and privacy protection increasingly go hand in hand, especially in sensitive industries like finance and public safety.

For driver risk management software provider SambaSafety, protecting its business customers from risk is core to its mission — and that begins with protecting its own IT assets and workers.

Stay with us now as BriefingsDirect explores how SambaSafety adopted Bitdefender GravityZone Advanced Business Security and Full Disk Encryption to improve the end-to-end security of their operations and business processes.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To share their story, please welcome Randy Whitten, Director of IT and Operations at SambaSafety in Albuquerque, New Mexico. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Randy, tell us about SambaSafety, how big it is, and your unique business approach.

Randy Whitten

Whitten

Whitten: SambaSafety currently employs approximately 280 employees across the United States. We have four locations. Corporate headquarters is in Denver, Colorado. Albuquerque, New Mexico is another one of our locations. There’s Rancho Cordova just outside of Sacramento, California, and Portland, Oregon is where our transportation division is.

We also have a handful of remote workers from coast to coast and from border to border.

Gardner: And you are all about making communities safer. Tell us how you do that.

Whitten: We work with departments of motor vehicles (DMVs) across the United States, monitoring drivers for companies. We put partnerships together with state governments, and third-party information is provided that allows us to process reporting on critical driver information.

We seek to transform that data into action to protect the businesses and our customers from driver and mobility risk. We work to incorporate top-of-the-line security software to ensure that all of our data is protected while we are doing that.

Data-driven driver safety 

Gardner: So, it’s all about getting access to data, recognizing where risks might emerge with certain drivers, and then alerting those people who are looking to hire those drivers to make sure that the right drivers are in the right positions. Is that correct?

Whitten: That is correct. Since 1998, SambaSafety has been the pioneer and leading provider of driver risk management software in North America. SambaSafety has led the charge to protect businesses and improve driver safety, ultimately making communities safer on the road.

Our mission is to guide our customers, including employers, fleet managers, and insurance providers, to make the right decisions at the right time by collecting, correlating, and analyzing motor vehicle records (MVRs) and other data resources. We identify driver risk and enable our customers to modify their drivers' behaviors, reduce accidents, ensure compliance, and help lower costs, ultimately improving driver and community safety.

Gardner: Is this for a cross-section of different customers? You do this for public sector and private sector? Who are the people that need this information most?

Whitten: We do it across both sectors, public and private. We do it across transportation. We do it across drivers such as Lyft drivers, Uber drivers, and transportation drivers — our delivery carriers, FedEx, UPS, etc. — those types of customers.

Gardner: This is such an essential service, because so much of our economy is on four wheels, whether it’s a truck delivering goods and services, transportation directly for people, and public safety vehicles. A huge portion of our economy is behind the wheel, so I think this is a hugely important service you are providing.

Whitten: That’s a good point, Dana. Yes, it is very much. Transportation drivers are delivering our commodities every day — the food that we consume, the clothes that we wear, also the parts that fix our vehicles to drive, plus also just to be able to get like those Christmas packages via UPS or FedEx — the essential items to our everyday living.

Gardner: So, this is mission-critical on a macro scale. Now, you also are dealing, of course, with sensitive information. You have to protect the privacy. People are entitled to information that's regulated, monitored, and provided accordingly. So you have to reduce risk across the board, do it the right way, and also protect your own systems, because you have that sensitive information going back and forth. Security and privacy are probably among your topmost mission-critical requirements.

Securing the sectors everywhere

Whitten: That is correct. SambaSafety holds a SOC 2 Type II certification. That is just the top layer of the security we use within our company, both for our endpoints and for our external customers.

Gardner: Randy, you described your organization as distributed. You have multiple offices, remote workers, and you are dealing with sensitive private and public sector information. Tell us what your top line thinking, your philosophy, about security is and then how you execute on that.

Whitten: Our top line is essentially to make sure that our endpoints are protected and that we are taking care of our employees internally, setting them up for success so they don't have to worry about security. All of our laptops are encrypted, and we have different levels of security within our organization. That puts our employees at ease so they can concentrate on taking care of our end customers.

Gardner: That’s right, security isn’t just a matter of being very aggressive, it also means employee experience. You have to give your people the opportunity to get their work done without hindrance — and the performance of their machine, of course, is a big part of that.

Tell us about the pain points, what were the problems you were having in the past that led you into a new provider when it comes to security software?

Whitten: One of the things we had to deal with in the IT department here at SambaSafety was that when tickets came in, they were typically about memory usage: applications were locking up the computers because it took a lot of resources to launch the application.

We also were seeing threats getting through the previous antivirus solution, and then just the cost, the cost of that solution was increasing month over month. Every time we would add a new license it would seem like the price point would jump.

Gardner: I imagine you weren’t seeing them as a partner as much as a hindrance.

Whitten: Yes, that is correct. It started to seem like it was a monthly call, then it turned into a weekly call to their support center just to be able to see if we could get additional support and help from them. So that brought up, “Okay, what do we do next and what is our next solution going to look like?”

Gardner: Tell me about that process. What did you look at, and how did you make your choices?

Whitten: We did an overall scoping session and brought in three different antivirus solution providers. All of them measured up as candidates to be the next vendor we would work with, but Bitdefender came out on top. It was a solution we could put into our cloud-hosted environment, it worked on our endpoints, and it ensured that all of our employees are protected.

Gardner: So you are using GravityZone Advanced Business Security, Full Disk Encryption, and the Cloud Management Console, all from Bitdefender, is that correct?

Whitten: That is correct. The previous disk encryption solution is just about phased out. Currently about 90 percent of our endpoints are on Bitdefender disk encryption, and we have had zero issues with it.

Gardner: I have to imagine you are not just protecting your endpoints, but you have servers and networks, and other infrastructure to protect. What does that consist of and how has that been going?

Whitten: That is correct. We have approximately 280 employees, which equals 280 laptops to be protected. We also have a fair amount of additional hardware that has to be protected, and those endpoints have to be secured. About 30 percent of that additional hardware, i.e. the Macs within our organization, is also under Bitdefender protection.

Gardner: And everyone knows, of course, that management of operations is essential for making sure that nothing falls through the cracks — and that includes patch management, seeing what's going on with machines, and getting alerts about where you might be vulnerable.

So tell us about the management, the Cloud Console, particularly as you are trying to do this across a hybrid environment with multiple sites?

See what’s secure to ensure success 

Whitten: The Bitdefender console, being able to log on and see what's happening, has been vital to our success. I can't say that enough.

And it goes as far as information gathering, dashboard, data analytics, network scanning, and the vulnerability management – just being able to ensure our assets are protected has been key.

Also, we can watch the alerting, driven by machine intelligence and machine learning (ML), to ensure that behavior is not changing and that our systems do not get infected in any way.

Gardner: And the more administration and automation you get, the more you are able to devote your IT operations people to other assets, other functions. Have you been able to recognize, not only an improvement in security, but perhaps an easing up on the man hours and labor requirements?

Whitten: Sure. In the first 60 days of our implementation we saw a quick return on investment (ROI). We were able to free up team resources to focus on other tickets and other items that came into our department's work scope.

Bitdefender was already out there, and it was managing itself, it was doing what we were paying for it to do — and it was actually a really good choice for us. The partnership with them is very solid, we are very pleased with it, it is a win-win situation for both of our companies.

Gardner: Randy, I have had people ask me, “Why do I need Full Disk Encryption? What does that provide for me? I am having a hard time deciding whether it’s the right thing for our organization.”

What were your requirements for widespread encryption and why do you think that’s a good idea for other organizations?

Whitten: The most common reason to have Full Disk Encryption is theft: you are at the store, someone breaks into your car, steals your laptop bag, or sees your computer lying out and takes it. As the Director of IT and Operations for SambaSafety, my goal is to ensure that our assets are protected. Having Full Disk Encryption on board that laptop gives me a chance to sleep a little easier at night.

Gardner: You are not worried about that data leaving the organization because you know it’s got that encryption wrapper.

Whitten: That is correct. It’s protected all the way around.

Gardner: As we start to close out, let’s look to the future. What’s most important for you going forward? What would you like to see improve in terms of security, intelligence and being able to monitor your privacy and your security requirements?

Scope out security needs

Whitten: The big trend right now is to ensure that we, and Bitdefender, stay up to date on the latest intrusions, so that our software stays current and we push those updates out to our machines.

Also just continue to be right on top of the security game. We have enjoyed our partnership with Bitdefender to date and we can’t complain, and for sure it has been a win-win situation all the way around.

Gardner: Any advice for folks that are out there, IT operators like yourself that are grappling with increased requirements? More people are seeing compliance issues, audit issues, paperwork and bureaucracy. Any advice for them in terms of getting the best of all worlds, which is better security and better operations oversight management?

Whitten: Definitely have a good scope of what you are looking for in your organization. Every organization is different. What tends to happen is that you go in looking for a solution without having all of the details about what would meet the needs of your organization.

Secondly, get the buy-in from your leadership team. Pitch the case to ensure that you are doing the right thing, that you are bringing the right vendor to the table, so that once that solution is implemented, then they can rest easy as well.

For every company executive across the world right now who has any responsibility for data, security is definitely at the top of their mind. Security is at the top of my mind every single day: protecting our customers, protecting our employees, and making sure that our data stays protected and secured so that the bad guys can't have it.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Bitdefender.

Why flexible work and the right technology may just close the talent gap

Companies struggle to find qualified workers in the mature phase of any business cycle. Yet as we enter a new decade in 2020, they have more than a hyper-low unemployment rate to grapple with.

Businesses face a gaping qualitative chasm between the jobs they need to fill and the interest of workers in filling them. As a result, employees have more leverage than ever to insist that jobs cater to their lives, locations, and demands to be creatively challenged.

Accordingly, IDC predicts that by 2021, 60 percent of Global 2000 companies will have adopted a future workspace model — flexible, intelligent, collaborative, virtual, and physical work environments.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Stay with us now as BriefingsDirect explores how businesses must adapt to this new talent landscape and find the innovative means to bring future work and workers together. Our flexible work solutions panel consists of Stephane Kasriel, the former Chief Executive Officer and a member of the board at Upwork, and Tim Minahan, Executive Vice President of Strategy and Chief Marketing Officer at Citrix. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: If flexible work is the next big thing, that means we have been working for the past decade or two in an inflexible manner. What’s wrong with the cubicle-laced office building and the big parking lot next to the freeway model?

Tim Minahan

Minahan

Minahan: Dana, the problem dates back a little further. We fundamentally haven’t changed the world of work since Henry Ford. That was the model where we built big work hubs, big office buildings, call centers, manufacturing facilities — and then did our best to hire as much talent around that.

This model just isn’t working anymore against the backdrop of a global talent shortage, which is fast approaching more than 85 million medium- to high-skilled workers. We are in dire need of more modern skill sets that aren’t always located near the work hubs. And to your earlier point, employees are now in the driver’s seat. They want to work in an environment that gives them flexible work and allows them to do their very best work wherever and whenever they want to get it done.

Gardner: Stephane, when it comes to flexible work, are remote work and freelance work the same? How wide is this spectrum of options when it comes to flexible work?

Kasriel: Almost by definition, most freelance work is done remotely. At this stage, freelancing is growing faster than traditional work, about three times faster, in fact. About 35 percent of US workers are doing some amount of freelancing. And the vast majority of it is skilled work, which is typically done remotely.

Stephane Kasriel

Kasriel

Increasingly what we see is that freelancers become full-time freelancers; meaning it’s their primary source of income. Usually, as a result of that, they tend to move. And when they move it is out of big cities like San Francisco and New York. They tend to move to smaller cities where the cost of living is more affordable. And so that’s true for the freelance workforce, if you will, and that’s pulling the rest of the workforce with it.

What we see increasingly is that companies are struggling to find talent in the top cities where the jobs have been created. Because they already use freelancers anyway, they are also allowing their full-time employees to relocate to other parts of the country, as well as to hire people away from their headquarters, people who essentially work from home as full-time employees, remotely.

Gardner: Tim, it sounds like Upwork and its focus on freelance might be a harbinger of what’s required to be a full-fledged, flexible work support organization. How do you view freelancing? Is this the tip of the arrow for where we are headed?

Minahan: Against the backdrop of a global talent shortage and outdated hub-and-spoke work models, the more innovative companies — the ones securing the best talent — go to where the talent is, whether using contingent or full-time workers.

They are also shifting from the idea of having a full-time employee staff to having pools of talent. These are groups that have the skills and capabilities to address a specific business challenge. They will staff up on a given project.

Read the Report: The Potential Economic Impacts of a Flexible Working Culture

So, work is becoming much more dynamic. The leading organizations are tapping into that expertise and talent on an as-needed basis, providing them an environment to collaborate around that project, and then dissolving those teams or moving that talent on to other projects once the mission is accomplished.

Gardner: So, it’s about agility and innovation, being able to adapt to whatever happens. That sounds a lot like what digital business transformation is about. Do you see flexible work as supporting the whole digital transformation drive, too?

Minahan: Yes, I certainly do. In fact, what’s interesting is the first move to digital transformation was a shift to transforming customer experience, of creating new ways and new digital channels to engage with customers. It meant looking at existing product lines and digitizing them.

And along the way, companies realized two things. Number one, they needed different skills than they had internally. So the idea of the contingent worker or freelance worker who has that specific expertise becomes increasingly vital.

They also realized they had been asking employees to drive this digital transformation while anchoring them to archaic or legacy technology and a lot of bureaucracy that often comes with traditional work models.

And so there is now an increased focus at the executive C-suite level on driving employee experience and giving employees the right tools, the right work environment, and the flexible work models they need to ensure that they not only secure the best talent, but they can arm them to do their very best work.

Gardner: Stephane, for the freelance workforce, how have they been at adapting to the technologies required to do what corporations need for digital transformation? How does the technology factor into how a freelancer works and how a company can best take advantage of them?

Kasriel: Fundamentally, a talent strategy is a critical part of digital transformation. If you think about digital transformation, it is the what, and the talent strategy is the how. And increasingly, as Tim was saying, as businesses need to move faster, they realize that they don’t have all the skills internally that they need to do digital transformation.

They have to tap into a pool of workers outside of the corporation. And doing this in the traditional way, using staffing firms or trying to find local people that can come in part-time, is extremely inefficient, incredibly slow, and incompatible with the level of agility that companies need to have.

So just as there was a digital transformation of the business firm, there is now also a digital transformation of the talent strategy for the firm. Essentially work is moving from an offline model to an online model. The technology helps with security, collaboration, and matching supply and demand for labor online in real-time, particularly for niche skills in short-duration projects.

Increasingly companies are reassembling themselves away from the traditional Taylorism model of silos, org charts, and people doing the same work every single day. They are changing to much more self-assembled, cross-functional, agile, and team-based work. In that environment, the teams are empowered to figure out what it is that they need to do and what type of talent they need in order to achieve it. That’s when they pull in freelancers through platforms such as Upwork to add skills they don’t have internally — because nobody has those internally.

And on the freelancer side, freelancers are entrepreneurs. They are usually very good at understanding what skills are in demand and acquiring those skills. They tend to train themselves much more frequently than traditional full-time employees because there is a very logical return on investment (ROI) for them to do so.

If I learn the latest Java framework in a few weeks, for example, I can then bill at a much higher rate than I otherwise could without those skills.
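
As a back-of-the-envelope illustration of that return on investment, here is a short Python calculation. Every number in it is an assumption made up for the example, not Upwork data.

```python
# Illustrative ROI of a freelancer spending a few weeks learning a new framework.
hours_training = 3 * 40       # roughly three weeks of full-time learning
rate_before    = 60.0         # assumed billing rate (USD/hour) before the new skill
rate_after     = 85.0         # assumed billing rate once the skill is billable
billable_hours = 30 * 46      # assumed billable hours over the following year

forgone_income = hours_training * rate_before                  # income given up while training
extra_income   = billable_hours * (rate_after - rate_before)   # uplift from the higher rate
payback_weeks  = forgone_income / (30 * (rate_after - rate_before))

print(f"upfront cost ~${forgone_income:,.0f}, extra income ~${extra_income:,.0f}/year")
print(f"training pays for itself after ~{payback_weeks:.0f} weeks of billable work")
```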

Gardner: Stephane, how does Upwork help solve this problem? What is your value-add?

Upwork secures hiring, builds trust 

Kasriel: We essentially provide three pieces of value-add. One is a very large database of freelancers on one side and a very large database of clients and jobs on the other side. With that scale comes the ability to have high liquidity. The median time to fill a job on Upwork right now is less than 24 hours, compared to multiple weeks in the offline world. That’s one big piece of it.

The second is around an end-to-end workflow and processes to make it easy for large companies to engage with independent contractors, freelancers, and consultants. Companies want to make sure that these workers don’t get misclassified, that they only have access to IT systems they are supposed to, that they have signed the right level of agreements with the company, and that they have been background checked or whatever other processes that the company needs.

Read the Report: The Potential Economic Impacts of a Flexible Working Culture

The third big piece is around trust and safety. Fundamentally, freelancers want to know that they are going to be working with reputable clients and that they are going to get paid. Conversely, companies are engaging with freelancers for things that might be highly strategic, have intellectual property associated with them, and so they want to make sure that the work is going to be done properly and that the freelancer is not going to be selling information from the company, as an example.

So, the three pieces around matching, collaboration and security software, and trust and safety are the things that large companies are using Upwork for to meet the needs of their hiring managers.

Fundamentally, we want to be invisible. We want the platform to look simple so that people can get things done with freelancers — and not have to think about all of the complexities of staying compliant with the various rules that large companies have for engaging with people in general, and with independent contractors in particular.

Mind the gap in talent, skills 

Gardner: Tim, a new study has been conducted by the Center for Business and Economic Research on these subjects. What are some of the findings?

Minahan: At Citrix, we are committed to helping companies drive higher levels of employee experience, using technology to create environments that allow much more flexible work models and empower employees to get their very best work done. We are always examining the direction of overall work models in the market, so we partnered on this research to better understand how to solve this massive talent crisis.

Consider that there is a gap of close to 90 million medium- to high-skilled workers around the globe, all of these unfilled jobs. There are a couple of ways to solve this. The best way is to expand the talent pool. So, as Stephane said, that can be through tapping into freelance marketplaces, such as Upwork, to find a curated path to the top talent, those who have the finest skills to help drive digital transformation.

But we can couple that with digital workspaces that allow flexible work models by giving the talent access to the tools and information they need to be productive and to collaborate. They can do that in a secure environment that leaves the company confident their information and systems remain secure.

The key findings of the Center for Business and Economic Research study are that we have an untapped market. Some 69 percent of people who currently are unemployed or economically inactive indicate that they would start working if given more flexible work models and the technology to enable them to work remotely.

Think about the massive shifts in the demographics of the workplace. We talk about millennials coming into the workforce, and new work models, and all of that’s interesting and important. But we have a massive other group of workers at the other end of the spectrum — the baby boomers — who have massive amounts of talent and knowledge and who are beginning to retire.

What if we could re-employ them on their own terms? Maybe a few days a week or a few hours a day, to contribute some of their expertise that is much needed to fill some of the skills gaps that companies have?

We are in a unique position right now and have an incredible opportunity to embrace these new work models, these new freelance marketplaces, and the technology to solve the talent gap.

Kasriel: We run a study every year called Freelancing in America; we have been running it for six years now. One of the highlights of the study is that 46 percent, so almost half of freelancers, say that they cannot take a traditional full-time job. That is primarily driven by health issues, by care duties, or by the fact that they live in a part of the US where there are no jobs for their skills. They tend to be more skilled and educated on average than non-freelancers, and they tend to be completely undercounted in the Bureau of Labor Statistics data every month.

So when we talk about no unemployment in the country, and when we talk about the skills gap, there is this other pool of talent that tends to be very resilient, really hardworking, and highly skilled — but who cannot commit to a traditional full-time job that requires them to be on-site.

Read the Report: The Potential Economic Impacts of a Flexible Working Culture

So, yes, there is a skills gap overall. If you look at the macro numbers, that is true. But at the level of the individual business firm, it’s much more a gap of flexibility, and a gap of imagination, than anything else. Firms are competing for the same talent in the same way and then wondering why they are struggling to attract fresh new talent and improve their diversity.

I tell them to go online and look at the talent available there. You will find a wealth of people who are extremely eager to work for you. In fact, they will probably be more loyal to your company than anybody else, because you are by far the best employer they could work with.

Gardner: To be clear, this is not North America or the US only. I have seen similar studies and statistics coming out of Europe and Japan. They differ from market to market, but it’s all about trying to solve the mismatch between employers and available potential talent.

Tim, people have been working remotely for quite a while now. Why is this no longer just an option, but a necessity, when it comes to flexible and remote work?

Minahan: It’s the market dynamics we have been talking about. Companies struggle to find the talent they need at scale in the locations where they traditionally have major office hubs. Out of necessity, to advance their business and access the skills they need, they must embrace more flexible work models. They need to be looking for talent in nontraditional ways, such as making freelance workers part of their regular talent strategies, and not an adjunct for when someone is out on sick leave.

And it’s really accelerating quite dramatically. We talk a lot about that talent crunch, but in addition, it’s also a skills gap. As Stephane was saying, so many of these freelance workers have the much-in-demand skills that people need.

Consider the innovators in the industry, folks like Amazon, who recently said, “Hey, we can’t find all of the talent we need with the skills that we need, so we are going to spend close to $1 billion to retrain a third of our workforce.”

They are expanding their talent pool. That’s what innovative companies are beginning to do. They are saying: “Okay, we have these constraints. What can we do, how can we work differently, how can we embrace technology differently, and how can we look at the workforce differently in order to expand our talent pool?”

Gardner: If you seek out the best technology to make that flexible workforce innovative, collaborative, and secure, are there other economic paybacks? If you do it right, can it also put money on the bottom line? What is the economic impact?

More remote workers, more revenue

Minahan: From the study that we did around remote workers and tapping into the untapped talent pool, the research found that this could equate to more than $2 trillion in added value per year — or a 10 percent boost to the US GDP. It’s because otherwise businesses are not able to deliver services because they don’t have the talent.

On a micro level, at an individual business level, when workers are engaged in these more flexible work models they are less stressed. They are far more productive. They have more time for doing meaningful work. As a result, companies that embrace these work models are seeing far higher revenue growth, sometimes upward of 2.5 times that of their peers, along with far higher profitability and far greater worker retention.

Kasriel: It’s also important to remember that the average American worker spends more time commuting to work than on vacation in a given year. Imagine if all of that time could be reclaimed: to be productive at work, to spend another couple of hours each day on work for the company, or to do other things in their lives, including consuming more goods and services, which would drive economic growth.

Right now, the waste that comes from companies requiring their workers to commute is probably the biggest waste that companies are creating in the economy. By the way, it also causes income inequality, congestion, and pollution. So there are countless negative externalities that nobody is even taking into account. Yet the time wasted by forcing workers to commute keeps increasing every year when it doesn’t need to.

Some 20 years ago, when people were talking about remote work, it felt challenging from a cultural standpoint. We were all used to working face-to-face. It was challenging from a technological standpoint. We didn’t have broadband, secure application environments such as Citrix, and video conferencing. The tools were not in the cloud. A lot of things made it challenging to work remotely — but now that cultural barrier is not nearly as big.

We are all more or less digital natives; we all use these tools. Frankly, even when you are two floors away in the same building, how many times do you take the elevator to go meet somebody face-to-face versus chat with them or do a video conference with them?

At this stage, whether you are two floors away or 200 miles away makes almost no difference whatsoever. Where it does make a difference is forcing people to have to come to work every single day when it adds a huge amount of constraint in their lives and it’s fundamentally not productive for the economy.

Minahan: Building on what Stephane said, the study we did found that in addition to unlocking that untapped pool of talent, those folks who do currently have full-time jobs, 95 percent of them said they would work from home at least twice a week if given the opportunity. To Stephane’s point, for that group alone the time they would save from commuting adds up to 105 hours of newly free time per year, time they didn’t have to spend commuting and doing unproductive things. Most of them said that they would put more hours into work because they didn’t have to deal with all the hassle of getting there.

Flexible work provides creativity 

Gardner: What about the quality of the work? It seems to me that creative work happens in its own ways, even in a state of leisure. I have to tell you some of the best creative thoughts I have occur when I’m in the shower. I don’t know why. So maybe creativity isn’t locked into a 9-to-5 definition.

Is there something in what we’re talking about that caters to the way the human brain works? As we get into the age of robotic process automation (RPA), should we look more to the ways people are intrinsically creative, and free that up?

Kasriel: Yes, the World Economic Forum has called attention to such changes in our evolution, the idea that progressively machines are going to be taking over the parts of our jobs that they can do better than we can. This frees us to be the best of ourselves, to be humans. The repetitive, non-cognitive work being done in a lot of offices is progressively going to be automated through RPA and artificial intelligence (AI). That allows us to spend more time on the creative work. The nature of creative work is such that you can’t order it on-demand, you can’t say, “Be creative in the next five minutes.”

It comes when it comes; it’s inspiration. So putting in artificial boundaries and saying, “You will be creative from 9-to-5, and you will only do this in the office environment,” is unlikely to be successful. Frankly, if you look at workplace management, you see companies increasingly trying to design work environments that are a mix between areas of the office where you can be very productive — by just doing the things that you need to do — and places where you can be creative and think.

And that’s just a band-aid solution. The real solution is to let people work from anywhere and let them figure out the times at which they are most creative and productive. Hold people accountable for an outcome, as opposed to holding them accountable for the number of fixed-time hours they give to the firm. That number is, after all, only weakly correlated with the output they actually generate for the company.

Minahan: I fully agree. If you look at the overall productivity and the GDP, productivity advanced consistently with each new massive innovation right up until recently. The advent of mobile devices, mobile apps, and all of the distractions from communications and chat channels that we have at work have reached a crescendo.

Read the Report: The Potential Economic Impacts of a Flexible Working Culture

On any given day, a typical employee spends nearly 65 percent of their time on such busy work: responding to Slack messages and being distracted by application alerts about tasks that may not be pertinent to their job. They spend another 20 percent of their time just searching for information. By some estimates, that leaves employees with less than two hours a day for the meaningful and rewarding work they were hired to do.

If we can free them up from those distractions and give them an environment to work where and how they want, one of the chief benefits is the capability to drive greater innovation and creativity than they can in an interruptive office environment.

Gardner: We have been talking in general terms. Do we have any concrete examples, use cases perhaps, that illustrate what we have been driving at? Why is it good for business and also for workers?

Blended workforce wins 

Kasriel: If you look at tech companies created in the last 15 to 20 years, increasingly you see what people call remote-first companies, where they try to hire people outside of their main headquarters first and only put people in the office if they happen to live nearby. And that leads to a blended workforce, a mix between full-time employees and freelancers.

The most visible of these companies started in open-source software development: Mozilla, the non-profit behind Firefox; the Wikimedia Foundation, the non-profit behind Wikipedia; Automattic, the for-profit open-source company that builds WordPress; and GitLab. Upwork itself is mostly distributed, with 2,000 people working in 800 different cities. InVision is another example.

These are very well-known tech companies that build products used by hundreds of millions of people; WordPress alone powers a large share of the web. Between full-time employees and freelancers, these companies account for well over 100,000 workers. They either have no office or most of their people are not working in an office.

The companies that are a little bit more challenging are the ones that have grown in a world where everybody was a full-time employee. Everybody was on-site. But progressively they have made a shift to more flexible work models.

Probably the company that I’ve seen to be the most publicly vocal about this is Microsoft. Microsoft started using Upwork a few years ago. At this stage, they have thousands of different freelancers working on thousands of different projects. Partly they do it because they struggle to find great talent in Redmond, Wash., just like everybody else. There is a finite talent pool. But partly they are doing it because it’s the right thing to do.

Increasingly we hear companies say, “We can do well, and we can do good at the same time.” That means helping people who may be disabled, people who have care duties, young parents with children at home, people who are retiring but are not fully willing to step out of the workforce, or people who just happen to live in smaller cities in the U.S. where, even if you have the skills, there are no local jobs.

And they have spoken about this in both terms: it’s the right thing for their shareholders and for their business, but it also helps society become more fair, distributing work in a way that benefits workers outside of the big tech hubs of San Francisco, Seattle, Boston, New York, and Austin.

Gardner: Tim, any examples that demonstrate why a future workspace model helps encourage this flexible work and why it’s good for both the employees and employers?

May the workforce be with you

Minahan: Stephane did a great job covering the more modern companies built from the ground up on flexible work models. He brought up an interesting point. It’s much more challenging for traditional or established companies to transition to these models. One that stands out and is relevant is eBay.

eBay, as we all know, is one of the largest digital marketplaces in the world. Like many others, they built call centers in major cities and hired a whole bunch of folks to answer calls and provide support to buyers and sellers conducting commerce in the marketplace. However, their competition was setting up call centers right down the street, so they were in constant churn — hiring, training, losing people, and needing to rehire. Finally they said, “This can’t go on. We have to figure out a different model.”

They embraced technology and, consequently, a more flexible work model. They went where the talent is: the stay-at-home parent in Montana, the retiree in Florida, the gig worker in New York or Boston. They armed these workers with a digital workspace that gave them the information, tools, and knowledge base they needed to answer customer questions, but in far more flexible work models. They could work three hours a day or maybe one day a week. eBay was able to Uberfy its workforce.

They started a year-and-a-half ago and now have close to 4,000 of these call center workers operating as a remote workforce, and it’s all transparent to the rest of us. They are delivering a higher level of service to customers by going to where the talent is. We never notice that these agents are not sitting in a call center somewhere; they are actually working from remote offices in all corners of the country.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Citrix.

As hybrid IT complexity ramps up, operators look to data-driven automation tools

The next edition of the BriefingsDirect Voice of the Innovator podcast series examines the role and impact of automation on IT management strategies.

Growing complexity from the many moving parts in today’s IT deployments is forcing managers to seek new productivity tools. Moving away from manual processes to bring higher levels of automation to data center infrastructure has long been a priority for IT operators, but now new tools and methods are making composability and automation better options than ever.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to help us learn more about the advancing role and impact from IT automation is Frances Guida, Manager of HPE OneView Automation and Ecosystem Product Management at Hewlett Packard Enterprise (HPE). The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What are the top drivers, Frances, for businesses seeking higher levels of automation and simplicity in their IT infrastructure?

Guida: It relates to what’s happening at a business level. It’s a truism that business today is moving faster than it ever has before. That puts pressure on all parts of a business environment — and that includes IT. And so IT needs to deliver things more quickly than they used to. They can’t just use the old techniques; they need to move to much more automated approaches. And that means they need to take work out of their operational environments.

Gardner: What’s driving the complexity that makes such automation beneficial?

IT means business 

Guida: It again starts from the business. IT used to be a support function, to support business processes. So, it could go along on its own time scale. There wasn’t much that the business could or would do about it.

Frances Guida

Guida

In 2020, technology is part of the fabric of most of the products, services, and experiences that businesses offer. So when technology is part of an offering, all of a sudden technology is how a business is differentiated. And because it differentiates the business, business leaders are not going to take, “Oh, we will get to it in 18 months,” as an answer. If that’s the answer they get from the IT department, they are going to look for other ways of getting things done.

And with the advances of public cloud technology, there are other ways of getting things done that don’t come from an internal IT department. So IT organizations need to be able to keep up with the pace of business change, because businesses aren’t going to accept their historical time scale.

Gardner: Does accelerating IT via automation require an ecosystem of partners, or is there one tool that rules them all?

Guida: This is not a one-size-fits-all world. I talk to customers in our HPE Executive Briefing Centers regularly. The first thing I ask them is, “Tell me about the toolsets you have in your environment.” I often ask them about what kinds of automation toolsets they have. Do you have Terraform or Ansible or Chef or Puppet or vRealize Orchestrator or something else? It’s not uncommon for the answer to be, “Yes.” They have all of them.

So even within a customer’s environment, they don’t have a single tool. We need to work with all the toolsets that the customers have in their IT environments.

Gardner: It almost sounds like you are trying to automate the automation. Is that fair?

Guida: We definitely are trying to take some of the hard work that has historically gone into automation and make it much simpler.

Complexity is Growing in the Data Center: What’s the Solution?

Gardner: IT operations complexity is probably only going to increase, because we are now talking about pushing compute operations — and even micro data centers — out to the edge in places like factories, vehicles, and medical environments, for example. Should we brace ourselves now for a continuing ramp-up of complexity and diversity when it comes to IT operations?

Guida: Oh, absolutely. You can’t have a single technology that’s going to answer everything. Is the end user going to interface through a short message service (SMS) or are they going to use a smartphone? Are they going to be on a browser? Is it an endpoint that interacts with a system that’s completely independent of any user base technology? All of this means that IT has to be multifaceted.

Even if we look at data center technologies, for the last 15 years virtualization has been pretty much the standard way that IT deploys new systems. Now, increasingly, organizations are looking at a set of applications that don’t run in virtual machines (VMs), but rather are container-based. That brings a whole other set of complexity they have to think about in their environments.

Complexity is like entropy; it just keeps growing. When we started thinking about bringing a lot more flexibility to on-premises data center environments, we looked holistically at the problem. I don’t think the problem can only be addressed through better automation; in fact, it has to be addressed at a deeper level.

And so with our composable infrastructure strategies, we thought architecturally about how we could bring the same kind of flexibility you have in a public cloud environment to on-premises data centers. We realized we needed a way to liberate IT beyond the boundaries of physical infrastructure by being able to group that physical infrastructure into pools of resources that could be much more fluid and where the physical aspects could be changed.

Now, there is some hardware infrastructure technology in that, but a lot of that magic is done through software, using software to configure things that used to be done in a physical manner.

So we defined a layer of software-defined intelligence that captures all of the things you need to know about configuring physical hardware — whether it’s firmware levels, BIOS settings, or connections. We define and calculate all of that in software.

And automation is the icing on that cake. Once you have your infrastructure that can be defined in software, you can program it. That’s where the automation comes in, being able to use everyday automation tools that organizations are already using to automate other parts of their IT environment and apply that to the physical infrastructure without a whole bunch of unnatural acts that were previously required if you wanted to automate physical infrastructure.

Gardner: Are we talking about a fundamental shift in how infrastructure should be conceived or thought of here?

Consolidate complexity via automation 

Guida: There has been a saying in the IT industry for a while about moving from pets to cattle. Now we even talk about thinking in terms of herds. You can brute-force that transition by trying to automate against all of the low-level application programming interfaces (APIs) in physical infrastructure today. Most infrastructure today is programmable, with rare exceptions.

But then you as the organization are doing the automation, and you must internalize that and make your automation account for all of the logic. For example, if you then make a change in the storage configuration, what does that mean for the way the network needs to be configured? What does that mean for firmware settings? You would have to maintain all of that in your own automation logic.

How to Simplify and Automate Your Data Center

There are some organizations in the world that have the scale of automation engineering to be able to do that. But the vast majority of enterprises don’t have that capability. And so what we do with composable infrastructure, HPE OneView, and our partner ecosystem is encapsulate all of that in our software-defined intelligence. So all you have to do is take that configuration file and apply it to a set of physical hardware. It brings things that used to be extremely complex down to what a standard IT organization is capable of doing today.
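To make that flow concrete, here is a minimal sketch in Python of what “take that configuration file and apply it to a set of physical hardware” can look like against the HPE OneView REST API, which is what the published Ansible, Terraform, and Chef integrations wrap. The appliance address, credentials, template name, and server-hardware URI are placeholders, and the exact endpoint paths and payload fields should be verified against the OneView API reference for your appliance version.

```python
# Minimal sketch, not production code: create a server profile from a OneView
# server profile template and assign it to a bare-metal server. The appliance
# address, credentials, names, and URIs below are placeholders; verify the
# endpoint paths and fields against your OneView API version before use.
import requests

ONEVIEW = "https://oneview.example.local"          # hypothetical appliance address
HEADERS = {"X-Api-Version": "1200", "Content-Type": "application/json"}

def login(user, password):
    """Open a session on the appliance and return its auth token."""
    resp = requests.post(f"{ONEVIEW}/rest/login-sessions", headers=HEADERS,
                         json={"userName": user, "password": password}, verify=False)
    resp.raise_for_status()
    return resp.json()["sessionID"]

def apply_template(token, template_name, server_hardware_uri, profile_name):
    """Derive a profile from a named template and bind it to target hardware."""
    auth = dict(HEADERS, Auth=token)
    # Look up the template by name.
    templates = requests.get(
        f"{ONEVIEW}/rest/server-profile-templates?filter=name='{template_name}'",
        headers=auth, verify=False).json()["members"]
    template = templates[0]
    # Ask OneView for a new-profile skeleton based on that template,
    # point it at the target server hardware, and create the profile.
    profile = requests.get(f"{ONEVIEW}{template['uri']}/new-profile",
                           headers=auth, verify=False).json()
    profile.update({"name": profile_name, "serverHardwareUri": server_hardware_uri})
    created = requests.post(f"{ONEVIEW}/rest/server-profiles",
                            headers=auth, json=profile, verify=False)
    created.raise_for_status()
    return created.headers.get("Location")  # async task URI to poll for completion

if __name__ == "__main__":
    session = login("administrator", "secret")
    print(apply_template(session, "container-host-template",
                         "/rest/server-hardware/example-id", "web-node-01"))
```

In practice, HPE’s published integrations (such as the Ansible modules and the Python SDK) handle the session and task management; the sketch simply shows that applying a personality to bare metal becomes a single API-driven operation rather than dozens of manual steps.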

Gardner: And not only is that automation going to appeal to the enterprise IT organizations, it’s also going to appeal to the ecosystem of partners. They now have the means to use the composable infrastructure to create new value-added services.

How does HPE’s composability benefit both the end-user organizations and the development of the partner ecosystem?

Guida: When I began the composable ecosystem program, we actually had two or three partners. This was about four years ago. We have now grown to more than 30 different integrations in place today, with many more partners that we are talking to. And those range from the big, everyday names like VMware and Microsoft to smaller companies that may be present in only a particular geography.

But what gets them excited is that, all of a sudden, they are able to bring better value to their customers. They are able to deliver, for example, an integrated monitoring system. Or maybe they are already doing application monitoring, and all of a sudden they can add infrastructure monitoring. Or they may already be doing facilities management, managing the power and cooling, and all of a sudden they get a whole bunch of data that used to be hard to put in one place. Now they can get a whole bunch of data on the thermals, of what’s really going on at the infrastructure level. It’s definitely very exciting for them.

Gardner: What jumps out at you as a good example of taking advantage of what composable infrastructure can do?

Guida: The most frequent conversations I have with customers today begin with basic automation. They have many tools in their environment; I mentioned many of them earlier: Ansible, Terraform, Chef, Puppet, or even just PowerShell or Python; or in the VMware environment, vRealize Orchestrator.

They have these tools, and they really appreciate what we have been able to do by publishing these integrations on GitHub, for example, by having a community, and by having direct support back to our engineers who are doing this work. They are able to add that into their tools environment pretty straightforwardly.

How a Software-Defined Data Center Lets the Smartest People Work for You

And we at HPE have also done some of the work ourselves in the open source tools projects. Pretty much every automation tool that’s out there in mainstream use by IT — we can handle it. That’s where a lot of the conversations we have with customers begin.

If they don’t begin there, they start back in basic IT operations. One of the ways people take advantage of the automation in HPE OneView — but they don’t realize they are taking advantage of automation — is in how OneView helps them integrate their physical infrastructure into a VMware vCenter or a Microsoft System Center environment.

Visualize everything, automatically

For example, in a VMware vCenter environment, an administrator can use our plug-in and it automatically sucks in all of the data from their physical infrastructure that’s relevant to their VMware environment. They can see things in their vCenter environment that they otherwise couldn’t see.

They can see everything from a VM that’s sitting on the VM host that’s connected through the host bus adapters (HBAs) out to the storage array. There is the logical volume. And they can very easily visualize the entire logical as well as physical environment. That’s automation, but you are not necessarily perceiving it as automation. You are perceiving it as simply making an IT operations environment a lot easier to use.

For that level of IT operations integration, VMware and Microsoft environments are the poster children. But for other tools, like Micro Focus and some of the capacity planning tools, and event management tools like ServiceNow – those are another big use case category.

The automation benefits – instead of just going down into the IT operations – can also go up to allow more cloud management. Another way IT organizations take advantage of the HPE automation ecosystem is to say, “Okay, it’s great that you can automate a piece of physical infrastructure, but what I really need to do — and what I really care about — is automating a service. I want to be able to provision my SQL database server in the cloud.”

That not only affects infrastructure pieces, it touches a bunch of application pieces, too. Organizations want it all done through a self-service portal. So we have a number of partners who enable that.

Morpheus comes to mind. We have quite a lot of engagements today with customers who are looking at Morpheus as a cloud management platform and taking advantage of how they can not only provision the logical aspects of their cloud, but also the physical ones through all of the integrations that we have done.

How to Simplify, Automate, and Develop Faster

Gardner: How do HPE and the partner ecosystem automate the automation, given the complexity that comes with the newer hybrid deployment models? Is that what HPE OneView is designed to help do these days?

Automatic, systematic, cost-saving habit 

Guida: I want to talk about a customer that is an online retailer. The retail world is obviously highly dynamic, and technology is at the very forefront of the product they deliver; in fact, technology is the product they deliver.

They have a very creative marketing department that is always looking for new ways to connect to their customers. That marketing department has access to a set of application developers who are developing new widgets, new ways of connecting with customers. Some of those developers like to develop in VMs, which is more old school; some of the developers are more new school and they prefer container-based environments.

The challenge the IT department has is that from one week to the next they don’t fully know how much of their capacity needs to be dedicated to a VM versus a container environment. It all depends on which promotions or programs the business decides it wants to run at any time.

So the IT organization needed a way to quickly switch an individual VM host server to be reconfigured as a bare-metal container host. They didn’t want to pay a VM tax on their container host. They identified that if they were going to do that manually, there were dozens and dozens — I think they had 36 or 37 — steps that they needed to do. And they could not figure out a way to automate individually each one of those 37 steps.

When we brought them an HPE Synergy infrastructure — managed by OneView, automated with Ansible — they instantly saw how that was going to help solve their problems. They were able to change their environment from one personality to another in a completely automated fashion. And now they can do that changeover in just 30 minutes, instead of needing dozens of manual steps. They have zero manual steps; everything is fully automated.

And that enables them to respond to the business requirements. The business needs to be able to run whatever programs and promotions it is that they want to run — and they can’t be constrained by IT. Maybe that gives a picture of how valuable this is to our customers.
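As a rough illustration of the changeover Guida describes, here is a sketch, under the same assumptions as the earlier OneView example, of a “personality switch” routine: power the host off, retire its VM-host profile, apply a container-host profile, and power it back on. The URIs, the session token handling, and the fixed sleeps are placeholders for what a real playbook would do with proper task polling.

```python
# Sketch only: repurpose one physical server from VM host to bare-metal
# container host by swapping its OneView server profile. URIs, token, and
# payloads are placeholders; a real implementation would poll the task URIs
# returned by each call instead of sleeping.
import time
import requests

ONEVIEW = "https://oneview.example.local"   # hypothetical appliance address
AUTH = {"X-Api-Version": "1200", "Content-Type": "application/json",
        "Auth": "session-token-from-login"}  # obtained as in the earlier sketch

def switch_personality(server_hw_uri, old_profile_uri, new_profile_body):
    # 1. Power the server off before touching its profile.
    requests.put(f"{ONEVIEW}{server_hw_uri}/powerState", headers=AUTH,
                 json={"powerState": "Off", "powerControl": "MomentaryPress"},
                 verify=False)
    time.sleep(60)
    # 2. Remove the old VM-host profile.
    requests.delete(f"{ONEVIEW}{old_profile_uri}", headers=AUTH, verify=False)
    time.sleep(60)
    # 3. Create the container-host profile (for example, derived from a template).
    requests.post(f"{ONEVIEW}/rest/server-profiles", headers=AUTH,
                  json=new_profile_body, verify=False)
    time.sleep(60)
    # 4. Power the server back on with its new personality.
    requests.put(f"{ONEVIEW}{server_hw_uri}/powerState", headers=AUTH,
                 json={"powerState": "On", "powerControl": "MomentaryPress"},
                 verify=False)
```

Wrapped in an Ansible playbook and triggered on demand, that is essentially how dozens of manual steps collapse into a zero-touch, 30-minute changeover.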

Gardner: Yes, it speaks to the business outcomes, which are agility and speed, and at the same time the IT economics are impacted there as well.

Speaking of IT economics and IT automation, we have been talking in terms of process and technology. But businesses are also seeking to simplify and automate the economics of how they acquire and spend on IT, perhaps more on a pay-per-use basis.

Is there alignment between what you are doing in automation and what HPE is doing with HPE GreenLake? Do the economics and automation reinforce one another?

How to Drive Innovation and Automation in Your Data Center

Guida: Oh, absolutely. We bring physical infrastructure flexibility, and HPE GreenLake brings financial flexibility. Those go hand in hand. In fact, the example that I was just speaking about, the online retailer, they are very, very busy during the Christmas shopping season. They are also busy for Valentine’s Day, Mother’s Day, and back-to-school shopping. But they also have times where they are much less busy.

They have HPE GreenLake integrated into their environment, so in addition to having physical flexibility in their environment, they are financially aligned through a flexible capacity program, paying for technology in the way their business model works. So, these things go hand-in-hand.

As I said earlier, I talk to a lot of HPE customers because I am based in the San Francisco Bay Area, where we have our corporate headquarters, and I am in our Executive Briefing Center two to three times a week. There are almost no conversations I am part of that don’t eventually lead to the financial aspects, as well as the technical aspects, of how the technology works.

Gardner: Because we have opened IT automation up to the programmatic level, a new breed of innovation can be further brought to bear. Once people get their hands on these tools and start to automate, what have you seen on the innovation side? What have people started doing with this that you maybe didn’t even think they would do when you designed the products?

Single infrastructure signals innovation

Guida: Well, I don’t know that we didn’t think about this, but one of the things we have been able to do is make real, in an on-premises IT environment, something the IT industry has been talking about for a while.

There are lots of organizations that have IT capacity that is only used some of the time. A classic example is an engineering organization that provides a virtual desktop infrastructure (VDI) capability for engineers. These engineers need a bunch of analytics applications — maybe it’s genomic engineering, seismic engineering, or fluid dynamics in the automotive industry. They have multiple needs. Typically they have been running those on different sets of physical infrastructures.

With our automation, we can enable them to collapse that all into one set of infrastructure, which means they can be much more financially efficient. Because they are more financially efficient on the IT side, they are able to then devote more of their dollars to driving innovation — finding new ways of discovering oil and gas under the ground, new ways of making automobiles much more efficient, or uncovering new secrets within our DNA. By spending less on their IT infrastructure, they are able to spend more on what their core business innovation should be.

Gardner: Frances, I have seen other vendors approach automation with a tradeoff. They say, “Well, if you only use our cloud, it’s automated. If you only use our hypervisor, it’s automated. If you only use our database, it’s automated.”

But HPE has taken a different tack. You have looked at heterogeneity as the norm and the complexity as a result of heterogeneity as what automation needs to focus on. How far ahead is HPE on composability and automation? How differentiated are you from others who have put a tradeoff in place when it comes to solving automation?

Guida: We have had composable infrastructure on the market for three-plus years now. Our HPE Synergy platform, for example, now has a more than $1 billion run rate for HPE. We have 3,600 customers and counting around the world. It’s been a tremendously successful business for us.

I find it interesting that we don’t see a lot of activity out there, of people trying to mimic or imitate what we have done. So I expect composability and automation will remain fundamentally differentiating for us from many of our traditional on-premises infrastructure competitors.

It positions us very well to provide an alternative for organizations who like the flexibility of cloud services but prefer to have them in their on-premises environments. It’s been tremendously differentiating for us. I am not seeing anyone else with anything comparable coming on strong.

Gardner: Let’s look to the future. Increasingly, not only are companies looking to become data-driven, but IT organizations are also seeking to become data-driven. As we gather more data and inferences, we can start to be predictive in optimizing IT operations.

I am, of course, speaking of AIOps. What does that bring to the equation around automation and composability? How will AIOps change this in the coming couple of years?

Automation innovation in sight with AIOps

Guida: That’s a real opportunity for further innovation in the industry. We are at the very early stages of taking advantage, in a systematic way, of all the insights we can derive from knowing what is actually happening within our IT environments, and of mining those insights. Once we have mined those insights, it creates the possibility for us to take automation to another level.

We have been throwing around terms like self-healing for a couple of decades, but a lot of organizations are not yet ready for something like self-healing infrastructure. There is a lot of complexity within our environments. And when you put that into a broader heterogeneous data center environment, there is even more complexity. So there is some trepidation.

How to Accelerate to a Self-Driving Data Center

Over time, for sure, the industry will get there. We will be forced to get there because we are going to be able to do that in other execution venues like the public cloud. So the industry will get there. The whole notion of what we have done with automation of composable infrastructure is absolutely a great foundation for us as we take our customers toward these next journeys around automation.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

How MSP StoredTech brings comprehensive security services to diverse clients using Bitdefender

The choice of bedrock security technology can make or break managed service providers’ (MSPs’) ability to scale, grow rapidly while remaining efficient, and maintain top quality customer service.

The next edition of BriefingsDirect explores how by simultaneously slashing security-related trouble tickets and management costs by more than 75 percent, Stored Technology Solutions, or StoredTech, grew its business and quality of service at the same time.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Stay with us as we learn how StoredTech adopted Bitdefender Cloud Security for Managed Service Providers to dramatically improve the security of their end users — and develop enhanced customer loyalty.

Here to discuss the role of the latest Bitdefender security technology for making MSPs more like security services providers is Mark Shaw, President of StoredTech in Raleigh, North Carolina. The interview is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Mark, what trends are driving the need for MSPs like yourself to provide security that enhances the customer experience?

Mark Shaw

Shaw

Shaw: A lot of things are different than they were back in the day. Attacks are very easy to implement. For a dollar, you can buy a malware kit on the Dark Web. Anyone with a desire to create havoc with malware, ransomware, or the like, can do it. It’s no longer a technical scenario, it’s simply a financial one.

At the same time, everyone is now a target. Back in the day, attackers focused on the big, high-value targets. People would spend a lot of time, effort, and technical ability to hack large enterprises. But now, there is no business that’s too small.

If you have data and you don’t want to lose it, you’re a target. Of course, the worst part for us is that MSPs are now directly being targeted. So no matter what the size, if you are an MSP, they want access to your clients.

China has entire groups dedicated to hacking only MSPs. So the world landscape has dramatically shifted.

Gardner: And, of course, the end user doesn’t know where the pain point is. They simply want all security all the time — and they want to point the finger at you as the MSP if things aren’t right.

Shaw: Oh, absolutely right; that’s what we get paid to do.

Gardner: So what were your pain points? What from past security providers and vendors created the pain that made you seek a higher level of capability?

Just-right layers of security prevent pain 

Shaw: We see a lot of pain points when it comes to too many layers. We talk about security being a layering process, which is fantastic. You want the Internet provider to be doing their part. You want the firewall to do its part.

When it comes to security, a lot of the time we see way too many security applications from different vendors running on a machine. That really decimates the performance. End users really don’t care; they do care about security — but they aren’t going to sacrifice performance.

We also see firms that spend all their time meeting all the industry and government regulations, and they are still completely exposed. What we tell people is that just because you check a box doesn’t mean you are in compliance, and it certainly doesn’t mean you are secure.

For small business owners, we see all these pain points in how they handle their compliance and security needs. And, of course, in our world we are seeing pain points because cybersecurity insurance is becoming more prevalent and is paying out on cryptovirus and ransomware attacks. So we are seeing a chicken-and-egg effect, with a recent escalation in malware and ransomware attacks [because of those payments].

Gardner: Tell us about StoredTech. What’s your MSP about?

The one throat to choke 

Shaw: We are both an MSP and a master MSP. We refer to ourselves as the “one throat to choke.” Our job is to provide solutions that have depth of scale. For us, it’s all about being able to scale.

We provide the core managed services that most MSPs provide, but we also provide telco services. We help people select and provide Internet services, and we spend a lot of time working with cameras and access control, which require an entirely different level of security and licensing.

If it’s technology-related, we don’t want customers pointing fingers and saying, “Well, that’s the telephone guys’ problem,” or, “That’s the guy with the cameras and the access control, that’s not us.”

We remove all of that finger-pointing. Our job is to delight our customers by finding ways to say, “Yes,” and to solve all of their technology needs.

Gardner: You have been in business for about 10 years. Do you focus on any particular verticals, size of company, or specialize?

Shaw: We really don’t, unlike the trends in the industry today. We are a product of our scars. When I worked for corporate America, we didn’t know we were getting laid off until we read it in the newspaper. So, I don’t want to have any one client. I don’t want to have anybody surprising us.

We have the perfect bell-curve distribution. We have clients who are literally one guy with a PC in his basement running an accounting firm, all the way up to global clients with 30,000 endpoints and employees.

We have diverse geographies as well as technical verticals among our clients — everything from healthcare to manufacturing, retail, other technology companies; you name it. We resell them as well. For us, we are not siloed. We run the gamut. Everybody needs technology.

Gardner: True. So, one throat to choke is your value, and you are able to extend that and scale up to 30,000 employees or scale down to a single seat. You must have been very choosy about improving your security posture. Tell us about your security journey.

Shaw: Our history goes way back. We started with the old GFI MAX product, a remote monitoring and management (RMM) tool that tied to VIPRE. SolarWinds acquired that product, and we got our first taste of the Bitdefender engine. We loved what Bitdefender did. When Kaseya was courting us to work with them, we told them, “Guys, we need to bring Bitdefender with us.”

At that point in time, we had no idea that Bitdefender also had an entire GravityZone platform with an MSP focus. So when we were able to get onto the Bitdefender GravityZone platform, it was just amazing for us.

We actually used Bitdefender as a sales tool against other MSPs and their security platforms by saying, “Hey, listen. If we come in, we are going to put in a single agent that’s the security software, right? Your antivirus, your content filtering, your malware detection and prevention – and it’s going to be lighter and faster. We are going to speed up your computers by putting this software on.”

We went from VIPRE software to the Bitdefender engine, which really wasn’t the full Bitdefender, to then the full Bitdefender GravityZone when we finally moved with the Kaseya product. Bitdefender lit up our world. We were able to do deployments faster and quicker. We really just started to scale at that point.

Gardner: And just as you want to be one throat to choke to your customers, I am pretty sure that Bitdefender wants to be one throat to choke for you. How does Bitdefender help you protect yourselves as an MSP?

A single-point solution that’s scalable 

Shaw: For us, it’s really about being able to scale quickly and easily. It’s the ability to have customizable solutions. Whether we are deploying on a Mac, on a SQL Server, or in a Microsoft Azure instance in the cloud, we need scalability. But at the same time, we need customization: the ability to change and modify exactly what we want out there.

The Bitdefender platform gives us the capability to either ramp up or scale down the solution based on which applications are running and what the end user expects. It’s the best of both worlds. We have this 800-pound gorilla, one single point of security, and at the same time we can get so granular with it that we can solve almost any client’s needs without having to retool and without layering on multiple products.

In the past, we used other antivirus products and layered them with content filtering products. We had layer after layer after layer, which for our engineers meant that if you wanted to see what was wrong, you had to log in to one of four consoles. Today, you log in to one console and can see the status of everything.

By keeping it simple, the old KISS method, we were able to dramatically scale and ramp up — whether that’s 30,000 endpoints or one. We have a template for almost anything.

We have a great hashtag called automate-or-die. The concept is to automate so we can give every customer exactly what they need without having to retool the environment or add layer upon layer of products, all of which have an impact on the end user.

Gardner: You are a sophisticated enough organization that you want automation, but you also want customization. That’s often a difficult balance. What is it about Bitdefender Cloud Security for MSPs that gets that balance?

Shaw: It comes down to being able to see everything in one dashboard — to have everything laid out in front of you — and to apply different templates and configurations to different types of machines based on their core function. That allows us to provide customization without the large overhead of manual configuration every single time.

Being able to add that value — but not those additional man-hours — really brings it all together. Having that single platform, which we never had in the old days, gives us that. We can see it, deploy it, understand it, and report on it. Again, it’s exactly what we tell our customers: come to us for one throat to choke.

And we basically demanded that Bitdefender have that same throat to choke for us. We want it all easy, customizable — we want everything. We want the Holy Grail, the golden goose — but we don’t want to put any effort into it.

Gardner: Sounds like the right mix to me. How well has Bitdefender been doing that for you? What are the results? Do you have some metrics to measure this?

The measure of success 

Shaw: We do have some metrics, as you mentioned. We track what we have to do, how much time we spend on support, and how quickly we can implement and deploy.

We have seen malware infections reduced by about 80 percent. We took weekly trouble tickets tied to previous antivirus and security vendors from 50 down to about one a week. We slashed administration costs by about 75 percent. Customer satisfaction has never been higher.

In the old days of multiple layers of security, we got calls, “My computer is running slow.” And we would find that an antivirus agent was scanning or a content filtering app was doing some sort of update.

Now we are able to say, “You know what? This is really easy.” We have one Bitdefender agent to deploy. We go out there, we deploy it, and it’s super simple. We just have an easier time now managing that entire security apparatus versus what we used to do.

Gardner: Mark, you mentioned that you support a great variety of sizes of organizations and types of vertical industries. But nowadays there’s a great variety between on-premises, cloud, and a hybrid continuum. It’s difficult for some vendors to support that continuum.

How has Bitdefender risen to that challenge? Are you able to support your clients whether they are on-premises, in the cloud, or both?

No dark cloud on the horizon 

Shaw: If you look at the complexion of most customers nowadays that’s exactly what you see. You see a bunch of people who say, “I am never, ever taking my software off-premises. It’s going to stay right here. I don’t trust the cloud. I am never going to use it.” You have those “never” people.

You have some people who say, “I’d really love to go to the cloud 100 percent, but these four or five applications aren’t supported. So I still need servers, but I’d love to move everything else to the cloud.”

And then, of course, we have some clients who are literally born in the cloud: “I am starting a new company and everything is going to be cloud-enabled. If you can’t put it up in the cloud, if you can’t put it in Azure or something of this sort, don’t even talk to us about it.”

The nice part about that is, it doesn’t really matter. At the end of the day, we all make jokes. The cloud is just somebody else’s hardware. So, if we are responsible for either those virtual desktop infrastructure (VDI) clients, or those servers, or those physical workstations — whatever the case may be — it doesn’t matter. If it’s an Exchange Server, a SQL Server, an app server, or an Active Directory server, we have a template. We can deploy it. It’s quick and painless.

Knowing that Bitdefender GravityZone is completely cloud-centric means that I don’t have to worry about loading anything on-premises. I don’t have to spin up a server to manage it – it just doesn’t matter. At the end of the day, whatever the complexion of the customer is we can absolutely tailor to their needs with a Bitdefender product without a lot of headaches.

Gardner: We have talked about the one throat and the streamlining from a technology and a security perspective. But, as a business, you also want to streamline operations, billing, licensing, and make sure that people aren’t being overcharged or undercharged. Is there anything about the Bitdefender approach, in the cloud, that’s allowed you to have less complexity when it comes to cost management?

Keep costs clean and simple 

Shaw: The nice part about it, at least for us, is that we don’t put a client out there without Bitdefender. For us it’s almost one-to-one: for every RMM agent deployed, there is one Bitdefender agent deployed. It’s clean and simple, there is no fuss. If a client is working with us, they are going to be on our solutions and our processes.

Going back to that old KISS method, we want to just keep it simple and easy. When it comes to the back-office billing, if we have an RMM agent on there, it has a Bitdefender agent. Bitdefender has a great set of application programming interfaces (APIs). Not to get super-technical, but we have a developer on staff who can mine those APIs, pull that stuff out, make sure that we’re synchronized to our RMM product, and just go from there.

As long as we have a simple solution and a simple way of billing on the back end, clients don’t mind. Our accounting department really likes it because if there’s an RMM agent on there, there’s a Bitdefender agent, and it’s as simple as that.
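To make the back-office reconciliation Shaw describes concrete, here is a minimal Python sketch that compares the endpoints reported by GravityZone with the devices known to an RMM platform, so billing stays one-to-one. The GravityZone URL, JSON-RPC method name, response fields, and the RMM endpoint are assumptions for illustration only; consult the real API documentation before relying on any of them.

```python
# A minimal reconciliation sketch. The GravityZone URL, JSON-RPC method name,
# response fields, and the RMM endpoint below are assumptions for illustration;
# consult the real API documentation before relying on any of them.
import requests

GZ_URL = "https://cloud.gravityzone.bitdefender.com/api/v1.0/jsonrpc/network"  # assumed
GZ_API_KEY = "YOUR_GRAVITYZONE_API_KEY"  # placeholder

def gravityzone_hostnames():
    """Fetch protected endpoint names from GravityZone (method name assumed)."""
    payload = {"jsonrpc": "2.0", "id": 1, "method": "getEndpointsList", "params": {"page": 1}}
    resp = requests.post(GZ_URL, json=payload, auth=(GZ_API_KEY, ""), timeout=30)
    resp.raise_for_status()
    items = resp.json().get("result", {}).get("items", [])
    return {item["name"].lower() for item in items}

def rmm_hostnames():
    """Fetch managed device names from the RMM platform (hypothetical endpoint)."""
    resp = requests.get("https://rmm.example.com/api/devices", timeout=30)  # hypothetical
    resp.raise_for_status()
    return {device["hostname"].lower() for device in resp.json()}

if __name__ == "__main__":
    gz, rmm = gravityzone_hostnames(), rmm_hostnames()
    print("Managed by RMM but missing Bitdefender:", sorted(rmm - gz))
    print("Protected but not billed through RMM:", sorted(gz - rmm))
```

Either mismatch list is the kind of exception report an accounting team can act on directly.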

Gardner: Mark, what comes next? Are there other layers of security you are looking at? Maybe full-disk encryption, or looking more at virtualization benefits? How can Bitdefender better support you?

Follow Bitdefender into the future 

Shaw: Bitdefender’s GravityZone Full Disk Encryption is fantastic; it’s exactly what we need. I trust Bitdefender to have our best interests in mind. Bitdefender is a partner of ours, and we really mean that; they are not just a vendor.

So when they talk to us about things they are seeing, we want to make sure we spend the time to understand them. From our standpoint, encryption, absolutely. Right now we spend a lot of time with clients who have data that is not necessarily personally identifiable information (PII), but it is data that is subject to patents, or is their secret sauce — and it can’t get out. So we use Bitdefender to do things like locking down universal serial bus (USB) drives.

I know there is a lot of talk about machine learning (ML) and artificial intelligence (AI) out there. To me they are cool buzzwords, but I don’t know if they are there yet. If they get there, I believe and trust that Bitdefender is going to say, “We are there. We believe it’s the right thing to do.”

I have seen a lot of next-generation antivirus software that says, “We use only AI,” or, “We use only ML.” And what I see is that they miss apparent threats, they slow the machines to a crawl, and they make the end-user experience miserable.

As Bitdefender looks down these roads of ML and AI, just make sure to be cutting edge here, but don’t be bleeding edge because nobody wants to hemorrhage cash, time, and everything else.

We are vested in the Bitdefender experience. The guys and girls at Bitdefender know what’s coming; they see it all the time. We are happy to play along with that. Typically, by the time it hits an end user or a customer in the enterprise space, it’s old hat. I think the real cutting-edge, bleeding-edge stuff happens well before an MSP tends to play in that space.

But there’s a lot of stuff coming out, a lot of security risk, on mobile devices, the Internet of everything, and televisions. Every day now you see how those are being hacked — whether it’s a microphone, the camera, or whatever. There is a lot of opportunity and a lot of growth out there, and I am sure Bitdefender is on top of it.

Gardner: Before we close out, do you have any advice for organizations on how to manage security better as a culture, as an ongoing, never-ending journey? You mentioned that you peel back the onion, and you always hit another layer. There is something more you have to solve the next day. This is a nonstop affair.

What advice do you have for people so that they don’t lose track of that?

Listen and learn 

Shaw: From an MSP’s standpoint, whether you’re an engineer, in sales, or an account manager — it’s about constant learning. Understand, listen to your clients. Your clients are going to tell you what they need. They are going to tell you what they are concerned about. They are going to tell you their feelings.

If you listen to your clients and you are in tune with them, they are going to help set the direction for your company. They are going to guide you to what’s most important to them, and then that should parlay into what’s most important for you.

In our world, we went from just data storage and MSP services into then heading to telco and telephones, structured cabling, cameras, and access control, because our clients asked us to. They kept saying these are pain points, can you help us?

And, for me, that’s the recipe for success. Listen to your clients and understand what they want, especially when it comes to security. We always tell everybody, eat your own dog food. If you are selling a security solution that you are putting out there for your clients, make sure your employees have it on all of their machines. Make sure your employees are using it at home. Get the same experience as your customers. If you are putting clients through cybersecurity training, put your staff through cybersecurity training, too. Everyone, from the CEO right down to the person managing the warehouse, should go through the same training.

If you put yourself in your customers’ shoes and listen to them — no matter what it is: security, phones, computers, MSP services — you are going to be in tune with your customers. You’re going to have success.

We just try to find a way to say, “Yes,” and delight our customers. At the end of the day if you are doing that, if you are listening to their needs, that’s all that matters.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Bitdefender.

Healthcare providers define new ways to elevate and improve the digital patient experience

The next BriefingsDirect healthcare insights discussion explores ways to improve the total patient experience — including financial considerations — using digital technology.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To learn more about ways that healthcare providers are seeking to leverage such concepts as customer relationship management (CRM) to improve their services we are joined by Laura Semlies, Vice President of Digital Patient Experience, at Northwell Health in metro New York; Julie Gerdeman, CEO at HealthPay24 in Mechanicsburg, Penn., and Jennifer Erler, Cash Manager in the Treasury Department at Fairview Health Services in Minneapolis. The panel is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Laura, digital patient experiences have come a long way, but we still have a long way to go. It’s not just technology, though. What are the major components needed for improved digital patient experience?

Semlies: Digital, at the end of the day, is all about knowing who our patients are, understanding what they find valuable, and how they are going to best use tools and assets. For us the primary thing is to figure out where the points of friction are and how digital then has the capability to help solve that.

If you continuously gain knowledge and understanding of where you have an opportunity to provide value and deliberately attack each one of those functions and experiences, that’s how we are going to get the best value out of digital over time.

So for us that was around knowing the patient in every moment of interaction, and how to give them better tools to access our health system — from an appointments perspective, to driving down redundant data collection, to giving them the ability to pay their bills online and not be surprised by the amount when the bill arrives. Those are the things that we focused on, because they were the highest points of friction and value as articulated by our patients.

Where we go next is up to the patients. Frankly, the providers who are struggling with the technology between them and their patients [also struggle] in the relationship itself.

Partner with IT to provide best care

Gardner: Jennie, the financial aspects of a patient’s experience are very important. We have separate systems for financial and experience. Should we increasingly be talking about both the financial and the overall care experience?

Erler: We should. Healthcare organizations have an opportunity to internally partner with IT. IT used to be an afterthought, but it’s coming to the forefront. IT resources are a huge need for us in healthcare to drive that total patient experience.

As Laura said, we have a lot of redundant data. How do we partner with IT in the best way possible where it benefits our customers’ experience? And how do they want that delivered? Looking at the industry today, I’m seeing Amazon and Walmart getting into the healthcare field.

As healthcare organizations, perhaps we didn’t invest heavily in IT, but I think we are trying to catch up now. We need to invest in the relationship with IT — and all the other operational partners — to deliver to the patients in the best way possible.

Gardner: Julie, doesn’t making technology better for the financial aspects of the patient experience also set the stage for creating an environment and the means to accomplish a total digital patient experience?

Gerdeman: It does, Dana. We see the patient at the center of all those decisions. So put the patient at the center, and then engage with that patient in the way that they want to engage. The role that technology plays is to personalize digital engagement. There is an opportunity in the financial engagement of the patient to communicate; to communicate clearly, simply, so that they know what their obligation is — and that they have options. Technology enables options, it enables communication, and that then elevates their experience. With the patient at the center, with technology enabling it, that takes it to a whole other level.

Learn to listen; listen to learn 

Semlies: At the end of the day, technology is about giving us the tools to be active listeners. Historically it has been one-directional: we have a transaction to perform, and we perform that transaction.

In the tomorrow-state, it becomes much more of a dialogue. The more we learn about an individual, and the more we learn about a behavior, the more we learn what was a truly positive experience — or a negative experience. Then we can take those learnings and activate them in the right moments.

It’s always impressive to me when something pops up on my Amazon cart as a recommendation. They know I want something before I even know I want something. What is the analogy in healthcare? It could be a service that I need and want, or a new option that would be attractive to me, that’s inherently personalized. We just don’t have the tools yet to actively listen and understand how to get to that level of personalization.

Most of our investment is now going to figuring out what we need so that we can be actively listening — and actively talking in the right voice to both our providers and our patients to drive better experiences. Those are the things where other industries, in my opinion, have a leg up on us.

We can do the functions but connecting those functions and getting to where we can design and cultivate simple experiences that people love — and drive loyalty and relationships – that’s the magic sauce.

Gardner: It’s important to know what patients want to know, when they want to know it, and maybe even anticipate that across their experience. What’s the friction in the process right now? What prevents the ultimate patient experience, where you can anticipate their needs and do it in a way that makes them feel comfortable? That also might be a benefit to the payers and providers.

Erler: Historically, when we do patient surveys, we ask about the clinical experience. But maybe we are not asking patients the right questions to get to the bottom of it all. Maybe we are not being as intuitive as we could be with all the data we have in our systems.

It’s been a struggle from a treasury perspective. I have been asking, “Can we get a billing-related question on the survey?” That’s part of their experience, too, and it’s part of their wellness. Will they be stressing about what they owe on their bill and what it is going to cost them? We have to take another look at how we serve our patients.

We need to be more in-the-moment instead of after-the-fact. How was your visit and how can we fix it? How can we get that feedback right then and there when they are having that experience?

Gardner: It’s okay to talk about the finances as part of the overall care, isn’t it?

Erler: Right!

Healthy money, healthy mind 

Gerdeman: Yeah, absolutely. We recently conducted a study with more than 150 providers at HealthPay24. What we found is that a negative billing and financial experience can completely negate a fabulous clinical experience from a healthcare provider. Really, it can leave that bad an impression.

To Jennie’s point, asking questions — not just about the clinical experience, but about the financial experience and how things can be improved — lets us give patients options and flexibility in a personalized way, based on who they are and what they need.

Semlies: The other component of this is that we are very organized around transactional interactions with patients, but when it comes to experience — experience is relationship-based. Odds are you don’t have one bill coming to you, you have multiple bills coming to you, and they come to you with multiple formats, with multiple options to pay, with multiple options to help you with those bills. And that is very, very confusing, and that’s in one interaction with the healthcare system.

If you connect that to a patient who is dealing with something more chronic or more serious, they could literally have 20, 30, 40 or 100 bills coming in. That just creates exasperation and frustration for our patients.

Our path to solving this needs to be far less around single transactions and far broader. It demands that the healthcare systems think differently about how they approach these problems. Patients don’t experience one bill; they experience a series of bills. If we give them different support numbers, different tools, different options for each and every one of those, it will always be confusing – no matter how sophisticated the tool that you use to pay the bill is.

Gardner: So the idea is to make things simpler for the patient. But there is an awful lot of complexity behind the scenes in order to accomplish that. It’s fundamentally about data and sharing data. So let’s address those two issues, data and complexity. How do we overcome those to provide improved simplicity?

Erler: We have all the information we need on a claim that goes to the payer. The payer knows what they are going to pay us. How do we get more married-up with the payer so that we can create that better experience for our customers? How do we partner better with the payers to deliver that information to the patients?

And then how do we start to individualize our relationships with patients so we know how they are going to behave and how they are going to interact?

I don’t know that patients are aware of the relationship that we as providers have with our payers, and how much we struggle just to get paid. The data is in the claim, the payer has the data, so why is it so difficult for us to do what we need with that data on the backend? We need to make that simpler for everybody involved.

Gardner: Julie … people, process, and technology. We have seen analogs to this in other industries. It is a difficult problem. What technologically and culturally do you think needs to happen in order for these improvements to take place?

Connect to reduce complexity 

Gerdeman: It’s under way and it’s happening. The generations and demographics are changing in our society and in our culture. As the younger generations become patients, they bring with them the expectation that data is at their fingertips and that technology enables their lives, wherever they are and whatever they are doing, because they have a whole other view.

Millennials and the younger generations have a different perspective and different expectations around wellness. There is a big shift happening — not just care for being sick, but actual wellness to prevent illness. The technology needs to engage that demographic in a new way, with a new understanding.

Laura used the word connection. Connection and interoperability are truly how we address the complexity you referenced. Through that connection, the technology enables IT to be interoperable with all the different health systems hospitals use. That’s how we are going to solve it.

Gardner: We are also seeing in other industries an interesting relationship between self-help, or self-driven processes, and automation. They complement one another, if it’s done properly.

Do you see that as an opportunity in healthcare, where the digital experience gives the patient the opportunity to drive their own questions and answers, to find their own way should they choose? Is automation a way that makes an improved experience possible?

Semlies: Absolutely. Self-help was one of the first things we went live with using HealthPay24 technology. We knew the top 20 questions that patients were calling in about. We had lots of toolkits inside the organization, but we didn’t expose that information. It lived on our website somewhere, but not in a direct, easy-to-read, easy-to-understand way. It was written in our voice, not the patient’s voice, and it wasn’t exposed at the moment a patient was actually making that transaction.

Part of the reason we have seen such an increase in our online payments is because we posted, quite simply, frequently asked questions (FAQ) around this. Patients don’t want to call and wait 22 minutes to reach an agent if they can serve themselves. It has really helped us a lot, and there are analogies to that in lots of different places in the healthcare space.

Gardner: You need to have the right tools and capabilities internally to be able to satisfy the patient requirements. But the systems internally don’t always give you that single view of the patient, like what a customer relationship management (CRM) system does in other industries.

Would you like to have a complement to a CRM system in healthcare so that you have all the information that you need to interact properly?

Healthcare CRM as a way of life

Semlies: CRM is something that we didn’t talk about in healthcare previously. I very much believe that CRM is as much about an ethos and a philosophy as it is about a system. I don’t believe it is exclusively a system. I think it’s a way of life, an understanding of what the patient needs. You can have the information at your fingertips in the moment that you need it and be able to share that.

I think we’re evolving. We want to be customer-obsessed, but there is a big difference between wanting to be customer-obsessed and actually being customer-obsessed.

The other challenge is there are some inherent conflicts when you start talking about customer obsession and what other stakeholders inside the health system want to do with their patients, but it can be really hard to deliver. When a patient wants a real-time answer to something and your service level agreement (SLA) is a day, you can’t meet their expectation.

And so how do you rethink your scope of service? How do you rethink the way you provide information to individuals? How do you rethink providing self-help opportunities so they can get what they need? Getting to that place starts with understanding the customer and understanding what their expectations are. Then you can start delivering to them in the way the patients expect us to.

Erler: Within our organization, there’s an internal cultural shift to start thinking about a patient as being a customer. There was a feeling of insensitivity around calling a patient a customer or treating this more as consumerism, but that’s what it’s becoming.

As that culture shifts and we think more about consumerism and CRM, it’s going to enhance the patients’ experience. But we have to think about it differently because there’s the risk when you say “consumerism” that it’s all about the money, and that all we care about is money. That’s not what it is. It’s a component, but it’s about the full patient experience. CRM tools are going to be crucial for us in order to get to that next level.

Gardner: Again, Julie, it seems to me that if you can solve this on the financial side of things, you’ve set up the opportunity — a platform approach, and even a culture – to take on the larger digital experience of the patient. How close are we on the financial side when it comes to a single view approach?

Data to predict patient behavior, experience 

Gerdeman: From a financial perspective, we are down that path. We have definitely made strides in achieving technology and digital access for financial. That is just one component of a broader technology ecosystem that will have a bigger return on investment (ROI) for providers. That ROI then impacts revenue cycles, not just the backend financials but all the way to the post-experience for a patient. I believe financial is one component, and technology is an enabler.

One of the things that we’re really passionate about at HealthPay24 is the predictive capability of understanding the patient. What I mean by that is that the predictive analytics and the data that you already have — potentially in a CRM, maybe not — can be an indicator of patient behavior and of what could be provided. And that will further drive ROI through predictive capabilities, better results, and ultimately a much better patient experience.
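To illustrate the kind of predictive scoring Gerdeman describes, here is a hypothetical sketch of a payment-plan propensity model. The features, data, and threshold are invented for illustration; a real model would be trained on the provider's own billing history and reviewed for privacy and fairness.

```python
# A hypothetical sketch of payment-plan propensity scoring. Feature names and
# data are invented; a real model would use the provider's own billing history
# and would need validation, privacy review, and fairness checks.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training rows: [balance_due, prior_on_time_payments, insurance_coverage_pct]
X = np.array([
    [1200, 0, 0.20],
    [150,  5, 0.80],
    [3400, 1, 0.10],
    [600,  3, 0.50],
    [2500, 0, 0.30],
    [90,   6, 0.90],
])
# 1 = patient previously used a payment plan, 0 = paid another way
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression(max_iter=1000).fit(X, y)

# Score a new balance and pick which offer to surface first.
new_patient = np.array([[1800, 1, 0.25]])
propensity = model.predict_proba(new_patient)[0, 1]
offer = "payment plan" if propensity > 0.5 else "prompt-pay discount"
print(f"Payment-plan propensity: {propensity:.2f} -> lead with a {offer}")
```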

Gardner: On this question of ROI, Laura, how do you at Northwell make the argument of making investments and getting recurring payoffs? How do you create a virtuous adoption cycle benefit?

Semlies: We first started our digital patient experience improvements about 18 months ago, and that was probably late compared to some of our competitors, and certainly compared to other industries.

But part of the reason we did was because we knew that within the next 2 to 3 years, patients were going to bring their expectations from other industries to healthcare. We knew that that was going to happen. In a competitive market like New York, where I live and work, if we didn’t start to evolve and build sophisticated advanced experiences from a digital perspective, we would not have that differentiation and we would lose to competitors who had focused on that.

The hard part for the industry right now is that in healthcare, relationships with a provider and a patient are not enough anymore. We have to focus on the total experience. That was the first driver, but we also have to be cognizant of what we take in from a reimbursement perspective and what we put out in terms of investment and innovation.

The question of ROI is important. Where does the investment come from? It doesn’t come from digital itself. But it does come from the opportunities that digital creates for us. That can be access tools that create the capacity to invite patients who wouldn’t ordinarily have selected Northwell to become new patients. It can mean making it easy for in-house patients who previously didn’t choose Northwell for their follow-up care to do so, so that we retain them.

It also means avoiding leakage in the payment space with things like accelerating cash, because it’s easy: you just click a button at the point of getting a bill and pay it, and now we have accelerated the cash flow. Maybe we can help patients pay more than one bill at a time, whereas previously they maybe didn’t even understand why there was more than one bill. So we have actually increased collections and decreased bad debt.

Those are the functions that we are going to see ROI in, not digital itself. And so, the conversation is a tricky one because I run the service line of digital and I have to partner with every one of my business associates and leaders to make sure that they are accounting for and helping give credit to the applications and the tools that we’re building so the ROI and the investment can continue. And so, it makes the conversation a little bit harder, but it certainly has to be there.

Gardner: Let’s take a look to the future. When you have set up the digital systems, have that adoption cycle, and can produce ROI appreciation, you are also setting the stage for having a lot more data to look at, to analyze, and to reapply those insights back into those earlier investments and processes.

What does the future hold and what would you like to see things like analytics provide?

Erler: From a treasury perspective, just taking out how cumbersome it is on the back end to handle all these different payment channels [would be an improvement]. If we could marry all of these systems together on the back end and deliver that to the patient to collect one payment and automate that process – then we are going to see an ROI no matter what.

When it comes to the digital experience, we can make something look really great on the front end, but the key is to not burden our resources on the back end and to make it a true digital experience.

Then we can give customer service to our patients and the tools that they need to get to that data right away. Having all that data in one place and being able to do those analytics [are key]. Right now, we have all these different merchant accounts. How do you pull all of that together and look at the span and how much you are collecting and what your revenue is? It’s virtually impossible now to pull all that together in one place on the back end.
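As an illustration of the back-end consolidation Erler describes, here is a minimal sketch that merges payment exports from several merchant accounts into a single view for analytics. The file names and column layout are assumptions for illustration only.

```python
# A minimal sketch of consolidating payments from multiple merchant accounts.
# File names and column layouts are assumptions; real exports will differ.
import pandas as pd

# Hypothetical CSV exports, one per payment channel / merchant account.
sources = {
    "online_portal": "online_portal_payments.csv",
    "front_desk": "front_desk_payments.csv",
    "phone_ivr": "ivr_payments.csv",
}

frames = []
for channel, path in sources.items():
    df = pd.read_csv(path, parse_dates=["payment_date"])
    # Normalize each export to a common schema before combining.
    df = df[["patient_id", "payment_date", "amount"]].copy()
    df["channel"] = channel
    frames.append(df)

payments = pd.concat(frames, ignore_index=True)

# One consolidated view: monthly collections by channel.
monthly = (payments
           .groupby([payments["payment_date"].dt.to_period("M"), "channel"])["amount"]
           .sum()
           .unstack(fill_value=0))
print(monthly)
```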

Gardner: Julie, data and analytics are driving more of the strategic thinking about how to do IT systems. Where do you see it going? What will be some of the earlier payoffs from doing analytics properly in a healthcare payer-provider environment?

The analytics advantage

Gerdeman: We are just starting to do this with several of our customers, where we are taking data and analyzing the financials. That can be from the discount programs they are currently offering patients, or for the payment plans they’re tying to collection results.  We’re looking at the demographics behind each of those, and how it could be shifted in a way that they are able to collect more while providing a better experience.

Our vision is this: The provider knows the patient so well that in anticipation they are getting the financial offer that best supports their needs. I think we are in such an interesting time right now in healthcare. What happens now when I take my children to a doctor’s appointment is going to look and feel so different when they take their children to an appointment.

We are seeing just the beginnings of text reminders and digital engagement: you have an appointment, have you thought about this? They will be walking around and it’s going to be as incorporated into their lives as Instagram, which they are on all the time.

I can’t wait to see when they are taking their children — or not, right? Maybe they are going to be doing things much more virtually and digitally than we are with our own children. To me there will be broad cultural changes from how more data will be enabling us. It is very exciting.

Gardner: Laura, where do you see the digital experience potential going for healthcare?

Automation assists prevention 

Semlies: Automation is key to the functions that we do. We expend energy, people, and resources on things we could be using automation for. Data is key to helping us pick the right things to automate. The second piece is anticipation: being able to understand where the patient is and what the next step should be. Being able to predict and personalize is the next step. Data is obviously a critical component that’s going to help you do that.

The last piece is that prevention over time is going to be the name of the game. Healthcare will look very different tomorrow than today. You will see new models pop up that are very much moving the needle in terms of how we collect information about a person, what’s going on inside of their body, and then being able to predict what is going to happen next.

We will be able to take action to avert or prevent things from happening. Our entire model of how we treat wellness is going to shift. What primary care looks like is going to be different, and analytics is at the core of all of that — whether you approach it from an analytics or an artificial intelligence (AI) perspective, it’s all the same thing.

Did you get the data on the right thing to measure? Are you looking at it? Do you have the tools to be able to signal when something is going off? And is that signal in the right voice to the person who needs to consume that? Is it at the right time so that you can actually avert it?

When I use my Fitbit, it understands that my heart rate is up. It’s anticipating that it’s because I’m exercising. It asks me that, and it asks me in a voice that I understand and I can respond to.

But most doctors aren’t getting that same kind of information today because we don’t have a great way of sharing patient-generated health data yet. It just comes in as a lot of noise. So how do we take all of that data?

We need to package it and bring it to the right person at the right moment and in the right voice. Then it can be used to make things preventable. It can actually drive an outcome. That to me is the magic of where we can go. We are not there yet, but I think that’s where we have to go.
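To illustrate the packaging Semlies describes, here is a hypothetical sketch that reduces noisy patient-generated heart-rate data to a single, sustained signal before notifying a care team. The thresholds, data shapes, and notification hook are invented for illustration.

```python
# A hypothetical sketch: reduce noisy wearable data to one actionable signal.
# Thresholds, data shapes, and the notification hook are invented for illustration.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Reading:
    minute: int        # minutes since midnight
    heart_rate: int    # beats per minute
    active: bool       # device-reported activity flag (e.g., exercising)

def sustained_resting_tachycardia(readings, bpm_threshold=110, window=15):
    """Flag only if the resting heart rate stays above the threshold for a full window."""
    resting = [r for r in readings if not r.active]
    if len(resting) < window:
        return False
    return min(r.heart_rate for r in resting[-window:]) > bpm_threshold

def notify_care_team(patient_id, message):
    # Placeholder for a real pager or EHR-inbox integration.
    print(f"[ALERT] patient={patient_id}: {message}")

readings = [Reading(m, 118, active=False) for m in range(600, 620)]
if sustained_resting_tachycardia(readings):
    avg = mean(r.heart_rate for r in readings if not r.active)
    notify_care_team("demo-patient", f"resting heart rate ~{avg:.0f} bpm for 15+ minutes")
```

The design choice is the point: the raw stream stays noisy, but only a sustained, clinically framed signal reaches the right person.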

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HealthPay24.

How agile Enterprise Architecture builds agile business advantage

The next BriefingsDirect digital business trends discussion explores how Enterprise Architecture (EA) defines and supports more agile business methods and outcomes.

We will now learn how Enterprise Architects embrace agile approaches to build competitive advantages for enterprises and governments, as well as to keep those organizations more secure and compliant.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about attaining agility by the latest EA approaches, we are now joined by our panel, Mats Gejnevall, Enterprise Architect at minnovate and Member of The Open Group Agile Architecture Work Group; Sonia Gonzalez, Forum Director of the Architecture Forum at The Open Group; Walters Obenson, Director of the Agile Architecture Framework at The Open Group, and Łukasz Wrześniewski, Enterprise Architect and Agile Transformation Consultant. The panel is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Mats, what trends are driving the choice and motivation behind a career in EA? What are some of the motivations these days that are driving people into this very important role?

Gejnevall: Most people are going into EA because they want to have a holistic view of the problem at hand. I do think that EA is a mindset that you can use to apply to any type of issue or problem you have. You look at an issue from many different perspectives and try to understand the fit between the issue or the problem and potential solutions.

It’s human nature to want to look at things from a holistic point of view. It’s such an interesting area to be in, because you can apply it to just about everything — particularly in a general EA application, where you look at the business, how it works, and how that will affect the IT part of it. Looking at that holistic view, I think, is the important part — and that’s the motivation.

Gardner: Łukasz, why do you think agility particularly is well addressed by EA?

Wrześniewski: I agree with Mats that EA provides a holistic view to understand how organizations work and can enable agility. As one of the main enablers for agility, EA changes the organization in terms of value. Nowadays agility is the trend, the new way of working and how the organization transforms itself for scaling the enterprise. EA is one of the critical success factors.

EA’s holistic point of view

Gardner: It’s one thing to be a member of this noble profession; it’s another for organizations to use them well.

Mats, how should organizations leverage architects to better sustain an agile approach and environment? It takes a receptive culture. How do organizations need to adjust?

Gejnevall: First of all, we need to distinguish between being agile doing EA and EA supporting an Agile approach. They are two very different things.

Let’s discuss being agile doing EA. To create a true agile EA, the whole organization needs to be agile, it’s not just the IT part. EA needs to be agile and loosely coupled, one of the key concepts, applied both to the business and the IT side.

But to become agile doing EA means adopting the agile mindset, too. We talked earlier about EA being a mindset. Agile is also a mindset – how you think about things, how to do things in different ways than you have been doing before, and looking at all the different agile practices out there.

For instance, you have sprints, iterations, demos, and these kinds of things. You need to take them into your EA way of working and create an agile way of working. You also need to connect your EA with the solution development in agile ways. So EA and solution development in an agile way needs to connect in the long-term.

Gardner: Mats, it sounds a little bit like the chicken and the egg. Which comes first, the EA or the agile environment? Where do you begin?

Change your mind for enterprise agility

Wrześniewski: Everything is about achieving agility in the enterprise. It’s not about doing the architecture. Doing the architecture in an agile way is one thing, but our main goal is to achieve enterprise agility. EA is just a means to do that. So we can do the architecture in a really agile way. We can do the sprints and iterations and apply the different agile methodologies to deliver architecture.

But we can also do architecture in a more traditional way: understanding how complex the system is, treating the organization as a system, and transforming that system in a proper way, and still achieve agility.

That’s a very important factor when it comes to people’s mentality and how people work in the organization. It’s a very big challenge for an organization to change its way of working and its mindset, and the Enterprise Architect sometimes has to step into the shoes of a psychologist.

Gonzalez: Like Łukasz said, it’s about the mindset and changing your mind. First, organizations need to be agile based on Agile principles, such as delivering value frequently and aligning with the business strategy. And when you do that, you also have to change your EA capability to become more agile, starting with the process and the way that you do EA.

For example, using sprints, like Łukasz said, and also being aware of how EA governance can support agile. As you know, it’s important to deliver value frequently, but it has to be aligned with the organization’s view and strategy (like Mats said at the beginning, to have the overall view of the organization), while also handling risk and addressing compliance. If you go through an agile effort without considering the whole enterprise, you face the risk of different teams doing things in an agile way but not connected to each other.

It’s a change of mindset that will automatically make you change the way you are doing EA.

Value stream helps express the value that an organization produces for its stakeholders, the outcomes it produces, and the different stages needed to produce that value. It provides a concept that’s less detailed than looking at your individual business processes. 

Gejnevall: As Łukasz was saying, I think it’s very much connected to the entire organization becoming agile, and that’s a challenge. If you want to do EA for an agile organization, that’s something that probably needs to be done. You need to plan, but also open up the change process so the organization can change in a controlled way. You can’t just come at it top-down; making an organization agile has to come both from the top down and from the bottom up.

Gardner: I also hear people asking, “I have heard of Agile development, and now I am hearing about agile enterprise. Is this something different than DevOps, is it more than DevOps?” My impression is that it is much more than DevOps, but maybe we can address that.

Mats, how does DevOps fit into this for those people that are thinking of agile only in terms of development?

Gejnevall: It depends on the normal way of doing Agile development: doing something in short iterations, and then having demos at the end, retrospectives, and some planning for the next iteration. There is some discussion ongoing right now about whether the demo needs to be something executable that can be used quickly in the organization, or whether it could be just an architecture piece, a couple of models that show some aspect of things. In my view, it doesn’t have to be something executable.

And also when you look at DevOps as well, there are a lot of discussions now about industrial DevOps, where you actually produce not software but other technical stuff in an agile way, with iterations, and you do it incrementally.

Wrześniewski: EA and architecture work as an enabler for coping with increasing complexity. We have many distributed teams working on the one product in DevOps, not necessarily run on Agile, and the complexity of the product and of the environment will keep growing.

Architecture can put it in a proper direction. And I mean intentional architecture that is not like big upfront design, like in traditional waterfall, but intentional architecture that enables the iterations and drives DevOps into the proper direction to reduce complexity — and reduces the possibility of failure in product development.

Gardner: I have also heard that architecture is about shifting from building to assembly, that it becomes repeatable and crosses organizational boundaries. Does anyone have a response to this idea of shifting from building to assembly and why it’s important?

Strong building blocks bring success

Wrześniewski: The use of microservices, containers, and similar technologies will mean components that you can assemble into entire products. These components are replaceable. It’s like the basic elements of EA when talking about the architecture and the building blocks, and good composition of the building blocks to deliver products.

Architecture perfectly addresses this problem and shift. We have already had this concept for years in EA.
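As a small illustration of the shift from building to assembly, here is a hypothetical Python sketch of replaceable building blocks behind a stable interface. The service names and methods are invented; the point is that the assembled product depends only on the interface, so blocks can be swapped without touching it.

```python
# A minimal sketch of "assembly over building": replaceable building blocks
# behind a stable interface. All names here are invented for illustration.
from typing import Protocol

class PaymentService(Protocol):
    def charge(self, customer_id: str, amount: float) -> str: ...

class LegacyGatewayAdapter:
    """Wraps an existing back-office system behind the common interface."""
    def charge(self, customer_id: str, amount: float) -> str:
        return f"legacy-receipt:{customer_id}:{amount:.2f}"

class CloudPaymentMicroservice:
    """A drop-in replacement; callers are unaffected by the swap."""
    def charge(self, customer_id: str, amount: float) -> str:
        return f"cloud-receipt:{customer_id}:{amount:.2f}"

class CheckoutProduct:
    """The assembled product depends only on the interface, not the block."""
    def __init__(self, payments: PaymentService):
        self.payments = payments

    def complete_order(self, customer_id: str, amount: float) -> str:
        return self.payments.charge(customer_id, amount)

# Swap building blocks without touching the product code.
print(CheckoutProduct(LegacyGatewayAdapter()).complete_order("c-1", 42.0))
print(CheckoutProduct(CloudPaymentMicroservice()).complete_order("c-1", 42.0))
```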

Gardner: Anyone else on this topic of moving toward assembly, repeatability, and standardization?

Gejnevall: On the IT side, I think that’s quite common. It’s been common for many years in different ways and then new things happen. We talked about service-orientation for quite a while and then we started talking about microservices. These are all types of loosely coupled systems that become much more agile in certain ways.

The interesting thing is to look at the business side of things. How can you make the business side become more agile? We have done a lot of workshops around service-orienting the business, making it capability-based and sustainable. The business consists of a bunch of services, or capabilities, and you can connect these capabilities to value streams and change the value streams in reaction to changes on the business side. That’s much easier than the old way of having strict boundaries between business units and the business services that are developed.

We are now trying to move the thinking from the IT side up into the business side to enable the business to become much more componentized as you put different business services that the organization produces together in new ways and allow the management to come up with new and innovative ideas.

Gardner: That gets to the heart of what we are trying to accomplish here. But what are some of the common challenges to attaining such agility, when we move both IT and the business to an agile posture of being able to react and move without being brittle, with processes that can be extended without chaos and complexity?

Wrześniewski: One of the challenges for the business architecture is the proper partitioning of the architecture to distinguish the capabilities across the organizational silos. That means keeping the proper level of detail that is connected to the organizational strategy, and being able to understand the system. Another big challenge is to get the proper sponsorship for such activity so that you can proceed with the transformation across the organization.

Gejnevall: Change is always hard for a lot of people. And we are trying to change, and to have people live in a more changeable world than they have been in before. That’s going to be very hard. Because people don’t like change, we are going to have to motivate people much more and have them understand why we need to change.

But change is going to be happening quicker and quicker, and if we create a much more agile enterprise, changes will keep rolling in faster and faster all of the time.

Wrześniewski: One of the areas where I ran into a problem when creating an architecture in an agile way was that if you have lots and lots of agile projects ongoing, or agile teams ongoing, you have to have a lot of stakeholders that come and watch these demos and have relevant opinions about them. From my past experiences of doing EA, it’s always hard to get the correct stakeholders’ involvement. And that’s going to be even harder, because now the stakeholders are looking at hundreds of different agile sprints at the same time. Will there be enough stakeholders for all of that?

Gardner: Right. Of course you have to address the people, the process, and the technology, and the people may be the most important part nowadays.

Customer journey from finish to start

Gonzalez: With all of these agile and digital trends, what is most important now is to have two things in mind: a product-centric view and the customer journey. In order to do that, the different layers of a traditional architecture become blurry, because now it’s not about business and IT anymore — it’s about the organization as a whole needing to be agile.

And in that regard, for example, like Mats and Łukasz have said, the right stakeholder needs to be in for the whole process. So it’s no longer saying, “I am the business, I am giving this request.” And then the IT people need to solve it. It’s not about that anymore. It’s having in mind that the product has services included, has an IT component, and also a business component.

When you are building your customer journey, just start from the very end, the connection with the customer, and move back all the way to the background and platform that are delivering the IT capabilities.

So it’s about having a more cross view of doing architecture, which is important.

Gardner: How does modeling, and a standardized approach to modeling, help overcome some of these challenges? What is it about EA that allows agility to become a common thread across an organization?

Wrześniewski: When it comes to modeling, the models and the different viewpoints are just tools for EA. Enterprise Architects should choose the proper means to define the architecture, and that architecture should enable the change the organization needs.

So the common understanding — or maybe some stereotype of the Enterprise Architect — is they are the guys that draw the lines and boxes and deliver only big documentation, but then nobody uses it.

The challenge here is to deliver minimum viable products (MVPs) in terms of modeling that the development teams and the business will consider valuable and that can guide them. It’s not about making nice documentation or repositories in the tools; even a simple sketch on paper can be good architecture, because architecture is about enabling change in the organization and supporting the business and IT to deliver value, not only about documenting every component. That is my opinion about the role of the architect and the model.

And, of course, we have many different methods and conventions, and the architect should choose the proper one for the organization.

Model collaborations create solutions

Gejnevall: I don’t think the architects should sit around and model on their own; it should be a collaboration between the solution architect and the solution developers. It’s a collaborative effort, where you actually work on the architecture together, so you don’t have to hand over a bunch of papers to the solution developers later on; they already know the material.

So you work in a continuous way of moving the material over to them, and you send it over in pieces, starting with the most important and most valuable slices of the architecture. That’s essentially the whole Minimum Viable Architecture (MVA) approach. You can create lots of small MVAs and then let the solution teams work on them. The architects continuously create new MVAs and the solution teams continuously develop new MVPs. And that goes on for the entire length of a project, if that’s what you are working on, or for a product.

Gonzalez: In terms of modeling, there are at least two ways to see this. One of them is the fact that you need to model your high-level landscape for the enterprise in order to have the strategic view. You have some tools to identify which items you should prioritize for your backlog, and then, going into the iteration, you need to be aligned with that.

Also, for example, you can model high-level value streams, identify key capabilities, and then try to define which item you would be delivering. For that you don’t need to do a lot of modeling, just high-level modeling to depict it.

On the other hand, we have other models that are more solution-level-oriented. In that case, one of the challenges architects have in relation to modeling is how to deal with the fact that models are changing – and should change faster now, because trends are changing and the market is changing. There are different techniques that can be used for that: for example, test-driven design, domain-driven design, refactoring, and some others that support agile modeling.

Also, like Mats mentioned, having a lot of corporate architecture allows you to facilitate these different building blocks for change. And there are a lot of tools on the market now that bring automation to the things you are doing. For example, automated testing, which is something we should do. It’s actually one of the key components of DevOps to automate the testing and to see how that facilitates continuous integration, development, and finally delivery.
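As a small illustration of the test automation Gonzalez mentions, here is a hypothetical automated test of a toy "assembly" function, the kind of check a continuous-integration pipeline would run on every change. The function and its expected behavior are invented for illustration.

```python
# A hypothetical automated test, the kind a CI pipeline would run on every commit.
# The building-block function and its expected behavior are invented examples.
import pytest

def assemble_product(blocks):
    """Toy 'assembly' step: compose named building blocks into one product spec."""
    if not blocks:
        raise ValueError("a product needs at least one building block")
    return " -> ".join(blocks)

def test_blocks_are_composed_in_order():
    assert assemble_product(["identity", "payments", "notifications"]) == \
        "identity -> payments -> notifications"

def test_empty_assembly_is_rejected():
    with pytest.raises(ValueError):
        assemble_product([])
```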

Gardner: Sonia, you mentioned automation, but a lot of organizations, enterprises and governments are saddled with legacy systems. That can be quite complex, having older back end systems that require a lot of manual support. How do we move past the restraints, if you will, of back-end systems, legacy systems, and still become agile?

Combine old and new 

Gonzalez: That’s a very good question, Dana. That’s precisely one of the strengths of EA that Łukasz mentioned: the fact that you can use it in different ways and adapt it to different uses.

Take, for example, a bank, where you usually have a lot of systems, including legacy systems that are very difficult and risky to change. What a company should do is take a combined approach, saying, “Okay, I have a more traditional EA to handle my back-end systems because they are more stable and perhaps require fewer changes.”

But on the other hand, if you have your end-user platform, such as online banking or mobile banking, that development should be faster. You can have an agile view on that. So you can have a combined view.

However, we also depend on the back end. One of the things companies are doing right now is moving toward components and services, microservices, and outsourcing to build a corporate architecture for customer-facing platforms without having to change all the back-end systems at once, because that’s very risky.

So it’s a kind of combined effort that can be used in these cases.

Gardner: Anyone else have some insights on how to make agile EA backward compatible?

Wrześniewski: What Sonia said is really important: that we have a combined or hybrid approach for EA. You will always have some projects that run in an agile way and some that take a more traditional, longer approach, where the delivery of the architecture takes more time in order to reduce risk, for example when you are replacing a core banking system. The role of the EA is to know how to combine these different approaches and to find the right answer for each of the different situations.

So we shouldn’t always look at the organization from the single perspective that we are agile now and everything that came before is bad practice. We try to combine approaches, and that is how the organization’s new approach evolves. We have to improve the organization step by step to get the best results, even if the goal is to become completely agile.

Gardner: Walters brought up the important issue of governance. How can agile EA allow organizations to be faster and more focused on business outcomes, and also more secure and more compliant? How do EA and agile EA help an organization attain both a secure and a compliant environment?

Gejnevall: You need to have a security architecture, and that has to be set up in a very loosely coupled way so you can select the security features that are needed for your specific project.

You need to have that security architecture as a reference model at the bottom of your architecture. That is something you need to follow. But then the security architecture is not just the IT part of it, it’s also the business side of things, because security has got a lot to do with the processes and the way a company works.

All of that needs to be taken into consideration when we do the architecture, and it needs to be known by all the solution development teams: these are the rules around security. You can’t let go of that early on. But the security architecture needs to be flexible as well, and it needs to adapt continuously, because it has to handle new threats all the time. You can’t do one security architecture and think it’s going to live there forever; it’s going to have the same type of renewal and refactoring happening to it as anything else.

Wrześniewski: I would like to add that, in general, the agile approaches are more transparent and the testing of the security requirements often is done in an interactive way, so this approach can ensure higher security.

Also, governance should be adapted to agile governance, with a governance body that works in an agile way across the different levels of the enterprise: portfolio management, project management, and teams. So there is also some organizational change that needs to be done.

Gardner: Many times when I speak with business leaders, they are concerned about mounting complexity, and one of the approaches they find attractive for combating complexity is moving toward minimum viable products and minimum viable services. How does the concept of an MVA help agility while at the same time combating complexity?

MVA moves product from plan to value

Wrześniewski: MVA is the architecture for a minimum viable product that enables the development of that product. It can help you solve complexity issues by focusing the minimum viable product on the functionality and capabilities that are mandatory for the organization and that deliver the highest percentage of value in the software.

And if the minimum viable product fails, we haven’t invested too much in the entire product development.

Gejnevall: Inherently, organizations are complex. You have to start much higher up than the IT side to take away complexity. You need to start at the business level, the organizational level, and the process level: how you actually do work. If that’s complex, the IT solutions for it will still be complex. So you need good EA, and an MVA can test out new things and new ways of organizing yourself, because not everything has to be an IT project in the end.

You do an MVA, say a process change or an organizational change, you test it out, and you ask: did it actually minimize our complexity or did it increase it? At worst, you can stop the project very quickly and go in another direction instead.

Gonzalez: Handling complexity is challenging, especially for big organizations that have been in the market for a long time. You will need to focus on the minimum viable product, leveraging the MVA, and go by slices, taking smaller pieces to avoid doing too much modeling.

However, in the end, even though you are not conceiving of things as IT only, you still have a platform that provides your IT capabilities. In that case, my view is that the use of architecture is important. So you may have a more traditional EA for maintaining your complex landscape. That's already there; you cannot avoid or ignore it, but you need to identify which components are there.

So, whenever you are tackling a new problem with an MVA, you can also be aware of the dependencies at the platform level, which is where most of the complexity usually lies. So, in my view, that is again a combined use of both of them.

And the other key thing here is having good integration and automation tooling, because sometimes you need to do things manually, and that's where a lot of the time goes. If you automate some of that, it becomes easier to maintain and lets you handle that complexity without going against an agile approach.

Gardner: And before we start to wrap up, I wanted to ask what an organization will experience when it leverages agile EA and becomes more adaptive across the business as a whole. What do you get when you do agile EA? What metrics of success tell you it is going well?

Deliver value and measure value delivery

Gejnevall: Each one of these MVAs and minimum viable products is supposed to leave us some business value at the end. If you look at a framework like the TOGAF® standard, a standard of The Open Group, there is a phase at the end where you actually look to see, "Did we really achieve the value that we expected?"

This is a part of most product management frameworks as well. We need to measure before we do something and then measure afterward: did we get the business value we expected? Because just running a project to the demo, we can't really tell whether we got the value or not. We need to put it out into operations and measure it that way.

So we get that feedback loop much more quickly than we did in the past, when it took a couple of years to develop a new product and, by the end of it, things had changed and we didn't get the value, even though we had spent many millions of dollars. Now we might spend a lot less money, and we can actually prove that we are getting some business value out of this and measure it appropriately as well.

Wrześniewski: I agree fully with Mats that the value is quicker delivery. The product quality should also be much higher, and people should be much more satisfied: the team that delivers the service or product, the business, the stakeholders, and the direct clients. This really improves both client and team satisfaction, and that is one of the important benefits of agile EA as well.

Gejnevall: Just because you have a term called minimum viable product, it doesn't mean it always has to be IT that delivers it. As I was saying before, you can do a minimum viable product in many other ways: process changes, organizational changes, and other things. So it doesn't always have to be IT doing the minimum viable product that gives you the best business value.

Gardner: How about the role of The Open Group? You have a number of certification programs, standards, workgroups, and you are talking with folks in the EA field all the time. What is it that The Open Group is bringing to the table nowadays to help foster agile EA and therefore better, more secure, more business-oriented companies and governments?

Open Group EA and Agile offerings abound

Gonzalez: We have a series of standards from The Open Group, and one subset of that is the architecture portfolio. We have several activities going on. We have the Agile Architecture Framework snapshot, a product of The Open Group Board Members' activity, which is already out for review and comment but is not yet an approved standard. The Agile Architecture Framework™ (O-AAF) covers both Digital Transformation of the enterprise and Agile Transformation of the enterprise, considering concepts such as Lean and DevOps, among others.

On the other hand, we have the agile EA work at the level of the Architecture Forum, which is the one Mats and Łukasz are dealing with: how to have an agile EA practice. There is a very good white paper published, and other deliverables, such as a guide on how to run the TOGAF framework's Architecture Development Method (ADM) in agile sprints. That paper is under construction, and there are several others on the way.

We also have, in the ArchiMate® Forum, the Agile Modeling activity, which deals precisely with the modeling part of this, so the three activities are connected.

And in a separate, though related, working group, we have the Digital Practitioners Work Group, which is aimed at the digital enterprise. There is also a connection with the Agile Architecture Framework, and we have just started looking at some harmonization with EA and the TOGAF standard as well.

In the security space, we recently started the Zero Trust Architecture project, which is aimed precisely at that part: securing the resources instead of securing the network. That's a joint activity between the Security Forum and the Architecture Forum. So those are some of the things that are going on.

And at the level of the Agile Architecture Framework, there is also a conversation about how to handle security and cloud in an agile environment. So you can see we have several moving parts on the table at the moment.

Gejnevall: Long term, I think we need to look into the agile enterprise much more. All of these efforts are converging toward that point; sooner or later we need to look at what an agile enterprise looks like and create reference architectures and ideas for that. I think that will be the end result somewhere. We are not there yet, but we are going in that direction with all of these different projects.

Gardner: And, of course, more information is available at The Open Group website. They have many global events and conferences where people can go to learn about these issues and contribute to them as well.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: The Open Group.

Cerner’s lifesaving sepsis control solution shows the potential of bringing more AI-enabled IoT to the healthcare edge

The next BriefingsDirect intelligent edge adoption benefits discussion focuses on how hospitals are gaining proactive alerts on patients at risk for contracting serious sepsis infections.

An all-too-common affliction for patients around the world, sepsis can be controlled when confronted early using a combination of edge computing and artificial intelligence (AI). Edge sensors, Wi-Fi data networks, and AI solutions help identify at-risk situations so caregivers at hospitals are rapidly alerted to susceptible patients to head off sepsis episodes and reduce serious illness and death.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Stay with us now as we hear about this cutting-edge use case that puts AI to good use by outsmarting a deadly infectious scourge with guests Missy Ostendorf, Global Sales and Business Development Practice Manager at Cerner Corp.; Deirdre Stewart, Senior Director and Nursing Executive at Cerner Europe; and Rich Bird, Worldwide Industry Marketing Manager for Healthcare and Life Sciences at Hewlett Packard Enterprise (HPE). The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Missy, what are the major trends driving the need to leverage more technology and process improvements in healthcare? When we look at healthcare, what’s driving the need to leverage better technology now?

Missy Ostendorf

Ostendorf

Ostendorf: That’s an easy question to answer. Across all industries, resource constraints drive the need for technology to make things more efficient and cost-effective — and healthcare is no different.

If we tend to move more slowly with technology in healthcare, it’s because we don’t have mission-critical risk — we have life-critical risk. And the sepsis algorithm is a great example of that. If a patient turns septic, they have about four hours before they can die. So, as you can imagine, that ticking clock is a really big deal in healthcare.

Gardner: And what has changed, Rich, in the nature of the technology that makes it so applicable now to things like this algorithm to intercept sepsis quickly?

Bird: The pace of the change in technology is quite shocking to hospitals. That’s why they can really benefit when two globally recognized organizations such as HPE and Cerner can help them address problems.

When we look at the demand spike across the healthcare system, we see that people are living longer with complex long-term conditions. When they come into a hospital, there are points in time when they need the most help.

What [HPE and Cerner] are doing together is understanding how to use this connected technology at the bedside. We can integrate the Internet of Things (IoT) devices that patients have on them at the bedside, medical devices that traditionally were not connected automatically but only through the humans using them. The caregivers are now able to use the connected technology to take readings from all of those devices and analyze them at the speed of computers.

So we’re certainly relying on the professionalism, expertise, and care of the team on the ground, but we’re also helping them with this new level of intelligence. It offers them and the patients more confidence that their care is being watched over by the people on the ground as well as by the technology reading all of their life-science indicators as they flow into the Cerner applications.

Win against sepsis worldwide 

Gardner: Deirdre, what is new and different about the technology and processes that makes it easier to consume intelligence at the healthcare edge? How are nurses and other caregivers reacting to these new opportunities, such as the algorithm for sepsis?

Deirdre Stewart

Stewart

Stewart: I have seen this growing around the world, having spent a number of years in the Middle East and watched the sepsis algorithm gain traction in countries like Qatar, UAE, and Saudi Arabia. Now we’re seeing it deployed across Europe, including Ireland and the UK.

Once nurses and clinicians get over the initial feeling of, “Hang on a second, why is the computer telling me my business? I should know better,” and understand how it all happens, they benefit enormously.

But it’s not just the clinicians who benefit, Dana, it’s the patients. We have documented evidence now. We want to stop patients ever getting to the point of having sepsis. This algorithm and other similar algorithms alert the front-line staff earlier, and that allows us to prevent patients developing sepsis in the first place.

Some of the most impressive figures show the reduction in the incidence of sepsis and the increase in identification of the early stages of sepsis, the systemic inflammatory response part. When that data is fed back to the doctors and nurses, they understand the importance of such real-time documentation.

I remember, in the early days of electronic medical records, the nurses might have been inclined not to do such real-time documentation. But when they understand how the algorithms work within the system to identify anything that is out of place or out of kilter, it really increases adoption, and definitely their liking of the system and what it can provide.

Gardner: Let’s dig into what this system does before we look at some of the implications. Missy, what does Cerner’s CareAware platform approach do?

Ostendorf: The St. John Sepsis Surveillance Agent looks for early warning signs so that we can save lives. There are three pieces: monitoring, alerting, and then the prescribed intervention.

It goes to what Deirdre was speaking to: the documentation is being done in real time, instead of the previous practice where a nurse in the intensive care unit (ICU) might have had a piece of paper in her pocket on which she would write down, for instance, the patient’s vital signs.

Maybe four hours later she would sit at a computer and enter four hours of vitals, taken every 15 minutes, for that patient. Well, as you can imagine, a lot can happen in four hours in the ICU. By having all of the information flow into the electronic medical record, we can now have the sepsis agent algorithm continually monitoring that data.

It surveys the patient’s temperature, heart rate, and glucose level — and if those change and fall outside of safe parameters, it automatically sends alerts to the care team so they can take immediate action. With that immediate action, they can change how they are treating that patient. They can give intravenous antibiotics and fluids, and there is an 80 percent to 90 percent improvement in lives saved when you can take that early intervention.
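
To make that monitor-and-alert loop concrete, here is a minimal sketch of how streaming vitals could be screened against configurable ranges. It is illustrative only; the field names, threshold values, and notify hook are assumptions for the example, not the actual St. John Sepsis Surveillance Agent or CareAware code.

```python
# Illustrative sketch only -- thresholds and field names are hypothetical,
# not the St. John Sepsis Surveillance Agent's actual rules.
from dataclasses import dataclass

@dataclass
class Vitals:
    patient_id: str
    temperature_c: float
    heart_rate_bpm: int
    glucose_mg_dl: float

# Hypothetical "safe" ranges; a real clinical system would use validated criteria.
SAFE_RANGES = {
    "temperature_c": (36.0, 38.3),
    "heart_rate_bpm": (50, 100),
    "glucose_mg_dl": (70, 140),
}

def screen_vitals(v: Vitals) -> list:
    """Return the names of any vital signs outside their safe range."""
    flags = []
    for name, (low, high) in SAFE_RANGES.items():
        value = getattr(v, name)
        if not low <= value <= high:
            flags.append(name)
    return flags

def on_new_reading(v: Vitals, notify) -> None:
    """Handler called for each new reading: alert the care team when values drift."""
    flags = screen_vitals(v)
    if flags:
        notify(f"Sepsis-risk review for {v.patient_id}: abnormal {', '.join(flags)}")

if __name__ == "__main__":
    reading = Vitals("patient-042", temperature_c=38.9, heart_rate_bpm=118, glucose_mg_dl=155)
    on_new_reading(reading, notify=print)  # prints an alert for the abnormal values
```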

So, we’re changing the game by leveraging data that was already there; we are just taking advantage of it and putting it into the hands of the clinicians so that action can be taken early. That’s the most important part. We have been able to make the data actionable.

Gardner: Rich, this sounds straightforward, but there is a lot going on to make this happen, to make the edge, where the patient is, able to deliver data, capture data, and keep it protected, secure, and in compliance. What has had to come together to support what Missy just described in terms of the Cerner solution?

Healthcare tech progresses to next level 

Rich Bird

Bird

Bird: Focusing on the outcomes is very important. It delivers confidence to the clinical team, which is always front of mind. And it provides that in a way that is secure, real-time, and available no matter where the care team is. That’s very, very important. And the fact that all of the devices are connected opens up great opportunities in terms of the next evolution of healthcare technology.

Until now we have been digitizing the workflows that have always existed, taking paper and turning it into digital information. For me, this represents the next evolution of that: how do we get more value from that data? Having Wi-Fi connectivity across the whole of a site is not easy. It’s something that we pride ourselves on making simple for our clients. But a key thing that you mentioned was security around all of that.

When you have everything speaking to everything else, that also introduces the potential for a bad actor. How do we protect against that? How do we ensure that all of the data is collected, transported, and recorded in a safe way? If a bad actor were to get into the external or internal network, how do we identify them and shut it down?

Working together with our partners, that’s something that we take great pride in doing. We spoke about mobility, and outside of healthcare, in other industries, mobility usually means people have wide access to things.

But within hospitals, of course, that mobility is about how clinicians can collect and access the data wherever they are. It’s not just one workstation in a corner that the care team uses every now and again. The technology now for the care team gives them the confidence to know the data they are taking action on is collected correctly, protected correctly, and provided to them in a timely manner.

Gardner: Missy, another part of the foundational technology here is that algorithm. How are machine learning (ML) and AI coming to bear? What is it that allowed you to create that algorithm, and why is that a step further than simple reports or alerts?

Ostendorf: This is the most exciting part of what we’re doing today at Cerner and in healthcare. While the St. John’s Sepsis Algorithm is saving lives in a large-scale way – and it’s getting most of the attention — there are many things we have been able to do around the world.

Deirdre brought up Ireland. Even back in 2009, one of our clients there, St. James’s Hospital in Dublin, was in the news because they made the decision to take the data and build decision-making questions into the front-end application that clinicians use to order a CT scan. Unlike ordinary X-rays, CT scans deliver a much higher dose of radiation, so we don’t want to have a patient unnecessarily go through a CT scan. The more they have, the higher their risks go.

By implementing three questions, the system looks at the trends and at why clinicians thought they needed the scan, based on previous patients’ experiences: did that CT scan make a difference in how they were diagnosed? And now with ML, it can tell the clinician on the front end, “This really isn’t necessary for what you are looking for to treat this patient.”

Clinicians can always override that; they can always call the X-ray department and say, “Look, here’s why I think this one is different.” But in Ireland they were able to lower the number of CT scans that had always been automatically ordered. So with ML they are changing behaviors and making their community healthier. That’s one example.
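
As a rough illustration of that kind of order-entry gate, the sketch below scores a requested CT against a few yes/no questions and a prior-outcome rate before letting the order proceed, while still allowing the clinician to override. The questions, weights, threshold, and function names are hypothetical assumptions, not St. James’s Hospital’s or Cerner’s actual logic.

```python
# Hypothetical order-entry gate for a CT scan; the questions, weights, and
# threshold below are illustrative assumptions, not the hospital's actual logic.

def ct_scan_recommended(recent_ct: bool, changes_treatment: bool,
                        alternative_considered: bool,
                        prior_scan_helped_rate: float) -> bool:
    """Stand-in for an ML model trained on whether past CT orders changed a diagnosis."""
    if recent_ct:
        return False  # recent imaging likely already answers the question
    if not changes_treatment:
        return False  # the result would not alter the treatment plan
    if not alternative_considered:
        return False  # prompt the clinician to weigh a lower-radiation option first
    # prior_scan_helped_rate: fraction of similar past orders that changed the diagnosis
    return prior_scan_helped_rate >= 0.2

def place_ct_order(recent_ct, changes_treatment, alternative_considered,
                   prior_scan_helped_rate, clinician_override=False):
    """The clinician can always override the suggestion, as noted above."""
    if clinician_override or ct_scan_recommended(recent_ct, changes_treatment,
                                                 alternative_considered,
                                                 prior_scan_helped_rate):
        return "ORDER PLACED"
    return "ORDER HELD: review the suggestion or override with justification"

print(place_ct_order(recent_ct=True, changes_treatment=True,
                     alternative_considered=True, prior_scan_helped_rate=0.4))
```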

Another example of where we are using the data and ML is with the Cerner Opioid Toolkit in the United States (US). We announced that in 2018 to help our healthcare system partners combat the opioid crisis that we’re seeing across America.

Deirdre, you could probably speak to the study as a clinician.

Algorithm-assisted opioid-addiction help

Stewart: Yes, indeed. It’s interesting work being done in the US on what they call opioid-induced respiratory depression (OIRD). It looks like approximately 1 in 200 hospitalized surgical patients can end up with an opioid-induced ventilatory impairment, and that results in a large cost to healthcare. In the US alone, it was estimated to have cost $2 billion in 2011. And The Joint Commission has made some recommendations on how the assessment of patients should be personalized.

It’s not just one single standardized form with a score that is generated based on questions that are answered. Instead, it looks at the patient’s age, demographics, previous conditions, and any opioid intake in the previous 24 hours. According to the risks for that patient, it then recommends limiting the amount of opioids they are given. They also looked at the patients who ended up in respiratory distress and found that a drug agent to reverse that distress was being administered too many times and at too high a cost in relation to patient safety.
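
A toy version of that kind of personalized check might look like the following sketch. The risk factors, weights, dose cutoff, and thresholds here are illustrative assumptions only, not The Joint Commission’s recommendations or the Cerner Opioid Toolkit’s actual scoring.

```python
# Illustrative opioid-risk sketch; the factors, weights, and thresholds are hypothetical.
def oird_risk_score(age, has_sleep_apnea, renal_impairment, opioid_mg_last_24h):
    """Crude additive risk score for opioid-induced respiratory depression (OIRD)."""
    score = 0
    score += 2 if age >= 65 else 0
    score += 3 if has_sleep_apnea else 0
    score += 2 if renal_impairment else 0
    if opioid_mg_last_24h >= 50:
        score += 3
    elif opioid_mg_last_24h > 0:
        score += 1
    return score

def dosing_guidance(score):
    """Map the score to the recommendation the ordering clinician would see."""
    if score >= 6:
        return "High risk: limit opioids, consider alternatives, increase respiratory monitoring"
    if score >= 3:
        return "Moderate risk: reduce dose and reassess frequently"
    return "Standard protocol"

print(dosing_guidance(oird_risk_score(age=72, has_sleep_apnea=True,
                                      renal_impairment=False, opioid_mg_last_24h=60)))
```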

Now, with the algorithm, they have managed to reduce the number of patients who end up in respiratory distress and to limit the amount of narcotics given to each specific patient. It’s no longer a generalized rule; it looks at specific patients, alerts, and intervenes. I also like the willingness of our clients worldwide to share this information. I have been on calls recently where clients voiced interest in using this in Europe or the Middle East. So it’s not just one hospital doing this and improving their outcomes — it’s now something that can be looked at and done worldwide. That’s the same whenever our clients devise a particular outcome to improve; we have seen many examples of that around the world.

Ostendorf: It’s not just collecting data, it’s being able to act on the data. We see how that’s creating not only great experiences for a partner, but healthier communities.

Gardner: This is a great example of getting the best of what people can do, with their cognitive abilities and their ability to contextualize, and the best of what machines can do with automation and orchestration of vast data and analytics. Rich, how do you view this balancing act between attaining the best of what people can do and what machines can do? How do these medical use cases demonstrate that potential?

Machines plus, not instead of, people 

Bird: When I think about AI, I grew up with the science fiction depiction where AI is a threat. If it’s not taking your life, it’s probably going to take your job.

But we want to be clear. We’re not replacing doctors or care teams with this technology. We’re helping them make more informed and better decisions. As Missy said, they are still in control. We are providing data to them in a way that helps them improve the outcomes for their patients and reduce the cost of the care that they deliver.

It’s all about using technology to reduce the amount of time and money that care costs, to improve patient outcomes – and also to enhance the clinicians’ professionalism.

Missy also talked about adding a few questions into the workflow. I used to work with a chief technology officer (CTO) of a hospital who often talked about medicine as eminence-based, that is, based on the individuals who deliver it. There are numerous, different healthcare systems based on the individuals delivering them. With this digital technology, we can nudge that a little bit. In essence, it says, “Don’t just do what you’ve always done. Let’s examine what you have done and see if we can do it a little bit better.”

The general topic we’re talking about here is digitization. In this context we’re talking about digitizing the analog human body’s vital signs. Any successful digitization of any industry is driven by the users. So, we see that in the entertainment industry, driven by people choosing Netflix over DVDs from the store, for example.

When we talk about delivering healthcare technology in this context, we know that personal healthcare data cannot simply be shared. It is the most personal data in the world; we cannot share that. But when we can show the value of the data when it is shared in a safe way — highly regulated, but shared in a safe way — the clinical teams can then see the value generated from using the data. It changes the conversation: how much does the technology cost, and how much can we save by using it?

For me, the really exciting thing about this is technology that helps people provide better care and helps patients be protected while they’re in hospital, and in some cases avoid having to come into the hospital in the first place.

Gardner: Getting back to the sepsis issue as a critical proof-point of life-enhancing and life-saving benefits, Missy, tell us about the scale here. How is this paying huge dividends in terms of saved lives?

Life-saving game changer 

Ostendorf: It really is. The World Health Organization (WHO) statistics from 2018 show that 30 million people worldwide experience a sepsis event each year, and by their classification six million of those cases could lead to death. In 2018 in the UK, there were 150,000 annual cases, with some 44,000 of those ending in death.

You can see why this sepsis algorithm is a game-changer, not just for a specific client, but for everyone around the world. It gives clinicians the information they need in a timely manner so that they can take immediate action — and they can save lives.

Rich talked about the resources that we save and the costs that are driven out; all of those things are extremely important. But when you are the patient or the patient’s family, that translates into a person who actually gets to go home from the hospital. You can’t put a dollar amount or an efficiency metric on that.

It’s truly saving lives, and that’s amazing to think about. We’re doing it by simply taking the data that was already being collected, running it through the St. John sepsis algorithm, and alerting the clinicians so that they can take quick action.

Stewart: It was a profound moment for me after Hamad Medical Corp. in Qatar, where I had run the sepsis algorithm across their hospitals for about 11 months, went through the data and reckoned that they had potentially saved 64 lives.

And at the time when I was reading this, I was standing in a clinic there. I looked out at the clinic, it was a busy clinic, and I reckoned there were 60 to 70 people sitting there. And it just hit me like a bolt of lightning to think that what the sepsis algorithm had done for them could have meant the equivalent of every single person in that room being saved. Or, on the flipside, we could have lost every single person in that room.

Mothers, fathers, husbands, wives, sons, daughters, brothers, sisters — and it just hit me so forcefully and I thought, “Oh, my gosh, we have to keep doing this.” We have to do more and find out all those different additional areas where we can help to make a difference and save lives.

Gardner: We have such a compelling rationale for employing these technologies and processes and getting people and AI to work together. In making that precedent we’re also setting up the opportunity to gather more data on a historical basis. As we know, the more data, the more opportunity for analysis. The more analysis, the more opportunity for people to use it and leverage it. We get into a virtuous, positive adoption cycle.

Rich, once we’ve established the ability to gather the data, we get a historical base of that data. Where do we go next? What are some of the opportunities to further save lives, improve patient outcomes, enhance patient experience, and reduce costs? What is the potential roadmap for the future?

Personalization improves patients, policy 

Bird: The exciting thing is, if we can take every piece of medical information about an individual and provide that in a way that the clinical team can see it from one end of the user’s life right up to the present day, we can provide medicine that’s more personalized. So, treating people specifically for the conditions that they have.

Missy was talking about evaluating more precisely whether to send a patient for a certain type of scan. There’s also another side of that. Do we give a patient a certain type of medication?

When we’re in a situation where we have the patient’s whole data profile in front of us, clinical teams can make better decisions. Are they on a certain medication already? Are they allergic to a medication that you might prescribe to them? When we factor in their DNA, the combination of their physiology, and the multiple conditions that they have, then we start to see that better clinical decisions can be made. We can treat people uniquely for their specific conditions.

At Hewlett Packard Labs, I was recently talking with someone about how big data will revolutionize healthcare. You have certain types of patients with certain conditions in a cohort, so how can we make better decisions for that cohort of patients with those co-conditions at a specific time in their lives? And then, how do we do the same thing at the level of the individual?

It all sounds very complicated, but my hope is, as we get closer, as the power of computing improves, these insights are going to reveal themselves to the clinical team more so than ever.

There’s also the population health side. Rather than just thinking about patients as individuals, or cohorts of patients, how could policymakers and governments around the world make decisions based on impacts of preventative care, such as incentivizing populations to do more health maintenance? How can we give visibility into that data into the future to make better decisions for populations over the longer period of time?

We want to bring all of this data together in a safe way that protects the security and the anonymity of the patients. It could give those making clinical decisions about the people in front of them, as well as policymakers looking across the whole population, the means to make more informed decisions. We see massive potential around prevention. It could have an impact on how much healthcare costs before the patient actually needs treatment.

It’s all very exciting. I don’t think it’s too far away. All of these data points we are collecting are in their own silos right now. There is still work to do in terms of interoperability, but soon everybody’s data could interact with everybody else’s data. Cerner, for example, is making some great strides around the population health element.

Gardner: Missy, where do you see accelerating benefits happening when we combine edge computing, healthcare requirements, and AI?

At the leading edge of disease prevention

Ostendorf: I honestly believe there are no limits as we continue to take in the data. In places like northern England, where the healthcare system is on a peninsula, they’re treating the entire population.

Rich spoke to population health management. Well, they’re now able to look across the data and see how something that affects the population, like diabetes, specifically affects that community. Clinicians can work with their patients and treat them, and then work with the wider communities to reduce the amount of type 2 diabetes. That reduces the cost of healthcare and reduces the morbidity rate.

That’s the next place where AI is going to make a massive impact. It will no longer be just saving a life with the sepsis algorithm running against those patients who are in the hospital. It will change entire communities and how they approach health as a community, as well as how they fund healthcare initiatives. We’ll be able to see more proactive management of health community by community.

Gardner: Deirdre, what advice do you give to other practitioners to get them to understand the potential and what it takes to act on that now? What should people in the front lines of caregiving be thinking about on how to best utilize and exploit what can be done now with edge computing and AI services?

Stewart: Everybody should have the most basic analytical questions in their heads at all times. How can I make what I am doing better? How can I make what I am doing easier? How can I leverage the wealth of information that is available from people who have walked in my shoes and looked after patients in the same way as I’m looking after them, whether that’s in the hospital or at home in the community? How do I access that in an easier fashion, and how do I make sure that I can help to make improvements in it?

Access to information at your fingertips means not having to remember everything. It’s having it there, and having suggestions made to me. And I’m always going back and reviewing those results and analytics to help improve things the next time around.

From bedside to boardroom, everybody should be asking themselves those questions. Have I got access to the information I need? And how can I make things better? What more do I need?

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

Three generations of Citrix CEOs on enabling a better way to work

For the past 30 years, Citrix has made a successful habit of challenging the status quo. That includes:

  • Delivering applications as streaming services to multiple users

  • Making the entire PC desktop into a secure service

  • Enhancing networks that optimize applications delivery

  • Pioneering infrastructure-as-a-service (IaaS), now known as public cloud, and

  • Supplying a way to take enterprise applications and data to the mobile edge.

Now, Citrix is at it again, by creating digital workspaces and redefining the very nature of applications and business intelligence. How has one company been able to not only reinvent itself again and again, but make major and correct bets on the future direction of global information technology?

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To find out, Dana Gardner, Principal Analyst at Interarbor Solutions, recently sat down to simultaneously interview three of Citrix’s chief executives from the past 30 years: Roger Roberts, Citrix CEO and Chairman from 1990 to 2002; Mark Templeton, CEO of Citrix from 2001 to 2015; and David Henshall, who became the company’s CEO in July of 2017.

Here are some excerpts:

Dana Gardner: So much has changed across the worker productivity environment over the past 30 years. The technology certainly has changed. What hasn’t changed as fast is the human factor, the people.

How do we keep moving the needle forward with technology and also try to attain productivity growth when we have this lump of clay that’s often hard to manage, hard to change?

Mark Templeton

Templeton

Mark Templeton: The human factor “lump of clay” is changing as rapidly as the technology because of the changing demographics of the workforce. Today’s baby boomers are being followed by Gen X, millennials (Gen Y), and then Gen Z, who will be making important decisions 20 years from now.

So the human factor clay is changing rapidly and providing great opportunity for innovation and invention of new technology in the workplace.

Gardner: The trick is to be able to create technology that the human factor will adopt. It’s difficult to solve a chicken-and-egg relationship when you don’t know which one is going to drive the other.

What about the past 30 years at Citrix gives you an edge in finding the right formula?

David Henshall: Citrix has always had an amazing ability to stay focused on connecting people and information — and doing it in a way that’s secure, managed, and available, so that we can abstract away a lot of the complexity that’s inherent in technology.

Because, at the end of the day, all we are really focused on is driving those outcomes and allowing people to be as productive, successful, and engaged as humanly possible by giving them the tools to — as we frame it up — work in a way that’s most effective for them. That’s really about creating the future of work and allowing people to be unleashed so that they can do their best working.

Gardner: Roger, when you started, so much of the IT world was focused on platforms and applications and how one drives the other. You seem to have elevated yourself above that and focused on services, on delivery of productivity – because, after all, they are supposed to be productivity applications. How were you able to see above and beyond the 1980s platform-application relationship?

Roger Roberts

Roberts

Roger Roberts: We grew up when the personal computer (PC) and local area networks (LANs), like Novell NetWare, came on the scene. Everybody wanted to use their own PC, driven primarily by things such as the Lotus applications.

So [applications like] spreadsheets, WordPerfect, and dBase were the tremendous bulk of the market demand at that time. However, with the background that I shared with [Citrix Co-Founder] Ed Iacobucci, we had been in the real world working from mainframes through minicomputers and then to PCs, and so we knew there were applications out there where the existing model – well, it really sucked.

The trick then was to take advantage of the increasing processing power we knew the PC was going to deliver and put it in a robust environment that would have stability so we could target specific customers with specific applications. Those customers were always intrigued with our story.

Our story was not formed to meet the mass market. Things like running ads or trying to search for leads would have been a waste of time and money. It made no sense in those days because the vast majority of the world had no idea of what we were talking about.

Gardner: What turned out to be the killer application for Citrix’s rise? What were the use cases you knew would pay off even before the PC went mainstream?

The personnel touch 

Roberts: The easiest one to relate to is personnel systems. Brown and Root Construction out of Houston, Texas was a worldwide operation. Most of their offices were on construction sites and in temporary buildings. They had a great deal of difficulty managing their personnel files, including salaries, when someone was promoted, reviewed, or there was a new hire.

The only way you could do it in the client-server LAN world was to replicate the database. And let me tell you, nobody wants to replicate their human resources (HR) database across 9,000 or 10,000 sites.

So we came in and said, “We can solve that problem for you, and you can keep all of your data secure at your corporate headquarters. It will always be synchronized because there is only one copy. And we can give you the same capabilities that the LAN-based PC user experiences even over fairly slow telecommunication circuits.”

That really resonated with the people who had those HR problems. I won’t say it was an easy sell. When you are a small company, you are vulnerable. They ask, “How can we trust you to put in a major application using your technology when you don’t have a lot of business?” It was never the technology or the ability to get the job done that they questioned. It was more like having the staying power. That turned out to be the biggest obstacle.

Gardner: David, does it sound a little bit familiar? Today, 30 years later, we’re still dealing with distance, the capability of the network, deciding where the data should reside, how to manage privacy, and secure regulatory compliance. When you listen to Citrix’s use cases and requirements from 30 years ago, does it ring a bell?

Organize, guide, and predict work 

David Henshall

Henshall

Henshall: It absolutely resonates because a lot of what we’re doing is still dealing with the inherent complexity of enterprise IT. Some of our largest customers today are dealing with thousands and thousands of underlying applications. Those can be everything from mainframe applications that Roger talked about through the different eras of client-server — the PC, web, and mobile. A lot of those applications are still in use today because they are adding value to the business, and they are really hard to pull out of the infrastructure.

We can now help them abstract away a lot of that complexity put in over the last 30 years. We start by helping organize IT, allowing them to manage all that complexity of the application tier, and present that out in a way that is easier to consume, easier to manage, and easier to secure.

Several years ago, we began bringing together all of these application types in a way that I would describe as helping to move from organizing IT to organizing work. That means bringing not only the apps but access to all the content and files — whether those reside in on-premises data repositories or in any cloud — including Citrix Cloud. We make that all accessible across universal endpoint management. Then you layer underneath that all kinds of platform capabilities such as security, access control, management, and analytics.

Where we’re taking the company in the future is one step beyond organizing work, to helping guide and predict work. That will drive more engagement and productivity by leveraging machine learning (ML), artificial intelligence (AI), and a lot of other capabilities to present work to people in real time and to suggest and advise on what they need to be most productive.

That’s all just a natural evolution from the original fundamental concept: connect people with the information they need to be productive in real time.

Gardner: One of the ways to improve on these tough problems, Mark, is being in the right place in an ecosystem. Citrix has continuously positioned itself between the data, the systems of record, and the end-user devices. You made a big bet on virtualization as a means to do that.

How do we understand the relationship between the technology and productivity? Is being in the right place and at the right time the secret sauce?

Customers first, innovation always

Templeton: Generically, right place and time is key in just about every aspect of life, but especially the timing of invention and innovation, how it’s introduced, and how to get it adopted.

Citrix adopted a philosophy from an ecosystem perspective from pretty early on. We thought of it as a Switzerland-type of mindset, where we’re willing to work with everyone in the ecosystem — devices, networks, applications, etc. – to interoperate, even as they evolved. So we were basically device-, network-, and application-independent around the kind of the value proposition that David and Roger talked about.

That type of a cooperative mindset is always in style because it is customer-centered. It’s based upon value-drivers for customers, and my experience is that when there are religious wars in the industry — the biggest losers are customers. They pay for the fight, the incompatibilities, and obsolescence.

We made a great reputation for ourselves then by being able to provide a demilitarized zone (DMZ), or platform for détente, so that customers could manage and control their own destiny. The company has that culture and mindset and it’s always been that way. When a customer is better off, we are better off. But it starts with making the customer better off.

Gardner: Roger, we have often seen companies that had a great leap in innovation but then plateaued and got stuck in the innovator’s dilemma, as it’s been called. That hasn’t been the case with Citrix. You have been able to reinvent yourselves pretty frequently. How do you do that as a culture? How do you get people to stay innovative even when you have a very successful set of products? How do you not rest on your laurels?

Templeton: I think for the most part, people don’t change until they have to, and to actively disrupt yourself is a very unnatural act. Being aware of an innovator’s dilemma is the first step in being able to act on it. And we did have an innovator’s dilemma here on multiple occasions.

Because we saw the cliff, we were able to make the turn – mostly ahead of necessity. We made a decision, we made a bet, and we made the innovator’s dilemma actually work for us. We used it as a catalyst for driving change. When you have a lot of smart engineers, if you help them see that innovator’s dilemma, they will fix it; they will innovate.

Gardner: The pace of business sure has ramped up in the last 30 years. You can go through that cycle in 9 or 10 months, never mind 9 or 10 years. David, is that something that keeps you up at night? How do you continue to be one step ahead of where the industry is going?

Embrace, empower change 

Henshall: The sine waves of business cycles are getting much more compressed and with much higher volatility. Today we simply have devices that are absolutely transient. The ways to consume technology and information are coming and going at a pace that is extraordinary. The same thing is true for applications and infrastructure, which not that many years ago involved a major project to install and manage.

Today, it’s a collection of mesh services in so many different areas. By their very nature they become transient. Instead of trying to fight these forces, we look for ways to embrace them and make them part of what we do.

When we talk about the Citrix Workspace platform, it is absolutely device- and infrastructure-independent because we recognize our customers have different choices. It’s very much like the Switzerland approach that Mark talked about. The fact that those choices change over time — and being able to support that change — is critical for our own staying power and stickiness. It also gives customers the level of comfort that we are going to be with them wherever they are in their journey.

But the sheer laws of physics have taken these disruptions to a new place. Not that many years ago, it was about how fast physical goods could move across state or national boundaries. Today’s market moves on a Tweet, a notification, or a new service — something that was just not even possible a few years ago.

Roberts: At the time I retired from Citrix, we were roughly at $500 million [in annual revenue] and growing tremendously. I mean we grew a factor of 10 in four years, and that still amazes me.

Our piece of the market at that time was 100 percent Microsoft Windows-centric. At the same time, you could look at that and tell we could be a multibillion-dollar company just in that space. But then came the Internet, with web apps, web app servers, new technology, HTML, and Java, and you knew the world we were in had a very lucrative and long run ahead of it. But if we didn’t do something, inevitably it was going to die. I think it would have been a slow death, but it would have been death.

Gardner: That brings up the relationship with Microsoft. It’s not out of the question to say that you were helping them avoid the innovator’s dilemma. In instances that I can recall, you were able to push Microsoft off of its safety block. You were an accelerant to Microsoft’s next future. Is that fair, Mark?

Templeton: Well, I don’t think we were an accelerant to Microsoft per se. We were helping Microsoft extend the reach of Windows into places and use cases that they weren’t providing a solution for. But the Windows brand has always been powerful, and it helped us certainly with our [1995] initial public offering (IPO). In fact, the tagline on our IPO was that “Citrix extends the reach of Microsoft Windows,” in many ways, in terms of devices, different types of connectivity, over the Internet, over dial-up and on wireless networks.

Our value to Microsoft was always around being a value-added kind of partner, even though we had a little bit of a rough spot with them. I think most people didn’t really understand it, but I think Microsoft did, and we worked out a great deal that’s been fantastic for both companies for many, many years.

Gardner: David, as we look to the future and think about the role of AI and ML, having the right data is such an important part of that. How has Citrix found itself in the catbird seat when it comes to having access to broad data? How did your predecessors help out with that?

Data drives, digests, and directs the future 

Henshall: Well, if I think about data and analytics right now, over the last couple of years we’ve spent an extraordinary amount of time building out what I call an analytics platform that sits underneath the Citrix Workspace platform.

We have enough places that we can instrument to capture information from everything: the network, the application, the user, the location, the files, and all of those various attributes. We collect a rich dataset of many, many different things.

Taking it to a platform approach allows us to step back and begin introducing modules, if you will, that use this information not just in a reporting way, but in a way to actually drive enforcement across the platform. Those great data collection points are also places where we can enforce a policy.
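
As a sketch of what a collection point that can also enforce might look like, the example below runs a stream of access telemetry through a single policy rule that both reports and acts. The event fields, the rule, and the require-MFA action are assumptions for illustration, not the actual Citrix Analytics data model or API.

```python
# Illustrative telemetry-plus-enforcement loop; the event fields, rule, and
# action are hypothetical, not the Citrix Analytics data model or API.
from dataclasses import dataclass

@dataclass
class AccessEvent:
    user: str
    app: str
    location: str
    usual_locations: tuple

def unusual_location_rule(event: AccessEvent):
    """Return an enforcement action if the event looks risky, otherwise None."""
    if event.location not in event.usual_locations:
        return f"require_mfa:{event.user}"  # the same point reports and enforces
    return None

def process(events):
    """Evaluate each event and collect the actions to apply."""
    return [action for action in (unusual_location_rule(e) for e in events) if action]

events = [AccessEvent("alice", "payroll", "Reykjavik", ("Boston", "New York"))]
print(process(events))  # ['require_mfa:alice']
```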

Gardner: The metadata has become more important in many cases than the foundational database data. The metadata about what’s going on with the network, the relationship between the user and their devices, what’s going on between all the systems, and how the IT infrastructure beneath them is performing.

Did you have a clue, Mark, that the metadata about what’s going on across an IT environment would become so powerful one day?

Templeton: Absolutely. While I was at Citrix, we didn’t yet have the technical platform to handle big data the way you can handle it now. I am really thrilled to hear that under David’s leadership the company is diving into that, because it’s behavioral data around how people are consuming systems — which systems, how they’re working, how available they are, and how they’re performing. And there are many things that data can express around security, which is a great opportunity for Citrix.

Back in my time, in one of the imagination presentations, we would show IT customers how they eventually would have the equivalent of quarterly brokerage reports. You could see all the classes of investments — how much is invested in this type of app, that type of app, the data, where it’s residing, its performance and availability over time. Then you could make important decisions – even simple ones like when do we turn this application off. At that time, there was very little data to help IT make such hard decisions.

So that was always in the idea, but I’m really thrilled to see the company doing it now.

Gardner: So David, now that you have all of that metadata, and the big data systems to analyze it in real-time, what does that get for you?

Serving what you need, before you need it 

Henshall: The applications are pretty broad, actually. If you think about our data platform right now, we’re able to do lots of closed-loop analytics across security, operations, and performance — and really drive all three of those different factors to improve overall performance. You can customize that in an infinite number of ways so customers can manage it in the way that’s right for their business.

But what’s really interesting now is, as you start developing a pattern of behaviors in the way people are going about work, we can predict and guide work in ways that were unavailable not that long ago. We can serve up the information before you need it based on the graph of other things that you’re doing at work.

A great example is mobile applications for airlines today. The smart ones are tied into the other things that you are doing. So an hour before your flight, it already gives you a notification that it’s time to leave for the airport. When you get to your car, you have your map of the fastest route to the airport already plotted out. As you check in, using biometrics or some other form of authentication, it simplifies these workflows in a very intuitive way.

We have amazing amounts of information that will take that example and allow us to drive it throughout a business context.

Gardner: Roger, in 30 years, we have gone from delivering a handful of applications to people in a way that’s acceptable — given the constraints of the environment and the infrastructure — to a point where the infrastructure data doesn’t have any constraints. We are able to refine work and tell people how they should be more productive.

Is that something you could have imagined back then?

Roberts: Quite frankly, as good as I am, no. It’s beyond my comprehension.

I have an example. I was recently in Texas, and we had an airplane that broke down. We had to get back, and using only my smartphone, I was able to charter a flight, sign a contract with DocuSign, pay for it with an automated clearing house (ACH) transfer, and track that flight on FlightAware to the nearest 15 seconds. I could determine how much time it would take us to get home, and then arrange an Uber ride. Imagine that? It still amazes me; it truly amazes me.

Gardner: You guys would know this better than I do, but it seems that you can run a multinational corporation on a device that fits in your palm. Is that an exaggeration?

Device in hand still needs hands-on help 

Templeton: In many ways, it still is an exaggeration. You can accomplish a lot with the smart device in your hand, and to the degree that leadership is largely around communications and the device in your hand gives you information and the ability to communicate, you can do a lot. But it’s not a substitute entirely for other types of tasks and work that it takes to run a big business, including the human relationships.

Gardner: David, maybe when the Citrix vision for 2030 comes out, you will be able to — through cloud, AI, and that device — do almost anything?

Henshall: It will be more about having the right information on demand when you need it, and that’s a trend that we’ve seen for quite some time.

If you look at the broader trends in technology, I mean, we are entering the yottabyte era now, which is a one with 24 zeros after it. The amount of information is absolutely staggering. But turning that into something that is actually useful is nearly impossible.

That’s where AI and ML — and a lot of these other advancements — will allow you to parse through that all and give people the freedom of information that probably just never existed before. So the days of proprietary knowledge, of proprietary data, are quickly coming to an end. The businesses that are going to be successful are those that can put the right information at people’s fingertips at the right time to interact with different business opportunities.

That’s what the technology allows you to do. Advancements in network and compute are making that a very near-term reality. I think we are just on that continuum.

Goodbye digital, hello contextual era 

Templeton: You don’t realize an era is over until you’re in a new one. For example, I think the digital era is now done. It ended when people woke up every day and started to recognize that they have too many devices, too many apps that do similar things, too many social things to manage, and blah, blah, blah. How do you keep track of all that stuff in a way where you know what to look at and when?

The technologies underlying AI and ML are defining a new era that I call the “contextual era.” A contextual era works exactly how David just described it. It senses and predicts. It makes the right information available in true context. Just like Roger was saying, it brings together all of the things he needs, situationally. And, obviously, it could be even easier than the experience that he described.

We are in the contextual era now because the amount of data, the number of apps, and the plethora of devices that we all have access to is beyond human comprehension.

Gardner: David, how do you characterize this next era? Imagine us having a conversation in 30 years with Citrix, talking about how it was able to keep up with the times.

Henshall: Mark put it absolutely the way I would, in terms of being able to be contextual in such a way that it brings purpose through the chaos, or the volume of data, or the information that exists out there. What we are really trying to do in many dimensions is think about our technology platform as a way that creates space. Space for people to be successful, space for them to really do their best work. And you do that by removing a lot of the clutter.

You remove a lot of the extraneous things that bog people down. When we talk about it with our customers, the statistics behind-the-scenes are amazing. We are interrupted every two minutes in this world right now; a Tweet, a text, an email, a notification. And science shows that humans are not very good at multitasking. Our brains just haven’t evolved that way.

Gardner: It goes back to that lump of clay we talked about at the beginning. Some things don’t change.

Henshall: When you are interrupted, it takes you 20 minutes on average to get back to the task at hand. That’s one of the fundamental reasons why the statistics around engagement around the world are horrible.

For the average company, 85 percent of their employee base is disengaged — 85 percent! Gallup even put a number on that — they say it’s a $7 trillion annual problem. It’s enormous. We believe that part of that is a technology problem. We have created technologies that are no longer enhancing people’s ability to be productive and to be engaged.

If we can simplify those interactions, allow workers to engage in a way that’s more intuitive, more focused on the task at hand versus the possibility of interruption, it just helps the entire ecosystem move forward. That’s the way I think about it.

CEO staying-power strategies 

Gardner: On the subject of keeping time on your side, it’s not very often I get together with 30 years’ worth of CEOs to talk about things. For those in our audience who are leaders of companies, small or large, what advice can you give them on how to keep their companies thriving for 30 years?

Roberts: Whenever you are running a company — you are running the company. It puts a lot of pressure on you to think about the future, when technology is going to change, and how you get ahead of the power curve before it’s too late.

There is a hell of an operational component. How do you keep the wheel turning and the current moving? How do you keep it functioning, how do you grow staff, and how do you put in systems and infrastructure?

The challenge of managing as the company grows is enormously more complicated. There is the complexity of the technology, the people, the market, and what’s going on in the ecosystem. But never lose sight of the execution component, because it can kill you quicker than losing sight of the strategy.

One thing I tried to do was instill a process in the company where seemingly hard questions were easy, because it was part of the fabric of how people measured and kept up with their jobs, what they were doing, and what they were forecasting. Things as simple as, “Jennifer, how many support calls are we going to get in the second quarter next year or the fourth quarter of the following year?” It’s about thinking through what you need in order to answer questions like those.

“How much are we going to sell?” Remember, we were selling packaged product, through a two-step distribution channel. There was no backlog. Backlog was a foreign concept, so every 30 days we had to get up and do it all over again.

It takes a lot of thought, depending on how big you want to be. If you are a CEO, the most important thing to figure out is how big you want to be. If you want to be a small, lifestyle company, then hats off; I admire you. There is nothing wrong with that.

If you want to be a big company, you need to be putting in process, systems, infrastructure, strategy, and marketing now — even though you might not think you need it. And then the other side of that is, if you go overboard in that direction, process will kill you. When everybody is so ingrained in the process that nobody is questioning and nobody is thinking, they are just going through the process, and that is as deadly as not having one.

So process is necessary, process is not sufficient. Process will help you, and it will also kill you.

Gardner: Mark, same question, advice to keep a company 30 years’ young?

Templeton: Going after Roger is the toughest thing in the world. I’ll share where I focused at Citrix. Number one is making sure you have an opinion about the future that you believe in strongly enough to bet your career and business on it. And number two is making sure that you are doing the things that make your business model, your products, and your services more relevant over time. That allows you to execute on the great advice that Roger just gave with the wind at your back, using the normal forces of change and evolution in the world to work for you, because it’s already too hard and you need all the help you can get.

A simple example is the whole idea of consumerization of IT. Pretty early on, we had an opinion about that, so, at Citrix, we created a bring-your-own-device (BYOD) policy and an experimental program. I think we were among the first and we certainly evangelized it. We developed a lot of technology to help support it, to make it work and make it better. That BYOD idea became more and more relevant over time as the workforce got younger and younger and began bringing their own devices to the office, and Citrix had a solution.

So that’s an example. We had that opinion and we made a bet on it. And it put some wind at our back.

Gardner: David, you are going to be able to get tools that these guys couldn’t get. You are going to have AI and ML on your side. You are going to be able to get rid of some of those distractions. You are going to take advantage of the intelligence embedded in the network — but you are still going to also have to get the best of what the human form factor, that lump of clay, that wetware, can do.

So what’s the CEO of the future going to do in terms of getting the right balance between what companies like Citrix are providing them as tools — but not losing track of what’s the best thing that a human brain can do?

IT’s not to do and die, but to reason why 

Henshall: It’s an interesting question. In a lot of ways, technology and the pace of evolution right now are breaking down the historical hierarchy that has existed in a lot of organizations. It has created the concept of a liquid enterprise, similar to what we’ve talked about with those who can respond and react in different ways.

But what that doesn’t ever replace is what Roger and Mark were talking about — the need to have a future-back methodology, one that I subscribe to a lot, where we help people understand where we’re going, but more importantly, why.

And then you operationalize that in a way that gives people context, so everybody has clarity in terms of roles and responsibilities, operational outcomes, milestones, metrics, and how we are going to measure that along the way. Then that becomes a continuous process.

There is no such thing as, “Set it and forget it.” Without a perspective and a point of view, everything else doesn’t have enough purpose. And so you have to marry those going forward. Make sure you’re empowering your teams with culture and clarity — and then turn them loose and let them go.

Gardner: Productivity in itself isn’t necessarily a high enough motivator.

Henshall: No, productivity by itself is just a metric, and it’s going to be measured in 100 different ways. Productivity should be based on understanding clarity in terms of what the outcomes need to be and empowering that, so people can do their best work in a very individual and unique way.

The days of measuring tasks are mostly in the past. Measuring outcomes, which can be somewhat loosely defined, is really where we are going. And so, how do we enable that? That’s how I think about it.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Citrix.

How the ArchiMate modeling standard helps Enterprise Architects deliver successful digital transformation

The next BriefingsDirect business trends discussion explores how the latest update to the ArchiMate® standard helps Enterprise Architects (EAs) make complex organizations more agile and productive.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Joining us is Marc Lankhorst, Managing Consultant and Chief Technology Evangelist at BiZZdesign in The Netherlands. He also leads the development team within the ArchiMate Forum at The Open Group. The interview is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: There are many big changes happening within IT, business, and the confluence of both. We are talking about Agile processes, lean development, DevOps, and the ways that organizations are addressing rapidly changing business environments and requirements.

Companies today want to transform digitally to improve their business outcomes. How does Enterprise Architecture (EA) as a practice and specifically the ArchiMate standard support being more agile and lean?

Lankhorst: The key role of enterprise architecture in that context is to control and reduce complexity, because complexity is the enemy of change. If everything is connected to everything else, it’s too difficult to make any changes, because of all of the moving parts.

And one of the key tools is to have models of your architecture to create insights into how things are connected so you know what happens if you change something. You can design where you want to go by making something that is easier to change from your current state.

It’s a misunderstanding that if you have Agile development processes like Scrum or SAFe then eventually your company will also become an agile organization. It’s not enough. It’s important, but if you have an agile process and you are still pouring concrete, the end result will still be inflexibility.

Stay flexible, move with the times

So the key role of architecture is to ensure that you have flexibility in the short-term and in the long-term. Models are a great help in that. And that’s of course where the ArchiMate standard comes in. It lets you create models in standardized ways, where everybody understands them in the same way. It lets you analyze your architecture across many aspects, including identifying complexity bottlenecks, cost issues, and risks from outdated technology — or any other kind of analysis you want to make.

Enterprise architecture is the key discipline in this new world of digital transformation and business agility. Although the discipline has to change to move with the times, it’s still very important to make sure that your organization is adaptive, can change with the times, and doesn’t get stuck in an overly complex, legacy world.

Find Out More About

The Open Group ArchiMate Forum

Gardner: Of course, Enterprise Architecture is always learning and improving, and so the ArchiMate standard is advancing, too. So please summarize for me the improvements in the new release of ArchiMate, version 3.1.

Lankhorst: The most obvious new addition to the standard is the concept of a value stream. That’s inspired by business architecture, and those of you who follow things like TOGAF®, a standard of The Open Group, or the BIZBOK will know that value streams are a key concept there, next to things like capabilities. ArchiMate didn’t yet have a value stream concept. Now it does, and it plays the same role as the value stream does in the TOGAF framework.

It lets you express how a company produces its value and what the stages in the value production are. So that helps describe how an organization realizes its business outcomes. That’s the most visible addition.

Next to that, there are some other, more minor changes, such as the ability to use a directed association relationship instead of only an undirected one. That can come in very handy in all kinds of modeling situations. And there are some technical improvements: various definitions have been clarified, and the specification of the metamodel has been improved.

One technical improvement specifically of interest to ArchiMate specialists is the way in which we deal with so-called derived relationships. A derived relationship is basically the conclusion you can draw from a whole chain of things connected together. You might want to see what’s actually the end-to-end connection between things on that chain so there are rules on that. We have changed, improved, and formalized these rules. That allows, at a technical level, some extra capabilities in the language.

And that’s really for the specialists. I would say the first two things, the value stream concept and this directed association — those are the most visible for most end users.

Overall value of the value stream 

Gardner: It’s important to understand how value streams now are being applied holistically. We have seen them, of course, in the frameworks — and now with ArchiMate. Value streams provide a common denominator for organizations to interpret and then act. That often cuts across different business units. Help us understand why value streams as a common denominator are so powerful.

Lankhorst: Value stream helps express the value that an organization produces for its stakeholders, the outcomes it produces, and the different stages needed to produce that value. It provides a concept that’s less detailed than looking at your individual business processes.

If you look at the process level, you might be standing too closely in front of the picture. You don’t see the overall perspective of how a company creates value for its customers. You only see the individual tasks that you perform, but how that actually adds value for your stakeholders — that’s really the key.

The capability concept, and the mapping between value streams and capabilities, is also very important. That allows you to see what capabilities are needed for the stages in the value production. And in that way, you have a great starting point for the rest of the development of your architecture. It tells you what you need to be able to do in order to add value in these different stages.

You can use that at a relatively high level, from an economic perspective, where you look at classical value chains from, say, a supplier via internal production to marketing and sales and on to the consumer. You can also use it at a fine-grained level. But the focus is really always about the value you create — rather than the tasks you perform.
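As a rough illustration of that mapping between value-stream stages and the capabilities that enable them, here is a minimal Python sketch. The stages, outcomes, and capability names are invented for the example; they are not drawn from the ArchiMate specification.

```python
# Hypothetical value stream for an insurer, mapped to enabling capabilities.
# Names and structure are illustrative only.
value_stream = [
    {"stage": "Acquire customer", "outcome": "Signed policy",
     "capabilities": ["Marketing", "Underwriting"]},
    {"stage": "Service policy", "outcome": "Policy kept up to date",
     "capabilities": ["Customer service", "Policy administration"]},
    {"stage": "Settle claim", "outcome": "Claim paid",
     "capabilities": ["Claims handling", "Fraud detection", "Payments"]},
]

def capabilities_for(stream):
    """Collect the distinct capabilities needed across all stages."""
    return sorted({cap for stage in stream for cap in stage["capabilities"]})

print(capabilities_for(value_stream))
```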

Gardner: For those who might not be familiar with ArchiMate, can you provide us with a brief history? It was first used in The Netherlands in 2004 and it’s been part of The Open Group since 2008. How far back is your connection with ArchiMate?

Lankhorst: Yes, it started as a research and development project in The Netherlands. At that time, I worked at an applied research institute in IT. We did joint collaborative projects with industry and academia. In the case of ArchiMate, there was a project in which we had, for example, a large bank and a pension fund and the Dutch tax administration. A number of these large organizations needed a common way of describing architectures.

Download the New

ArchiMate Specification 3.1 

That began in 2002. I was the project manager of that project until 2004. Already during the project, the participating companies said, “We need this. We need a description technique for architecture. We also want you to make this a standard.” And we promised to make it into a standard. We needed a separate organization for that.

So we were in touch with The Open Group in 2004 to 2005. It took a while, but eventually The Open Group adopted the standard, and the official version under the aegis of The Open Group came out in 2008, version 1. We had a number of iterations: in 2012, version 2.0, and in 2016, version 3.0. Now, we are at version 3.1.

Gardner: The vision for ArchiMate is to be a de facto modeling notation standard for Enterprise Architecture that helps improve communication between different stakeholders across an organization, a company, or even a country or a public agency. How do the new ArchiMate improvements help advance this vision, in your opinion?

Lankhorst: The value streams concept gives a broader perspective of how value is produced — even across an ecosystem of organizations. That’s broader than just a single company or a single government agency. This broad perspective is important. Of course it works internally for organizations, it has worked like that, but increasingly we see this broader perspective.

Just to name two examples of that. The North Atlantic Treaty Organization (NATO), in its most recent NATO Architecture Framework, version 4, which came out early last year, now specifies ArchiMate as one of the two allowed metamodels for modeling architecture for NATO.

For these different countries and how they work together, this is one of the allowed standards. The British Ministry of Defence, for example, wants to use ArchiMate models and the ArchiMate Exchange format to communicate with industry. When they issue a request for proposal (RFP), they use ArchiMate models to describe its context and then require industry to provide ArchiMate models to describe their solution.

Another example is in the European System of Central Banks. They have joint systems for doing transactions between central banks. They have completely modeled those out in ArchiMate. So, all of these different central banks have the same understanding of the architecture, across, between, and within organizations. Even within organizations you can have the same problems of understanding what’s actually happening, how the bits fit together, and make sure everybody is on the same page.

A manifesto to control complexity 

Gardner: It’s very impressive, the extent to which ArchiMate is now being used and applied. It’s also impressive that ArchiMate, whose goal is to corral complexity, hasn’t fallen into the trap of becoming too complex itself. One of its goals was to remain as small as possible, not to cover every single scenario.

How do you manage not to become too complex? How has that worked for ArchiMate?

Lankhorst: One of the key principles behind the language is that we want to keep it as small and simple as possible. We drew up our own ArchiMate manifesto — some might know of the Agile manifesto — and the ArchiMate manifesto is somewhat similar.

One of the key principles is that we want to cover 80 percent of the cases for 80 percent of the common users, rather than try to cover 100 percent for 100 percent of the users. That would give you exotic use cases that require very specific features in the language that hardly anybody uses. It can clutter the picture for all the users. It would be much more complicated.

Find Out More About

The Open Group ArchiMate Forum

So, we have been vigilant to avoid that feature-creep, where we keep adding and adding all sorts of things to the language. We want to keep it as simple as possible. Of course, if you are in a complex world, you can’t always keep it completely straightforward. You have to be able to address that complexity. But keeping the language as easy to use and as easy to understand as possible has and will remain the goal.

Gardner: The Open Group has been adamant about having executable standards as a key principle, not too abstract but highly applicable. How is the ArchiMate standard supporting this principle of being executable and applicable?

Lankhorst: In two major ways. First, because it is implemented by most major architecture tools in the market. If you look at the Gartner Magic Quadrant and the EA tools in there, pretty much all of them have an implementation of the ArchiMate language. It is just the standard for EA.

In that sense, it becomes the one standard that rules them all in the architecture field. At a more detailed level, the executable standards, the ArchiMate Exchange format has played an important role. It makes it possible to exchange models between different tools for different applications. I mentioned the example of the UK Ministry of Defence which wants to exchange models with industry, specify their requirements, and get back specifications and solutions using ArchiMate models. It’s really important to make these kinds of models and this kind of information available in ways that the different tools can use, manipulate, and analyze.

Gardner: That’s ArchiMate 3.1. When did that become available?

Lankhorst: The first week of November 2019.

Gardner: What are the next steps? What does the future hold? Where do you take ArchiMate next?

Lankhorst: We haven’t made any concrete plans yet for possible improvements. But some things we can think about are simplifying the language further so that it is even easier to use, perhaps having a simplified notation for certain use cases so you don’t need the precision of the current notation, or maybe having an alternative notation that is easier on the eye.

There are some other things that we might want to look at. For example, ArchiMate currently assumes that you already have a fair idea about what kind of solution you are developing. But maybe it should move upstream, to the brainstorming phase of architecture, supporting the initial stages of design. That might be something we want to look into.

There are various potential directions but it’s our aim to keep things simple and help architects express what they want to do — but not make the language overly complicated and more difficult to learn.

Download the New

ArchiMate Specification 3.1 

So simplicity, communication, and maybe expanding a bit toward early-stage design. Those are the ideas that I currently have. Of course, there is a community, the ArchiMate Forum within The Open Group. All of the members have a say. There are other outside influences as well, with various ideas of where we could take this.

Gardner: It’s also important to note that the certification program around ArchiMate is very active. How can people learn more about certification in ArchiMate?

Certification basics 

Lankhorst: You can find more details on The Open Group website; it’s all laid out there. Basically, there are two levels of certification, and you can take the exams for those. You can take courses with various course providers, BiZZdesign being one of them, and then prepare for the exam.

Increasingly, I see in practice that certification is a requirement when architects are hired, so that the company doing the hiring, of consultants, say, knows that they at least know the basics. So, I would certainly recommend taking an exam if you are into Enterprise Architecture.

Gardner: And of course there are also the events around the world. These topics come up and are often very uniformly and extensively dealt with at The Open Group events, so people should look for those at the website as well.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: The Open Group.

How smart IT infrastructure has evolved into the era of data centers-as-a-service

There has never been a better time to build efficient, protected, powerful, and modular data centers — yet many enterprises and public sector agencies cling to aging, vulnerable, and chaotic legacy IT infrastructure.

The next BriefingsDirect interview examines how automation, self-healing, data-driven insights, and increasingly intelligent data center components are delivering what amounts to data centers-as-a-service.

Listen to the podcast. Find it on iTunes. Read a full transcript or  download a copy. 

Here to explain how a modern data center strategy includes connected components and data analysis that extends from the data center to the computing edge is Steve Lalla, Executive Vice President of Global Services at Vertiv. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Steve, when we look at the evolution of data center infrastructure, monitoring, and management software and services, they have come a long way. What’s driving the need for change now? What’s making new technology more pressing and needed than ever?

Lalla: There are a number of trends taking place. The first is the products we are building and the capabilities of those products. They are getting smarter. They are getting more enabled. Moore’s Law continues. What we are able to do with our individual products is improving as we progress as an industry.

Steve Lalla

Lalla

The other piece that’s very interesting is it’s not only how the individual products are improving, but how we connect those products together. The connective tissue of the ecosystem and how those products increasingly operate as a subsystem is helping us deliver differentiated capabilities and differentiated performance.

So, data center infrastructure products are becoming smarter and they are becoming more interconnected.

The second piece that’s incredibly important is broader network connectivity — whether it’s wide area connectivity or local area connectivity. Over time, all of these products need to be more connected, both inside and outside of the ecosystem. That connectivity is going to enable new services and new capabilities that don’t exist today. Connectivity is a second important element.

Interconnectivity across ecosystems

Third, data is exploding. As these products get smarter, work more holistically together, and are more connected, they provide manufacturers and customers more access to data. That data allows us to move from a break/fix type of environment into a predictive environment. It’s going to allow us to offer more just-in-time and proactive service versus reactive and timed-based services.

And when we look at the ecosystems themselves, we know that over time these centralized data centers — whether they be enterprise data centers, colocation data centers, or cloud data centers — are going to be more edge-based and module-based data centers.

And as that occurs, all the things we talked about — smarter products, more connectivity, data and data enablement — are going to be more important as those modular data centers become increasingly populated in a distributed way. To manage them, to service them, is going to be increasingly difficult and more important.

And one final cultural piece is happening. A lot of the folks who interact with these products and services will face what I call knowledge thinning. Among the highly trained professionals — especially on the power side of our ecosystem — that talent is reaching retirement age, and there is high demand for their skills. As data center growth continues to be robust, that knowledge thinning needs to be offset with what I talked about earlier.

So there are a lot of really interesting trends under way right now that impact the industry and are things that we at Vertiv are looking to respond to.

Gardner: Steve, these things when they come together form, in my thinking, a whole greater than the sum of the parts. When you put this together — the intelligence, efficiency, more automation, the culture of skills — how does that lead to the notion of data center-as-a-service?

Lalla: As with all things, Dana, one size does not fit all. I’m always cautious about generalizing because our customer base is so diverse. But there is no question that in areas where customers would like us to be operating their products and their equipment instead of doing it themselves, data center-as-a-service reduces the challenges with knowledge thinning and reduces the issue of optimizing products. We have our eyes on all those products on their behalf.

And so, through the connectivity of the product data and the data lakes we are building, we are better at predicting what should be done. Increasingly, our customers can partner with us to deliver a better performing data center.
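To illustrate, in the simplest possible terms, the kind of proactive flag that product telemetry can feed, here is a hedged Python sketch that watches a UPS battery-temperature trend. The threshold, window, and readings are invented for the example; a real offering would rely on far richer models and fleet-wide data.

```python
# Toy example: flag a rising battery-temperature trend before it alarms.
# Thresholds and readings are invented for illustration only.
from statistics import mean

ALARM_C = 45.0      # assumed alarm threshold, degrees C
WARN_SLOPE = 0.5    # assumed warning level: average rise per reading

def needs_proactive_service(readings, window=6):
    """Return True if the recent trend suggests the unit will alarm soon."""
    if len(readings) < window + 1:
        return False
    recent = readings[-window:]
    deltas = [b - a for a, b in zip(recent, recent[1:])]
    trending_up = mean(deltas) > WARN_SLOPE
    return trending_up and recent[-1] > ALARM_C - 5

temps = [31.0, 31.2, 31.5, 38.0, 39.1, 40.3, 41.6, 42.8]
print(needs_proactive_service(temps))  # True: rising fast and near threshold
```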

Gardner: It seems quite compelling. Modernizing data centers means a lot of return on investment (ROI), of doing more with less, and becoming more predictive about understanding requirements and then fulfilling them.

Why are people still stuck? What holds organizations back? I know it will vary from site to site, but why the inertia? Why don’t people run to improve their data centers seeing as they are so integral to every business?

Adoption takes time

Lalla: Well, these are big, complex pieces of equipment. They are not the kind of equipment that every year you decide to change. One of the key factors that affects the rate at which connectivity, technology, processing capability, and data liberation capability gets adopted is predicated by the speed at which customers are able to change out the equipment that they currently have in their data centers.

Now, I think that we, as a manufacturer, have a responsibility to do what we can to improve those products over time and make new technology solutions backward compatible. That can be through updating communication cards, building adjunct solutions like we do with Liebert® ICOM™-S and gateways, and figuring out how to take equipment that is going to be there for 15 or 20 years and make it as productive and as modern as you can, given that it’s going to be there for so long.

So number one, the duration of product in the environment is certainly one of the headwinds, if you will.

Another is the concept of connectivity. And again, different customers have different comfort levels with connectivity inside and outside of the firewall. Clearly the more connected we can be with the equipment, the more we can update the equipment and assess its performance. Importantly, we can assess that performance against a big data lake of other products operating in an ecosystem. So, I think connectivity, and having the right solutions to provide for great connectivity, is important.

And there are cultural elements to our business in that, “Hey, if it works, why change it, right?” If it’s performing the way you need it to perform and it’s delivering on the power and cooling needs of the business, why make a change? Again, it’s our responsibility to work with our customers to help them best understand that when new technology gets added — when new cards get added and when new assistants, I call them digital assistants, get added — that technology will have a differential effect on the business.

So I think there is a bit of reality that gets in the way of that sometimes.

Gardner: I suppose it’s imperative for organizations like Vertiv to help organizations move over that hump to get to the higher-level solutions and overcome the obstacles because there are significant payoffs. It also sets them up to be much more able to adapt to the future when it comes to edge computing, which you mentioned, and also being a data-driven organization.

How is Vertiv differentiating yourselves in the industry? How does combining services and products amount to a solution approach that helps organizations modernize?

Three steps that make a difference

Lalla: I think we have a differentiated perspective on this. When we think about service, and we think about technology and product, we don’t think about them as separate. We think about them altogether. My responsibility is to combine those software and service ecosystems into something more efficient that helps our customers have more uptime, and it becomes more predictive versus break/fix to just-in-time-types of services.

And the way we do that is through three steps. Number one, we have to continue to work closely with our product teams to ensure early in the product definition cycle which products need to be interconnected into an as-a-service or a self-service ecosystem.

We spend quite a bit of time impacting the roadmaps and putting requirements into the product teams so that they have a better understanding of what, in fact, we can do once data and information gets liberated. A great strategy always starts with great product, and that’s core to our solution.

The next step is a clear understanding that some of our customers want to service equipment themselves. But many of our customers want us to do that for them, whether it’s physically servicing equipment or monitoring and managing the equipment remotely, such as with our LIFE™ management solution.

We are increasingly looking at that as a continuum. Where does self-service end, and where do delivered services begin? In the past it’s been relatively different in what we do — from a self-service and delivered service perspective. But increasingly, you see those being blended together because customers want a seamless handover. When they discover something needs to be done, we at Vertiv can pick up from there and perform that service.

So the connective tissue between self-service and Vertiv-delivered service is something that we are bringing increasing clarity to.

And then finally, we talked about this earlier, we are being very active at building a data lake that comes from all the ecosystems I just talked about. We have billions of rows of normalized data in our data lake to benefit our customers as we speak.

Gardner: Steve, when you service a data center at that solution-level through an ecosystem of players, it reminds me of when IT organizations started to manage their personal computers (PCs) remotely. They didn’t have to be on-site. You could bring the best minds and the best solutions to bear on a problem regardless of where the problem was — and regardless of where the expertise was. Is that what we are seeing at the data center level?

Self-awareness remotely and in-person

Lalla: Let’s be super clear: upgrading the software on an uninterruptible power supply (UPS) is a lot harder than upgrading software on a PC. But the analogy of understanding what must be done in-person and what can be done remotely is a good one. And you are correct. Over years and years of improvement in the IT ecosystems, we went from a very much in-person type of experience, fixing PCs, to one where, very much like mobile phones, they are self-aware and self-healing.

This is why I talked about the connectivity imperative earlier, because if they are not connected then they are not aware. And if they are not aware, they don’t know what they need to do. And so connectivity is a super important trend. It will allow us to do more things remotely versus always having to do things in-person, which will reduce the amount of interference we, as a provider of services, have on our customers. It will allow them to have better uptime, better ongoing performance, and even over time allow tuning of their equipment.

We are at the early stages of that journey. You could argue the mobile phone and the PC guys are at the very late stages of their journey of automation. We are in the very early stages of it, but the things we talked about earlier — smarter products, connectivity, and data — are all important factors influencing that.

Gardner: Another evolution in all of this is that there is more standardization, even at the data center level. We saw standardization as a necessary step at the server and storage level — when things became too chaotic, too complex. We saw standardization as a result of virtualization as well. Is there a standardization taking place within the ecosystem and at that infrastructure foundation of data centers?

Standards and special sauce

Lalla: There has been a level of standardization in what I call the self-service layer, with protocols like BACnet, Modbus, and SNMP. Those at least allow a monitoring system to ingest information and data from a variety of diverse devices, so you can minimally monitor how that equipment is performing.

I don’t disagree that there is an opportunity for even more standardization, because that will make that whole self-service, delivered-as-a-service ecosystem more efficient. But what we see in that control plane is really Vertiv’s unique special sauce. We are able to do things between our products with solutions — like Liebert ICOM-S — that allow our thermal products to work better together than if they were operating independently.

You are going to see an evolution of continued innovation in peer-to-peer networking in the control plane that probably will not be open and standard. But it will provide advances in how our products work together. You will see in that self-service, as-a-service, and delivered-service plane continued support for open standards and protocols so that we can manage more than just our own equipment. Then our customers can manage and monitor more of their own equipment.

And this special sauce, which includes the data lakes and algorithms — a lot of intellectual property and capital go into building those algorithms and those outcomes — helps customers operate better. We will probably keep that close to the vest in the short term, and then we’ll see where it goes over time.
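As a rough sketch of what the standardized self-service layer mentioned above implies in practice, the Python below normalizes readings that arrive over different protocols into one common record. The field names, payload shapes, and the OID are invented for the example; real BACnet, Modbus, and SNMP integrations would use proper protocol libraries.

```python
# Hypothetical normalization of monitoring data from mixed protocols into a
# single record shape. Payload layouts here are invented for illustration.
from datetime import datetime, timezone

def reading(device_id, metric, value):
    return {
        "device": device_id,
        "metric": metric,
        "value": value,
        "ts": datetime.now(timezone.utc).isoformat(),
    }

def from_modbus(device_id, registers):
    # Assume register 0 holds temperature in tenths of a degree C.
    return reading(device_id, "temperature_c", registers[0] / 10.0)

def from_snmp(device_id, varbinds):
    # Assume a varbind dict keyed by a hypothetical OID for load percent.
    return reading(device_id, "load_pct", float(varbinds["1.3.6.1.4.1.0.0"]))

print(from_modbus("ups-01", [412]))
print(from_snmp("crac-07", {"1.3.6.1.4.1.0.0": "63"}))
```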

Gardner: You earlier mentioned moving data centers to the edge. We are hearing an awful lot architecturally about the rationale for not moving the edge data to the cloud or the data center, but instead moving the computational capabilities right out to the edge where that data is. The edge is where the data streams in, in massive quantities, and needs to be analyzed in real-time. That used to be the domain of the operational technology (OT) people.

As we think about data centers moving out to the edge, it seems like there’s a bit of an encroachment or even a cultural clash between the IT way of doing things and the OT way of doing things. How does Vertiv fit into that, and how does making data center-as-a-service help bring the OT and IT together — to create a whole greater than the sum of the parts?

OT and IT better together  

Lalla: I think maybe there was a clash. But with modular data centers and things like SmartAisle and SmartRow that we do today, they can be fully contained, standalone systems. Increasingly, we are working with strategic IT partners to understand how that ecosystem has to work as a complete solution — not with power and cooling separate from IT performance, but taking the best of the OT world, power and cooling, and the best of the IT world, and combining that with things like alarms and fire suppression. We can build a remote management and monitoring solution that can be outsourced, if you want to consume it as a service, or in-sourced if you want to do it yourself.

And there’s a lot of work to do in that space. As an industry, we are in the early stages, but I don’t think it’s hard to foresee a modular data center that should operate holistically as opposed to just the sum of its parts.

Gardner: I was thinking that the OT-IT thing was just an issue at the edge. But it sounds like you’re also referring to it within the data center itself. So flesh that out a bit. How do OT and IT together — managing all the IT systems, components, complexity, infrastructure, support elements — work in the intelligent, data center-as-a-service approach?

Lalla: There is the data center infrastructure management (DCIM) approach, which says, “Let’s bring it all together and manage it.” I think that’s one way of thinking about OT and IT, and certainly Vertiv has solutions in that space with products like TrellisTM.

But I actually think about it as: Once the data is liberated, how do we take the best of computing solutions, data analytics solutions, and stuff that was born in other industries and apply that to how we think about managing, monitoring, and servicing all of the equipment in our industrial OT space?

It’s not necessarily that OT and IT are one thing, but how do we apply the best of all of technology solutions? Things like security. There is a lot of great stuff that’s emerged for security. How do we take a security-solutions perspective in the IT space if we are going to get more connected in the OT space? Well, let’s learn from what’s going on in IT and see how we can apply it to OT.

Just because DCIM has been tackled for years doesn’t mean we can’t take more of the best of each world and see how you can put those together to provide a solution that’s differentiated.

I go back to the Liebert ICOM-S solution, which uses desktop computing and gateway technology, and application development running on a high-performance IT piece of gear, connected to OT gear to get those products that normally would work separately to actually work more seamlessly together. That provides better performance and efficiency than if those products operated separately.

Liebert ICOM-S is a great example of where we have taken the best of the IT world, compute technology and connectivity, and the best of the OT world, power and cooling, and built a solution that makes the interaction differentiated in the marketplace.

Gardner: I’m glad you raised an example because we have been talking at an abstract level of solutions. Do you have any other use cases or concrete examples where your concept for infrastructure data center-as-a-service brings benefits? When the rubber hits the road, what do you get? Are there some use cases that illustrate that?

Real LIFE solutions

Lalla: I don’t have to point much further than our Vertiv LIFE Services remote monitoring solution. This solution came out a couple years ago, partly from our Chloride® Group acquisition many years ago. LIFE Services allows customers to subscribe to have us do the remote monitoring, remote management, and analytics of what’s happening — and whenever possible do the preventative care of their networks.

And so, LIFE is a great example of a solution with connectivity, with the right data flowing from the products, and with the right IT gear so our personnel take the workload away from the customer and allow us to deliver a solution. That’s one example of where we are delivering as-a-service for our customers.

We are also working with customers — and we can’t expose who they are — to bring their data into our large data lake so we can help them better predict how various elements of their ecosystem will perform. This helps them better understand when they need just-in-time service and maintenance versus break/fix service and maintenance.

These are two different examples where Vertiv provides services back to our customers. One is running a network operations center (NOC) on their behalf. Another uses the data lake that we’ve assimilated from billions of records to help customers who want to predict things and use the broad knowledge set to do that.

Gardner: We began our conversation with all the great things going on in modern data center infrastructure and solutions to overcome obstacles to get there, but economics plays a big role, too. It’s always important to be able to go to the top echelon of your company and say, “Here is the math, here’s why we think doing data center modernization is worth the investment.”

What is there about creating that data lake, the intellectual property, and the insights that help with data center economics? What’s the total cost of ownership (TCO) impact? How do you know when you’re doing this right, in terms of dollars and cents?

Uptime is money

Lalla: It’s difficult to generalize too much but let me give you some metrics we care about. Stuff is going to break, but if we know when it’s going to break — or even if it does break — we can understand exactly what happened. Then we can have a much higher first-time fix rate. What does that mean? That means I don’t have to come out twice, I don’t have to take the system out of commission more than once, and we can have better uptime. So that’s one.

Number two, by getting the data we can understand what’s going on with the network time-to-repair and how long it takes us from when we get on-site to when we can fix something. Certainly it’s better if you do it the first time, and it’s also better if you know exactly what you need when you’re there to perform the service exactly the way it needs to be done. Then you can get in and out with minimal disruption.

A third one that’s important — and one that I think will grow in importance — is that we’re beginning to measure what we call service avoidance. The way we measure service avoidance is we call up a customer and say, “Hey, you know, based on all this information, based on these predictions, based on what we see from your network or your systems, we think these four things need to be addressed in the next 30 days. If not, our data tells us that we will be coming out there to fix something that has broken, as opposed to fixing it before it breaks.” So service avoidance, or service simplification, is another area that we’re looking at.

There are many more — I mean, meeting service level agreements (SLAs), uptime, and all of those — but when it comes to the tactical benefits of having smarter products, of being more connected, liberating data, and consuming that data and using it to make better decisions as a service — those are the things that customers should expect differently.
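A minimal sketch of how two of the metrics mentioned above, first-time fix rate and mean time to repair, might be computed from service records. The field names and the sample data are assumptions for illustration, not a description of Vertiv's actual reporting.

```python
# Toy service-visit records; field names are invented for this example.
visits = [
    {"ticket": "T1", "visits_needed": 1, "hours_to_repair": 3.0},
    {"ticket": "T2", "visits_needed": 2, "hours_to_repair": 9.5},
    {"ticket": "T3", "visits_needed": 1, "hours_to_repair": 2.0},
]

first_time_fix_rate = sum(v["visits_needed"] == 1 for v in visits) / len(visits)
mean_time_to_repair = sum(v["hours_to_repair"] for v in visits) / len(visits)

print(f"First-time fix rate: {first_time_fix_rate:.0%}")    # 67%
print(f"Mean time to repair: {mean_time_to_repair:.1f} h")  # 4.8 h
```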

Gardner: And in order to enjoy those economic benefits through the Vertiv approach and through data center-as-a-service, does this scale down and up? It certainly makes sense for the larger data center installations, but what about a small- to medium-sized business (SMB)? What about a remote office, or a closet and a couple of racks? Does that make sense, too? Do the economic and the productivity benefits scale down as well scale up?

Lalla: Actually, when we look at our data, the customers who don’t have all the expertise to manage and monitor their single-phase, small three-phase, or Liebert CRV [cooling] units — who don’t have the skill set — are the customers that really appreciate what we can do to help them. That doesn’t mean customers higher up the stack don’t appreciate it; it’s just that what they appreciate isn’t the fact that they can do some of the services themselves. They may be more of a self-service-oriented customer, but what they are increasingly interested in is how we’re using data in our data lake to better predict things that they can’t predict by just looking at their own stuff.

So, the value shifts depending on where you are in the stack of complexity, maturity, and competency. It also varies based on hyperscale, colocation, enterprise, small enterprise, and point-of-sale. There are a number of variables, so that’s why it’s difficult to generalize. But this is why the themes of productivity, smarter products, edge ecosystems, and data liberation are common across all those segments. How they apply the value that’s extracted in each segment can be slightly different.

Gardner: Suffice it to say data center-as-a-service is highly customizable to whatever organization you are and wherever you are on that value chain.

Lalla: That’s absolutely right. Not everybody needs everything. Self-service is on one side and as-a-service is on the other. But it’s not a binary conversation.

Customers who want to do most of the stuff themselves with technology, they may need only a little information or help from Vertiv. Customers who want most of their stuff to be managed by us — whether it’s storage systems or large systems — we have the capability of providing that as well. This is a continuum, not an either-or.

Gardner: Steve, before we close out, let’s take a look to the future. As you build data lakes and get more data, machine learning (ML) and artificial intelligence (AI) are right around the corner. They allow you to have better prediction capabilities, do things that you just simply couldn’t have ever done in the past.

So what happens as these products get smarter, as we are collecting and analyzing that data with more powerful tools? What do you expect in the next several years when it comes to the smarter data center-as-a-service?

Circle of knowledge gets smart 

Lalla: We are in the early stages, but it’s a great question, Dana. There are two outcomes that will benefit all of us. One, that information, with the right algorithms and analysis, is going to allow us to build products that are increasingly smarter.

There is a circle of knowledge. Products produce information that goes into the data lake; we run the right algorithms, look for the right pieces of information, feed that back into our products, and continually evolve the capability of our products as time goes on. Those products will break less, need less service, and be more reliable. We should just expect that, just as you have seen in other industries. So that’s number one.

Number two, my hope and belief are that we move from a break-fix mentality, an environment where we wait for something to show up on a screen as an alarm or an alert, to being highly predictive and just-in-time.

As an industry — and certainly at Vertiv — first-time fix, service avoidance, and time for repair are all going to get much better, which means one simple thing for our customers. They are going to have more efficient and well-tuned data centers. They are going to be able to operate with higher rates of uptime. All of those things are going to result in goodness for them — and for us.

Listen to the podcast. Find it on iTunes. Read a full transcript or  download a copy. Sponsor: Vertiv.

How Unisys and Dell EMC head off backup storage cyber security vulnerabilities

The next BriefingsDirect data security insights discussion explores how data — from one end of its life cycle to the other — needs new protection and a means for rapid recovery.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Stay with us as we examine how backup storage especially needs to be made safe and secure if companies want to quickly right themselves from an attack. To learn more, please welcome Andrew Peters, Stealth Industry Director at Unisys, and George Pradel, Senior Systems Engineer at Dell EMC. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What’s changed in how data is being targeted by cyber attacks? How are things different from three years ago?

Andrew Peters

Peters

Peters: Well, one major thing that’s changed in the recent past has been the fact that the bad guys have found out how to monetize and extort money from organizations to meet their own ends. This has been something that has caught a lot of companies flatfooted — the sophistication of the attacks and the ability to extort money out of organizations.

Gardner: George, why does all data — from one end of its life cycle to the other — now need to be reexamined for protection?

Pradel: Well, Andrew brings up some really good points. One of the things we have seen out in the industry is ransomware-as-a-service. Folks can just dial that in. There are service level agreements (SLAs) on it. So everyone’s data now is at risk.

Another of the things that we have seen with some of these attacks is that these people are getting a lot smarter. As soon as they go in to try and attack a customer, where do they go first? They go for the backups. They want to get rid of those, because that’s kind of like the 3D chess where you are playing one step ahead. So things have changed quite a bit, Dana.

Peters: Yes, it’s really difficult to put the squeeze on an organization knowing that they can recover themselves with their backup data. So, the heat is on the bad guys to go after the backup systems and pollute that with their malware, just to keep companies from having the capability to recover themselves.

Gardner: And that wasn’t the case a few years ago?

George Pradel

Pradel

Pradel: The attacks were so much different a few years ago. They were what we call script kiddie attacks, where you basically get some malware or maybe you do a denial-of-service attack. But now these are programmatized, and the big thing about that is if you are a target once, chances are really good that the thieves are just going to keep coming back to you, because it’s easy money, as Andrew pointed out.

Gardner: How has the data storage topology changed? Are organizations backing up differently than they did a few years ago as well? We have more cloud use, we have hybrid, and different strategies for managing de-dupe and other redundancies. How has the storage topology and landscape changed in a way that affects this equation of being secure end to end?

The evolution of backup plans 

Pradel: Looking at how things have changed over the years, we started out with legacy systems, the physical systems that many of us grew up with. Then virtualization came into play, and so we had to change our backups. And virtualization offered up some great ways to do image-level backups and such.

Now, the big deal is cloud. Whether it’s one of the public cloud vendors, or a private cloud, how do we protect that data? Where is our data residing? Privacy and security are now part of the discussion when creating a hybrid cloud. This creates a lot of extra confusion — and confusion is what thieves zone in on.

We want to make sure that no matter where that data resides, we are making sure it’s protected. We want to provide a pathway for bringing back data that is air gapped, or protected via one of our other technologies, so the data stays in a place that allows for recoverability. Recoverability is the number one thing here, but it definitely has changed in these last few years.

Gardner: Andrew, what do you recommend to customers who may have thought that they had this problem solved? They had their storage, their backups, they protected themselves from the previous generations of security risk. When do you need to reevaluate whether you are secure enough?

Stay prepared 

Peters: There are a few things to take into consideration. One, they should have an operation that can recover their data and bring their business back up and running. You could get hit with an attack that turns into a smoking hole in the middle of your data center. So how do you bring your organization back from that without having policies, guidance, a process and actual people in place in the systems to get back to work?

Learn More About Cyber Recovery

With Unisys Stealth 

Another thing to consider is the efficacy of the data. Is it clean? If you are backing up data that is already polluted with malware, guess what happens when you bring it back out and you recover your systems? It rehydrates itself within your systems and you still have the same problem you had before. That’s where the bad guys are paying attention. That’s what they want to have happen in an organization. It’s a hand they can play.

If the malware can still come out of the backup systems, rehydrate itself, and re-pollute the systems while an organization is going through its recovery, it not only hampers the business, lengthens the time to recovery, and costs them money; it also forces them to pay the ransoms that the bad guys are extorting.

Gardner: And to be clear, this is the case across both the public and the private sector. We are hearing about ransomware attacks in lots of cities and towns. This is an equal opportunity risk, isn’t it?

Peters: Malware and bad guys don’t discriminate.

Pradel: You are exactly right about that. One of the customers I worked with recently, a large city, got hit with a ransomware attack. Now, one thing about ransomware attacks is that the attackers typically want you to pay in bitcoin. Well, who has $100,000 worth of bitcoin sitting around?

But let’s take a look at why it’s so important to eliminate these types of attacks. If you have a government attacked, one of the problems is that chaos ensues. In one particular situation, police officers in their cars were not able to pull up license plates on the computer to check on cars they were pulling over, to see if they had a couple of bad tickets or perhaps the person was wanted for some reason. And so it is a very dangerous situation you may put into play for all of these officers.

That’s one tiny example of how these things can proliferate. And like you said, whether it’s public sector or private sector, if you are a soft target, chances are at some point you are going to get hit with ransomware.

Secure the perimeter and beyond 

Gardner: What are we doing differently in terms of the solutions to head this off, especially to get people back up and running and to make sure they have clean and usable data when they do so?

Peters: A lot of security has been predicated on the concept of a perimeter, something where we can put up guards, gates, and guns, and dig a moat. There is an inside and an outside, and it's generally recognized today that that perimeter doesn't really exist anymore.

And so, one of the new moves in security is to defend the endpoint and the application, and to do that using a technology called micro-segmentation. It's becoming more popular because it allows us to have a security perimeter and a policy around each endpoint. And if it's done correctly, you can scale from hundreds to thousands to hundreds of thousands, and potentially millions, of endpoint devices, applications, servers, and virtually anything else you have in an environment.

And so that’s one big change: Let’s secure the endpoint, the application, the storage, and each one comes with its own distinct security policy.

Gardner: George, how do you see the solutions changing, perhaps more toward the holistic infrastructure side and not just the endpoint issues?

Pradel: One of the tenets Andrew related to is called security by obscurity. The basic tenet is that if you can't see it, it's much safer. Think about a safe in your house. If the safe is back behind the bookcase and you are the only person who knows it's there, that's an extra level of security. Well, we can do that with technology.

So you are seeing a lot of technologies being employed. Many of them are not new types of security technologies. We are going back to what’s worked in the past and building some of these new technologies on that. For example, we add on automation, and with that automation we can do a lot of these things without as much user intervention, and so that’s a big part of this.

Incidentally, if any type of security that you are using has too much user intervention, then it’s very hard for the company to cost-justify those types of resources.

Gardner: Something that isn’t different from the past is having that Swiss Army knife approach of multiple layers of security. You use different tools, looking at this as a team sport where you want to bring as many solutions as possible to bear on the problem.

How have Unisys and Dell EMC brought different strengths together to create a whole greater than the sum of the parts?

Hide the data, so hackers can’t seek

Peters: One thing that’s fantastic that Dell has done is that they have put together a Cyber Recovery solution so when there is a meltdown you have gold copies of critical data required to reestablish the business and bring it back up and get into operation. They developed this to be automated, to contain immutable copies of data, and to assure the efficacy of the data in there.

Now, they have set this up with air gapping, so it is virtually isolated from any other network operations. The bad guys hovering around in the network have a terrible time even trying to touch this thing.

Learn More About Dell EMC PowerProtect

Cyber Recovery Solution 

Unisys put what we call a cryptographic wrapper around that using our micro-segmentation technology called Stealth. This creates a cryptographic air gap that effectively makes the vault and its recovery operations disappear from anything else in the network that doesn't hold an authorized cryptographic key. Endpoints with an authorized key can talk to it; everything else can't. So the bad guys and their malware can't see it. If they can't see it, they can't touch it, and they can't hack it. That turns into an extraordinarily secure means to recover an organization's operations.
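
Peters' description boils down to a simple rule: an endpoint answers only peers that can prove they hold the right community-of-interest key. The Python sketch below is a toy illustration of that idea, not the Stealth protocol itself. It uses an HMAC tag as the proof of key possession; anything arriving without a valid tag is dropped silently, so an unauthorized scanner never learns the endpoint exists. The key name and messages are invented for illustration.

```python
import hmac
import hashlib
import os
from typing import Optional

# Hypothetical community-of-interest key, provisioned out of band to authorized endpoints only.
COI_KEY = os.environ.get("COI_KEY", "demo-key-not-for-production").encode()

def seal(payload: bytes, key: bytes = COI_KEY) -> bytes:
    """Prefix the payload with an HMAC tag proving the sender holds the shared key."""
    tag = hmac.new(key, payload, hashlib.sha256).digest()
    return tag + payload

def accept(datagram: bytes, key: bytes = COI_KEY) -> Optional[bytes]:
    """Return the payload if the tag verifies; otherwise drop it silently.
    Silent dropping means an unauthorized scanner gets no response at all."""
    tag, payload = datagram[:32], datagram[32:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    return payload if hmac.compare_digest(tag, expected) else None

# An authorized peer seals its traffic with the shared key and gets through.
print(accept(seal(b"replicate gold copy")))        # b'replicate gold copy'
# An attacker without the key is ignored; to them, the vault endpoint is invisible.
print(accept(b"\x00" * 32 + b"port scan probe"))   # None
```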

Gardner: The economics of this is critical. How does your technology combination take the economic incentive away from these nefarious players?

Pradel: Number one, you have a way to be able to recover from this. All of a sudden the bad guys are saying, “Oh, shoot, we are not going to get any money out of these guys.”

And you are no longer a constant target. Yes, they are going to go after your backups, but Unisys Stealth can hide the targets these people go after. Once you have this type of Cyber Recovery solution in place, you can rest a lot easier at night.

As part of the Cyber Recovery solution, we actually expect malware to get into the Cyber Recovery vault. And people shake their head and they go, “Wait, George, what do you mean by that?”

Yes, we want to get malware into the Cyber Recovery vault. Then we have ways to do analytics to see whether our point-in-time copies are good. That way, when we are doing that restore, as Andrew talked about earlier, we are restoring a nice, clean environment back to the production environment.

Recovery requires commitment, investment

So, these types of solutions are an extra expense, but you have to weigh the risks for your organization and factor in what it really costs if you have a cyber recovery incident.

Additionally, some people may not be totally versed in the difference between a disaster recovery situation and a cyber recovery situation. Disaster recovery deals with some sort of physical problem, such as a tornado that hits and wipes out a facility. With cyber recovery, we are talking about files that have been maliciously encrypted. The only way to get that data back, and get back up and running, is by employing some sort of cyber recovery solution, such as the Unisys and Dell EMC solution.

Gardner: Is this tag team solution between Unisys and Dell EMC appropriate and applicable to all kinds of business, including cloud providers or managed service providers?

Peters: It’s really difficult to measure the return on investment (ROI) in security, and it always has been. We have a tool that we can use to measure risk, probability, and financial exposure for an organization. You can actually use the same methodologies that insurance companies use to underwrite for things like cybersecurity and virtually anything else. It’s based on the reality that there is a strong likelihood that there is going to be a security breach. There is going to be perhaps a disastrous security breach, and it’s going to really hurt the organization.

Plan on the fact that it's probably going to happen. You need to invest in your systems and your recovery. If you think you can sustain a complete meltdown of your company and go out of operation for weeks to months, then you probably don't need to put money into it.

But you need to understand how exposed you potentially are, and the fact that the bad guys are staring at the low-hanging fruit — which may be state governments, cities, or other organizations that are less protected.

The fact is, the bad guys are extraordinarily patient. If the payoff is in the tens of millions of dollars, they might spend years, as the attackers did with Sony, mapping systems and learning how an operation works before they actually take action, potentially in the most disastrous way possible.

So ergo, it’s hard to put a number on that. An organization will have to decide how much they have to lose, how much they have at risk, and what the probability is that they are actually going to get hit with an attack.

Gardner: George, also important to where this is the right fit are automation and skills. What sorts of organizations typically take this on, and what skills are required?

Automate and simplify

Pradel: That’s been the basis for our Cyber Recovery solution. We have written a number of APIs to be able to automate different pieces of a recovery situation. If you have a cyber recovery incident, it’s not a matter of just, “Okay, I have the data, now I can restore it.” We have a lot of experts in the field. What they do is figure out exactly where the attack came from, how it came in, what was affected, and those types of things.

We make it as simple as possible for the administrators. We have done a lot of work creating APIs that automate items such as recovering backup servers. We take point-in-time copies of the data. I don't want to go into it too deeply, but our Data Domain technology is the basis for this. And the reason that's important to note is that the replication we do is based on our variable-length deduplication.

Now, that may sound like gobbledygook, but what it means is that we have the shortest replication times you could have for a given amount of data. So when we are taking data into the Cyber Recovery vault, we are reducing what's called the dwell time, that is, the window in which someone could see that you had a connection open.
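
To unpack the variable-length deduplication point: content-defined chunking cuts the data stream at boundaries derived from the content itself, so a small change early in a file alters only a few chunks, and only those chunks need to cross the wire into the vault. The sketch below is a generic rolling-window illustration of that idea, not the Dell EMC Data Domain algorithm; the window size, boundary mask, and sample data are assumptions chosen for the demo.

```python
import hashlib
import random

WINDOW = 16        # bytes of context used to decide chunk boundaries
MASK = 0x3F        # a boundary roughly every 64 bytes on average (assumption for the sketch)

def chunks(data: bytes):
    """Yield variable-length chunks whose boundaries depend on content, not fixed offsets."""
    start = 0
    for i in range(WINDOW, len(data)):
        fingerprint = int.from_bytes(hashlib.sha1(data[i - WINDOW:i]).digest()[:4], "big")
        if fingerprint & MASK == 0:
            yield data[start:i]
            start = i
    yield data[start:]

def chunk_index(data: bytes) -> set:
    """The set of chunk fingerprints already held on the target (SHA-256 per chunk)."""
    return {hashlib.sha256(c).hexdigest() for c in chunks(data)}

random.seed(7)
original = bytes(random.randrange(256) for _ in range(64_000))   # ~64 KB sample stream
edited = original[:100] + b"RANSOM NOTE" + original[100:]        # small insertion near the front

held = chunk_index(original)
to_send = [h for h in chunk_index(edited) if h not in held]
print(f"{len(held)} chunks already replicated; only {len(to_send)} new chunk(s) must cross the wire")
```

Because the boundaries re-synchronize with the content, only the chunks around the insertion differ; with fixed-size blocks, everything after the insertion point would have shifted and had to be re-sent.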

Learn More About Cyber Recovery

With Unisys Stealth 

But a big part of this is that, on a day-to-day basis, I don't have to be concerned. I don't need a whole team of people maintaining the Cyber Recovery vault. Typically, our customers already understand how our base technology works, so that part is very straightforward. And we have automation: policies set up in the Cyber Recovery vault that, on a regular basis, pull in whatever has changed from the production environment, typically once a day.

And here is a rule of thumb for people thinking, "This sounds really interesting, but how much data would I put in this?" Typically, 10 to 15 percent of a customer's production environment goes into the Cyber Recovery vault. So we want to make this as simple as possible, and we want to automate as much as possible.

And on the other side, when there is an incident, we want to be able to also automate that part because that is when all heck is going on. If you’ve ever been involved in one of those situations, it’s not always your clearest thinking moment. So automation is your best friend and can help you get back up and running as quickly as possible.

Gardner: George, run us through an example, if you would, of how this works in the real world.

One step at a time for complete recovery

Pradel: What happens is that at some point somebody clicks on that doggone e-mail attachment promising a free trip to Hawaii or something, and it carries a link to some ransomware.

Sometimes it's very obvious that there has been an attack. There is one attack where a giant skeleton comes up on your screen and basically says, "Got you." It then gives instructions on how to send the money so that you can get your data back.

However, sometimes it’s not quite so obvious. Let’s say your security folks have determined there has been attack and then the first thing that you would want to do is access the cyber recovery provided by putting the Cyber Recovery vault with Stealth. You would go to the Cyber Recovery vault and lock down the vault, and it’s simple and straightforward. We talked about this a little earlier about the way we do the automation is you click on the lock, that locks everything down and it stops any future replications from coming in.

And while the security team is looking to find out how bad it is and what was affected, one of the things the cyber recovery team does is go in and run some analysis, if you haven't done so already. You can automate this type of analysis, but let's say you haven't done that. Let's say you have 30 point-in-time copies, one for each day over the last month. You might want to run an analysis against maybe the last five of those to see whether they come up as suspicious or as okay.

The way that’s done is to look at the entropy of the different point-in-time backups. One thing to note is that you do not have to rehydrate the backup in order to analyze it. So let’s say you backed it up with Avamar and then you wanted to analyze that backup. You don’t have to rehydrate that in the vault in order to get it back up and running.

Once that’s done, then there’s a lot of different ways that you can decide what to do. If you have physical machines but they are not in great shape, they are suspect in that. But, if the physical parts of it are okay, you could then decide that at some point you’re going to reload those machines with the gold copies or very typical to have in the vault and then put the data and such on it.

If you have image-level backups in the vault, those are very easy to get back up and running on a VMware ESX host or a Microsoft Hyper-V host in your production environment. So there are a lot of different ways you can do that.

The whole idea, though, is that our typical Cyber Recovery solution is air-gapped and we recommend customers have a whole separate set of physical controls as well as the software controls.

Now, a physical air gap may not be practical in all situations. That's why we looked at Unisys Stealth: installing the Stealth components provides a virtual air gap instead.

Remove human error

Peters: One of the things I learned in working with the United States Air Force’s Information Warfare Center was the fact that you can build the most incredibly secure operation in the world and humans will do things to change it.

With Stealth, we allow organizations to get access into the vault from a management perspective, to do analytics, and also from a recovery perspective. That matters because anytime there's a change to the way the vault operates, that's an opportunity for the bad guys to find a way in. Once again, they're targeting these systems. They know they're there; they could be watching them, and they can spend years doing this and watching the operations.

Unisys Stealth removes the opportunity for human error. We remove the visibility that any bad guys, or any malware, would have inside a network to observe a vault. They may see data flowing, but they don't know what it's going to, they don't know what it's for, and they can't read it because it's encrypted. They are not going to be able to even see the endpoints, because they will never be able to get an address on them. We are cryptographically disappearing, hiding, or cloaking the vault, whatever word you'd like to use — we are actively removing it from the visibility of anything else on the network unless that access is specifically authorized.

Gardner: Let’s look to the future. As we pointed out earlier in our discussion, there is a sort of a spy versus spy, dog chasing the cat, whatever you want to use as a metaphor, one side of the battle is adjusting constantly and the other is reacting to that. So, as we move to the future, are there any other machine learning (ML)-enabled analytics on these attacks to help prevent them? How will we be able to always stay one step ahead of the threat?

Peters: Our technology already embodies ML. We can do responses called dynamic isolation. If a device is misbehaving, we can change its policy and either restrict what it's able to communicate with or cut it off altogether until it has been examined and determined to be safe for the environment.

We can provide a lot of automation, a lot of visibility, and machine-speed reaction in response to threats as they are happening. Malware doesn't get that 20-second head start. We might be able to cut it off in 10 seconds and make a dynamic change to the threat surface.
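
As a rough mental model of dynamic isolation, assumed behavior rather than the actual Unisys policy engine, the sketch below learns a per-device baseline for one simple metric and steps the device down to a restricted or fully isolated policy as soon as a reading drifts far outside that baseline.

```python
from dataclasses import dataclass, field
from statistics import mean, stdev

@dataclass
class DevicePolicy:
    device_id: str
    baseline: list = field(default_factory=list)  # recent "normal" readings, e.g. connections per minute
    state: str = "open"                           # open -> restricted -> isolated

    def observe(self, reading: float) -> str:
        """Tighten the policy based on how far the reading drifts from the learned baseline."""
        if len(self.baseline) >= 20:
            mu, sigma = mean(self.baseline), stdev(self.baseline) or 1.0
            z = abs(reading - mu) / sigma
            if z > 6:
                self.state = "isolated"    # cut the device off entirely pending review
            elif z > 3:
                self.state = "restricted"  # allow only management-plane traffic
        if self.state == "open":
            self.baseline.append(reading)
            self.baseline = self.baseline[-200:]   # keep a rolling window of normal behavior
        return self.state

# Usage: a device that normally opens ~10 connections per minute suddenly opens 500.
device = DevicePolicy("laptop-042")
for r in [10, 11, 9, 12, 10, 8, 11, 10, 9, 12, 10, 11, 9, 10, 12, 11, 10, 9, 11, 10, 500]:
    state = device.observe(r)
print(state)   # isolated
```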

Gardner: George, what’s in the future that it’s going to allow you to stay always one step ahead of the bad guys? Also, is there is an advantage for organizations doing a lot of desktops-as-a-service (DaaS) or virtual desktops? Do they have an advantage in having that datacenter image of all of the clients?

Think like a bad guy 

Pradel: Oh, yes, definitely. How do we stay in front of the bad guys? You have to think like the bad guys. And so, one of the things that you want to do is reduce your attack surface. That’s a big part of it, and that’s why the technology that we use to analyze the backups, looking for malware, uses 100 different types of objects of entropy.

As we’re doing ML of that data, of what’s normal what’s not normal, we can figure out exactly where the issues are to stay ahead of them.

Now, an air gap on its own is extremely secure because it keeps the data in an environment where no one can get at it. We have also had situations where Unisys Stealth helped with closing an air gap, for example where a particular general might have three different networks that they need to connect to, and Stealth is a fantastic solution for that.

If you’re doing DaaS, there are ways that it can help. We’re always looking at where the data resides, and most of the time in those situations the data is going to reside back at the corporate infrastructure. That’s a very easy place to be able to protect data. When the data is out on laptops and things like that, then it makes it a little bit more difficult, not impossible, but you have a lot of different end points that you’re pulling from. To be able to bring the system back up — if you’re using virtual desktops, that kind of thing, actually it’s pretty straightforward to be able to do that because that environment, chances are they’re not going to bring down the virtual desktop environment, they’re going to encrypt the data.

Learn More About Dell EMC PowerProtect

Cyber Recovery Solution 

Now, that said, these conversations are not as straightforward as they once were. We talk about how long you might be out of business depending upon what you've implemented. We have to engineer for all the different types of malware attacks. And what's the common denominator? It's the data: keeping that data safe, and keeping it so it can't be deleted.

We have a retention lock capability, so you can lock that data up for as many as 70 years, and it takes two administrators to unlock it. That's the kind of thing that makes it robust.
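
The dual-administrator unlock is a classic two-person control. Here is a minimal, hypothetical sketch of that pattern; the class and names are invented for illustration and are not the Data Domain retention-lock implementation. The point is simply that no single administrator can release the lock alone.

```python
class RetentionLock:
    """Two-person control: a locked copy is released only when two distinct admins approve."""

    def __init__(self, label: str, retain_years: int):
        self.label = label
        self.retain_years = retain_years   # e.g. lock a gold copy for up to 70 years
        self.approvals = set()

    def approve_release(self, admin: str) -> bool:
        """Record an approval; the lock opens only once two different admins have approved."""
        self.approvals.add(admin)
        return len(self.approvals) >= 2

gold_copy = RetentionLock("finance-db-gold", retain_years=70)
print(gold_copy.approve_release("admin-alice"))   # False: one approval is never enough
print(gold_copy.approve_release("admin-alice"))   # False: the same admin approving twice doesn't count
print(gold_copy.approve_release("admin-bob"))     # True: two distinct administrators released the lock
```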

In the old days, we would use a WORM drive or copy data off to a CD to make something immutable. This is a great way to do it now. And that's one way to stay ahead of the bad guys as best we can.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Unisys.

How Unisys and Microsoft team up to ease complex cloud adoption for governments and enterprises

The path to cloud computing adoption persistently appears complex and risky to both government and enterprise IT leaders, recent surveys show.

This next BriefingsDirect managed cloud methodologies discussion explores how tackling complexity and security requirements upfront helps ease the adoption of cloud architectures. By combining managed services, security solutions, and hybrid cloud standardization, both public and private sector organizations are now making the cloud journey a steppingstone to larger business transformation success.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

We’ll now explore how cloud-native apps and services modernization benefit from prebuilt solutions with embedded best practices and automation. To learn how, we welcome Raj Raman, Chief Technology Officer (CTO) for Cloud at Unisys, and Jerry Rhoads, Cloud Solutions Architect at Microsoft. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Raj, why are we still managing cloud adoption expectations around complexity and security? Why has it taken so long to make the path to cloud smoother — and even routine?

Raman: Well, Dana, I spend quite a bit of time with our customers. A common theme we see — be it a government agency or a commercial customer — is that many of them are driven by organizational mandates, and getting those organizational mandates in place often proves challenging, more so than one might think.

Cloud adoption challenges

Raj Raman

Raman

The other part is that while Amazon Web Services (AWS) or Microsoft Azure may be very easy to get on to, the question then becomes how you scale up. Customers either have to figure out how to develop in-house capabilities or look to a partner like Unisys to help them out.

Cloud security adoption continues to be a challenge because enterprises still try to apply traditional security practices to the cloud. Having a sound security and risk posture on AWS or Azure means having a good understanding of the shared security model across the user, application, and infrastructure layers of the cloud.

And last, but not least, a very clear mandate, such as a digital transformation or a specific initiative with a core sponsor behind it, often eases the focus on some of these challenges.

These are some of the reasons we see for cloud complexity. The application transformation can also be quite arduous for many of our clients.

Gardner: Jerry, what are you seeing for helping organizations get cloud-ready? What best practices make for a smoother on-ramp?

Rhoads: One of the best practices beforehand is to determine what your endgame is going to look like. What is your overall cloud strategy going to look like?

Jerry Rhoads

Rhoads

Instead of just lifting and shifting a workload, what is the life cycle of that workload going to look like? It means a lot of in-depth planning — whether it’s a government agency or private enterprise. Once we get into the mandates, it’s about, “Okay, I need this application that’s running in my on-premises data center to run in the cloud. How do I make it happen? Do I lift it and shift it or do I re-architect it? If so, how do I re-architect for the cloud?”

That’s a big common theme I’m seeing: “How do I re-architect my application to take better advantage of the cloud?”

Gardner: One of the things I have seen is that a lot of organizations do well with their proof of concepts (POCs). They might have multiple POCs in different parts of the organization. But then, getting to standardized comprehensive cloud adoption is a different beast.

Raj, how do you make that leap from spotty cloud adoption, if you will, to something more holistic?

One size doesn’t fit all 

Raman: We advise customers to try to [avoid] taking it on as a one-size-fits-all effort. For example, we have one client who is trying — all at once — to lift and shift thousands of applications.

Now, they did a very detailed POC and they got value from that POC. But when it came to the actual migration and transformation, they felt confident that they could take it on and try it en masse, with thousands of applications.

The thing is, in trying to do that, not all applications are the same size. You need a phased approach to application discovery and application assessment. Then, based on that, you can determine which applications are well worth the effort [to move to a cloud].

So we recommend to customers that they think of migrations as a phased approach. Be very clear in terms of what you want to accomplish. Start small, gain the confidence, and then have a milestone-based approach of accomplishing it all.

Learn More About 

Unisys CloudForte 

Gardner: These mandates are nonetheless coming down from above. For the US federal government, for example, cloud has become increasingly important. We are expecting somewhere in the vicinity of $3.3 billion to be spent for federal cloud in 2021. Upward of 60 percent of federal IT executives are looking to modernization. They have both the cloud and security issues to face. Private sector companies are also seeing mandates to rapidly become cloud-native and cloud-first.

Jerry, when you have that pressure on an IT organization — but you have to manage the complexity of many different kinds of apps and platforms — what do you look for from an alliance partner like Unisys to help make those mandates come to fruition?

Rhoads: Partners such as Unisys know the customer. They are there on the ground with the customer. They know the applications. They hear the customers. They understand the mandates. We also understand the mandates, and we have the cloud technology within Azure. Unisys, however, understands how to take our technology and integrate it with their end customer's mission.

Gardner: And security is not something you can just bolt on, or think of, after the fact in such migrations. Raj, are we seeing organizations trying to both tackle cloud adoption and improve their security? How do Unisys and Microsoft come together to accomplish those both as a tag team rather than a sequence, or even worse, a failure?

Secure your security strategy

Raman: We recently conducted a survey of our stakeholders, including some of our customers. And, to no surprise, security — be it as part of migrations or in scaling up current cloud initiatives — is by far a top area of focus and concern.

We are already partnering with Microsoft and others around our flagship security offering, Unisys Stealth. We are not just collaborating but leapfrogging in terms of innovation. The Azure cloud team has released a specific API to make products like Stealth available. This now gives customers more choice, and it allows Unisys to meet customers where they are.

Also, earlier this year we worked very closely with the Microsoft cloud team to release Unisys CloudForte for Azure. These are foundational elements that help both government and commercial customers leverage Azure as a platform for their digital transformations.

The Microsoft team has also stepped up and worked very closely with the Unisys team developers and architects to make these services native on Azure, as well as help customers understand how they can better consume Azure services.

Those are very specific examples in which we see the Unisys collaboration with Microsoft scaling really well.

Gardner: Jerry, it is, of course, about more than just the technology. These are, after all, business services. So whether it's a public or a private organization making the change to an operations model — paying as you consume and budgeting differently — financially you need to measure and manage cloud services differently.

How is that working out? Why is this a team sport when it comes to adopting cloud services as well as changing the culture of how cloud-based business services are consumed?

Keep pay-as-you-go under control 

Rhoads: One of the biggest challenges I hear from our customers is around going from a CAPEX model to an OPEX model. They don’t really understand how it works.

CAPEX is a longtime standard: here is the price and here is how long it is good for, until you have to re-up and buy a new piece of hardware or renew the license. In the cloud, it's pay-as-you-go.

If I launch 400 servers for an hour, I’m paying for 400 virtual machines running for one hour. So if we don’t have a governance strategy in place to stop something like that, we can wind up going through one year’s worth of budget in 30 days — if it’s not governed, if it’s not watched.

And that’s why, for instance, working with Unisys CloudForte there are built-in controls to where you can go through and ping the Azure cloud backend — such as Azure Cost Management or our Cloudyn product — where you can see how much your current charges are, as well as forecast what those charges are going to look like. Then you can get ahead of the eight ball, if you will, to make sure that you are actually burning through your budget correctly — versus getting a surprise at the end of the month.

Gardner: Raj, how should organizations better manage that cultural shift around cloud consumption governance?

Raman: Adding to Jerry’s point, we see three dimensions to help customers. One is what Unisys calls setting up a clear cloud architecture, the foundations. We actually have an offering geared around this. And, again, we are collaborating with Microsoft on how to codify those best practices.

In going to the cloud, we see five pillars that customers have to contend with: cost, security, performance, availability, and operations. Each of these can be quite complex and very deep.

Rather than have customers figure these out themselves, we have combined product and framework. We have codified it, saying, "Here are the top 10 best practices you need to be aware of in terms of cost, security, performance, availability, and operations."

That makes it very easy for Unisys consultants, architects, and customers to have clear visibility, at any given point — be it pre-migration or post-migration — into where they stand on cost in the cloud.

We are also thinking about security and compliance upfront — not as an afterthought. Oftentimes customers go deep into the journey and they realize they may not have the controls and the security postures, and the next thing you know they start to lose confidence.

So rather than wait for that, the thinking is we arm them early. We give them the governance and the policies on all things security and compliance. And Azure has very good capabilities toward this.

The third bit, and Jerry touched on this, is overall financial governance — the ability to think about cost not just as a matter of spinning a few Azure resources up and down, but in a holistic way, in a governance model. That way you can break it down in terms of how resources are utilized. You can do chargebacks and governance, and gain the ability to optimize cost on an ongoing basis.

These are the distinct foundational elements we are trying to arm customers with, to make them a lot more comfortable and to build the trust, as well as the process, for cloud adoption.

Gardner: The good news about cloud offerings like Azure and hybrid cloud offerings like Azure Stack is you gain a standardized approach. Not necessarily one-size-fits-all, but an important methodological and technical consistency. Yet organizations are often coming from a unique legacy, with years and years of everything from mainframes to n-tier architectures, and applications that come and go.

How do Unisys and Microsoft work together to make the best of standardization for cloud, but also recognize specific needs that each organization has?

Different clouds, same experience

Rhoads: We have Azure Stack for on-premises Azure deployments. We also have Azure Commercial Cloud, as well as Azure Government Cloud and the Department of Defense (DoD) Cloud. The good news is that they use the same portal, same APIs, same tooling, and same products and services across all of these clouds.

Now, as services roll out, they roll out in our Commercial Cloud first, and then we will roll them out into Azure Government as well as into Azure Stack. But, again, the good news is these products are available, and you don’t have to do any special configuration or anything in the backend to make it work. It’s the same experience regardless of which product the customer wants to use.

What’s more, Unisys CloudForte works with Azure Stack, with Commercial, and Azure for Government. For the end customer it’s the same cloud services that they expect to use. The difference really is just where those cloud services live, whether it’s with Azure Stack on-premises, on a cruise ship or in a mine, or if you are going with Azure Commercial Cloud, or if you need a regulated workload such as a FedRAMP high workload or an IC4, IC5 workload, then you would go into Azure Government. But there are no different skills required to use any of those clouds.

Same skill set. You don't have to do any training; it's the same products and services. And if the products and services aren't in a given region, you can work with Unisys or with me to engage the product teams to bring those products into Azure Stack or Azure Government.

Gardner: How does Unisys CloudForte managed services complement these multiple Azure cloud environments and deployment models?

Rhoads: CloudForte really further standardizes it. There are different levels of CloudForte, for instance, and the underlying cloud really doesn't matter; it's the same experience to roll it out. But more importantly, CloudForte is really an on-ramp. A lot of times I am working with customers and they are like, "Well, gee, how do I get started?"

Whether it’s setting up that subscription in-tenant, getting them on-board with that, as well as how to roll out that POC, how do they do that, and that’s where we leverage Unisys and CloudForte as the on-ramp to roll out that first POC. And that’s whether that POC is a bare-bones Azure virtual network or if they are looking to roll out a complete soup-to-nuts application with application services wrapped around it. CloudForte and Unisys can provide that functionality.

Do it your way, with support 

Raman: Unisys CloudForte has been designed as an offering on top of Azure. There are two key themes. One, meet customers where they are. It’s not about what Unisys is trying to do or what Azure is trying to do. It’s about, first and foremost, being customer obsessed. We want to help customers do things on their terms and do it the right way.

So CloudForte has been designed to meet those twin objectives. The way we go about doing it is — imagine, if you will, a flywheel. The flywheel has four parts. One, the whole consumption part, which is the ability to consume Azure workloads at any given point.

Learn More About 

Unisys CloudForte 

Next is the ability to run commands, or the operations piece. Then you follow that up with the ability to accelerate transformations, so data migrations or app modernization.

Last, but not least, is to transform the business itself, be it on a new technology, artificial intelligence (AI), machine learning (ML), blockchain, or anything that can wrap on top of Azure cloud services.

The beauty of the model is that a customer does not have to buy all of these en masse; they can fit in at any point. Some customers come and say, "Hey, we just want to consume the cloud workloads; we really don't want to do the whole transformation piece." Others say, "Thank you very much, we already have the basic consumption model outlined. But can you help us accelerate and transform?"

So the ability to provide flexibility on top of Azure helps us to meet customers where they are. That’s the way CloudForte has been envisioned, and a key part of why we are so passionate and bullish in working with Microsoft to help customers meet their goals.

Gardner: We have talked about technology, and we have talked about process, but of course people, human capital, and resources of talent and skills are super important as well. So Raj, what does the alliance between Unisys and Microsoft do to help usher people from traditional IT roles into being cloud-native practitioners? What are we doing about the people element here?

Expert assistance available

Raman: In order to be successful, one of the big focus areas at Unisys is to arm and equip our own people, be it at the consulting level or the sales-facing level, doing cloud architecture or cloud delivery, across the rank and file. There is an absolute mandate to increase the number of certifications, especially Azure certifications.

In fact, I can also share that at Unisys, as we speak, every month we are doubling the number of people who hold the Azure AZ-300 and AZ-900 certifications. These are the two popular certifications across the whole Azure stack. We now have north of 300 trained people, and my number may be at the low end. We expect that number to double.

So we have absolute commitment, because customers look to us to not only come in and solve the problems, but to do it with the level of expertise that we claim. So that’s why our commitment to getting our people trained and certified on Azure is a very important piece of it.

Gardner: One of the things I love to do is not just tell, but show. Do we have examples of where the Unisys and Microsoft alliance — your approach and methodologies to cloud adoption, tackling the complexity, addressing the security, and looking at both the unique aspects of each enterprise and their skills and people issues — all comes together? Do you have some examples?

Raman: The California State University is a longstanding customer of ours and a good example. They have transformed their university infrastructure using Unisys CloudForte, with a specific focus on all things hybrid cloud. We are pleased to see that not only is the customer happy, they are quite eager to work with us on a consistent basis to make sure their mandates are met.

Our federal agency customers are usually reluctant to be in the spotlight. That said, what I can share are representative examples. We have some very large defense establishments working with us. We also have some state agencies close to the Washington, DC area that are responsible for rolling out cloud consumption across their mandates.

We are well on our way in not only working with the Microsoft Azure cloud teams, but also with state agencies. Each of these agencies is citywide or region-wide, and within that they have a health agency or an agency focused on education or social services.

In our experience, there is absolute interest in adopting public clouds to achieve their citizen mandates. So those are some very specific examples.

Gardner: Jerry, when we look to both public and private sector organizations, how do you know when you are doing cloud adoption right? Are there certain things you should look to, that you should measure? Obviously you would want to see that your budgets are moving from traditional IT spending to cloud consumption. But what are the metrics that you look to?

The measure of success

Rhoads: One of the metrics that I look at is cost. You may do a lift and shift and maybe you are a little bullish when you start building out your environments. When you are doing cloud adoption right, you should see your costs start to go down.

So your consumption will go up, but your costs will go down. That's because you are taking advantage of platform as a service (PaaS) in the cloud and the ability to auto-scale out, or you are moving to, say, Kubernetes, starting to use things like Docker containers, shutting down those big virtual machines (VMs) and clusters of VMs, and running your Kubernetes services instead.

When you see those costs go down and your services going up, that’s usually a good indicator that you are doing it right.

Gardner: Just as a quick aside, Jerry, we have also seen that Microsoft Azure is becoming very container- and Kubernetes-oriented, is that true?

Rhoads: Yes, it is. We actually have Brendan Burns, as a matter of fact, who was one of the co-creators of Kubernetes during his time at Google.

Gardner: Raj, how do you know when you are getting this right? What do you look to as chief metrics from Unisys’s perspective when somebody has gone beyond proof of concept and they are really into a maturity model around cloud adoption?

Raman: One of the things we take very seriously is our mandate to customers to do cloud on your terms and do it right. And what we mean by that is something very specific, so I will break it into two parts.

One is from a customer-led metric perspective. We rank ourselves very seriously in terms of Net Promoter Score. We have one of the highest in the industry relative to the rest of our competition. And that's something that's hard-earned, but we keep striving to raise the bar on how our customers talk to each other and how they feel about us.

The other part is the ability to retain customers, so retention. So those are two very specific customer-focused benchmarks.

Now, building upon some of the examples Jerry was talking about, from a cloud metric perspective, besides cost and cost optimization, we also look at some very specific metrics, such as how many net-new workloads are under management and what net-new services are being launched. We are especially curious to see whether there is a focus on Kubernetes or AI and ML adoption, and whether there are any trends toward that.

One of the very interesting ones that I will share, Dana, is that some of our customers are starting to come and ask us, “Can you help set up an Azure Cloud center of excellence within our organization?” So that oftentimes is a good indicator that the customer is looking to transform the business beyond the initial workload movement.

And last, but not least, is training: an absolute commitment to getting their own organization to become more cloud-native.

Gardner: I will toss another one in, and I know it's hard to get organizations to talk about it: fewer security breaches, and fewer days or instances of downtime because of a ransomware attack. It's hard to prove a negative when you don't get attacked, but certainly a better security posture compared to two or three years ago would be a strong indicator on my map as to whether cloud is being successful for you.

All right, we are almost out of time, so let’s look to the future. What comes next when we get to a maturity model, when organizations are comprehensive, standardized around cloud, have skills and culture oriented to the cloud regardless of their past history? We are also of course seeing more use of the data behind the cloud, in operations and using ML and AI to gain AIOps benefits.

Where can we look to even larger improvements when we employ and use all that data that’s now being generated within those cloud services?

Continuous cloud propels the future 

Raman: One of the things that’s very evident to us is, as customers start to come to us and use the cloud at significant scale, is it is very hard for any one organization. Even for Unisys, we see this, which is how do you get scaled up and keep up with the rate of change that the cloud platform vendors such as Azure are bringing to the table; all good innovations, but how do you keep on top of that?

So that’s where a focus on what we are calling as “AI-led operations” is becoming very important for us. It’s about the ability to go and look at the operational data and have these customers go from a reactive, from a hindsight-led model, to a more proactive and a foresight-driven model, which can then guide, not only their cloud operations, but also help them think about where they can now leverage this data and use that Azure infrastructure to then launch more innovation or new business mandates. That’s where the AIOps piece, the AI-led operations piece, of it kicks in.

Learn More About 

Unisys CloudForte 

There is a reason why cloud is called continuous. You gain the ability to have continuous visibility into compliance and security, along with constant optimization in terms of best practices: reviewing cloud workloads on a constant basis and making sure their architectures are being assessed against the right Azure best practices.

And last, but not least, one other trend I would surface, Dana, is that we are starting to see an increase in conversational bots. Many customers are interested in getting to a self-service mode. That's where we see conversational bots built on Azure or Cortana becoming more mainstream.

Gardner: Jerry, how do organizations recognize that the more cloud adoption and standardization they have, the more benefits they will get in terms of AIOps, and that a virtuous adoption pattern kicks in?

Rhoads: To expand on what Raj talked about with AIOps, we have built a lot of AI into our products and services. One example is Advanced Threat Protection (ATP) on Azure. Another is the anti-phishing mechanisms deployed in Office 365.

So as more folks move into the cloud, we are seeing a lot of adoption of these products and services. We are also able to bring in a lot of feedback and learn from the behaviors we are seeing, to make the products even better.

DevOps integrated in the cloud

So one of the things I do in working with my customers is DevOps: how do we employ DevOps in the cloud? A lot of folks are doing DevOps on-premises, and they are doing it from an application point of view: "I am rolling out my application on infrastructure that is either virtualized or physical, sitting in my data center. How do I do that in the cloud, and why do I do that in the cloud?"

Well, in the cloud everything is software, including infrastructure. Yes, it sits on a server at the end of the day; however, it is software-defined, and being software-defined, it has an API, so I can write code against it. Therefore, if I want to roll out a suite of VMs, or roll out Kubernetes clusters and put my application on top of them, I can create definable, repeatable code, if you will, that I check into a repository someplace, press the button, and roll out that infrastructure with my application on top of it.

So now, deploying applications with DevOps in the cloud, it's no longer a case of having an operations team and a separate DevOps team that rolls out the application on top of existing infrastructure. Instead, I bundle it all together. I have tighter integration, which means I have repeatable deployments. And instead of doing deployments every quarter or annually, I can do 20, 30, or 1,000 a day if I like — if I do it right.
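
The "definable, repeatable code" Jerry describes comes down to declaring a desired state and letting an idempotent apply step converge the environment to it. The sketch below is a toy of that pattern; the resource names and in-memory "actual state" are invented, and it stands in for what a real template or pipeline would do against the Azure APIs.

```python
# Desired infrastructure, declared as data and kept in version control (all names invented).
DESIRED = {
    "vnet-app": {"type": "network", "cidr": "10.1.0.0/16"},
    "aks-app":  {"type": "kubernetes_cluster", "nodes": 3},
    "kv-app":   {"type": "key_vault", "sku": "standard"},
}

def apply(desired: dict, actual: dict) -> dict:
    """Idempotent apply: create what's missing, correct drift, remove what's no longer declared."""
    changes = 0
    for name, spec in desired.items():
        if name not in actual:
            print(f"create {name}: {spec}")
            changes += 1
        elif actual[name] != spec:
            print(f"update {name}: {actual[name]} -> {spec}")
            changes += 1
    for name in set(actual) - set(desired):
        print(f"delete {name} (no longer declared)")
        changes += 1
    print(f"apply complete: {changes} change(s)")
    return dict(desired)   # the environment now matches the declaration

# First run builds everything; running the very same code again is a safe no-op.
state = apply(DESIRED, actual={})
state = apply(DESIRED, actual=state)
```

Because the same declaration can be applied over and over with the same result, it can sit behind a pipeline and be deployed dozens of times a day without hand-built runbooks.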

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Unisys and Microsoft.

How containers are the new basic currency for pay as you go hybrid IT

Container-based deployment models have rapidly gained popularity across a full spectrum of hybrid IT architectures — from edge, to cloud, to data center.

This next edition of the BriefingsDirect Voice of the Innovator podcast series examines how IT operators are now looking to increased automation, orchestration, and compatibility benefits to further exploit containers as a mainstay across their next-generation hybrid IT estate.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

Stay with us to explore the escalating benefits that come from broad container use with Robert Christiansen, Evangelist in the Office of the Chief Technology Officer at Hewlett Packard Enterprise (HPE). The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Containers are being used in more ways and in more places. What was not that long ago a fringe favorite for certain developer use cases is becoming broadly disruptive. How disruptive has the container model become?

Christiansen: Well, it’s the new change in the paradigm. We are looking to accelerate software releases. This is the Agile motion. At the end of the day, software is consuming the world, and if you don’t release software quicker — with more features more often — you are being left behind.

robert-christiansen

Christiansen

The mechanism to do that is to break applications out into smaller consumable segments that we can manage. Typically that motion has been adopted at the public cloud level using containers, and it is spreading into the broader ecosystem of enterprises and private data centers. That is the fundamental piece — and containers serve that purpose.

Gardner: Robert, users are interested in that development and deployment advantage, but are there also operational advantages, particularly in terms of being able to move workloads more freely across multiple clouds and hybrid clouds?

Christiansen: Yes, the idea is twofold. First off, it was to gain agility and motion, and then people started to ask themselves, "Well, I want to have choice, too." So as we start abstracting away the dependencies of what runs a container, such as one tied to a particular cloud provider, I can start saying, "Hey, can I write my container platform services and APIs to be compatible across multiple platforms? How do I go between platforms? How do I go between on-premises and the public cloud?"

Gardner: And because containers can be tailored to specific requirements needed to run a particular service, can they also extend down to the edge and in resource-constrained environments?

Adjustable containers add flexibility 

Christiansen: Yes, and more importantly, they can adjust to sizing issues, too. So think about pushing a container that’s very lightweight into a host that needs to have high availability of compute but may be limited on storage.

There are lots of different use cases. Collapsing the virtualization down to the app is what a container really does: it virtualizes app components, app parts, and dependencies. You deploy only the smallest bit of code needed to execute that one thing. That works in niche uses like a hospital, telecommunications on a cell tower, an automobile, the manufacturing floor, or multiple pieces inside a cloud platform that services a large telco. However you structure it, that's the type of flexibility containers provide.

Gardner: And we know this is becoming a very popular model, because the likes of VMware, the leader in virtualization, is putting a lot of emphasis on containers. They don’t want to lose their market share, they want to be in the next game, too. And then Microsoft with Azure Stack is now also talking about containers — more than I would have expected. So that’s a proof-point, when you have VMware and Microsoft agreeing on something.

Christiansen: Yes, that was really interesting, actually. I just saw a little blurb come up in the news about Azure Stack switching over to a container platform, and I went, "Wow!" Didn't they just put in three or five years' worth of R and D? They are basically saying, "We might be switching this over to another platform." It's the right thing to do.

How to Modernize IT Operations

And Accelerate App Performance 

With Container Technology 

And no one saw Kubernetes coming, or maybe OpenShift did. But the reality is that containers seemingly came out of nowhere. Adoption has been building for a while, but containers have never been adopted like they are now.

Gardner: And Kubernetes is an important part because it helps to prevent complexity and sprawl from getting out of hand. It allows you to have a view over these different, disparate deployment environments. Tell us why Kubernetes is an accelerant to container adoption.

Christiansen: Kubernetes is fundamentally an orchestration platform that allows you to take containers, put them in the right place, manage them, and shut them down when they are not working or are having behavior problems. We need a place to orchestrate, and that's what Kubernetes is meant to be.
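
As a rough mental model of what an orchestrator does, here is a toy sketch, not the Kubernetes scheduler: it keeps a desired replica count for each containerized service, places replicas on the least-loaded node, and replaces any replica that is reported unhealthy. The node and service names are invented for illustration.

```python
import itertools
from collections import defaultdict

class ToyOrchestrator:
    """Toy model of orchestration: keep N replicas of each service running across nodes."""

    def __init__(self, nodes):
        self.nodes = list(nodes)
        self.placements = defaultdict(list)      # service -> list of (replica_id, node)
        self._ids = itertools.count(1)

    def _least_loaded_node(self) -> str:
        load = {n: 0 for n in self.nodes}
        for replicas in self.placements.values():
            for _, node in replicas:
                load[node] += 1
        return min(load, key=load.get)

    def reconcile(self, desired: dict):
        """Converge actual placements toward the desired replica counts."""
        for service, count in desired.items():
            while len(self.placements[service]) < count:
                self.placements[service].append((next(self._ids), self._least_loaded_node()))
            while len(self.placements[service]) > count:
                self.placements[service].pop()    # scale down

    def replace_unhealthy(self, service: str, replica_id: int):
        """Self-healing: drop a failed replica and schedule a fresh one elsewhere."""
        self.placements[service] = [(r, n) for r, n in self.placements[service] if r != replica_id]
        self.placements[service].append((next(self._ids), self._least_loaded_node()))

orch = ToyOrchestrator(nodes=["node-a", "node-b", "node-c"])
orch.reconcile({"web": 3, "api": 2})
orch.replace_unhealthy("web", replica_id=1)
print(dict(orch.placements))
```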

It basically displaced a number of other private, what we call opinionated, orchestrators. There were a number of them out there being worked on. And then Google released Kubernetes, which was fundamentally the platform they had been running their own world on for 10 years. They are doing for this ecosystem what the Android system did for cell phones. They released and open sourced the operating model, which is an interesting move.

Gardner: It’s very rapidly become the orchestration mechanism to rule them all. Has Kubernetes become a de facto industry standard?

Christiansen: It really has. We have not seen a technology platform gain acceptance in the ecosystem as fast as this. I personally, in all my years and decades, have not seen something come up this fast.

Gardner: I agree, and the fact that everyone is all-in is very powerful. How far will this orchestration model go? Beyond containers, perhaps into other aspects of deployment infrastructure management?

Christiansen: Let’s examine the next step. It could be a code snippet. Or if you are familiar with functions, or with Amazon Web Services (AWS) Lambda [serverless functions], you are talking about that. That would be the next step of how orchestration – it allows anyone to just run code only. I don’t care about a container. I don’t care about the networking. I don’t care about any of that stuff — just execute my code.

Gardner: So functions-as-a-service (FaaS) and serverless get people used to that idea. Maybe you don’t want to buy into one particular serverless approach versus another, but we are starting to see that idea of much more flexibility in how services can be configured and deployed — not based on a platform, an OS, or even a cloud.

Containers’ tipping point 

Christiansen: Yes, you nailed it. If you think about the stepping stones to get across this space, it’s a dynamic fluid space. Containers are becoming, I bet, the next level of abstraction and virtualization that’s necessary for the evolution of application development to go faster. That’s a given, I think, right now.

Malcolm Gladwell talked about tipping points. Well, I think we have hit the tipping point on containers. This is going to happen. It may take a while before the ecosystem changes over. If you put the strategy together, if you are a decision-maker, you are making decisions about what to do. Now your container strategy matters. It matters now, not a year from now, not two years from now. It matters now.

Gardner: The flexibility that containers and Kubernetes give us refocuses the emphasis of how to do IT. It means that you are going to be thinking about management and you are going to be thinking about economics and automation. As such, you are thinking at a higher abstraction than just developing and deploying applications and then attaching and integrating data points to them.

Learn More About 

Cloud and Container Trends 

How does this higher abstraction of managing a hybrid estate benefit organizations when they are released from the earlier dependencies?

Christiansen: That’s a great question. I believe we are moving into a state where you cannot run platforms with manual systems or ticketing-based systems, that type of thing. You cannot do that, right? We have so many systems and so much interoperability between the systems that there has to be some sort of autonomic or artificial intelligence (AI)-based platform that is going to make the workflow move for you.

There will still be someone to make a decision. Let’s say a ticket goes through, and it says, “Hey, there is the problem.” Someone approves it, and then a workflow will naturally happen behind that. These are the evolutions, and containers allow you to continue to remove the pieces that cause you problems.

Right now we have big, hairy IT operations problems, and we have a hard time nailing down where they are. The more you break these apart and look at the hotspot areas that have problems, the more specifically you can focus on solving them. Then you can start using some intelligence behind it, some actual workload intelligence, to make that happen.

Gardner: The good news is we have lots of new flexibility, with microservices, very discrete approaches to combining them into services, workflows, and processes. The bad news is we have all that flexibility across all of those variables.

Auspiciously, we are also seeing a lot of interest and emphasis in what’s called AIOps, AI-driven IT operations. How do we now rationalize what containers do, but keep that from getting out of hand? Can we start using programmatic and algorithmic approaches? What are you seeing when we combine AIOps and containers?

Simplify your stack 

Christiansen: That’s like what happens when I mix oranges with apples. It’s kind of an interesting dilemma. But I can see why people want to say, “Hey, how does my container strategy help me manage my asset base? How do I get to a better outcome?”

One reason is that these approaches enable you to collapse the stack. When you look at your stack — meaning the layers that are necessary to operate toward a given outcome — you have the ability to remove complexity.

We are talking about dropping the containers all the way to bare metal. And if you drop to bare metal, you have taken not only cost out of the equation, you have taken some complexity out of the equation. You have taken operational dependencies out of the equation, and you start reducing those. So that’s number one.

Number two is that you have to have some sort of service mesh across this thing. With containers comes a whole bunch of little hotspots all over the place, and a service manager must know where those hotspots are. If you don’t have an operating model that’s intelligent enough to know where they are (that’s what a service mesh provides: the connective layer across all of these things), you are not going to have autonomous behaviors on top of that to help you.

So yes, you can now connect the dots between your containers to become autonomous, but you have to have that layer in between that tells you where the problems are — and then you have intelligence above that which decides how to handle them.

Gardner: We have been talking, Robert, at an abstract level. Let’s go a bit more to the concrete. Are there use cases examples that HPE is working through with customers that illustrate the points we have been making around containers and Kubernetes?

Practice, not permanence 

Christiansen: I just met with the team, and they are working with a client right now, a very large Fortune 500 company, where they are applying the exact functions that I just talked to you about.

The first thing that needed to happen was a development environment where they are actually developing code in a solid continuous integration, continuous delivery, and DevOps practice. We use the word “practice” because it’s like medicine and law: it’s a practice, nothing is permanent.

So we put that in place for them. The second thing is that they’re trying to release code at speed; that is the first goal. Once you start releasing code at speed, with containers as the mechanism by which you are doing it, then you start saying, “Okay, now the platform I’m dropping onto is going through development, quality assurance, integration, and then finally into production.”

By the time you get to production, you need to know how you’re managing your platform. So it’s a client evolution. We are in that process right now — from end-to-end — to take one of their applications that is net new and another application that’s a refactor and run them both through this channel.

More Enterprises Are Choosing 

Containers — Here’s Why 

Now, most clients we engage with are in that early stage. They’re doing proof of concepts. There are a couple of companies out there that have very large Kubernetes installations in production, but they are born-in-the-cloud companies. And those companies have an advantage. They can build that whole thing I just talked about from scratch. But 90 percent of the people out there today, the ones with what I call the crown jewels of applications, have to deal with legacy IT. They have to deal with what’s going on today; their data sources have gravity, and they still have to deal with that existing infrastructure.

Those are the people I really care about. I want to give them a solution that goes to that agile place. That’s what we’re doing with our clients today, getting them off the ground, getting them into a container environment that works.

Gardner: How can we take legacy applications and deployments and then use containers as a way of giving them new life — but not lifting and shifting?

Improve past, future investments 

Christiansen: Organizations have to make some key decisions on investment. This is all about where the organization is in its investment lifecycle. Which ones are they going to make bets on, and which ones are they going to build new?

We are involved with clients going through that process. What we say to them is, “Hey, there is a set of applications you are just not going to touch. They are done. The best we can hope for is to put the whole darn thing in a container, leave it alone, and then we can manage it better.” That’s about cost, about economics, about performance; that’s it. There are no release cycles, nothing like that.


The next set are legacy applications where I can do something to help. Maybe I can take a big, beefy application and break it into four parts and make a service group out of it. That’s called a refactor. That will give them a little bit of agility, because they can release the code pieces for each segment separately.
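
As a loose sketch of what breaking a beefy application into parts can look like, one slice of the monolith (pricing, say) becomes its own small HTTP service with its own release cycle. The route, data, and service boundary here are invented for illustration; they are not drawn from any client example in the discussion.

```python
# Hypothetical slice of a refactored monolith: pricing logic extracted into a
# standalone service that can be released on its own cadence.
# Requires Flask (pip install flask); the data and route are placeholders.
from flask import Flask, jsonify

app = Flask(__name__)

# Stand-in for what used to be an in-process function call inside the monolith.
PRICES = {"widget": 9.99, "gadget": 24.50}

@app.route("/price/<item>")
def price(item):
    if item not in PRICES:
        return jsonify(error="unknown item"), 404
    return jsonify(item=item, price=PRICES[item])

if __name__ == "__main__":
    app.run(port=8080)
```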

And then there are the applications that we are going to rewrite. These are subject to what we call app entanglement. They may have multiple dependencies on other applications to give them data feeds and connections, which are probably services. They have API calls, or direct calls right into them, that allow them to do this and that. There is all sorts of middleware, and it’s just a gnarly mess.

If you try to move those applications to public cloud and try to refactor them there, you introduce what I call data gravity issues or latency issues. You have to replicate data. Now you have all sorts of cost problems and governance problems. It just doesn’t work.

You have to keep those applications in the datacenters. You have to give them a platform to do it there. And if you can’t give it to them there, you have a real problem. What we try to do is break those applications into parts in ways where the teams can work in cloud-native methodologies — like they are doing in public cloud — but do it on-premises. That’s the best way to get it done.

Gardner: And so the decision about on-premises or in a cloud, or to what degree a hybrid relationship exists, isn’t so much dependent upon cost or ease of development. We are now rationalizing this on the best way to leverage services, use them together, and in doing so, we attain backward compatibility – and future-proof it, too.

Christiansen: Yes, you are really nailing it, Dana. The key is thinking about where the app appropriately needs to live. And you have laws of physics to deal with, you have legacy issues to deal with, and you have cultural issues to deal with. And then you have all sorts of data, what we call data nationalization. That means dealing with GDPR and where is all of this stuff going to live? And then you have edge issues. And this goes on and on, and on, and on.

So getting that right — or at least having the flexibility to get it right — is a super important aspect. It’s not the same for every company.

Gardner: We have been addressing containers mostly through an applications discussion. Is there a parallel discussion about data? Can we begin thinking about data as a service, and not necessarily in a large traditional silo database, but perhaps more granular, more as a call, as an API? What is the data lifecycle and DataOps implications of containers?

Everything as a service

Christiansen: Well, here is what I call the Achilles heel of the container world. It doesn’t handle persistent data well at all. One of the things that HPE has been focused on is providing stateful, legacy, highly dependent persistent data stores that live in containers. Okay, that is a unique intellectual property that we offer. I think it is really groundbreaking for the industry.

Kubernetes is a stateless container platform, which is appropriate for cloud-native microservices and those fast and agile motions. But the legacy IT world is stateful, with highly persistent data stores. Those don’t work well in that stateless environment.

Through the work we’ve been doing over the last several years, specifically with an HPE-acquired company called BlueData, we’ve been able to solve that legacy problem. We put that platform into the AI, machine learning (ML), and big data areas first to really flesh that all out. We are joining those two systems together and offering a platform that is going to be really useful out in the marketplace.
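
HPE’s BlueData work is its own platform, and its specifics are not sketched here. The general problem it targets, though, giving a container durable storage that outlives any single pod, can be illustrated with a standard Kubernetes persistent volume claim; the names, image, and sizes below are placeholders.

```python
# General sketch of stateful storage for a container (not the BlueData/HPE
# implementation): request a persistent volume, then mount it into a pod so
# the data survives container restarts. Names, image, and sizes are placeholders.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="db-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)
core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="db"),
    spec=client.V1PodSpec(
        containers=[client.V1Container(
            name="db",
            image="postgres:12",
            volume_mounts=[client.V1VolumeMount(
                name="data", mount_path="/var/lib/postgresql/data")],
        )],
        volumes=[client.V1Volume(
            name="data",
            persistent_volume_claim=client.V1PersistentVolumeClaimVolumeSource(
                claim_name="db-data"),
        )],
    ),
)
core.create_namespaced_pod(namespace="default", body=pod)
```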

Gardner: Another aspect of this is the economics. One of the big pushes from HPE these days is everything as a service, being able to consume and pay for things as you want regardless of the deployment model — whether it’s on premises, hybrid, in public clouds, or multi-clouds. How does the container model we have been describing align with the idea of as-a-service from an economic perspective?

Christiansen: As-a-service is really how I want to get my services when I need them. And I only want to pay for what I need at the time I need it. I don’t want to overpay for it when I don’t use it. I don’t want to be stuck without something when I do need it.

Top Trends — Stateful Apps Are Key 

To Enterprise Container Strategy 

Solving that problem looks different in various places in the ecosystem; it comes up differently. Some clients want to buy stuff; they want to capitalize it and just put it on the books. So we have to deal with that.

You have other people who say, “Hey, I’m willing to take on this hardware burden as a financier, and you can rent it from me.” You can consume all the pieces you need, and then you’ve got the cloud providers as a service. But more importantly, let’s go back to how containers allow you to have much finer granularity about what it is you’re buying. If you want to deploy an app, maybe you are paying for that app to be deployed as opposed to the container. But the containers are the encapsulation of it and where you want to have it.

So you still have to get to what I call the basic currency. The basic currency is a container. Where does that container run? It has to run either on premises, in the public cloud, or on the edge. If people are going to agree on that basic currency model, then we can agree on an economic model.

Gardner: Even if people are risk-averse, I don’t think they’re in trouble by making some big bets on containers as their new currency and by attaining capabilities and skills around both containers and Kubernetes. Recognizing that this is not a big leap of faith, what do you advise people to do right now to get ready?

Christiansen: Get your arms around the Kubernetes installations you already have, because you know they’re happening. This is just like when the public cloud was arriving and there was shadow IT going on. You know it’s happening; you know people are writing code, and they’re pushing it into a Kubernetes cluster. They’re not talking to the central IT people about how to manage or run it — or even if it’s going to be something they can handle. So you’ve got to get a hold of them first.
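
One hedged way to start getting your arms around those existing installations is simply to enumerate every cluster context you can already reach and list what is running in each. The sketch below assumes the clusters are registered in your local kubeconfig; it makes no changes, it only reports.

```python
# Rough inventory sketch: walk every cluster context in the local kubeconfig
# and list the deployments running there. Read-only; assumes contexts and
# credentials already exist in ~/.kube/config.
from kubernetes import client, config

contexts, _active = config.list_kube_config_contexts()
for ctx in contexts:
    name = ctx["name"]
    api_client = config.new_client_from_config(context=name)
    apps = client.AppsV1Api(api_client=api_client)
    deployments = apps.list_deployment_for_all_namespaces()
    print(f"{name}: {len(deployments.items)} deployments")
    for d in deployments.items:
        print(f"  {d.metadata.namespace}/{d.metadata.name}")
```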

Teamwork works

Go find your hand raisers. That’s what I always say. Who are the volunteers? Who has their hands in the air? Openly say, “Hey, come in. I’m forming a containers, Kubernetes, and new development model team.” Give it a name. Call it the Michael Jordan team of containers. I don’t care. But go get them. Go find out who they are, right?


And then form and coalesce that team around that knowledge base. Learn how they think, and find the best of what is going on inside of your own culture. This is about culture, culture, culture, right? And do it in public so people can see it. This is why people got such black eyes when they were doing their first stuff around public cloud: they snuck off and did it, and then they were really reluctant to say anything. Bring it out in the open. Let’s start talking about it.

The next thing is to look for instantiations of applications that you are either going to build net new or going to refactor. Then decide on your container strategy around that Kubernetes platform, and work it as a program. Be open and transparent about what you’re doing. Make sure you’re funded.

And most importantly, above all things, know where your data lives. If your data lives on-premises and that application you’re talking about is going to need data, you’re going to need to have an on-premises solution for containers, specifically those that handle legacy and public cloud at the same time. If that data decides it needs to go to public cloud, you can always move it there.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.



HPE strategist Mark Linesch on the surging role of containers in advancing the hybrid IT estate


Openness, flexibility, and speed to distributed deployments have been top drivers of the steady growth of container-based solutions. Now, IT operators are looking to increase automation, built-in intelligence, and robust management as they seek container-enabled hybrid cloud and multicloud approaches for data and workloads.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

This next edition of the BriefingsDirect Voice of the Innovator podcast series examines the rapidly evolving containers innovation landscape with Mark Linesch, Vice President of Technology Strategy in the CTO Office and Hewlett Packard Labs at Hewlett Packard Enterprise (HPE). The composability strategies interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Let’s look at the state of the industry around containers. What are the top drivers for containers adoption now that the technology has matured?

Linesch: The history of computing, as far back as I can remember, has been about abstraction; abstraction of the infrastructure and then a separation of concern between the infrastructure and the applications.

It used to be that everything was bare metal, and then about a decade ago we went on the journey to virtualization. And virtualization is great; it’s an abstraction that allows for a certain amount of agility. But it’s fairly expensive, because you are virtualizing the entire infrastructure, if you will, and dragging along a unique operating system (OS) each time you do that.

So the industry for the last few years has been saying, “Well, what’s next, what’s after virtualization?” And clearly things like containerization are starting to catch hold.

Why now? Well, because we are living in a hybrid cloud world, and we are moving pretty aggressively toward a more distributed edge-to-cloud world. We are going to be computing, analyzing, and driving intelligence in all of our edges — and all of our clouds.

Things such as performance- and developer-aware capabilities, DevOps, the ability to run an application in a private cloud and then move it to a public cloud, and being able to drive applications to edge environments on a harsh factory floor — these are all aspects of this new distributed computing environment that we are entering into. It’s a hybrid estate, if you will.

Containers have advantages for a lot of different constituents in this hybrid estate world. First and foremost are the developers. If you think about development and developers in general, they have moved from the older, monolithic and waterfall-oriented approaches to much more agile and continuous integration and continuous delivery models.

And containers give developers a predictable environment wherein they can couple not only the application but the application dependencies, the libraries, and all that they need to run an application throughout the DevOps lifecycle. That means from development through test, production, and delivery.

Containers carry and encapsulate all of the app’s requirements to develop, run, test, and scale. With bare metal or virtualization, as the app moved through the DevOps cycle, I had to worry about the OS dependencies and the type of platforms I was running that pipeline on.

Developers’ package deal

A key thing for developers is they can package the application and all the dependencies together into a distinct manifest. It can be version-controlled and easily replicated. And so the developer can debug and diagnose across different environments and save an enormous amount of time. So developers are the first beneficiaries, if you will, of this maturing containerized environment.
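
A hedged sketch of that packaging step: build a versioned image from a project directory whose Dockerfile pins the base image and dependencies, then run the same artifact locally. It assumes a local Docker daemon and the Docker SDK for Python; the paths and tag are illustrative.

```python
# Sketch of packaging an app plus its dependencies into a versioned container
# image and running it. Assumes a local Docker daemon and the "docker" Python
# package; ./myapp (with its Dockerfile) and the tag are illustrative.
import docker

client = docker.from_env()

# Build the image: the Dockerfile in ./myapp pins the base image, libraries,
# and app files, so the artifact carries its dependencies through dev, test,
# and production.
image, build_logs = client.images.build(path="./myapp", tag="myapp:1.0")

# Run the same artifact anywhere a container runtime exists.
container = client.containers.run("myapp:1.0", detach=True)
print(container.id, image.tags)
```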

How to Modernize Your IT 

With Container Technology 

But next are the IT operations folks because they now have a good separation of concern. They don’t have to worry about reconfiguring and patching all these kinds of things when they get a hand-off from developers into a production environment. That capability is fundamentally encapsulated for them, and so they have an easier time operating.

And increasingly in this more hybrid distributed edge-to-cloud world, I can run those containers virtually anywhere. I can run them at the edge, in a public cloud, in a private cloud, and I can move those applications quickly without all of these prior dependencies that virtualization or bare metal required. It contains an entire runtime environment and application, plus all the dependencies, all the libraries, and the like.

The third area that’s interesting for containers is around isolation. Containers virtualize the CPU, memory, storage, and network resources – and they do that at the OS level. So they use resources much more efficiently for that reason.


Unlike virtualization, which includes your entire OS as well as the application, containers run on a single OS. Each container shares the OS kernel with other containers, so it’s lightweight, uses much fewer resources, and spins up almost instantly — in seconds versus virtual machines (VMs) that spin up in minutes.

When you think about this fast-paced, DevOps world we live in — this increasingly distributed hybrid estate from the many edges and many clouds we compute and analyze data in — that’s why containers are showing quite a bit of popularity. It’s because of the business benefits, the technical benefits, the development benefits, and the operations benefits.

Gardner: It’s been fascinating for me to see the portability and fit-for-purpose containerization benefits, and being able to pass those along a DevOps continuum. But one of the things that we saw with virtualization was that too much of a good thing spun out of control. There was sprawl, lack of insight and management, and eventually waste.

How do we head that off with containers? How do containers become manageable across that entire hybrid estate?

Setting the standard 

Linesch: One way is standardizing the container formats, and that’s been coming along fairly nicely. There is an initiative called the Open Container Initiative, part of the Linux Foundation, that develops industry standards so that container formats and the runtime software associated with them are standardized across the different platforms. That helps a lot.

Number two is using a standard deployment option. And the one that seems to be gripping the industry is Kubernetes. Kubernetes is an open source capability that provides mechanisms for deploying, maintaining, and scaling containerized applications. Now, the combination of the standard formats from a runtime perspective with the ability to manage that with capabilities like Mesosphere or Kubernetes has provided the tooling and the capabilities to move this forward.

Gardner: And the timing couldn’t be better, because as people are now focused on as-a-service for so much — whether it’s an application, infrastructure, and increasingly, entire data centers — we can focus on the business benefits and not the underlying technology. No one really cares whether it’s running in a virtualized environment, on bare metal, or in a container — as long as you are getting the business benefits.

Linesch: You mentioned that nobody really cares what they are running on, and I would postulate that they shouldn’t care. In other words, developers should develop, operators should operate. The first business benefit is the enormous agility that developers get and that IT operators get in utilizing standard containerized environments.

How to Extend the Cloud Experience 

Across Your Enterprise 

Not only do they get an operations benefit, faster development, lower cost to operate, and those types of things, but containers also take fewer resources. Because of their shared and abstracted environment, containers consume far fewer resources in a server and storage complex, or in a cluster, so you can run your applications faster, with fewer resources, and at lower total cost.

This is very important when you think about IT composability in general because the combination of containerized environments with things like composable infrastructure provides the flexibility and agility to meet the needs of customers in a very time sensitive and very agile way.


Gardner: How are IT operators making a tag team of composability and containerization? Are they forming a whole greater than the sum of the parts? How do you see these two spurring innovation?

Linesch: I have managed some of our R&D centers. These are usually 50,000-square-foot data centers where all of our developers and hardware and software writers are off doing great work.

And we did some interesting things a few years ago. We were fully virtualized, a kind of private cloud environment, so we could deliver infrastructure-as-a-service (IaaS) resources to these developers. But as hybrid cloud hit and became more of a mature and known pattern, our developers were saying, “Look, I need to spin this stuff up more quickly. I need to be able to run through my development-test pipeline more effectively.”

And containers-as-a-service was just a super hit for these guys. They are under pressure every day to develop, build, and run these applications with the right security, portability, performance, and stability. The containerized systems — and being able to quickly spin up a container, to do work, package that all, and then move it through their pipelines — became very, very important.

From an infrastructure operations perspective, it provides a perfect marriage between the developers and the operators. The operators can use composition and things like our HPE Synergy platform and our HPE OneView tooling to quickly build container image templates. These then allow those developers to populate that containers-as-a-service infrastructure with the work that they do — and do that very quickly.

Gardner: Another hot topic these days is understanding how a continuum will evolve between the edge deployments and a core cloud, or hybrid cloud environment. How do containers help in that regard? How is there a core-to-cloud and/or core-to-cloud-to-edge benefit when containers are used?

Gaining an edge 

Linesch: I mentioned that we are moving to a much more distributed computing environment, where we are going to be injecting intelligence and processing through all of our places, people, and things. And so when you think about that type of an environment, you are saying, “Well, I’m going to develop an application. That application may require more microservices or more modular architecture. It may require that I have some machine learning (ML) or some deep learning analytics as part of that application. And it may then need to be provisioned to 40 — or 400 — different sites from a geographic perspective.”

When you think about edge-to-cloud, you might have a set of factories in different parts of the United States. For example, you may have 10 factories all seeking to develop inferencing and analyzed actions on some type of an industrial process. It might be video cameras attached to an assembly line looking for defects and ingesting data and analyzing that data right there, and then taking some type of a remediation action.

How to Optimize Your IT Operations 

With Composable Infrastructure 

And so as we think about this edge-to-cloud dance, one of the things that’s critical there is continuous integration and continuous delivery — of being able to develop these applications and the artificial intelligence (AI) models associated with analyzing the data on an ongoing basis. The AI models, quite frankly, drift and they need to be updated periodically. And so continuous integration and continuous delivery types of methodologies are becoming very important.

Then, how do I package up all of those application bits, analytics bits, and ML bits? How do I provision that to those 10 factories? How do I do that in a very fast and fluid way?

That’s where containers really shine. They will give you bare-metal performance. They are packaged and portable – and that really lends itself to the fast-paced development and delivery cycles required for these kinds of intelligent edge and Internet of Things (IoT) operations.
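
A rough sketch of that provisioning step, assuming each factory site is a Kubernetes cluster registered as a context in a central kubeconfig: roll the same packaged app-plus-model image out to every site by patching the image tag. The context names, deployment name, and registry path are invented, and the deployment is assumed to already exist at each site.

```python
# Hypothetical edge rollout: push the same app-plus-model image to every
# factory cluster registered in the local kubeconfig. Context names, the
# deployment name, and the image tag are invented; each site is assumed to
# already run the "defect-inspector" deployment.
from kubernetes import client, config

FACTORY_CONTEXTS = [f"factory-{i:02d}" for i in range(1, 11)]
IMAGE = "registry.example.com/defect-inspector:2024.06"  # app + ML model bundle

for ctx in FACTORY_CONTEXTS:
    api_client = config.new_client_from_config(context=ctx)
    apps = client.AppsV1Api(api_client=api_client)
    # Patch only the container image; Kubernetes rolls the change out per site.
    apps.patch_namespaced_deployment(
        name="defect-inspector",
        namespace="default",
        body={"spec": {"template": {"spec": {"containers": [
            {"name": "inspector", "image": IMAGE}]}}}},
    )
    print(f"rolled out {IMAGE} to {ctx}")
```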

Gardner: We have heard a lot about AIOps and injecting more intelligence into more aspects of IT infrastructure, particularly at the June HPE Discover conference. But we seem to be focusing on the gathering of the data and the analysis of the data, and not so much on what you do with that analysis – the execution based on the inferences.

It seems to me that containers provide a strong means when it comes to being able to exploit recommendations from an AI engine and then doing something — whether to deploy, to migrate, to port.

Am I off on some rough tangent? Or is there something about containers — and being able to deftly execute on what the intelligence provides — that might also be of benefit?

Linesch: At the edge, you are talking about many applications where a large amount of data needs to be ingested. It needs to be analyzed, and then a real-time action taken from a predictive maintenance, classification, or remediation perspective.


And so containers spin up very quickly. They use very few resources. The whole cycle-time of ingesting data, analyzing that data through a container framework, and taking some action back to the thing that you are analyzing is made a whole lot easier, and a whole lot more performant with fewer resources, when you use containers.

Now, virtualization still has a very solid set of constituents, both at the hybrid cloud and at the intelligent edge. But we are seeing the benefits of containers really shine in these more distributed edge-to-cloud environments.

Gardner: Mark, we have chunked this out among the developer to operations and deployment, or DevOps implications. And we have talked about the edge and cloud.

But what about at the larger abstraction of impacting the IT organization? Is there a benefit for containerization where IT is resource-constrained when it comes to labor and skills? Is there a people, skills, and talent side of this that we haven’t yet tapped into?

Customer microservices support 

Linesch: There definitely is. One of the things that we do at HPE is try to help customers move into these new models like containers, DevOps, and continuous integration and delivery. We offer a set of services that help customers, whether they are medium-sized customers or large customers, to think differently about development of applications. As a result, they are able to become more agile and microservices-oriented.

Microservice-oriented development really lends itself to this idea of containers, and the ability of containers to interact with each other as a full-set application. What you see happening is that you have to have a reason not to use containers now.

How to Simplify and Automate 

Across Your Datacenter 

That’s pretty exciting, quite frankly. It gives us an opportunity to help customers to engage from an education perspective, and from a consulting, integration, and support perspective as they journey through microservices and how to re-architect their applications.

Our customers are moving to a more continuous integration-continuous development approach. And we can show them how to manage and operate these types of environments with high automation and low operational cost.

Gardner: A lot of the innovation we see along the lines of digital transformation at a business level requires taking services and microservices from different deployment models — oftentimes multi-cloud, hybrid cloud, software-as-a-service (SaaS) services, on-premises, bare metal, databases, and so forth.

Are you seeing innovation percolating in that way? If you have any examples, I would love to hear them.

Linesch: I am seeing that. You see that every day when you look at the Internet. It’s a collaboration of different services based on APIs. You collect a set of services for a variety of different things from around these Internet endpoints, and that’s really as-a-service. That’s what it’s all about — the ability to orchestrate all of your applications and collections of service endpoints.

Furthermore, beyond containers, there are new function-based, or serverless, types of computing. These innovators basically say, “Hey, I want to consume a service from someplace, from an HTTP endpoint, and I want to do that very quickly.” They very effectively are using service-oriented methodologies and the model of containers.

We are seeing a lot of innovation in these function-as-a-service (FaaS) capabilities that some of the public clouds are now providing. And we are seeing a lot of innovation in the overall operations at scale of these hybrid cloud environments, given the portability of containers.

At HPE, we believe the cloud isn’t a place — it’s an experience. The utilization of containers provides a great experience for both the development community and the IT operations community. It truly helps better support the business objectives of the company.

Investing in intelligent innovation

Gardner: Mark, for you personally, as you are looking for technology strategy, how do you approach innovation? Is this something that comes organically, that bubbles up? Or is there a solid process or workflow that gets you to innovation? How do you foster innovation in your own particular way that works?

Linesch: At HPE, we have three big levers that we pull on when we think about innovation.

The first is we can do a lot of organic development — and that’s very important. It involves understanding where we think the industry is going, and trying to get ahead of that. We can then prove that out with proof of concepts and incubation kinds of opportunities with lead customers.

We also, of course, have a lever around inorganic innovation. For example, you saw recently an acquisition by HPE of Cray to turbocharge the next generation of high-performance computing (HPC) and to drive the next generation of exascale computing.

The third area is our partnerships and investments. We have deep collaboration with companies like Docker, for example. They have been a great partner for a number of years, and we have, quite frankly, helped to mature some of that container management technology.

We are an active member of the standards organizations around the containers. Being able to mature the technology with partners like Docker, to get at the business value of some of these big advancements is important. So those are just three ways we innovate.

Longer term, with other HPE core innovations, such as composability and memory-driven computing, we believe that containers are going to be even more important. You will be able to hold the containers in memory-driven computing systems, in either dynamic random-access memory (DRAM) or storage-class memory (SCM).

You will be able to spin them up instantly or spin them down instantly. The composition capabilities that we have will increasingly automate a very significant part of bringing up such systems, of bringing up applications, and really scaling and moving those applications to where they need to be.

One of the principles that we are focused on is moving the compute to the data — as opposed to moving the data to the compute. And the reason for that is when you move the compute to the data, it’s a lot easier, simpler, and faster with less resources.


This next generation of distributed computing, memory-driven computing, and composability is really ripe for what we call containers in microseconds. And we will be able to do that all with the composability tooling we already have.

Gardner: When you get to that point, you’re not just talking about serverless. You’re talking about cloudless. It doesn’t matter where the FaaS is being generated as long as it’s at the right performance level that you require, when you require it. It’s very exciting.

Before we break, I wonder what guidance you have for organizations to become better prepared to exploit containers, particularly in the context of composability and leveraging a hybrid continuum of deployments? What should companies be doing now in order to be getting better prepared to take advantage of containers?

Be prepared, get busy

Linesch: If you are developing applications, think deeply about agile development principles; developing applications with a microservices bent is very, very important.

If you are in IT operations, it’s all about being able to offer bare metal, virtualization, and containers-as-a-service options — depending on the workload and the requirements of the business.

How to Manage Your Complex 

Hybrid Cloud More Effectively 

I recommend that companies not stand on the sidelines but get busy; get to a proof of concept with containers-as-a-service. We have a lot of expertise here at HPE. We have a lot of great partners, such as Docker, and so we are happy to help and engage.

We have quite a bit of on-boarding and helpful services along the journey. And so jump in and crawl, walk, and run through it. There are always some sharp corners on advanced technology, but containers are maturing very quickly. We are here to help our customers on that journey.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.



The venerable history of IT systems management meets the new era of AIOps-fueled automation over hybrid and multicloud complexity

The next edition of the BriefingsDirect Voice of the Innovator podcast series explores the latest developments in hybrid IT management.

IT operators have for decades been playing catch-up to managing their systems amid successive waves of heterogeneity, complexity, and changing deployment models. IT management technologies and methods have evolved right along with the challenge, culminating in the capability to optimize and automate workloads to exacting performance and cost requirements.

But now automation is about to get an AIOps boost from new machine learning (ML) and artificial intelligence (AI) capabilities — just as multicloud and edge computing deployments become more common — and demanding.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Stay with us as we explore the past, present, and future of IT management innovation with a 30-year veteran of IT management, Doug de Werd, Senior Product Manager for Infrastructure Management at Hewlett Packard Enterprise (HPE). The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Management in enterprise IT has for me been about taking heterogeneity and taming it, bringing varied and dynamic systems to a place where people can operate over more, using less. And that’s been a 30-year journey.

Yet heterogeneity these days, Doug, includes so much more than it used to. We’re not just talking about platforms and frameworks – we’re talking about hybrid cloud, multicloud, and many software-as-a-service (SaaS) applications. It includes working securely across organizational boundaries with partners and integrating business processes in ways that never have happened before.

With all of that new complexity, with an emphasis on intelligent automation, where do you see IT management going next?

Managing management 


de Werd: Heterogeneity is known by another term, and that’s chaos. In trying to move from traditional silos and tools to more agile, flexible things, IT management is all about your applications — human resources and finance, for example — that run the core of your business. There’s also software development and other internal things. The models for those can be very different, and trying to manage it all in a single manner is difficult because you have widely varying endpoints.

Gardner: Sounds like we are now about managing the management.

de Werd: Exactly. Trying to figure out how to do that in an efficient and economically feasible way is a big challenge.

Gardner: I have been watching the IT management space for 20-plus years and every time you think you get to the point where you have managed everything that needs to be managed — something new comes along. It’s a continuous journey and process.

But now we are bringing intelligence and automation to the problem. Will we ever get to the point where management becomes subsumed or invisible?

de Werd: You can automate tasks, but you can’t automate people. And you can’t automate internal politics and budgets and things like that. What you do is automate to provide flexibility.

How to Support DevOps, Automation,

And IT Management Initiatives 

But it’s not just the technology, it’s the economics and it’s the people. By putting that all together, it becomes a balancing act to make sure you have the right people in the right places in the right organizations. You can automate, but it’s still within the context of that broader picture.

Gardner: When it comes to IT management, you need a common framework. For HPE, HPE OneView has been core. Where does HPE OneView go from here? How should people think about the technology of management that also helps with those political and economic issues?

de Werd: HPE OneView is just an outstanding core infrastructure management solution, but it’s kind of like a car. You can have a great engine, but you still have to have all the other pieces.

And so part of what we are trying to do with HPE OneView, and we have been very successful, is extending that capability out into other tools that people use. This can be with more traditional tools, like our Microsoft or VMware partnerships, exposing and bringing HPE OneView functionality into those traditional environments.


But it also has a lot to do with DevOps and the continuous integration and development types of things with Docker, Chef, and Puppet — the whole slew of at least 30 partners we have.

That integration allows the confidence of using HPE OneView as a core engine. All those other pieces can still be customized to do what you need to do — yet you still have that underlying core foundation of HPE OneView.

Gardner: And now with HPE increasingly going to an as-a-service orientation across many products, how does management-as-a-service work?

Creativity in the cloud 

de Werd: It’s an interesting question, because part of management in the traditional sense — where you have a data center full of servers with fault management or break/fix, such as hard-drive failure detection — is that you want to be close; you want to have that notification immediately.

As you start going up into the cloud with deployments, you have connectivity issues and latency issues, so it becomes a little bit trickier. When you move up levels, up the stack, where you have software that can be more flexible, you can do more coordination. Then the cloud makes a lot of sense.

Management in the cloud can mean a lot of things. If it’s the infrastructure, you tend to want to be closer to the infrastructure, but not exclusively. So, there’s a lot of room for creativity.

Gardner: Speaking of creativity, how do you see people innovating both within HPE and within your installed base of users? How do people innovate with management now that it’s both on- and off-premises? It seems to me that there is an awful lot you could do with management beyond red-light, green-light, and seek out those optimization and efficiency goals. Where is the innovation happening now with IT management?

de Werd: The foundation of it begins with automation, because if you can automate you become repeatable, consistent, and reliable, and those are all good in your data center.

Transform Compute, Storage, and Networking

Into Software-Defined Infrastructure 

You can free up your IT staff to do other things. The truth is if you can do that reliably, you can spend more time innovating and looking at your problems from a different angle. You gain the confidence that the automation is giving you.

Automation drives creativity in a lot of different ways. You can be faster to market, have quicker releases, those types of things. I think automation is the key.

Gardner: Any examples? I know sometimes you can’t name customers, but can you think of instances where people are innovating with management in ways that would illustrate its potential?

Automation innovation 

de Werd: There’s a large biotech genome-sequencing company with an IT group that is very innovative. They can change their configuration on the fly based on what they want to do. They can flex their capacity up and down based on a task — how much compute and storage they need. They have a very flexible way of doing that. They have it all automated, all scripted. They can turn on a dime, even as a very large IT organization.

And they have had some pretty impressive ways of repurposing their IT. Today we are doing X and tonight we are doing Y. They can repurpose that literally in minutes — versus days for traditional tasks.
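
A hedged sketch of that kind of scripted repurposing, scaling a workload up for the daytime task and back down for the overnight one with the Kubernetes Python client; the deployment name, namespace, and replica counts are placeholders, not details from this customer.

```python
# Sketch of scripted capacity flexing: scale a workload up for one task and
# back down afterward. Deployment name, namespace, and counts are placeholders.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

def set_replicas(name, namespace, count):
    apps.patch_namespaced_deployment(
        name=name,
        namespace=namespace,
        body={"spec": {"replicas": count}},
    )

set_replicas("genome-pipeline", "research", 40)  # e.g., daytime batch run
# ... later, triggered by a scheduler or an operator's decision ...
set_replicas("genome-pipeline", "research", 2)   # overnight footprint
```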

Gardner: Are your customers also innovating in ways that allow them to get a common view across the entire lifecycle of IT? I’m thinking from requirements, through development, deployment, test, and continuous redeployment.

de Werd: Yes, they can string all of these processes together using different partner tools, yet at the core they use HPE OneView and HPE Synergy underneath the covers to provide that real, raw engine.


By using the HPE partner ecosystem integrated with HPE OneView, they have that visibility. Then they can get into things like Docker Swarm. It may not be HPE OneView providing that total visibility. At the hardware and infrastructure level it is, but because we are feeding into upper-level and broader applications, they can see what’s going on and determine how to adjust to meet the needs across the entire business process.

Gardner: In terms of HPE Synergy and composability, what’s the relationship between composability and IT management? Are people making the whole greater than the sum of the parts with those?

de Werd: They are trying to. I think there is still a learning curve. Traditional IT has been around a long time. It just takes a while to change the mentality, skills sets, and internal politics. It takes a while to get to that point of saying, “Yeah, this is a good way to go.”

But once they dip their toes into the water and see the benefits — the power, flexibility, and ease of it — they are like, “Wow, this is really good.” One step leads to the next and pretty soon they are well on their way on their composable journey.

Gardner: We now see more intelligence brought to management products. I am thinking about how HPE InfoSight is being extended across more storage and server products.

How to Eliminate Complex Manual Processes 

And Increase Speed of IT Delivery 

We used to access log feeds from different IT products and servers. Then we had agents and agent-less analysis for IT management. But now we have intelligence as a service, if you will, and new levels of insight. How will HPE OneView evolve with this new level of increasingly pervasive intelligence?

de Werd: HPE InfoSight is a great example. You see it being used in multiple ways, things like taking the human element out, things like customer advisories coming out and saying, “Such-and-such product has a problem,” and how that affects other products.

If you are sitting there looking at 1,000 or 5,000 servers in your data center, you’re wondering, “How am I affected by this?” There are still a lot of manual spreadsheets out there, and you may find yourself poring through a list.

Today, you have the capability of getting an [intelligent alert] that says, “These are the ones that are affected. Here is what you should do. Do you want us to go fix it right now?” That’s just an example of what you can do.
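
The matching HPE InfoSight does is a product capability, but the gist of “which of my 5,000 servers does this advisory actually touch” can be sketched as a simple cross-reference. The inventory records and advisory below are entirely made up; this is not an InfoSight API.

```python
# Illustrative only: cross-reference a made-up server inventory against a
# made-up advisory to list the machines that need action. This is the kind of
# matching an intelligent tool automates; it is not an HPE InfoSight API.
inventory = [
    {"name": "srv-0001", "model": "DL380 Gen10", "firmware": "2.30"},
    {"name": "srv-0002", "model": "DL360 Gen10", "firmware": "2.42"},
    {"name": "srv-0003", "model": "DL380 Gen10", "firmware": "2.32"},
]

advisory = {"affected_models": {"DL380 Gen10"}, "fixed_in_firmware": "2.40"}

# Simple string comparison is enough for these made-up two-digit versions.
affected = [
    s for s in inventory
    if s["model"] in advisory["affected_models"]
    and s["firmware"] < advisory["fixed_in_firmware"]
]

for server in affected:
    print(f"{server['name']}: update firmware {server['firmware']} -> "
          f"{advisory['fixed_in_firmware']}")
```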

It makes you more efficient. You begin to understand how you are using your resources, where your utilization is, and how you can then optimize that. Depending on how flexible you want to be, you can design your systems to respond to those inputs and automatically flex [deployments] to the places that you want to be.

This leads to autonomous computing. We are not quite there yet, but we are certainly going in that direction. You will be able to respond to different compute, storage, and network requirements and adjust on the fly. There will also be self-healing and self-morphing into a continuous optimization model.

Gardner: And, of course, that is a big challenge these days … hybrid cloud, hybrid IT, and deploying across on-premises cloud, public cloud, and multicloud models. People know where they want to go with that, but they don’t know how to get there.

How does modern IT management help them achieve what you’ve described across an increasingly hybrid environment?

Manage from the cloud down 

de Werd: They need to understand what their goals are first. Just running virtual machines (VMs) in the cloud isn’t really where they want to be. That was the initial thing. There are economic considerations involved in the cloud, CAPEX and OPEX arguments.

Simply moving your infrastructure from on-premises up into the cloud isn’t going to get you where you really need to be. You need to look at it from a cloud-native-application perspective, where you are using micro services, containers, and cloud-enabled programming languages — your Javas and .NETs and all the other stateless types of things – all of which give you new flexibility to flex performance-wise.

From the management side, you have to look at different ways to do your development and different ways to do delivery. That’s where the management comes in. To do DevOps and exploit the DevOps tools, you have to flip the way you are thinking — to go from the cloud down.

Cloud application development on-premises is one of the great things about containers and cloud-native, stateless types of applications. There are no hardware dependencies, so you can develop the apps and services on-premises, and then run them in the cloud, run them on-premises, and/or use your hybrid cloud vendor’s capabilities to burst up into a cloud if you need it. That’s the joy of having those types of applications. They can run anywhere. They are not dependent on anything — on any particular underlying operating system.

But you have to shift and get into that development mode. And the automation helps you get there, and then helps you respond quickly once you do.

Gardner: Now that hybrid deployment continuum extends to the edge. There will be increasing data analytics and measurement at the edge, with deployment changes made dynamically from that analysis.

It seems to me that the way you have designed and architected HPE IT management is ready-made for such extensibility out to the edge. You could have systems run there that can integrate as needed, when appropriate, with a core cloud. Tell me how management as you have architected it over the years helps manage the edge, too.


de Werd: Businesses need to move their processing further out to the edge, and gain the instant response, instant gratification. You can’t wait for an input captured at the edge to go all the way back to a data source, or all the way up to a cloud, to be analyzed. You want to have the processing further and further toward the edge so you can get that instantaneous response that customers are coming to expect.

But again, being able to automate how to do that, and having the flexibility to respond to differing workloads and moving those toward the edge, I think, is key to getting there.

Gardner: And Doug, for you, personally, do you have some takeaways from your years of experience about innovation and how to make innovation a part of your daily routine?

de Werd: One of the big impacts on the team that I work with is in our quality assurance (QA) testing. It’s a very complex thing to test various configurations; that’s a lot of work. In the old days, we had to manually reconfigure things. Now, as we use an Agile development process, testing is a continuous part of it.

We can now respond very quickly and keep up with the Agile process. It used to be that testing was always the tail-end and the longest thing. Development testing took forever. Now because we can automate that, it just makes that part of the process easier, and it has taken a lot of stress off of the teams. We are now much quicker and nimbler in responses, and it keeps people happy, too.

How to Get Simple, Automated Management 

Of Your Hybrid Infrastructure 

Gardner: As we close out, looking to the future, where do you see management going, particularly how to innovate using management techniques, tools, and processes? Where is the next big green light coming from?

Set higher goals 

de Werd: First, get your house in order in terms of taking advantage of the automation available today. Really think about how not to just use the technology as the end-state. It’s more of a means to get to where you want to be.

Define where your organization wants to be. Where you want to be can have a lot of different aspects; it could be about how the culture evolves, or what you want your customers’ experience to be. Look beyond just, “I want this or that feature.”

Then, design your full IT and development processes. Get to that goal, rather than just saying, “Oh, I have 100 VMs running on a server, isn’t that great?” Well, if it’s not achieving the ultimate goal of what you want, it’s just a technology feat. Don’t use technology just for technology’s sake. Use it to get to the larger goals, and define those goals, and how you are going to get there.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


Posted in AIOps, application transformation, artificial intelligence, Cloud computing, data center, Data center transformation, DevOps, Enterprise architect, enterprise architecture, Enterprise transformation, Hewlett Packard Enterprise, hyperconverged infrastructure, Microsoft, multicloud, Security, storage, User experience, VMware | Tagged , , , , , , , , , , , , | Leave a comment

How the Catalyst UK program seeds the next generations of HPC, AI, and supercomputing

The next BriefingsDirect Voice of the Customer discussion explores a program to expand the variety of CPUs that support supercomputer and artificial intelligence (AI)-intensive workloads.

The Catalyst program in the UK is seeding the advancement of the ARM CPU architecture for high performance computing (HPC) as well as establishing a vibrant software ecosystem around it.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

Stay with us to learn about unlocking new choices and innovation for the next generations of supercomputing with Dr. Eng Lim Goh, Vice President and Chief Technology Officer for HPC and AI at Hewlett Packard Enterprise (HPE), and Professor Mark Parsons, Director of the Edinburgh Parallel Computing Centre (EPCC) at the University of Edinburgh. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Mark, why is there a need now for more variety of CPU architectures for such use cases as HPC, AI, and supercomputing?

Mark Parsons

Parsons

Parsons: In some ways this discussion is a bit odd because we have had huge variety over the years in supercomputing with regard to processors. It’s really only the last five to eight years that we’ve ended up with the majority of supercomputers being built from the Intel x86 architecture.

It’s always good in supercomputing to be on the leading edge of technology and getting more variety in the processor is really important. It is interesting to seek different processor designs for better performance for AI or supercomputing workloads. We want the best type of processors for what we want to do today.

Gardner: What is the Catalyst program? Why did it come about? And how does it help address those issues?

Parsons: The Catalyst UK program is jointly funded by a number of large companies and three universities: The University of Bristol, the University of Leicester, and the University of Edinburgh. It is UK-focused because Arm Holdings is based in the UK, and there is a long history in the UK of exploring new processor technologies.

Through Catalyst, each of the three universities hosts a 4,000-core ARM processor-based system. We are running them as services. At my university, for example, we now have a number of my staff using this system. But we also have external academics using it, and we are gradually opening it up to other users.

Catalyst for change in processors

We want as many people as possible to understand how difficult it will be to port their code to ARM. Or, rather — as we will explore in this podcast — how easy it is.

You only learn by breaking stuff, right? And so, we are going to learn which bits of the software tool chain, for example, need some work. [Such porting is necessary] because ARM predominantly sat in the mobile phone world until recently. The supercomputing and AI world is a different space for the ARM processor to be operating in.

Gardner: Eng Lim, why is this program of interest to HPE? How will it help create new opportunity and performance benchmarks for such uses as AI?

Dr. Eng Lim Goh

Goh

Goh: Mark makes a number of very strong points. First and foremost, we are very keen as a company to broaden the reach of HPC among our customers. If you look at our customer base, a large portion of them come from the commercial HPC sites, the retailers, banks, and across the financial industry. Letting them reach new types of HPC is important and a variety of offerings makes it easier for them.

The second thing is the recent reemergence of more AI applications, which also broadens the user base. There is also a need for greater specialization in certain areas of processor capabilities. We believe in this case, the ARM processor — given the fact that it enables different companies to build innovative variations of the processor – will provide a rich set of new options in the area of AI.

Gardner: What is it, Mark, about the ARM architecture and specifically the Marvell ThunderX2 ARM processor that is so attractive for these types of AI workloads?

Expanding memory for the future 

Parsons: It’s absolutely the case that all numerical computing — AI, supercomputing, and desktop technical computing — is controlled by memory bandwidth. This is about getting data to the processor so the processor core can act on it.

What we see in the ThunderX2 now, as well as in future iterations of this processor, is the strong memory bandwidth capabilities. What people don’t realize is a vast amount of the time, processor cores are just waiting for data. The faster you get the data to the processor, the more compute you are going to get out with that processor. That’s one particular area where the ARM architecture is very strong.
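
A quick back-of-envelope roofline estimate shows why bandwidth, not peak compute, usually sets the ceiling. The hardware figures below are illustrative assumptions, not ThunderX2 specifications.

# Back-of-envelope roofline estimate for a STREAM-triad-like kernel:
#   a[i] = b[i] + s * c[i]   (2 flops per element, 24 bytes moved per element)
# The hardware numbers below are illustrative assumptions, not vendor specs.
peak_flops = 1.0e12          # 1 TFLOP/s of peak double-precision compute
mem_bandwidth = 200.0e9      # 200 GB/s of sustained memory bandwidth

n = 100_000_000              # elements
flops = 2 * n
bytes_moved = 24 * n         # read b and c, write a (8 bytes each)

compute_time = flops / peak_flops
memory_time = bytes_moved / mem_bandwidth

print(f"compute-limited time: {compute_time*1e3:.2f} ms")
print(f"memory-limited time:  {memory_time*1e3:.2f} ms")
# The memory-limited time dominates by roughly 60x here, so the cores mostly
# wait for data -- which is why memory bandwidth, not peak FLOPS, decides performance.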

Goh: Indeed, memory bandwidth is the key. Not only in supercomputing applications, but especially in machine learning (ML) where the machine is in the early phases of learning, before it does a prediction or makes an inference.

It has to go through the process of learning, and this learning is a highly data-intensive process. You have to consume massive amounts of historical data and examples in order to tune itself to a model that can make good predictions. So, memory bandwidth is utmost in the training phase of ML systems.

And related to this is the fact that the ARM processor’s core intellectual property is available to many companies to innovate around. More companies therefore recognize they can leverage that intellectual property and build high-memory bandwidth innovations around it. They can come up with a new processor. Such an ability to allow different companies to innovate is very valuable.

Gardner: Eng Lim, does this fit in with the larger HPE drive toward memory-intensive computing in general? Does the ARM processor fit into a larger HPE strategy?

Goh: Absolutely. The ARM processor together with the other processors provide choice and options for HPE’s strategy of being edge-centric, cloud-enabled, and data-driven.

Across that strategy, the commonality is data movement. And as such, the ARM processor allowing different companies to come in to innovate will produce processors that meet the needs of all these various kinds of sectors. We see that as highly valuable and it supports our strategy.

Gardner: Mark, Arm Holdings controls the intellectual property, but there is a budding ecosystem both on the processor design as well as the software that can take advantage of it. Tell us about that ecosystem and why the Catalyst UK program is facilitating a more vibrant ecosystem.

The design-to-build ecosystem 

Parsons: The whole Arm story is very, very interesting. This company grew out of home computing about 30 to 40 years ago. The interesting thing is the way that they are an intellectual property company, at the end of the day. Arm Holdings itself doesn’t make processors. It designs processors and sells those designs to other people to make.

So, we’ve had this wonderful ecosystem of different companies making their own ARM processors or making them for other people. With the wide variety of different ARM processors in mobile phones, for example, there is no surprise that it’s the most common processor in the world today.

Now, people think that x86 processors rule the roost, but actually they don’t. The most common processor you will find is an ARM processor. As a result, there is a whole load of development tools that come both from ARM and also within the developer community that support people who want to develop code for the processors.

In the context of Catalyst UK, in talking to Arm, it’s quite clear that many of their tools are designed to meet their predominant market today, the mobile phone market. As they move into the higher-end computing space, it’s clear we may find things in the programs where the compiler isn’t optimized. Certain libraries may be difficult to compile, and things like that. And this is what excites me about the Catalyst program. We are getting to play with leading-edge technology and show that it is easy to use all sorts of interesting stuff with it.

Gardner: And while the ARM CPU is being purpose-focused for high-intensity workloads, we are seeing more applications being brought in, too. How does the porting process of moving apps from x86 to ARM work? How easy or difficult is it? How does the Catalyst UK program help?

Parsons: All three of the universities are porting various applications that they commonly use. At the EPCC, we run the national HPC service for the UK called ARCHER. As part of that we have run national [supercomputing] services since 1994, but as part of the ARCHER service, we decided for the first time to offer many of the common scientific applications as modules.

You can just ask for the module that you want to use. Because we saw users compiling their own copies of code, we had multiple copies, some of them identically compiled, others not compiled particularly well.

So, we have a model of offering about 40 codes on ARCHER as precompiled modules, where we try to keep them up to date, patch them, and so on. We have 100 staff at EPCC who look after code. I have asked those staff to get an account on the Catalyst system, take that code across, and spend an afternoon trying to compile it. We already know that some of the codes just compile and run. Others may have some problems, and it’s those that we’re passing on to ARM and HPE, saying, “Look, this is what we found out.”

The important thing is that we have found very few programs [with such problems]. Most code simply recompiles very, very smoothly.
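
As a trivial illustration of that portability, the same interpreted script runs unchanged on an x86 workstation or an aarch64 node and simply reports where it landed. This is an illustrative sketch, not part of the EPCC port itself.

# A trivial illustration of portability: the same script runs unchanged on an
# x86_64 laptop or an aarch64 (ARM) node and simply reports where it is running.
import platform
import numpy as np

def main():
    a = np.random.rand(1_000_000)
    print(f"architecture: {platform.machine()}")   # e.g. 'x86_64' or 'aarch64'
    print(f"numpy mean over 1M samples: {a.mean():.6f}")

if __name__ == "__main__":
    main()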

Gardner: How does HPE support that effort, both in terms of its corporate support but also with the IT systems themselves?

ARM’s reach 

Goh: We are very keen about the work that Mark and the Catalyst program are doing. As Mark mentioned, the ARM processor came more from the edge-centric side of our strategy. In mobile phones, for example.

Now we are very keen to see how far these ARM systems can go. Already we have shipped to the US Department of Energy at the Sandia National Lab a large ARM processor-based supercomputer called Astra. These efforts are ongoing in the area of HPC applications. We are very keen to see how this processor and the compilers for it work with various HPC applications in the UK and the US.

Gardner: And as we look to the larger addressable market, with the edge and AI being such high-growth markets, it strikes me that supercomputing — something that has been around for decades — is not fully mature. We are entering a whole new era of innovation.

Mark, do you see supercomputing as in its heyday, sunset years, or perhaps even in its infancy?

Parsons: I absolutely think that supercomputing is still in its infancy. There are so many bits in the world around us that we have never even considered trying to model, simulate, or understand on supercomputers. It’s strange because quite often people think that supercomputing has solved everything — and it really hasn’t. I will give you a direct example of that.

A few years ago, a European project I was running won an award for the highest-accuracy simulation of water flowing through a piece of porous rock. It took over a day on the whole of the national service [to run the simulation]. We won a prize for this, and we only simulated 1 cubic centimeter of rock.

People think supercomputers can solve massive problems — and they can, but the universe and the world are complex. We’ve only scratched the surface of modeling and simulation.

This is an interesting moment in time for AI and supercomputing. For a lot of data analytics, we have at our fingertips for the very first time very, very large amounts of data. It’s very rich data from multiple sources, and supercomputers are getting much better at handling these large data sources.

The reason the whole AI story is really hot now, and lots of people are involved, is not actually about the AI itself. It’s about our ability to move data around and use our data to train AI algorithms. The link directly into supercomputing is because in our world we are good at moving large amounts of data around. The synergy now between supercomputing and AI is not to do with supercomputing or AI – it is to do with the data.

Gardner: Eng Lim, how do you see the evolution of supercomputing? Do you agree with Mark that we are only scratching the surface?

Top-down and bottom-up data crunching 

Goh: Yes, absolutely, and it’s an early scratch. It’s still very early. I will give you an example.

Solving games is important for developing methods and strategies for cyber defense. If you take the most recent game in which machines are beating the best human players, Go, it is much more complex than chess in terms of the number of potential combinations. The number of combinations is actually 10^171, if you comprehensively went through all the different combinations of that game.

You know how big that number is? Well, if you took all the computers in the world together, all the supercomputers, all of the computers in the data centers of the Internet companies, and ran them for 100 years, all you could get through is about 10^30, which is still very far from 10^171. So you can see, from this one game example alone, that we are very early in that scratch.
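
A rough sanity check of that 10^30 figure looks like this; the aggregate throughput assumed for "all the world's computers" is an illustrative number, not a measurement.

# Rough sanity check of the "10^30 operations in 100 years" figure.
# The aggregate throughput of "all the world's computers" is an assumed,
# illustrative number, not a measured one.
aggregate_ops_per_second = 1e21          # assume ~10^21 operations/second worldwide
seconds_in_100_years = 100 * 365.25 * 24 * 3600

total_ops = aggregate_ops_per_second * seconds_in_100_years
print(f"{total_ops:.1e} operations in 100 years")   # ~3.2e+30

# Even at that rate, exhaustively visiting 10^171 positions would take on the
# order of 10^140 times longer than 100 years -- brute force is hopeless.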

A second group of examples relates to new ways that supercomputers are being used. From ML to AI, there is now a new class of applications changing how supercomputers are used. Traditionally, most supercomputers have been used for simulation. That’s what I call top-down modeling. You create your model out of physics equations or formulas and then you run that model on a supercomputer to try and make predictions.

The new way of making predictions uses the ML approach. You do not begin with physics. You begin with a blank model and you keep feeding it data, the outcomes of history and past examples. You keep feeding data into the model, which is written in such a way that for each new piece of data that is fed in, a new prediction is made. If the accuracy is not high, you keep tuning the model. Over time — with thousands, hundreds of thousands, and even millions of examples — the model gets tuned to make good predictions. I call this the bottom-up approach.
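
As a purely illustrative sketch of that bottom-up loop (not any particular HPE or customer workload), a tiny logistic regression trained by gradient descent captures the feed-data, check-predictions, tune-and-repeat cycle:

# Minimal sketch of the "bottom-up" loop: start with a blank model, feed it
# examples, measure its predictions, and keep tuning. Purely illustrative --
# a tiny logistic regression trained by gradient descent.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))                   # historical examples (features)
true_w = np.array([1.5, -2.0, 0.5])
y = (X @ true_w + rng.normal(scale=0.5, size=1000) > 0).astype(float)  # outcomes

w = np.zeros(3)                                  # "blank" model
for step in range(500):                          # keep feeding data and tuning
    p = 1.0 / (1.0 + np.exp(-(X @ w)))           # current predictions
    grad = X.T @ (p - y) / len(y)                # how wrong, and in which direction
    w -= 0.5 * grad                              # tune the model a little

accuracy = ((p > 0.5) == y).mean()
print(f"tuned weights: {np.round(w, 2)}, training accuracy: {accuracy:.2%}")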

Now we have people applying both approaches. Supercomputers used traditionally in a top-down simulation are also employing the bottom-up ML approach. They can work in tandem to make better and faster predictions.

Supercomputers are therefore now being employed for a new class of applications in combination with the traditional or gold-standard simulations.

Gardner: Mark, are we also seeing a democratization of supercomputing? Can we extend these applications and uses? Is what’s happening now decreasing the cost, increasing the value, and therefore opening these systems up to more types of uses and more problem-solving?

Cloud clears the way for easy access

Parsons: Cloud computing is having a big impact on everything that we do, to be quite honest. We have all of our photos in the cloud, our music in the cloud, et cetera. That’s why EPCC last year got rid of its file server. All our data running the actual organization is in the cloud.

The cloud model is great inasmuch as it allows people who don’t want to operate and run a large system 100 percent of the time the ability to access these technologies in ways they have never been able to do before.

The other side of that is that there are fantastic software frameworks now that didn’t exist even five years ago for doing AI. There is so much open source for doing simulations.

It doesn’t mean that an organization like EPCC, which is a supercomputing center, will stop hosting large systems. We are still great aggregators of demand. We will still have the largest computers. But it does mean that, for the first time through the various cloud providers, any company, any small research group and university, has access to the right level of resources that they need in a cost-effective way.

Gardner: Eng Lim, do you have anything more to offer on the value and economics of HPC? Does paying based on use rather than a capital expenditure change the game?

More choices, more innovation 

Goh: Oh, great question. There are some applications and institutions with processes that work very well with a cloud, and there are some applications and processes that don’t. That’s part of the reason why you embrace both. In fact, at HPE we embrace the cloud and we also build on-premises solutions for our customers, like the ones in the Catalyst UK program.

We also have something that is a mix of the two. We call it HPE GreenLake, under which we acquire the system the customer needs and the customer pays per use. It is a software-defined experience with consumption-based economics.

These are some of the options we put together to allow choice for our customers, because needs and processes vary. Some customers are more CAPEX-oriented in the way they acquire resources, and others are more OPEX-oriented.
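
A simple, purely hypothetical comparison shows how the CAPEX-versus-consumption decision can swing with utilization; none of these figures reflect actual HPE GreenLake pricing.

# Illustrative-only comparison of CAPEX vs. pay-per-use economics.
# Every number here is hypothetical; it is not HPE GreenLake pricing.
capex_purchase = 1_000_000        # buy the system outright
monthly_capacity_units = 100      # capacity units installed
price_per_unit_month = 900        # consumption price per unit-month

def three_year_cost(avg_utilization: float) -> tuple[float, float]:
    months = 36
    consumption = months * monthly_capacity_units * avg_utilization * price_per_unit_month
    return capex_purchase, consumption

for util in (0.3, 0.6, 0.9):
    capex, opex = three_year_cost(util)
    better = "consumption" if opex < capex else "CAPEX"
    print(f"utilization {util:.0%}: CAPEX ${capex:,.0f} vs consumption ${opex:,.0f} -> {better}")

At low utilization the pay-per-use model comes out ahead; at sustained high utilization, owning the system can be cheaper, which is why offering both models matters.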

Gardner: Do you have examples of where some of the fruits of Catalyst, and some of the benefits of the ecosystem approach, have led to applications, use cases, and demonstrated innovation?

Parsons: What we are trying to do is show how easy ARM is to use. We have taken some really powerful, important code that runs every day on our big national services and have simply moved them across to ARM. Users don’t really understand or don’t need to understand they are running on a different system. It’s that boring.

We have picked up one or two problems with code that probably exist in the x86 version, but because you are running a new processor, it exposes it more, and we are fixing that. But in general — and this is absolutely the wrong message for an interview — we are proceeding in a very boring way. The reason I say that is, it’s really important that this is boring, because if we don’t show this is easy, people won’t put ARM on their next procurement list. They will think that it’s too difficult, that it’s going to be too much trouble to move codes across.

One of the aims of Catalyst, and I am joking, is definitely to be boring. And I think at this point in time we are succeeding.

More interestingly, though, another aim of Catalyst is about storage. The ARM systems around the world today still tend to do storage on x86. The storage will be running on Lustre or BeeGFS server, all sitting on x86 boxes.

We have made a decision to do everything on ARM, if we can. At the moment, we are looking at different storage software on ARM servers. We are looking at Ceph, at Lustre, and at BeeGFS, because unless you have the ecosystem running on ARM as well, people won’t think it’s as pervasive a solution as x86, or Power, or whatever.

The benefit of being boring 

Goh: Yes, in this case boring is good. Seamless movement of code across different platforms is the key. It’s very important for an ecosystem to be successful. It needs to be easy to develop code for, and it needs to be easy to port. And those are just as important with our commercial HPC systems for the broader HPC customer base.

In addition to customers writing their own code and compiling it well and easily to ARM, we also want to make it easy for the independent software vendors (ISVs) to join and strengthen this ecosystem.

Parsons: That is one of the key things we intend to do over the next six months. We have good relationships, as does HPE, with many of the big and small ISVs. We want to get them on a new kind of system, let them compile their code, and get some help to do it. It’s really important that we end up with ISV code on ARM, all running successfully.

Gardner: If we are in a necessary, boring period, what will happen when we get to a more exciting stage? Where do you see this potentially going? What are some of the use cases using supercomputers to impact business, commerce, public services, and public health?

Goh: It’s not necessarily boring, but it is brilliantly done. There will be richer choices coming to supercomputing. That’s the key. Supercomputing and HPC need to reach a broader customer base. That’s the goal of our HPC team within HPE.

Over the years, we have increased our reach to the commercial side, such as the financial industry and retailers. Now there is a new opportunity coming with the bottom-up approach of using HPC. Instead of building models out of physics, we train the models with example data. This is a new way of using HPC. We will reach out to even more users.

So, the success of our supercomputing industry depends on getting more users, and a greater diversity of users, to come on board.

Gardner: Mark, what are some of the exciting outcomes you anticipate?

Parsons: As we get more experience with ARM it will become a serious player. If you look around the world today, in Japan, for example, they have a big new ARM-based supercomputer that’s going to be similar to the ThunderX2 when it’s launched.

I predict in the next three or four years we are going to see some very significant supercomputers up at the X2 level, built from ARM processors. Based on what I hear, the next generations of these processors will produce a really exciting time.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

HPE and PTC join forces to deliver best manufacturing outcomes from the OT-IT productivity revolution

The next BriefingsDirect Voice of the Customer edge computing trends discussion explores the rapidly evolving confluence of operational technology (OT) and Internet of Things (IoT).

New advances in data processing, real-time analytics, and platform efficiency have prompted innovative and impactful OT approaches at the edge. We’ll now explore how such data analysis platforms bring manufacturers data-center caliber benefits for real-time insights where they are needed most.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To hear more about the latest capabilities in gaining unprecedented operational insights, we sat down with Riaan Lourens, Vice President of Technology in the Office of the Chief Technology Officer at PTC, and Tripp Partain, Chief Technology Officer of IoT Solutions at Hewlett Packard Enterprise (HPE). The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Riaan, what kinds of new insights are manufacturers seeking into how their operations perform?

Riaan Lourens

Lourens

Lourens: We are in the midst of a Fourth Industrial Revolution, which is really an extension of the third, where we used electronics and IT to automate manufacturing. Now, the fourth is the digital revolution, a fusion of technology and capabilities that blur the lines between the physical and digital worlds.

With the influx of these technologies, both hardware and software, our customers — and manufacturing as a whole, as well as the discrete process industries — are finding opportunities to either save or make more money. The trend is focused on looking at technology as a business strategy, as opposed to just pure IT operations.

There are a number of examples of how our customers have leveraged technology to drive their business strategy.

Gardner: Are we entering a golden age by combining what OT and IT have matured into over the past couple of decades? If we call this Industrial Revolution 4.0 (I4.0) there must be some kind of major opportunities right now.

Lourens: There are a lot of initiatives out there, whether it’s I4.0, Made in China 2025, or the Smart Factory Initiative in the US. By democratizing the process of providing value — be it with cloud capabilities, edge computing, or anything in between – we are inherently providing options for manufacturers to solve problems that they were not able to solve before.

If you look at it from a broader technology standpoint, in the past we had very large, monolith-like deployments of technology. If you look at it from the ISA-95 model, at Level 3 or Level 4, your manufacturing execution system (MES) deployments or large-scale enterprise resource planning (ERP) deployments were very large efforts that took many years. And the return on investment (ROI) the manufacturers saw would potentially pay off over many years.

The opportunity that exists for manufacturers today, however, allows them to solve problems that they face almost immediately. There is quick time-to-value by leveraging technology that is consumable. Then they can lift and drop and so scale [those new solutions] across the enterprise. That does make this an era the likes of which nobody has seen before.

Gardner: Tripp, do you agree that we are in a golden age here? It seems to me that we are able to both accommodate a great deal of diversity and heterogeneity of the edge, across all sorts of endpoints and sensors, but also bring that into a common-platform approach. We get the best of efficiency and automation.

Tripp Partain

Partain

Partain: There is a combination of two things. One, due to the smartphone evolution over the last 10 years, the types of sensors and chips that have been created to drive that at the consumer level are now at such reasonable price points you are able to apply these to industrial areas.

To Riaan’s point, the price points of these technologies have gotten really low — but the capabilities are really high. A lot of existing equipment in a manufacturing environment that might have 20 or 30 years of life left can be retrofitted with these sensors and capabilities to give insights and compute capabilities at the edge. The capability to interact in real-time with those sensors provides platforms that didn’t exist even five years ago. That combines with the right software capabilities so that manufacturers and industrials get insights that they never had before into their processes.

Gardner: How is the partnership between PTC and HPE taking advantage of this new opportunity? It seems you are coming from different vantage points but reinforcing one another. How is the whole greater than the sum of the parts when it comes to the partnership?

Partnership for progress, flexibility

Lourens: For some context, PTC is a software vendor. Over the last 30 years we targeted our efforts at helping manufacturers either engineer software with computer-aided design (CAD) or product lifecycle management (PLM). We have evolved to our growth areas today of IoT solution platforms and augmented reality (AR) capabilities.

The challenge that manufacturers face today is not just a software problem. It requires a robust ecosystem of hardware vendors, software vendors, and solutions partners, such as regional or global systems integrators.

The reason we work very closely with HPE as an alliance partner is because HPE is a leader in the space. HPE has a strong offering of compute capabilities — from very small gateway-level compute all the way through to hybrid technologies and converged infrastructure technologies.

Ultimately our customers need flexible options to deploy software at the right place, at the right time, and throughout any part of their network. We find that HPE is a strong partner on this front.

Gardner: Tripp, not only do we have lower cost and higher capability at the edge, we also have a continuum of hybrid IT. We can use on-premises micro-datacenters, converged infrastructure, private cloud, and public cloud options to choose from. Why is that also accelerating the benefits for manufacturers? Why is a continuum of hybrid IT — edge to cloud — an important factor?

Partain: That flexibility is required if you look at the industrial environments where these problems are occurring for our joint customers. If you look at any given product line where manufacturing takes place — no two regions are the same and no two factories are the same. Even within a factory, a lot of times, no two production lines are the same.

There is a wide diversity in how manufacturing takes place. You need to be able to meet those challenges with the customers and give them deployment options that fit each of those environments.

It’s interesting. Factories don’t do enterprise IT-like deployments, where every factory takes on new capabilities at the same time. It’s much more balanced in the way that products are made. You have to be able to have that same level of flexibility in how you deploy the solutions, to allow it to be absorbed the same way the factories do all of their other types of processes.

We have seen the need for different levels of IT to match up to the way they are implemented in different types of factories. That flexibility meets them where they are and allows them to get to the value much quicker — and not wait for some huge enterprise rollout, like what Riaan described earlier with ERP systems that take multiple years.

By leveraging new, hybrid, converged, and flexible environments, we allow a single plant to deploy multiple solutions and get results much quicker. We can also still work that into an enterprise-wide deployment — and get a better balance between time and return.

Gardner: Riaan, you earlier mentioned democratization. That jumped out at me. How are we able to take these advances in systems, software, and access and availability of deployments and make that consumable by people who are not data scientists? How are we able to take the results of what the technology does and make it actionable, even using things like AR?

Lourens: As Tripp described, every manufacturing facility is different. There are typically different line configurations, different programmable logic controller (PLC) configurations, different heterogeneous systems — be it legacy IT systems or homegrown systems — so the ability to leverage what is there is inherently important.

From a strategic perspective, PTC has two core platforms; one being our ThingWorx Platform that allows you to source data and information from existing systems that are there, as well as from assets directly via the PLC or by embedding software into machines.

We also have the ability to simplify and contextualize all of that information and make sense of it. We can then drive analytical insights out of the data that we now have access to. Ultimately we can orchestrate with end users in their different personas – be that the maintenance operator, supervisor, or plant manager — enabling and engaging with these different users through AR.
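
The following sketch illustrates that source-contextualize-analyze-orchestrate flow in miniature. The asset names, fields, and thresholds are hypothetical, and this is not the ThingWorx API, just the general shape of such a pipeline.

# Hedged sketch of the flow described above: source raw readings, contextualize
# them with asset metadata, derive a simple insight, and notify the right persona.
# The names here are hypothetical -- this is not the ThingWorx API.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Reading:
    asset_id: str
    temperature_c: float

ASSET_METADATA = {                      # contextualization layer
    "press-07": {"line": "Line 3", "owner": "maintenance", "temp_limit_c": 85.0},
}

def analyze(readings: list[Reading]) -> list[str]:
    alerts = []
    by_asset: dict[str, list[float]] = {}
    for r in readings:
        by_asset.setdefault(r.asset_id, []).append(r.temperature_c)
    for asset_id, temps in by_asset.items():
        meta = ASSET_METADATA.get(asset_id, {})
        if meta and mean(temps) > meta["temp_limit_c"]:
            alerts.append(f"{meta['line']}: {asset_id} running hot "
                          f"({mean(temps):.1f} C), notify {meta['owner']}")
    return alerts

readings = [Reading("press-07", 86.5), Reading("press-07", 88.0)]
print(analyze(readings) or ["all assets within limits"])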

Four capabilities for value

There are four capabilities that allow you to derive value. Ultimately our strategy is to bring that up a level and to provide capabilities and solutions to our end customers across four different areas.

One, we look at it from an enterprise operational intelligence perspective; the second is intelligent asset optimization; the third, digital workforce productivity, and fourth, scalable production management.

So across those four solution areas we can apply our technology together with that of our partners. We allow our customers to find use cases within those four solution areas that provide them a return on investment.

One example of that would be leveraging augmented work instructions. So instead of an operator going through a maintenance procedure by opening a folder of hundreds of pages of instructions, they can leverage new technology such as AR to guide the operator in process, and in situ, in terms of how to do something.

There are many use cases across those four solution areas that leverage the core capabilities across the IoT platform, ThingWorx, as well as the AR platform, Vuforia.

Gardner: Tripp, it sounds like we are taking the best of what people can do and the best of what systems and analytics can do. We also move from batch processing to real time. We have location-based services so we can tell where things and people are in new ways. And then we empower people in ways that we hadn’t done before, such as AR.

Are we at the point where we’re combining the best of cognitive human capabilities and machine capabilities?

Partain: I don’t know if we have gotten to the best yet, but probably the best of what we’ve had so far. As we continue to evolve these technologies and find new ways to look at problems with different technology — it will continue to evolve.

We are getting to the new sweet spot, if you will, of putting the two together and being able to drive advancements forward. One of the things that’s critical has to do with where our current workforce is.

A number of manufacturers I talk to — and I’ve heard similar from PTC’s customers and our joint customers — say they are at a tipping point in terms of the current talent pool, with many of those currently employed getting close to retirement age.

The next generation that’s coming in is not going to have the same longevity and the same skill sets. Having these newer technologies and bringing these pieces together, it’s not only a new matchup based on the new technology – it’s also better suited for the type of workers carrying these activities forward. Manufacturing is not going away, but it’s going to be a very different generation of factory workers and types of technologies.

The solutions are now available to really enhance those jobs. We are starting to see all of the pieces come together. That’s where both IoT solutions — but even especially AR solutions like PTC Vuforia — really come into play.

Gardner: Riaan, in a large manufacturing environment, even small, iterative improvements can make a big impact on the economics, the bottom line. What sort of categorical improvements in value are we looking at in the future? To what degree do we have an opportunity to make manufacturing more efficient, more productive, and more economically powerful?

Tech bridges skills gap, talent shortage

Lourens: If you look at it from the angle that Tripp just referred to, there are a number of increasing pressures across the board in the industrial markets, such as the worker skills gap. Products are also becoming more complex. Workspaces are becoming more complex. There are also increasing customer demands and expectations. Markets are just becoming more fiercely competitive.

But if you leverage capabilities such as AR — which provides augmented 3-D work instructions, expert guidance, and remote assistance, training, and demonstrations — that’s one area. If you combine that, to Tripp’s point, with the new IoT capabilities, then I think you can look at improvements such as reducing waste in processes and materials.

We have seen customers reduce unplanned downtime by 30 percent, which is a very common use case that we see manufacturers target. We have also seen energy consumption reduced by 3 to 7 percent at a very large ship manufacturer, a customer of PTC’s. And we’re generally looking at improving productivity by 20 to 30 percent.

By leveraging this technology in a meaningful way to get iterative improvements, you can then scale it across the enterprise very rapidly, and multiple use cases can become part of the solution. In these areas of opportunity, very rapidly you get that ROI.

Gardner: Do we have concrete examples to help illustrate how those general productivity benefits come about?

Joint solutions reduce manufacturing pains 

Lourens: A joint customer of HPE and PTC focuses on manufacturing and distributing reusable and recyclable food packaging containers. The company, CuBE Packaging Solutions, targeted predictive maintenance in manufacturing. Their goal is to have the equipment notify them when attention is needed. That allows them to service what they need when they need to and to focus on reducing unplanned downtime.

In this particular example, there are a number of technologies that play across both of our two companies. The HPE Nimble Storage capability and HPE Synergy technology were leveraged, as well as a whole variety of HPE Aruba switches and wireless access points, along with PTC’s ThingWorx solution platform.

The CuBE Packaging solution ultimately was pulled together through an ecosystem partner, Callisto Integration, which we both worked with very closely. In this use case, we not only targeted the plastic molding assets that they were monitoring, but the peripheral equipment, such as cooling and air systems, that may impact their operations. The goal is to avoid anything that could pause their injection molding equipment and plants.

Gardner: Tripp, any examples of use-cases that come to your mind that illustrate the impact?

Partain: Another joint customer that comes to mind is Texmark Chemicals in Galena Park, Texas. They are using a number of HPE solutions, including HPE Edgeline, our micro-datacenter offering. They are also using PTC ThingWorx and a number of other solutions.

They have very large pumps critical to the operation as they move chemicals and fluids in various stages around their plant in the refining process. Being able to monitor those in real time, predict potential failures before they happen, and use a combination of live data and algorithms to predict wear and tear allows them to determine the optimal time to make replacements and minimize downtime.
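
A minimal sketch of that kind of live check might look like the following; the readings and threshold are invented for illustration and do not represent Texmark's actual analytics.

# Minimal sketch of flagging unusual pump behavior from live readings with a
# z-score check. The data and threshold are hypothetical; this is not the
# actual algorithm used at Texmark.
import statistics

baseline_vibration = [2.1, 2.0, 2.2, 1.9, 2.1, 2.0, 2.3, 2.1]   # mm/s, normal operation
mu = statistics.mean(baseline_vibration)
sigma = statistics.stdev(baseline_vibration)

def check(reading_mm_s: float, threshold: float = 3.0) -> str:
    z = (reading_mm_s - mu) / sigma
    if z > threshold:
        return f"ALERT: vibration {reading_mm_s} mm/s is {z:.1f} sigma above baseline"
    return f"ok: vibration {reading_mm_s} mm/s (z = {z:.1f})"

for sample in (2.2, 2.4, 3.8):        # live readings
    print(check(sample))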

Such use cases are one of the advantages when customers come and visit our IoT Lab in Houston. From an HPE standpoint, not only do they see our joint solutions in the lab, but we can actually take them out to the Texmark location, and Texmark will host them and let them see these technologies working in real time at their facility.

As Riaan mentioned, we started at Texmark with condition monitoring, and now the solutions have moved into additional use cases — whether it’s mechanical integrity, video as a sensor, or employee-safety-related use cases.

We started with condition monitoring, proved that out, got the technology working, then took that framework — including best-in-class hardware and software — and continued to build and evolve on top of that to solve expanded problems. Texmark has been a great joint customer for us.

Gardner: Riaan, when organizations hear about these technologies and the opportunity for some very significant productivity benefits, when they understand that more-and-more of their organization is going to be data-driven and real-time analysis benefits could be delivered to people in their actionable context, perhaps using such things as AR, what should they be doing now to get ready?

Start small

Lourens: Over the last eight years of working with ThingWorx, I have noticed an initial tendency to focus on the technology rather than on specific use cases that provide real business value, and on working backward from that business value.

My recommendation is to target use cases that provide quick time-to-value. Apply the technology in a way that allows you to start small, and then iterate from there, versus trying to prove your ROI based on the core technology capabilities.

Ultimately understand the business challenges and how you can grow your top line or your bottom line. Then work backward from there, starting small by looking at a plant or operations within a plant, and then apply the technology across more people. That helps create a smart connected people strategy. Apply technology in terms of the process and then relative to actual machines within that process in a way that’s relevant to use cases — that’s going to drive some ROI.

Gardner: Tripp, what should the IT organization be newly thinking? Now, they are tasked with maintaining systems across a continuum of cloud-to-edge. They are seeing micro-datacenters at the edge; they’re doing combinations of data-driven analytics and software that leads to new interfaces such as AR.

How should the IT organization prepare itself to take on what goes into any nook and cranny in almost any manufacturing environment?

IT has to extend its reach 

Partain: It’s about doing all of that IT in places where typically IT has had little or no involvement. In many industrial and manufacturing organizations, as we go in and start having conversations, IT has usually stopped at the datacenter back-end. Now there’s lots of technology on the manufacturing side, too, but it has not typically involved the IT department.

One of the first steps is to get educated on the new edge technologies and how they fit into the overall architecture. They need to have the existing support frameworks and models in place that are instantly usable, but also work with the business side and frame up the problems they are trying to solve.

As Riaan mentioned, being able to say, “Hey, here are the types of technologies we in IT can apply to this that you [OT] guys haven’t necessarily looked at before. Here’s the standardization we can help bring so we don’t end up with something completely different in every factory, which runs up your overall cost to support and run.”

It’s a new world. And IT is going to have to spend much more time with the part of the business they have probably spent the least amount of time with. IT needs to get involved as early as possible in understanding what the business challenges are and getting educated on these newer IoT, AR, virtual reality (VR), and edge-based solutions. These are becoming the extension points of traditional technology and are the new ways of solving problems.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.
