How to migrate your organization to a more security-minded culture

Bringing broader awareness of security risks and building a security-minded culture within any public or private organization has been a top priority for years.

Yet halfway through 2021, IT security threats loom as large as ever — with major breaches and attacks costing tens of millions of dollars occurring nearly every week.

Why are the threat vectors not declining? Why, with all the tools and investment, are businesses still regularly being held up for ransom or having their data breached? To what degree are behavior, culture, attitude, and organizational dissonance to blame?

Join us here as BriefingsDirect probes into these more human elements of IT security with a leading chief information security officer (CISO).

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about adjusting the culture of security to make organizations more resilient, please welcome Adrian Ludwig, CISO at Atlassian. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Adrian, we are constantly bombarded with headlines showing how IT security is failing. Yet many people continue on their merry way — business as usual.

Are we now living in a world where such breaches amount to acceptable losses? Are people not concerned because the attacks are perceived as someone else’s problem?

Ludwig: A lot of that is probably true, depending on whom you ask and what their state of mind is on a given day. We’re definitely seeing a lot more than we’ve seen in the past. And there are some interesting twists to the language. What we’re seeing does not necessarily imply that there is more exploitation going on or that there are more problems — but it’s definitely the case that we’re getting a lot more visibility.

I think it’s a little bit of both. There probably are more attacks going on, and we also have better visibility.

Gardner: Isn’t security something we should all be thinking about, not just the CISOs?

Ludwig: It’s interesting how people don’t want to think about it. They appoint somebody, give them a title, and then say that person is now responsible for making security happen.

But the reality is, within any organization, doing the right thing — whether that be security, keeping track of the money, or making sure that things are going the way you’re expecting — is a responsibility that’s shared across the entire organization. That’s something that we are now becoming more accustomed to. The security space is realizing it’s not just about the security folks doing a good job. It’s about enabling the entire organization to understand what’s important to be more secure and making that as easy as possible. So, there’s an element of culture change and of improving the entire organization.

Gardner: What’s making these softer approaches — behavior, culture, management, and attitude – more important now? Is there something about security technology that has changed that makes us now need to look at how people think?

Ludwig: We’re beginning to realize that technology is not going to solve all our problems. When I first went into the security business, the company I worked for, a government agency, still had posters on the wall from World War II: “Loose lips sink ships.”

The idea of a security culture is not new. What is new is the broad-based awareness across organizations that any person could be subject to phishing, or any person could have their credentials taken — that those mistakes could originate at any place in the organization. It probably helps that we’ve all been locked in our houses for the last year, paying a lot more attention to the media, and hearing about the attacks that have been going on against governments, the hacking, and all those things. That has raised awareness as well.

Gardner: It’s confounding that people authenticate better in their personal lives. They don’t want their credit cards or bank accounts pillaged. They have a double standard when it comes to what they think about protecting themselves versus protecting the company they work for.

Data safer at home or work?

Ludwig: Yes, it’s interesting. We used to think enterprise security could afford to be more difficult from a user-experience standpoint, because people would put up with it at work.

But the opposite might be true, that people are more self-motivated in the consumer space and they’re willing to put up with something more challenging than they would in an enterprise. There might be some truth to that, Dana.

Gardner: The passwords I use for my bank account are long and complex, and the passwords I use when I’m in the business environment … maybe not so much. It gets us back to how you think and your attitude for improved security. How do we get people to think differently?

Ludwig: There’s a few different things to consider. One is that the security people need to think differently. It’s not necessarily about changing the behavior of every employee in the company. Some of it is about figuring out how to implement critical solutions that provide security without changing behavior.

Security people need to think differently. It’s not necessarily about changing the behavior of every employee in the company. It’s about implementing solutions that provide security without changing behavior. 

There is a phrase, the paved path or paved road: make the secure way the easy way to do something. When people started using YubiKey U2F [an open authentication standard that enables internet users to securely access any number of online services with a single security key] as a second factor for authentication, it was actually a lot easier than having to input your password all over the place — and it’s more secure.

That’s the kind of thing we’re looking for. How do we enable enhanced security while also having a better user experience? What’s true in authentication could be true in any number of other places as well.

Second, we need to focus on developers. We need to make the developer experience more secure and build more confidence and trustworthiness in the software we’re building, as well as in the types of tools used to build.

Developers find strength

Gardner: You brought up another point of interest to me. There’s a mindset that when you hand something off in an organization — it could be from app development into production, or from product design into manufacturing — people like to move on. But with security, that type of hand-off can be a risk factor.

Beginning with developers, how would you change that hand-off? Should developers be thinking about security in the same way that the IT production people do?

Ludwig: It’s tricky. Security is about having the whole system work the way that everybody expects it to. If there’s a breakdown anywhere in that system, and it doesn’t work the way you’re expecting, then you say, “Oh, it’s insecure.” But no one has figured out what those hidden expectations are.

A developer expects the code they write isn’t going to have vulnerabilities. Even if they make a mistake, even if there’s a performance bug, that shouldn’t introduce a security problem. And there are improvements being made in programming languages to help with that.

Certain languages are highly prone to security failures. I grew up using C and C++. Security wasn’t something that was even thought of in the design of those languages. With Java, a lot more security went into the design of the language, so it’s intrinsically safer. Does that mean no security issues can happen if you’re using Java? No.

Similar types of expectations exist at other places in the development pipeline as well.

Gardner: I suppose another shift has been from applications developed to reside in a data center, behind firewalls and security perimeters. But now — with microservices, cloud-native applications, and multiple application programming interfaces (APIs) being brought together interdependently — we’re no longer aware of where the code is running.

Don’t you have to think differently as a developer because of the way applications in production have shifted?

Ludwig: Yes, it’s definitely made a big difference. We used to describe applications as being monoliths. There were very few parts of the application that were exposed.

At this point, most applications are microservices. And that means across an application, there might be 1,000 different parts of the application that are publicly exposed. They all must have some level of security checks done on them to make sure that an input coming from the other side of the world is being handled correctly.
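
To make that concrete, here is a minimal, hypothetical Python sketch of the kind of boundary check each exposed endpoint needs: validate the type, size, and allowed values of an untrusted payload before any business logic touches it. The field names and limits are invented for illustration; this is not Atlassian’s code or any particular framework’s API.

```python
import json

# Hypothetical schema for one of many publicly exposed endpoints.
# Field names and limits are illustrative only.
MAX_NAME_LEN = 120
ALLOWED_ROLES = {"viewer", "editor", "admin"}


class ValidationError(ValueError):
    """Raised when an untrusted payload fails boundary checks."""


def parse_invite_request(raw_body: bytes) -> dict:
    """Validate an untrusted request body before it reaches business logic."""
    try:
        payload = json.loads(raw_body)
    except (UnicodeDecodeError, json.JSONDecodeError) as exc:
        raise ValidationError(f"body is not valid JSON: {exc}") from exc

    if not isinstance(payload, dict):
        raise ValidationError("top-level JSON value must be an object")

    name = payload.get("name")
    role = payload.get("role")

    if not isinstance(name, str) or not (0 < len(name) <= MAX_NAME_LEN):
        raise ValidationError("'name' must be a non-empty string under "
                              f"{MAX_NAME_LEN} characters")
    if role not in ALLOWED_ROLES:
        raise ValidationError(f"'role' must be one of {sorted(ALLOWED_ROLES)}")

    # Return only the fields we expect; ignore anything else the caller sent.
    return {"name": name, "role": role}


if __name__ == "__main__":
    print(parse_invite_request(b'{"name": "Dana", "role": "viewer"}'))
```

In practice, the same idea usually lives in a shared schema-validation layer so that each new microservice gets the check by default, rather than every team reimplementing it.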

So, yes, the design and the architecture have definitely exposed a lot more of the app’s surface. There’s been a bit of a race to make the tools better, but the architectures are getting more complicated. And I don’t know, it’s neck and neck on whether things are getting more secure or they’re getting less secure as these architectures get bigger and more exposed.

We have to think about that. How do we design processes to deal with that? How do you design technology, and what’s the culture that needs to be in place? I think part of it is having a culture of every single developer being conscious of the fact that the decisions they’re making have security implications. So that’s a lot of work to do.

Gardner: Another attitude adjustment that’s necessary is assuming that breaches are going to happen and stifling them as quickly as possible. It’s a little different mindset, but it makes sense to involve more people in looking for anomalies, people who are willing to have their data and behaviors examined for them.

Is there a needed cultural shift that goes with assuming you’re going to be breached and making sure the damage is limited?

Assume the worst to limit damage 

Ludwig: Yes. A big part of the cultural shift is being comfortable taking feedback from anybody that you have a problem and that there’s something that you need to fix. That’s the first step.

Companies should let anybody identify a security problem — and that could be anybody inside or outside of the company. Bug bounties are one example. We’re in a bit of a revolution in terms of enabling better visibility into potential security problems.

But once you have that sort of culture, you start thinking, “Okay. How do I actually monitor what’s going on in each of the different areas?” With that visibility, exposure, and understanding what’s going in and out of specific applications, you can detect when there’s something you’re not expecting. That turns out to be really difficult, if what you’re looking at is very big and very, very complicated.

Decomposing an application down into smaller pieces, being able to trace the behaviors within those pieces, and understanding which APIs each of those different microservices is exposing turns out to be really important.

If you combine decomposing applications into smaller pieces with monitoring what’s going on in them and creating a culture where anybody can find a potential security flaw, surface it, and react to it — those are good building blocks for having an environment where you have a lot more security than you would have otherwise.
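
As a rough illustration of that kind of per-service monitoring, here is a toy Python sketch, not any specific product’s approach: keep a short history of request counts per endpoint and flag a time window whose traffic deviates sharply from that endpoint’s own baseline.

```python
from collections import defaultdict
from statistics import mean, pstdev


class EndpointMonitor:
    """Toy per-endpoint anomaly check: flag request counts that deviate
    sharply from each endpoint's own recent history."""

    def __init__(self, history_windows: int = 24, threshold_sigmas: float = 3.0):
        self.history = defaultdict(list)      # endpoint -> recent counts
        self.history_windows = history_windows
        self.threshold_sigmas = threshold_sigmas

    def observe_window(self, counts: dict) -> list:
        """Record one time window of {endpoint: request_count} and return
        the endpoints whose traffic looks anomalous."""
        flagged = []
        for endpoint, count in counts.items():
            past = self.history[endpoint]
            if len(past) >= 8:                # need some baseline first
                mu, sigma = mean(past), pstdev(past)
                if sigma > 0 and abs(count - mu) > self.threshold_sigmas * sigma:
                    flagged.append(endpoint)
            past.append(count)
            del past[:-self.history_windows]  # keep a bounded history
        return flagged
```

Real systems add dimensions such as status codes, payload sizes, and caller identity, but the principle is the same: the smaller and better-traced each piece is, the easier it is to say what “expected” looks like.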

Gardner: Another shift we’ve seen in the past several years is the advent of big data. Not only can we manage big data quickly, but we can also do it at a reasonable cost. That has brought about machine learning (ML) and movement to artificial intelligence (AI). So, now there’s an opportunity to put another arrow in our quiver of tools and use big data ML to buttress our security and provide a new culture of awareness as a result.

Most applications are so complicated — and have been developed in such a chaotic manner — it’s impossible to understand what’s going on inside of them. Give the robots a shot and see if we can figure it out by turning the machines on themselves. 

Ludwig: I think so. There are a bunch of companies trying to do that, to look at the patterns that exist within applications, and understand what those patterns look like. In some instances, they can alert you when there’s something not operating the way that is expected and maybe guide you to rearchitecting and make your applications more efficient and secure.

There are a few different approaches being explored. Ultimately, at this point, most applications are so complicated — and have been developed in such a chaotic manner — that it’s impossible to understand what’s going on inside of them. That makes it the right time to give the robots a shot and see if we can figure it out by turning the machines on themselves.
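
One common way to “give the robots a shot,” offered here as an assumed example rather than a description of any vendor’s product, is to fit an unsupervised model on per-request features gathered from tracing and flag the outliers. The sketch below uses scikit-learn’s IsolationForest on synthetic data; the features (payload size, latency, parameter count) are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Illustrative per-request features: payload size (bytes), latency (ms),
# parameter count. In practice these would come from tracing/telemetry.
rng = np.random.default_rng(0)
normal = rng.normal(loc=[800, 40, 5], scale=[200, 10, 1], size=(5000, 3))
odd = rng.normal(loc=[50000, 400, 60], scale=[5000, 50, 5], size=(20, 3))
requests = np.vstack([normal, odd])

model = IsolationForest(contamination=0.01, random_state=0).fit(requests)
labels = model.predict(requests)   # 1 = looks normal, -1 = outlier
print(f"flagged {int((labels == -1).sum())} of {len(requests)} requests")
```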

Gardner: Yes. Fight fire with fire.

Let’s get back to the culture of security. If you ask the people in the company to think differently about security, they all nod their heads and say they’ll try. But there has to be a leadership shift, too. Who is in charge of such security messaging? Who has the best voice for having the whole company think differently and better about security? Who’s in charge of security?

C-suite must take the lead 

Ludwig: That’s a realization it took me several years to reach. If the security person keeps saying, “The sky is falling, the sky is falling,” people aren’t going to listen. They say, “Security is important.” And the others reply, “Yes, of course, security is important to you, you’re the security guy.”

If the head of the business, or the CEO, consistently says, “We need to make this a priority. Security is really important, and these are the people who are going to help us understand what that means and how to execute on it,” then that ends up being a really healthy relationship.

The companies I’ve seen turn themselves around to become good at security are the ones such as Microsoft, Google, or others where the CEO made it personal and said, “We’re going to fix this, and it’s my number-one priority. We’re going to invest in it, and I’m going to hire a great team of security professionals to help us make that happen. I’m going to work with them and enable them to be successful.”

Alternatively, there are companies where the CEO says, “Oh, the board has asked us to get a good security person, so I’ve hired this person and you should do what he says.” That’s the path to a disgruntled bunch of folks across the entire organization. They will conclude that security is just lip service, it’s not that important. “We’re just doing it because we have to,” they will say. And that is not where you want to end up.

Gardner: You can’t just talk the talk, you have to walk the walk and do it all the time, over and over again, with a loud voice, right?

Ludwig: Yes. And eventually it gets quieter. Eventually, you don’t need to have the top level saying this is the most important thing. It becomes part of the culture. People realize that’s just the way – and it’s not that it’s just the way we do things, but it is a number-one value for us. It’s the number-one thing for our customers, too, and so culture shift ends up happening.

Gardner: Security mindfulness becomes the fabric within the organization. But to get there requires change and changing behaviors has always been hard.

Are there carrots? Are there sticks? When the top echelon of the organization, public or private, commits to security, how do you then execute on that? Are there some steps that you’ve learned or seen that help people get incentivized — or whacked upside the head, so to speak, when necessary?

Talk the security talk and listen up

Ludwig: We definitely haven’t gone for “whacked upside the head.” I’m not sure that works for anybody at this point, but maybe I’m just a progressive when it comes to how to properly train employees.

What we have seen work is just talking about it on a regular basis, asking about the things that we’re doing from a security standpoint. Are they working? Are they getting in your way? Honestly, showing that there’s thoughtfulness and concern going into the development of those security improvements goes a long way toward making people more comfortable with following through on them.

A great example is … You roll out two-factor authentication, and then you ask, “Is it getting in the way? Is there anything that we can do to make this better? This is not the be-all and end-all. We want to improve this over time.”

That type of introspection by the security organization is surprising to some people. The idea that the security team doesn’t want it to be disruptive, that they don’t want to get in the way, can go a long way toward it feeling as though these new protections are less disruptive and less problematic than they might otherwise feel.

Gardner: And when the organization is focused on developers? Developers can be, you know … 

Ludwig: Ornery?

Gardner: “Ornery” works. If you can make developers work toward a fabric of security mindedness and culture, you can probably do it to anyone. What have you learned on injecting a better security culture within the developer corps?

Ludwig: A lot of it starts, again, at the top. You know, we have core values that invoke vulgarity to emphasize both how important they are and how simple they are.

One of Atlassian’s values is, “Don’t fuck the customer.” And as a result of that, it’s very easy to remember, and it’s very easy to invoke. “Hey, if we don’t do this correctly, that’s going to hurt the customer.” We can’t let that happen as a top-level value.

We also have “Open company, no-bullshit”. If somebody says, “I see a problem over here,” then we need to follow up on it, right? There’s not a temptation to cover it up, to hide it, to pretend it’s not an issue. It’s about driving change and making sure that we’re implementing solutions that actually fix things.

There are countless examples of a feature that was built, and we really want to ship it, but it turns out it’s got a problem and we can’t do it because that would actually be a problem for the customer. So, we back off and go from there.

How to talk about security

Gardner: Words are powerful. Brands are powerful. Messaging is powerful. What you just said made me think, “Maybe the word security isn’t the right word.” If we use the words “customer experience,” maybe that’s better. Have you found that? Is “security” the wrong word nowadays? Maybe we should be thinking about creating an experience at a larger level that connotes success and progress.

Ludwig: Super interesting. Apple doesn’t use the word “security” very much at all. As a consumer brand, what they focus on is privacy, right? The idea that they’ve built highly secure products is motivated by the users’ right to privacy and the users’ desire to have their information remain private. But they don’t talk about security.

Apple doesn’t use the word security very much at all. The idea that they’ve built highly secure products is motivated by the users’ right to privacy and the users’ desire to have their information remain private. But they don’t talk about security. 

I always thought that was a really interesting decision on their part. When I was at Google, we did some branding analysis, and we also came up with insights about how we talked about security. It’s a negative from a customer’s standpoint. And so, most of the references that you’ll see coming out of Google are to security and privacy. They always attach those two things together. It’s not a coincidence. I think you’re right that the branding is problematic.

Microsoft uses trustworthy, as in trustworthy computing. So, I guess the rest of us are a little bit slow to pick up on that, but ultimately, it’s a combination of security and a bunch of other things that we’re trying to enable to make sure that the products do what we’re expecting them to do.

Gardner: I like resilience. I think that cuts across these terms because it’s not just the security; it’s how well the product is architected and how well it performs. Is it hardened, in a sense, so that it performs in trying circumstances, even when there are issues of scale or outside threats, and so forth? How do you like “resilience,” and how does that notion of business continuity come into play when we are trying to improve the culture?

Ludwig: Yes, “resilience” is a pretty good term. It comes up in the pop psychology space as well. You can try to make your children more resilient. Those are the ones that end up being the most successful, right? It certainly is an element of what you’re trying to build.

A “resilient” system is one in which there’s an understanding that it’s not going to be perfect. It’s going to have some setbacks, and you need to have it recoverable when there are setbacks. You need to design with an expectation that there are going to be problems. I still remember the first time I heard about a squirrel shorting out a data center and taking down the whole data center. It can happen, right? It does happen. Or, you know, you get a solar event and that takes down computers.

There are lots of different things that you need to build to recover from accidental threats, and there are ones that are more intentional — like when somebody deploys ransomware and tries to take your pipeline offline.

Gardner: To be more resilient in our organizations, one of the things that we’ve seen with developers and IT operations is DevOps. Has DevOps been a good lesson for broader resilience? Is there something we can do with other silos in the organization to make them more resilient?

DevOps derives from experience

Ludwig: I think so. Ultimately, there are lots of different ways people describe DevOps, but I think about taking what used to be a very big thing and acknowledging that you can’t comprehend the complexity of that big thing. Choosing instead to embrace the idea that you should do lots of little things, in aggregate, and that they’re going to end up being a big thing.

And that is a core ethos of DevOps, that each individual developer is going to write a little bit of code and then they’re going to ship it. You’re going to do that over and over and over. You are going to do that very, very, very quickly. And they’re going to be responsible for running their own thing. That’s the operations part of the development. But the result is, over time, you get closer to a good product because you can gain feedback from customers, you’re able to see how it’s working in reality, and you’ll be able to get testing that takes place with real data. There are lots of advantages to that. But the critical part of it, from a security standpoint, is it makes it possible to respond to security flaws in near real-time.

Often, organizations just aren’t pushing code frequently enough to be able to know how to fix a security problem. They are like, “Oh, our next release window is 90 days from now. I can’t possibly do anything between now and then.” Getting to a point where you have an improvement process that’s really flexible and that’s being exercised every single day is what you get by having DevOps.

And so, if you think about that same mentality for other parts of your organization, it definitely makes them able to react when something unexpected happens.
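
As a hedged example of the kind of automated gate a frequently exercised pipeline can run on every push, the Python sketch below checks installed packages against a list of known-bad versions. The advisory data here is invented; a real pipeline would pull it from a vulnerability feed and run on every build, which is exactly the responsiveness that rapid, DevOps-style releases make practical.

```python
from importlib.metadata import distributions

# Invented example data: package name -> set of versions known to be bad.
# A real pipeline would pull this from an advisory feed, not a hard-coded dict.
KNOWN_BAD = {
    "examplelib": {"1.2.0", "1.2.1"},
}


def audit_installed_packages(known_bad: dict) -> list:
    """Return (name, version) pairs for installed packages on the bad list."""
    findings = []
    for dist in distributions():
        name = (dist.metadata["Name"] or "").lower()
        if dist.version in known_bad.get(name, set()):
            findings.append((name, dist.version))
    return findings


if __name__ == "__main__":
    hits = audit_installed_packages(KNOWN_BAD)
    if hits:
        raise SystemExit(f"vulnerable dependencies found: {hits}")
    print("no known-bad dependency versions installed")
```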

Gardner: Perhaps we should be looking to our software development organizations for lessons on cultural methods that we can apply elsewhere. They’re on the bleeding edge of being more secure, more productive, and they’re doing it through better communications and culture.

Ludwig: It’s interesting to phrase it that way, because it sounds highfalutin, as if they achieved it out of expertise and brilliance. What it really is, is the humbleness of realizing that the compiler tells you your code is wrong every single day. There’s a new user bug every single day. And eventually you get beaten down by all of those, and you decide you’re just going to react every single day instead of having this big thing build up.

So, yes, I think DevOps is a good example but it’s a result of realizing how many flaws there are more than anything highfalutin, that’s for sure.

Gardner: The software doesn’t just eat the world; the software can show the world the new, better way.

Ludwig: Yes, hopefully so.

Future best security practices

Gardner: Adrian, any thoughts about the future of better security, privacy, and resilience? How will ML and AI provide more analysis and improvements to come?

Ludwig: Probably the most important thing going on right now in the context of security is the realization by the senior executives and boards that security is something they need to be proponents for. They are pushing to make it possible for organizations to be more secure. That has fascinating ramifications all the way down the line.

If you look at the best security organizations, they know the best way to enable security within their companies and for their customers is to make security as easy as possible. You get a combination of the non-security executive saying, “Security is the number-one thing,” and at the same time, the security executive realizes the number-one thing to implement security is to make it as easy as possible to embrace and to not be disruptive.

And so, we are seeing faster investment in security that works because it’s easier. And I think that’s going to make a huge difference.

There are also several foundational technology shifts that have turned out to be very pro-security, which wasn’t why they were built — but it’s turning out to be the case. For example, in the consumer space the move toward the web rather than desktop applications has enabled greater security. We saw a movement toward mobile operating systems as a primary mechanism for interacting with the web versus desktop operating systems. It turns out that those had a fundamentally more secure design, and so the risks there have gone down.

The enterprise has been a little slow, but I see the shift away from behind-the-firewall software toward cloud-based and software as a service (SaaS) software as enabling a lot better security for most organizations. Eventually, I think it will be for all organizations.

Those shifts are happening at the same time as we have cultural shifts. I’m really optimistic that over the next decade or two we’re going to get to a point where security is not something we talk about. It’s just something built-in and expected in much the same way as we don’t spend too much time now talking about having access to the Internet. That used to be a critical stumbling block. It’s hard to find a place now that doesn’t or won’t soon have access.

Gardner: These security practices and capabilities become part-and-parcel of good business conduct. We’ll just think of it as doing a good job, and those companies that don’t do a good job will suffer the consequences and the Darwinian nature of capitalism will take over.

Ludwig: I think it will.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: TraceableAI.


Citrix research shows those ‘Born Digital’ can deliver superlative results — if leaders know what makes them tick

Self-awareness as an individual attribute provides the context to better understand others and to find common ground. But what about self-awareness of entire generations?

Are those born before the mass appeal and distribution of digital technology able to make the leap in their awareness of those who have essentially been Born Digital? Does the awareness gap extend to an even more profound disconnect between how today’s younger generations think and those more likely to be in the leadership positions in businesses?

Do the bosses really get their entry-level cohorts? And what, if any, impact has the COVID-19 pandemic had in amplifying these perception and cognition gaps?


Stay with us as BriefingsDirect explores new research into what makes the Born Digital generation tick. And we’ll also unpack ways that the gap between those born analog and more recently can be closed. 

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To learn more about the paybacks and advantages of understanding and embracing the Born Digital Effect, please welcome Tim Minahan, Executive Vice President of Business Strategy and Chief Marketing Officer at Citrix, and Amy Haworth, Senior Director of Employee Experience at Citrix. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tim, your latest research into what makes those Born Digital tick bucks conventional wisdom. Why did Citrix undertake this research in the first place?

Minahan: This is the first generation to grow up in an entirely digital world. In another decade or so, the success or failure of businesses – the entire global economy — will be in the hands of this Born Digital generation. 

We wanted to get inside their heads to see what makes them tick. That helps us to help our customers design their post-pandemic work environments and work models to best support the needs of this emerging group of leaders. 

The good news is that the Born Digital generation — those born after 1997 – is primed to deliver significant economic gains — some $1.9 trillion in corporate profits. But there certainly were some divergences in what they need to do that and how they view work.

Certainly, the pandemic has forever changed the way we all work, but it had a particularly profound impact on the Born Digital generation. Many of them began or had their early careers during the crisis. Remote and technology-driven work is all that they have ever known. Organizations need to be aware of these scenarios as they plan for the future so as to not leave out or disengage from this future generation of leaders.

The Born Digital difference

Gardner: Tim, like me, you were born analog. What surprised you most about this generation? 

Minahan: Certain key findings debunked a lot of the myths around what motivates these workers. Our research reveals a fundamental disconnect. First, job stability and work-life balance are what matter most to these employees.

Largely faced with an uncertain job environment, these younger workers are most focused on fundamental work factors like career stability and security. They also want to work in their own way. So, they are looking for a good work-life balance and more flexible work models.

And this is poorly understood by leaders, who — in the same research — said they think that, behind access to technology, the Born Digital generation most values opportunities for training and meaningful, impactful work. And, while those are important, they’re actually further down the list.

It turns out that job satisfaction, career stability and security, and a good work-life balance rank above compensation and the manager they work with.

And it’s become very clear — business leaders overestimate the appeal of the office. Ninety percent of Born Digital generation employees do not want to return to the office full-time post-pandemic. They prefer a more flexible or hybrid model, which is in stark contrast to the leadership where 58 percent believe that young workers will want to spend most or all their time working in an office. And this is a real Catch-22 that we’re all going to need to grapple with not years from now but in the next few months.

Gardner: Amy, does the way companies misinterpret their employees mean we need an employee experience reboot? 

Haworth: After reading this research, I felt an overwhelming sense of the importance of listening. That means getting really curious, and not only curious at big moments, like returning to the office or moving a vast number of employees out of offices — but getting curious all the time. 

If we design employee experience strategies around old assumptions, we’re missing each other in the workplace. Experiences are built in the day-to-day moments. We need to build the hybrid workplace around trust and inclusivity.

It was so clear to me that if we are designing employee experience strategies around old assumptions, we’re missing each other in the workplace. One of the frameworks for employee experience we use heavily at Citrix is the idea that experiences are built in the day-to-day moments. The touchpoints that employees have in the human space, the physical space, and the digital space. At Citrix, we have rethought and rebooted our own experience, coming back into a hybrid workplace, and built around the idea of trust and inclusivity.

And it’s interesting in this research how much trust in autonomy and in inclusivity emerged as critical components for the Born Digitals. Interestingly, that seems to extend into other generations as well. It became the framework for us and our approach to hybrid work — a philosophy — and a way to build the infrastructure for that. We wanted to record and cultivate trust in our own culture.

Work together, even when apart

Minahan: Visionary leaders are using this moment in time to rethink future of work models and turn their work environments to competitive advantage. A growing number of our customers are now trying to navigate through these situations in their post-pandemic work model planning.

One of the big topics is not just about where people work. I think some executives are operating on a false assumption: the belief that everyone wants to get back to the office full-time. Because the initial burst of productivity has declined, they’re using the last 15 months as a proxy for what remote work is.

Let’s be clear. The last year and a half has not been remote work, it’s been remote isolation. There needs to be a deeper level of understanding, as Amy said, as you move into your planning of what truly motivates people. You need to truly understand what’s going to attract the right talent and importantly what’s going to engage them and allow them to be successful in driving the business outcomes that you’re hoping for them to achieve.

Gardner: I find it not just a little ironic that we’re going to be seeking to better listen and better communicate when we’re not together in an office. There may be an inability to see the trees for the forest when you’re in the same office going through the same work patterns. Maybe breaking that pattern leads to even better communication. Amy?

Haworth: I think you are spot-on, Dana. One of the metaphors I’ve come to love is the idea of being at the ocean. If you’ve ever been anywhere where the tide comes in, at first you can’t see certain things. Then as the tide goes back out, there are tide pools full of life and vibrancy. They have been there all along, but you just couldn’t see them.

And that clearly emulates what is happening in organizations. These opportunities around hybrid work give us another chance to break the script. It helps us discover pieces in our organizations that may not have been working that great to start with and were causing friction all along.

Distributed work is happening. We’re having to be more explicit about the conversations around communication, collaboration, the expectations of each other, and what it means to help each other. Raising up those things anew is so important no matter the setting, no matter the workplace.

We’re now in a unique environment where we have this window of time to get very specific and not take it for granted – but to rebuild with intention. I truly hope that organizations are smart and do that with a concerted effort, with concerted energy, and then reap the rewards.

Distributed and dynamic workplaces

Minahan: Amy hits on two great points. One is there’s a real risk, as we move to hybrid work, that we create a culture of unintentional bias: office-first-focused folks may run meetings or adopt collaboration styles that preclude, or don’t include, folks working remotely.

It isn’t just about having the right technology in place, it’s also about having the right policies in place. The cultural aspects and expectations need to create a workplace that has inclusivity and equality — no matter where work is done. The reality is we are going to continue to work in a very distributed mode, where certain team members won’t all be in the same room.

You must harness technology, institute policies, and set the expectations that remote workers are still active participants in the process and that information flows freely. That means investing in collaborative work management solutions that create a secure digital collaboration environment. These solutions align people around similar goals and objectives and key results (OKRs) that have visibility into the status and into how projects are progressing, whether you’re in the office or somewhere else.

By understanding the dependencies between the dispersed teams and other actions that need to be done, you create the business outcomes you want. These are the types of tools and policies that support the hybrid work environments that people are so desperately trying to create right now.

Gardner: The last year and a half has given us an opportunity to change the playbook. What we’re hearing from the younger generations is they’re not opposed to that. As we seek to best change the playbook, what has the Citrix research told you? 

Born free to choose how to work

Minahan: We engaged with two external research partners on this, Coleman Parkes Research and Oxford Analytica. They surveyed and did qualitative interviews with more than 1,000 business leaders and more than 2,000 knowledge workers across 10 countries. To prepare for the future, it was very clear that leaders need to get a grip on the expectations and motivations of this Born Digital generation and adapt their work models, workplaces, and work practices to better cultivate them.

There were three primary findings. You should focus on where this generation wants to work. Prepare them for success in distributed work environments. Companies need to give employees freedom to choose where they work best. 

You should focus on where this generation wants to work. Prepare them for success in distributed work environments. Companies need to give employees freedom to choose where they work best.

To Amy’s point, it’s about fit and function. Sometimes it is important to come together in offices for collaboration and social and cultural connections. For other forms of work, it is optimal for individuals to have the space they need to think, be creative, and succeed. The Born Digital cohort wants and needs that flexibility — to have both work environments purpose-fit for the work they need to get done.

Secondly, beyond where they work, the five-day work week that has vestiges of the industrial revolution is probably not appropriate. Same for the 9 am to 5 pm workday. We’re finding that a lot of folks need to take a break mid-day to recharge. So instead of thinking about one big block of time, think about sub-blocks that allow workers to optimize the work-life balance and to recharge. That drives the best energy to do your best work. And this is a very clear finding from the study on how the Born Digital want to work.

The last part is about how they work. They want autonomy and the opportunity to work in a high-trust environment. They want to have the right tools to have transparency, collaborate, and drive connectivity with their co-workers and peers — even if they’re not physically in the room together. They want compensation that recognizes and rewards performance, as well as strong and visible leadership.
And so those are some of the key attributes that are important as companies design their new work models.

Gardner: Amy, we’re now talking about things like trust and motivation. It seems to me that those are universally important, whether you’re born with digital technology or not.

Why does the digital technology generation have a stronger concept around trust and motivation? Is there a connection between being Born Digital and those intrinsic-but-profound values?

Haworth: Think about how these Born Digital knowledge workers have come into the workforce. Most have had some level of college education. They were used to being very autonomous university students as they figured out their activity-based work habits. How do they get the most done? Where does work happen best — in the library, or in their dorm rooms, or apartments? 

The transition into an office is simply another step in developing a capability that they’ve been building for years. And so, if organizations are not leading with trust, transparency, autonomy, and allowing the digital tools they’ve come to expect and leverage in their educational path, that feels like there’s a massive disconnect. They’re not only undoing some of the amazing self-leadership that these Born Digitals have grown within themselves, but organizations are also depriving themselves of rethinking the ideas that the Born Digital generation is coming up with.

They are more accustomed than some of their predecessor generations to having seniority when it comes to using digital tools. And as we take an opportunity to flip our mindset, most of the time business leaders with more seniority are thinking, “Well, we have to groom this next generation of leaders.”

We may want to flip that mindset. Instead, think about how this new generation of leaders can groom the current leadership through things like reverse-mentorships or by sharing their voices. A manager with a team that includes Born Digitals can ask for their input and give permission for them to help shape the future of work together.

The organizations that do so are going to be much better positioned to realize the economic benefits of this talent, as Tim highlighted at the beginning. It’s latent talent until we unlock it. It will take a conscious decision by leadership to think about how they can best learn from this generation. They have a whole lot of things to teach us about what they envision as the future of work.

Increase your app-titude

Minahan: Amy brings up a good point that showed up in the research. That is dissonance between what older workers and leaders perceive as their experience and that of the Born Digital generation. That gap extends to both in the tools they use to do their work, as well as on how they communicate.

On the technology side, for example, young workers and leaders inhabit very different digital worlds. The research found that only 21 percent of business leaders use instant messaging apps such as Slack or WhatsApp for work, as compared with 81 percent of Born Digital employees.

If you want to build trust and communication, it’s very hard if you are not hanging out in the same places. Similarly, only 26 percent of business leaders like using these apps for work compared to 82 percent of the Born Digitals. Clearly, there are very different work habits and work tools that the Born Digitals prefer. As leaders look to cultivate, engage with, and recruit these Born Digital workers, they are going to need to understand what tools to use to communicate to foster the next generation of leaders.

Haworth: That statistic also caught my eye; that 26 percent of business leaders like using these apps for work compared with 82 percent of Born Digital workers. Every organization that I have spoken with in my career, honestly, but especially in the last 36 months, has talked about how hard it is to get messages out into the organization. And when you step back and say, “Well, how are you trying to communicate that message?” Oftentimes what I hear is a company intranet or email.

If we take something as incredibly important as communication and think about what could be applied from this data to specific segments — to communication, to leadership, to recruiting — this becomes a really salient point and very relevant for the planning and strategy of how to best reach these workers. 

In the employee experience space, one of the key ideas is not everybody is the same. Employee experience is built around personalization. Much of this research data is rich with aligning a strategy to personalize the experience for the Born Digitals for both their own benefit as well as the benefit of the organization. If people only take one thing from this report, to me that could be it right there.

Minahan: Yes, we could fill up a whole list of Slack conversations with that topic, absolutely!

Gardner: It strikes me that there is a propensity for these younger workers to naturally innovate. If you give them a task, they are ready and willing to figure out how to do it on their own. Older workers wait around to be told how to do things.

I wonder if this innovation propensity in the younger workers is an untapped, productivity boom, and that allowing people to do things their own way — as long as the job gets done — is a huge benefit to all.

Innovation generation integrates AI

Minahan: I think you are onto something there. With the do-it-yourself or YouTube generation, you see it in your own children, they teach themselves or find ways to figure things out — whether it’s a math problem or a hobby. 

Best-practice sharing as a benefit of mentoring applies to solving problems, to how to adapt, and to how to learn. Reverse mentoring, formal or informal, has a big opportunity to raise all boats.

Amy mentioned earlier the importance of reverse-mentoring, and that’s no joke. We first talked about it as teaching the older generation how to use technology. But there is a best-practice-sharing benefit as it applies to solving problems, to how to constantly adapt, and to continuing to learn. That reverse mentoring, whether it’s formal or informal, has a really big opportunity to lift all boats.

Gardner: As these folks innovate, we also now have the means to digitally track what they are doing. We can learn, on a process basis through the data, what works better, which allows us to improve our processes constantly and iteratively. Before, we were all told how to do things. We did it, and then we redid it, and not much changed.

Is there an opportunity here to create a new business style combining the data-driven capability to measure what people are doing as well as having them continue to do it in an experimental fashion?

Haworth: Yes, there is now an amazing opportunity to think about how machine learning (ML) and artificial intelligence (AI) can become a guide. As the data fuels insights, those insights can help make workers more effective and potentially far more productive.

When I think of reverse-mentoring, I not only would love to have a Born Digital mentor me on technology, but I also wouldn’t mind having an AI coach tap into places where I’m missing things. They could intervene and help me find a better way, to guide my work, or to think about who else might be interested in this topic. That could fuel an interesting discussion and help me make connections within my organization.

The Born Digital generation also specified in the Citrix report how distinct their experiences are when it comes to building new connections within organizations. Technology can play a role in that, not only by removing friction to give us time to connect with other human beings, but to also guide us to where those connections might be productive ones. And by productive, I don’t necessarily mean only output, but where it leads to idea generation, further innovation, scaling, and to creating coalitions and influence that lead to desirable outcomes.

Minahan: The world is moving so quickly today. Technology is advancing at such a rapid pace; it’s changing how we engage and do business. The growth-hack skillset for the individual career right now is those who can continuously learn and quickly adapt. That’s going to be critical.

We think the Born Digital generation has a lot to offer on that front, and they can teach the entire culture to support that. As Amy said, then augmenting that culture with AI or ML and other tools so that it becomes an institutional upgrade in skills, knowledge, and best-practice-sharing — so that everyone is absolutely performing at their best and everyone can begin to see around corners and adapt much quicker — that’s what’s going to create the high-performing, curious, and growth-oriented organizations of the future.

Gardner: How do we now take this research into action? How do we move from the observation that there is an awareness and perceptions gap — and maybe $1.9 trillion at stake — and go about self-evaluating and changing?

Listen fully to learn and lead

Haworth: Number one for me is to listen. And listening is hard for some. It requires time, but I will advocate that it doesn’t take a lot of time.

I have a little game to offer everyone. It’s called 5 for 5, which means talk to five people with five questions, and ask those five questions to all of them. Don’t defend. Don’t explain. Just get genuinely curious — and start with your Born Digitals. Most organizations have an easy way for leaders to find them. They might be on your team. They might be your kids, your nieces, your nephews, or a neighbor down the street. But spend a little bit of time just listening.

And from those five people, we know you are likely to find some themes, just those five conversations. And then put it on your calendar to do that at least once a quarter. These are the most interesting opportunities leaders have to inform strategy, to think about what’s next, and to learn something about a person that they may never have known before. 

Talk to five people and ask five questions of all of them. Start with your Born Digitals. You are likely to find themes that will inform strategy for leaders. 

We recently went through a cycle of this internally at Citrix as part of our hybrid philosophy building and to help develop the capabilities and tools we need in the organization for teams to be effective. I happened to be aligned to interview our Born Digital segment. Most of them were fairly new in their careers, and some had started during COVID.

My favorite question was, “If you were a manager right now, what would you be focused on?” Across the board, each of these interviewees, employees at our organization, said, “I would be very clear on what’s expected as far as working hours and when it’s okay to log off.”

That insight alone was validated in the research. Not only is this generation looking for job stability and security, but they are also very likely to not be the ones to ask for permission. They are looking around to figure out what’s okay and not okay.

We need to be clear about helping them define boundaries and to model those boundaries because Born Digital doesn’t mean born burnt out. We want to be sure that we keep the engagement, curiosity, innovation, creativity, and energy that the Born Digital population brings into organizations. We need to help them be successful by developing a sustainable pattern for work.  

Gardner: Tim, how do you see us closing the gap in the near term? 

Keep it simple to reduce daily din

Minahan: The convergence on the digital workspace demands tools that facilitate open and equitable collaboration and transparency across teams, whether they are in the office or working remotely. That includes driving continuous learning and best-practice sharing and achieving better business outcomes together. The physical workplace, meanwhile, needs to be fitted for purpose, for the times when it is important to come together and when we do benefit from that, whether it’s for collaborative projects or for the social aspects, such as creating that water-cooler dynamic.

As Amy just mentioned, which I think is so critically important, the ultimate success in this is going to require how you transition your culture. How do you make it okay for people to turn off in this always-connected world? How do you set norms on how we create an equitable, inclusive workplace for those that work in the office and those who work remotely?

Amy has put in place here at Citrix a very good framework. Similar to that, we are advising our customers to work across a Venn diagram of creating the right digital workplace, coupling it with the right purpose-built workspace, and then enabling it all with common policies and a culture that fosters equality, inclusiveness, and a focus on business outcomes.

Gardner: Is there something about the way technology itself has been delivered into the marketplace by vendors, including Citrix, that also needs to change? When we talk about culture, behavior, and motivations, that’s not the way that technology has been shaped and delivered. Is there a lesson from this research?

Haworth: Great employee experiences are shaped by empowerment of employees at a very personal level. When technology guides and automates work experiences to free the person up from the noise, the friction, of having to log-in to multiple tools, to context switch — all of that creates a draining effect on a human. The technology is now positioned to remove that friction by letting technology do what technology does best, which is to automate, guide, and organize based on personal preferences. 

New innovations from platforms such as Citrix help unite work all in one place to simplify tasks for the employee. It means there is more that the employee doesn’t have to think about. It’s seamless. That quality of interaction is a key lever in creating positive employee experiences, which lead to engagement and commitment to an organization in a world that is fraught right now with finding talent, with fighting attrition, and cultivating the right talent to innovate into the future. All of these elements really matter, and technology has a big role to play.  

Gardner: It sounds like automation is another word we should be using. We talked about using ML and AI to help, but the more you can automate, even though that sounds in conflict with allowing people to be flexible, is important.  

Minahan: Amy hit the nail on the head. It is about automating and guiding employees, but it’s also about removing the noise from their day. The dirty little secret in business is that each of the individual tools we have introduced into our workday added productivity on its own, helping us do our jobs, but collectively they have created such a cacophony of noise and distraction in our day that it’s actually frustrating employees.

If you think back to pre-pandemic, one of the dynamics was a Gallup study that showed employees were more disengaged than at any other time in history. Some 86 percent of employees felt they were disengaged at work because they were frustrated with the complexity of the work environment, all the tools, the apps, and chat channels that were interrupting them from doing their jobs. And that’s only been exacerbated throughout the pandemic as people don’t even have a clearly defined beginning and end to their days. And so it continues. 

One of the things we need to be thinking about as technologists, as we introduce technology or we build solutions, is how do you mute this noise? How do you automate some of the mundane tasks so that employees don’t need to switch context every two seconds? How do you create a unifying workspace that allows them to have access to all the tools, all the apps, all the content, all the business services they need to get their job done without needing to remember multiple passwords and go everywhere else?

And how do you begin to literally use things like AI and ML to guide them through their day, presenting them with the right information at the right time, not all the information, allowing them to execute tasks without needing to navigate multiple different environments? Then, how do you create a collaborative workspace that is equitable and provides transparency and a common place for folks to align around common goals, execute against projects, understand the status, no matter whether they are working in an office in a conference room together or are distributed to all corners of the globe? 

Gardner: For those older leaders or younger entrants into the workspace who want to learn more about this research, how can they? And what comes next for Citrix research? 

Design the future of work 

Minahan: Anyone can find this research on citrix.com. This research effort, as well as future research efforts, is part of an initiative we launched together with academia, research organizations, and governments well over a year ago, called the Work 2035 Project, to try to understand the skills, organizational structures, and role technology plays in shaping the future of work. The only difference is the future of work is arriving a heck of a lot faster than any of us ever expected.

The next big event is a thought leadership event we are hosting in October, based in part on the latest research effort: a virtual summit we are calling Fieldwork. We are going to bring together industry thought leaders for an open dialogue on how the future of work is evolving, and we will be providing more information on that as we get closer.

Gardner: Amy, for those organizations that want to learn more about the employee experience function, its governance, leadership, and management, what advice do you have should they be interested in setting up an employee experience organization?

Haworth: First, I say congratulations to those organizations for taking the time to invest in understanding what employee experience means in the context of their particular desired business outcomes and their particular culture.

Citrix published some very helpful research this year around the employee experience operating model. It can be found on citrix.com in the Fieldwork section. I personally have leveraged it in setting up some of the key pillars of our own philosophy and approach to employee experience. It is deep, and it will also be a great springboard for establishing both a mindset and the practices and programs that lead to exceptionally strong employee experiences.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Citrix.


How financial firms are blazing a trail to more predictive and resilient operations come what may

The last few years have certainly highlighted the need for businesses of all kinds to build up their operational resilience. With a rising tide of pandemic waves, high-level cybersecurity incidents, frequent technology failures, and a host of natural disasters — there’s been plenty to protect against.

As businesses become more digital and dependent upon end-to-end ecosystems of connected services, the responsibility for protecting critical business processes has clearly shifted. It’s no longer just a task for IT and security managers but has become top-of-mind for line-of-business owners, too.

Stay with us now as BriefingsDirect explores new ways that those responsible for business processes, specifically in the financial sector, are successfully leading the way in avoiding and mitigating the impact and damage from these myriad threats.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about the latest in rapidly beefing-up operational resilience by bellwether finance companies, BriefingsDirect welcomes Steve Yon, Executive Director of the EY ServiceNow Practice, and Sean Culbert, Financial Services Principal at EY. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Sean, how have the risks modern digital businesses face changed over the past decade? Why are financial firms at the vanguard of identifying and heading off these pervasive risks?

Culbert: Financial firms span a broad scope of types. The risks for a consumer bank, for example, are going to be different than the risks for an investment bank or a broker-dealer. But they all have some common threads. Those include the expectation to be always-on, at the edge, and able to get to your data in a reliable and secure way.

There’s also the need for integration across the ecosystem. Unlike product sets before, such as in retail brokerage or insurance, customers expect to be brought together in one cohesive services view. That includes more integration points and more application types.

This all needs to be on the edge and always-on, even as it includes, increasingly, reliance on third-party providers. They need to walk in step with the financial institutions in a way that they can ensure reliability. In certain cases, there’s a learning curve involved, and we’re still coming up that curve.

It remains a shifting set of expectations to the edge. It's different by category, but the themes of integrated product lines — and being able to move across those product lines and integrate with third parties — have certainly created complexity.

Gardner: Steve, when you're a bank or a financial institution that finds itself in the headlines for bad things, that is immediately damaging to your reputation and your brand. How are banks and other financial organizations trying to be rapid in their response in order to keep out of the headlines?

Interconnected, system-wide security

Yon: It’s not just about having the wrong headline on the front cover of American Banker. As Sean said, the taxonomy of all these services is becoming interrelated. The suppliers tend to leverage the same services.

Products and services tend to cross different firms. The complexity of the financial institution space right now is high. If something starts to falter — because everything is interconnected — it could have a systemic effect, which is what we saw several years ago that brought about Dodd-Frank regulations.

So having a good understanding of how to measure and get telemetry on that complex makeup is important, especially in financial institutions. It’s about trust. You need to have confidence in where your money is and how things are going. There’s a certain expectation that must happen. You must deal with that despite mounting complexity. The notion of resiliency is critical to a brand promise — or customers are going to leave.

One, you should contain your own issues. But the Fed is going to worry about it if it becomes broad because of the nature of how these firms are tied together. It’s increasingly important — not only from a brand perspective of maintaining trust and confidence with your clients — but also from a systemic nature; of what it could do to the economy if you don’t have good reads on what’s going on with support of your critical business services.

Gardner: Sean, the words operational resilience come with a regulatory overtone. But how do you define it?

The operational resilience pyramid

Culbert: We begin with the notion of a service. Resilience is measured, monitored, and managed around the availability, scalability, reliability, and security of that service. Understanding what the service is from an end-to-end perspective, how it enters and exits the institution, is the center of our universe.

Around that we have inbound threats to operational resilience. From the threat side, you want the capability to withstand a robust set of inbound threats. And for us, one of the important things that has changed in the last 10 years is the sophistication and complexity of the threats. And the prevalence of them, quite frankly.

If you look at the four major threat categories we work with — weather, cyber, geopolitical, and pandemics — pick any one of those and there has been a significant change in those categories. We have COVID, we have proliferation of very sophisticated cyber attacks that weren’t around 10 years ago, often due to leaks from government institutions. Geopolitically, we’re all aware of tensions, and weather events have become more prevalent. It’s a wide scope of inbound threats.

And on the outbound side, businesses need the capability to not only report on those things, but to make decisions about how to prevent them. There's a hierarchy in operational resilience. Can you remediate it? Can you fix it? Then, once it's been detected, can you minimize the damage? And at the top of the pyramid, can you prevent it before it hits?

So, there’s been a broad scope of threats against a broader scope of service assets that need to be managed with remediation. That was the heritage, but now it’s more about detection and prevention.

Gardner: And to be proactive and preventative, operational resilience must be inclusive across the organization. It’s not just one group of people in a back office somewhere. The responsibility has shifted to more people — and with a different level of ownership.

What’s changed over the past decade in terms of who’s responsible and how you foster a culture of operational resiliency?

Bearing responsibility for services

Culbert: The anchor point is the service. And services are processes: It’s technology, facilities, third parties, and people. The hard-working people in each one of those silos all have their own view of the world — but the services are owned by the business. What we’ve seen in recognition of that is that the responsibility for sustaining those services falls with the first line of business [the line of business interacting with consumers and vendors at the transaction level].

Yon: There are a couple of ways to look at it. One, as Sean was talking about, the lines of defense and the evolution of risk have been divvied up. The responsibilities have had line-of-sight ownership over certain sets of accountabilities. But you also have triangulation from others needing to inspect and audit those things as well.

The time is right for the new type of solution that we’re talking about now. One, because the nature of the world has gotten more complex. Two, the technology has caught up with those requirements.

The move within the tech stack has been to become more utility-based, service-oriented, and objectified. The capability to get signals on how everything is operating, and its status within that universe of tech, has become a lot easier. And with the technology now being able to integrate across platforms and operate at the service level — versus at the component level — it provides a view that would have been very hard to synthesize just a few years ago.

What we’re seeing is a big shot in the arm to the power of what a typical risk resilience compliance team can be exposed to. They can manage their responsibilities at a much greater level.

Before, they would have had to develop business continuity strategies and plans to know what to do in the event of a fault or a disruption. And when something happens, the three-ring binders come out, the war room gets assembled, and people start to figure out what to do. They start running the playbook.

What we’re seeing is a big shot in the arm to the power of what a typical risk resilience compliance team can be exposed to. They can manage their responsibilities at a mch greater level.

The problem with that is that while they're running the playbook, the fault has occurred, the disruption has happened, and the clock is ticking for all those impacts. The second-order consequences of the problem are starting to amass with respect to value destruction, brand and reputational destruction, as well as whatever customer impacts there might be.

But now, because of technology and a move toward Internet of Things (IoT) thinking across assets, people, facilities, and third-party services, those components can self-declare their state. That data can be synthesized to say, “Okay, I can start to pick up a signal that’s telling me that a fault is inbound.” Or something looks like it’s falling out of the control thresholds that have been set.

That tech now gives me the capability to get out in front of something. That would have been almost unheard-of years ago. The nexus of tech, need, and complexity is hitting right now. That means we're moving and pivoting to a new type of solution rising out of the field.

Gardner: You know, many times we’ve seen such trends happen first in finance and then percolate out to the rest of the economy. What’s happened recently with banking supervision, regulations, and principles of operational resilience?

Financial sector leads the way

Yon: There are similar forms of pressure coming from all regulatory-intense industries. Finance is a key one, but there’s also power, utilities, oil, and gas. The trend is happening primarily first in regulatory-intensive industries.

Culbert: A couple of years ago, the Bank of England and the Prudential Regulation Authority (PRA) put out a consultation paper that was probably the most prescriptive. We have the equivalent over here in the US around expectations for operational resiliency, and that has just made its way into policy or law. For the most part, on a principles basis, we all share a common philosophy in terms of what's prudent.

A lot of the major institutions, the ones we deal with, have looked at those major tenets in these policies and have said they will be practiced. And there are four fundamental areas that the institutions must focus on.

One is, can it declare and describe its critical business services? Does it have threshold parameter logic assigned to those services so that it knows how far it can go before it sustains damage across several different categories? Are the assets that support those services known and mapped? Are they in a place where we can point to them and to their health? If there's an incident, can they collaborate around sustaining those assets?

As I said earlier, those assets generally fall into a few categories: people, facilities, third parties, and technology. And, finally, do you have the tools in place to keep those services within those tolerance parameters, and alerting systems to let you know which of the assets may be failing you and whether the services are at risk?

That’s a lay-person, high-level description of the Bank of England policy on operational risks for today’s Financial Management Information Systems (FMIS). Thematically most of the institutions are focusing on those four areas, along with having credible and actionable testing schemes to simulate disruptions on the inbound side.

In the US, Dodd-Frank mandated that institutions declare which of those services could disrupt critical operations and, if those operations were disrupted, could they in turn disrupt the general economy. The operational resilience rules and regulations fall back on that. So, now that you know what they are, can you risk-rate them based on the priorities of the bank and its counterparties? Can you manage them correctly? That’s the letter-of-the-law-type regulation here. In Japan, it’s more credential-based regulation like the Bank of England. It all falls into those common categories.

Gardner: Now that we understand the stakes and imperatives, we also know that the speed of business has only increased. So has the speed of expectations for end consumers. The need to cut time to discovery of the problems and to find root causes also must be as fast as possible.

How should banks and other financial institutions get out in front of this? How do we help organizations move faster to their adoption, transform digitally, and be more resilient to head off problems fast?

Preventative focus increases

Yon: Once there’s clarity around the shift in the goals, knowing it’s not good enough to just be able to know what to do in the event of a fault or a potential disruption, the expectation becomes the proof to regulatory bodies and to your clients that they should trust you. You must prove that you can withstand and absorb that potential disruption without impact to anybody else downstream. Once people get their head around the nature of the expectation-shifting to being a lot more preventative versus reactive, the speeds and feeds by which they’re managing those things become a lot easier to deal with.

You’d get the phone call at 3 a.m. that a critical business service was down. You’d have the tech phone call that people are trying to figure out what happened. That lack of speed killed because you had to figure a lot of things out while the clock was ticking. But now, you’re allowing yourself time to figure things out.

Back when I was running the technology at a super-regional bank, you'd get the phone call at 3 a.m. that a critical business service was down. You'd have the tech phone call where people were trying to figure out what happened, because the help desk had started to notice that a number of clients and customers were complaining. The clock had been ticking before 3 a.m., when I got the call. And by that time, those clients were upset.

Yet we were spending our time trying to figure out what happened and where. What’s the overall impact? Are there other second-order impacts because of the nature of the issue? Are other services disrupted as well? Again, it gets back to the complexity factor. There are interrelationships between the various components that make up any service. Those services are shared because that’s how it is. People lean on those things — and that’s the risk you take.

Before, the lack of speed literally killed because you had to figure a lot of those things out while the clock was ticking and the impact was going on. But now, you’re allowing yourself time to figure things out. That’s what we call a decision-support system. You want to alert ahead of time to ensure that you understand the true blast area of what the potential destruction is going to be.

Secondly, can I spin up the right level of communications so that everybody who could be affected knows about it? And thirdly, can I now get the right people on the call — versus hunting and pecking to determine who has a problem on the fly at 3 a.m.?

The nature of having speed is that it buys time for firms to deal with an issue intelligently, versus taking a shotgun approach and not truly understanding the nature of the impact until the next day.

Gardner: Sean, it sounds like operational resiliency is something that never stops. It's an ongoing process. That's what buys you the time because you're always trying to anticipate. Is that the right way to look at it?

Culbert: It absolutely is the way to look at it. A time objective may be specific to the type of service, and obviously it's going to be different from a consumer bank to a broker-dealer. You will have a time objective attached to a service, but is that a critical service that, if disrupted, could further disrupt critical operations that could then disrupt the real economy? That's come into focus in the last 10 years. It has forced people to think through: If you were a broker-dealer and you couldn't meet your hedge fund positions, or if you were a consumer bank and you couldn't get folks their paychecks, does that put people in financial peril?

These involve very different processes and have very different outcomes. But each has a tolerance of fill-in-the-blank time. So now it's more a matter of being accountable for those times. There are two things: There's the customer expectation that you won't reach those tolerances and that you'll be able to meet the time objective that meets the customers' needs.

And the second is that technology has made the domino, or contagion, effect of one service tipping over another more manageable. So now it's not just, “Is your service ready to go within its objective of half an hour?” It's about the knock-on effect to other services as well.

So, it’s become a lot more correlated, and it’s become regional. Something that might be a critical service in one business, might not be in another — or in one region, might not be in another. So, it’s become more of a multidimensional management problem in terms of categorically specific time objectives against specific geographies, and against the specific regulations that overhang the whole thing.

Gardner: Steve, you mentioned earlier about taking the call at 3 a.m. It seems to me that we have a different way of looking at this now — not just taking the call but making the call. What’s the difference between taking the call and making the call? How does that help us prepare for better operation resiliency?

Make the call, don’t take the call

Yon: It’s a fun way of looking a day in the life of your chief resiliency officer or chief risk officer (CRO) and how it could go when something bad happens. So, you could take the call from the CEO or someone from the board as they wonder why something is failing. What are you going to do about it?

You’re caught on your heels trying to figure out what was going on, versus making the call to the CEO or the board member to let them know, “Hey, these were the potential disruptions that the firm was facing today. And this is how we weathered through it without incident and without damaging service operations or suffering service operations that would have been unacceptable.”

We like to think of it as not only trying to prevent the impact to the clients but also the possibility of a systemic problem. It could potentially increase the lifespan of a CRO by showing they can be responsible for the firm's up-time, versus just answering questions post-disruption. It provides a little bit of levity, but it's also true that there are consequences not just for the clients, but also for those people responsible for that function within the firm.

Gardner: Many leading-edge organizations have been doing digital transformation for some time. We’re certainly in the thick of digital transformation now after the COVID requirements of doing everything digitally rather than in person.

But when it comes to finance and the services that we’re describing — the interconnections in the interdependencies — there are cyber resiliency requirements that cut across organizational boundaries. Having a moat around your organization, for example, is no longer enough.

What is it about the way that ServiceNow and EY are coming together that makes operational resiliency possible as an ongoing process?

Digital transformation opens access

Yon: There are two components. You need to ask yourself, “What needs to be true for the outcome that we're talking about to be valid?” From a supply side, what needs to be true is, “Do I have good signal and telemetry across all the components, assets, and resources that could pose a threat, or cause a threat, of a down service?”

With the move to digital transformation, more of the assets and resources that compose any organization are now able to be accessed. That means the state of any particular asset, in terms of its preferential operating model, is going to be known. I need to have that data, and that's what digital transformation provides.

Secondly, I need a platform that has wide integration capabilities and that has workflow at its core. Can I perform business logic and conditional synthesis to interpret the signals that are coming from all these different systems?

That’s what’s great about ServiceNow — there hasn’t been anything that it hasn’t been able to integrate with. Then it comes down to, “Okay, do I understand the nature of what it is I’m truly looking for as a business service and how it’s constructed?” Once I do that, I’m able to capture that control, if you will, determine its threshold, see that there’s a trigger, and then drive the workflows to get something done.

For a hypothetical example, say we've had an event and we're losing the trading floor in city A. I then know that I need to bring city B and its employees online and make them active so I can get that service back up and running. ServiceNow can drive all of that automatically, within the Now Platform itself, or prompt a human to provide the approvals or notifications that drive the workflows as part of your business continuity plan (BCP) going forward. You will know what to do by being able to detect and interpret the signals and then, based on that, act on them.
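
To make that trigger-and-workflow pattern concrete, here is a minimal, hypothetical Python sketch. It is not ServiceNow's actual API; the control name, the threshold value, and the failover action are illustrative assumptions only.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Control:
    """A control threshold attached to a monitored service signal."""
    name: str
    threshold: float            # minimum acceptable value for the signal
    action: Callable[[], None]  # continuity workflow to drive when the control trips

def failover_trading_floor() -> None:
    # Hypothetical continuity action: activate the standby site and notify staff.
    print("Activating trading site B, notifying employees, opening approval task")

controls = [
    Control(name="site_a_trading_desk_availability", threshold=0.95,
            action=failover_trading_floor),
]

def evaluate(signals: dict[str, float]) -> None:
    """Compare incoming telemetry against each control and drive its workflow on a breach."""
    for control in controls:
        value = signals.get(control.name)
        if value is not None and value < control.threshold:
            print(f"Control '{control.name}' breached: {value:.2f} < {control.threshold}")
            control.action()

# Example telemetry tick: site A availability has dropped below tolerance.
evaluate({"site_a_trading_desk_availability": 0.40})
```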

That’s what ServiceNow brings to make the solution complete. I need to know what that service construction is and what it means within the firm itself. And that’s where EY comes to the table, and I’ll ask Sean to talk about that.

Culbert: ServiceNow brings to the table what we need to scale and integrate in a logical and straightforward way. Without having workflows that are cross-silo and cross-product at scale — and with solid integration of capabilities — this just won’t happen.

When we start talking about the signals from everywhere against all the services — it’s a sprawl. From an implementation perspective, it feels like it’s not implementable.

The regulatory burden requires focus on what’s most important, and why it’s most important to the market, the balance sheet, and the customers. And that’s not for the 300 services, but for the one or two dozen services that are important. Knowing that gives us a big step forward by being able to scope out the ServiceNow implementation.

And from there, we can determine what dimensions associated with that service we should be capturing on a real-time basis. To progress from remediation to detection and on to prevention, we must be judicious about which signals we're tracking. We must be correct.

We have the requirement and obligation to declare and describe what is critical using a scalable and integrable technology, which is ServiceNow. That’s the big step forward.

Yon: The Now platform also helps us to be fast. If you look under the hood of most firms, you’ll find ServiceNow is already there. You’ll see that there’s already been work done in the risk management area. They already know the concepts and what it means to deal with policies and controls, as well as the triggers and simulations. They have IT and other assets under management, and they know what a configuration management database (CMDB) is.

These are all accelerants that not only provide scale to get something done but provide speed because so many of these assets and service components are already identified. Then it’s just a matter of associating them correctly and calibrating it to what’s really important so you don’t end up with a science fair integration project.

Gardner: What I’m still struggling to thread together is how the EY ServiceNow alliance operational resiliency solution becomes proactive as an early warning system. Explain to me how you’re able to implement this solution in such a way that you’re going to get those signals before the crisis reaches a crescendo.

Tracking and recognizing faults

Yon: Let’s first talk about EY and how it comes with an understanding from the industry of what good looks like with respect to what a critical business service needs to be. We’re able to hone down to talking about payments or trading. This maps the deconstruction of that service, which we also bring as an accelerant.

We know what it looks like — all the different resources, assets, and procedures that make that critical service active. Then, within ServiceNow, those assets are managed and exposed. We can associate those things in the tool relatively quickly. We can identify the signal that we're looking to calibrate on.

Then, based on what ServiceNow knows how to do, I can put a control parameter with a threshold on this service or component. It then gives me an indication of whether something might be approaching a fault condition. We basically look at all the different governance, risk management, and compliance (GRC) leading indicators and put telemetry around those things, so we can see when, for example, trading volume is starting to drop off.

Long before it drops to zero, is there something going on elsewhere? It delivers up all the signals about the possible dimensions that can indicate something is not operating per its normal expected behavior. That data is then captured, synthesized, and displayed either within ServiceNow or it is automated to start running its own tests to determine what’s valid.

But at the very least, the people responsible are alerted that something looks amiss. It’s not operating within the control thresholds already set up within ServiceNow against those assets. This gives people time to then say, “Okay, am I looking at a potential problem here? Or am I just looking at a blip and it’s nothing to worry about?”
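
As a rough illustration of that kind of threshold telemetry (a hedged sketch with made-up numbers, not the actual GRC controls), a rolling baseline on trading volume can raise an alert long before the volume drops to zero:

```python
from collections import deque

def volume_monitor(window: int = 12, drop_ratio: float = 0.5):
    """Return a checker that flags observations falling far below the recent baseline."""
    history: deque = deque(maxlen=window)

    def check(observed: float) -> bool:
        baseline = sum(history) / len(history) if history else observed
        history.append(observed)
        # Flag when the observation is well below the rolling average,
        # i.e. approaching a fault condition before volume hits zero.
        return observed < baseline * drop_ratio

    return check

check = volume_monitor()
for minute, volume in enumerate([980, 1010, 995, 970, 1005, 410]):
    if check(volume):
        print(f"Minute {minute}: trading volume {volume} is outside the control threshold")
```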

Gardner: It sounds like there’s an ongoing learning process and a data-gathering process. Are we building a constant mode of learning and automation of workflows? Do we do get a whole greater than the sum of the parts after a while?

Culbert: The answer is yes and yes. There’s learning and there’s automation. We bring to the table some highly effective regulatory risk models. There’s a five-pillar model that we’ve used where market and regulatory intelligence feeds risk management, surveillance, analysis, and ultimately policy enforcement.

And the five pillars work together within ServiceNow, and within the business processes of the organization. That's where we get market and regulatory intelligence feeding risk, surveillance, analysis, and enforcement. That workflow is the differentiator, allowing rapid understanding of whether it's an immediate risk or a concentrating risk.

And obviously, no one is going to be 100 percent perfect, but having context and perspective on the origin of the risk helps determine whether it’s a new risk — something that’s going to create a lot of volatility — or whether it’s something the institution has faced before.

We rationalize that risk — and, more importantly, rationalize the lack of a risk — to know at the onset if it's a false positive. It's an essential market and regulatory intelligence mechanism. Are they feeding us only the stuff that's really important?

Our risk models tell us that. That risk model usually takes on a couple of different flavors. One flavor is similar to a FICO score. So, have you seen the risk? Have you seen it before? It is characterizable by the words coming from it and its management in the past.

And then some models are more akin to a VaR (value at risk) calculator. What kind of volatility is this risk going to bring to the bank? Is it somebody that's recreationally trying to get into the bank, or is it a state actor?

Once the false-positive gets escalated and disposed of — if it’s, in fact, a false positive — are we able to plug it into something robust enough to surveil for where that risk is headed? That’s the only way to get out in front of it.

The next phase of the analysis says, “Okay, who should we talk to about this? How do we communicate that this is bigger than a red box, much bigger than a red box, a real crisis-type risk? What form does that communication take? Is it a full-blown crisis management communication? Is it a standing management communication or protocol?”

And then ultimately, this goes to ServiceNow, so we take that affected function and very quickly understand the health or the resiliency of other impacted functions. We use our own proprietary model. It's a military model used for nuclear power plants, and it helps to shift from primary states to alternative states, as well as to contingency and emergency states.

At the end, the person who oversees policy enforcement must gain the tools to understand whether they should be fixing the primary state issue or moving on from it. They must know when to step aside or shift into an emergency state.

From our perspective, it is constant learning. But there are fundamental pillars that these events flow through that deliver the problem to the right person and give that person options for minimizing the risk.

Gardner: Steve, do we have any examples or use cases that illustrate how alerting the right people with the right skills at the right time is an essential part of resuming critical business services or heading off the damage?

Rule out retirement risks

Yon: Without naming names, we have a client within Europe, the Middle East, and Africa (EMEA) we can look at. One of the things the pandemic brought to light is the need to know our posture for continuing to operate the way we want. Getting back to integration and integrability, where are we going to get a lot of that information about personnel from? Workday, their human resources (HR) system of record, of course.

Now, they had a critical business service owner who was going to be retiring. That sounds great. That's wonderful to hear. But one of the conditions for this critical business service to be considered operating in its normal state is that it has an owner. Who will cut through the issues and process and lead going forward?

If there isn’t an owner identified for the service, I would be considered at risk for this service. It may not be capable of maintaining its continuity. So, here’s a simple use case where someone could be looking at a trigger from Workday that asks if this leadership person is still in the role and active.

Is there a control around identifying whether they are going to become inactive within x number of months' time? If so, get on that, because the regulators will view these processes as potentially being out of control.

There’s a simple use case that has nothing to do with technology but shows the integrability of ServiceNow into another system of record. It turns ServiceNow into a decision-support platform that drives the right actions and orchestrates timely actions — not only to detect a disruption but anything else considered valid as a future risk. Such alerts give the time to get it taken care of before a fault happens.

Gardner: The EY ServiceNow alliance operational resilience solution is under the covers but it’s powering leaders’ ability to be out in front of problems. How does the solution enable various levels of leadership personas, even though they might not even know it’s this solution they’re reacting to?

Leadership roles evolve

Culbert: That’s a great question. For the last six to seven years, we’ve all heard about the shift from the second to the first line of primary ownership in the private sector. I’ve heard many occasions for our first line business manager saying, “You know, if it is my job, first I need to know what the scope of my responsibilities are and the tools to do my job.” And that persona of the frontline manager having good data, that’s not a false positive. It’s not eating at his or her ability to make money. It’s providing them with options of where to go to minimize the issue.

The personas are clearly evolving. It was difficult for risk managers to move solidly into the first line without these types of tools. And there were interim management levels, too: someone who sat between the first and the second line, level 1.5 or line 1.5. And it's clearly pushing into the first line. How do they know their own scope as it relates to the risk to the services?

Now there’s a tool that these personas can use to be not only be responsible for risk but responsive as well. And that’s a big thing in terms of the solution design. With ServiceNow over the last several years, if the base data is correctly managed, then being able to reconfigure the data and recalibrate the threshold logic to accommodate a certain persona is not a coding exercise. It’s a no-code step forward to say, “Okay, this is now the new role and scope, and that role and scope will be enabled in this way.” And this power is going to direct the latest signals and options.

But it’s all about the definition of a service. Do we all agree end-to-end what it is, and the definition of the persona? Do we all understand who’s accountable and who’s responsible? Those two things are coming together with a new set of tools that are right and correct.

Yon: Just to go back to the call at 3 a.m., that was a tech call. But typically, what happens is there’s also going to be the business call. So, one of the issues we’re also solving with ServiceNow is in one system we manage the nature of information irrespective of what your persona is. You have a view of risk that can be tailored to what it is that you care about. And all the data is congruent back and forth.

It becomes a lot more efficient and accurate for firms to manage a shared understanding of what things are when it's not just the tech community talking. The business community wants to know what's happening — and what's next. And then someone can translate in between. This is a real-time way for all those personas to become aligned around the nature of the issue, each from their own perspective.

Gardner: I really look forward to the next in our series of discussions around operational resilience because we’re going to learn more about the May announcement of this solution.

But as we close out today’s discussion, let’s look to the future. We mentioned earlier that almost any highly regulated industry will be facing similar requirements. Where does this go next?

It seems to me that the more things like machine learning (ML) and artificial intelligence (AI) analyze the many sources of data, the more powerful it will become. What should we look for in terms of even more powerful implementations?

AI to add power to the equation

Culbert: When you set up the framework correctly, you can apply AI to the thinning out of false positives and to tagging certain events as credible risk events or not. AI can also be used to direct these signals to the right decision makers. But instead of taking the human analyst out of the equation, AI is going to help us. You can't do it without that framework.

Yon: When you enable these different sets of data coming in for AI, you start to say, “Okay, what do I want the picture to look like in my ability to simulate these things?” It all goes up, especially using ServiceNow.

But back to the comment on complexity and the fact that suppliers don’t just supply one client, they connect to many. As this starts to take hold in the regulated industries — and it becomes more of an expectation for a supplier to be able to operate this way and provide these signals, integration points, telemetry, and transparency that people expect — anybody else trying to lever into this is going to get the lift and the benefit from suppliers who realize that the nature of playing in this game just went up. Those benefits become available to a much broader landscape of industries and for those suppliers.

Gardner: When we put two and two together, we come up with a greater sum. We’re going to be able to deal rapidly with the known knowns, as well as be better prepared for the unknown unknowns. So that’s an important characteristic for a much brighter future — even if we hit another unfortunate series of risk-filled years such as we’ve just suffered.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: ServiceNow and EY.


How API security provides a killer use case for ML and AI

While the use of machine learning (ML) and artificial intelligence (AI) for IT security may not be new, the extent to which data-driven analytics can detect and thwart nefarious activities is still in its infancy.

As we’ve recently discussed here on BriefingsDirect, an expanding universe of interdependent application programming interfaces (APIs) forms a new and complex threat vector that strikes at the heart of digital business.

How will ML and AI form the next best security solution for APIs across their dynamic and often uncharted use in myriad apps and services? Stay with us now as we answer that question by exploring how advanced big data analytics forms a powerful and comprehensive means to track, understand, and model safe APIs use.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To learn how AI makes APIs secure and more resilient across their life cycles and ecosystems, BriefingsDirect welcomes Ravi Guntur, Head of Machine Learning and Artificial Intelligence at Traceable.ai. The interview is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Why does API security provide such a perfect use case for the strengths of ML and AI? Why do these all come together so well? 

Guntur: When you look at the strengths of ML, the biggest strength is the ability to process data at scale. And newer applications have taken a turn in the form of API-driven applications.

Large pieces of applications have been broken down into smaller pieces, and these smaller pieces are being exposed as even smaller applications in themselves. To process the information going between all these applications, to monitor what activity is going on, the scale at which you need to deal with them has gone up many fold. That’s the reason why ML algorithms form the best-suited class of algorithms to deal with the challenges we face with API-driven applications. 

Gardner: Given the scale and complexity of the app security problem, what makes the older approaches to security wanting? Why don’t we just scale up what we already do with security?

More than rules needed to secure apps

Guntur: I’ll give an analogy as to why older approaches don’t work very well. Think of the older approaches as a big box with, let’s say, a single door. For attackers to get into that big box, all they must do is crack through that single door. 

Now, with the newer applications, we have broken that big box into multiple small boxes, and we have given a door to each one of those small boxes. If the attacker wants to get into the application, they only have to get into one of these smaller boxes. And once they get into one of the smaller boxes, they need to take a key out of it and use that key to open another box.

By creating API-driven applications, we have exposed a much bigger attack surface. That’s number one. Number two, of course, we have made it challenging to the attackers, but the attack surface being so much bigger now needs to be dealt with in a completely different way.

The older class of applications took a rules-based system as the common approach to solve security use cases. Because they just had a single application and the application would not change that much in terms of the interfaces it exposed, you could build in rules to analyze how traffic goes in and out of that application.

Now, when we break the application into multiple pieces, and we bring in other paradigms of software development, such as DevOps and Agile development methodologies, this creates a scenario where the applications are always rapidly changing. There is no way rules can catch up with these rapidly changing applications. We need automation to understand what is happening with these applications, and we need automation to solve these problems, which rules alone cannot do. 

Gardner: We shouldn’t think of AI here as replacing old security or even humans. It’s doing something that just couldn’t be done any other way.

Guntur: Yes, absolutely. There’s no substitute for human intelligence, and there’s no substitute for the thinking capability of humans. If you go deeper into the AI-based algorithms, you realize that these algorithms are very simple in terms of how the AI is powered. They’re all based on optimization algorithms. Optimization algorithms don’t have thinking capability. They don’t have creativity, which humans have. So, there’s no way these algorithms are going to replace human intelligence.

They are going to work alongside humans to make all the mundane activities easier for humans and help humans look at the more creative and the difficult aspects of security, which these algorithms can’t do out of the box.

Gardner: And, of course, we’re also starting to see that the bad guys, the attackers, the hackers, are starting to rely on AI and ML themselves. You have to fight fire with fire. And so that’s another reason, in my thinking, to use the best combination of AI tools that you can.

Guntur: Absolutely.

Gardner: Another significant and growing security threat is bots, and the scale that threat vector takes. It seems like only automation and the best combination of humans and machines can ferret out these bots.

Machines, humans combine to combat attacks

Guntur: You are right. Most of the best detection cases we see in security are a combination of humans and machines. The attackers are also starting to use automation to get into systems. We have seen such cases where the same bot comes in from geographically different locations and is trying to do the same thing in some of the customer locations.

The reason they’re coming from so many different locations is to challenge AI-based algorithms. One of the oldest schools of algorithms looks at rate anomaly, to see how quickly somebody is coming from a particular IP address. The moment you spread the IP addresses across the globe, you don’t know whether it’s different attackers or the same attacker coming from different locations. This kind of challenge has been brought by attackers using AI. The only way to challenge that is by building algorithms to counter them.

One thing is for sure, algorithms are not perfect. Algorithms can generate errors. Algorithms can create false positives. That's where the human analyst comes in, to understand whether what the algorithm discovered is a true positive or a false positive. Going deeper into the output of an algorithm means digging back into exactly how the algorithm figured out that an attack was being launched. Some of these insights can't be discovered by algorithms; only humans, when they correlate different pieces of information, can find that out. So, it requires a team. Algorithms and humans work well as a team.

Gardner: What makes the way in which Traceable.ai is doing ML and AI different? How are you unique in your vision and execution for using AI for API security?

Guntur: When you look at any AI-based implementation, you will see that there are three basic components. The first is about the data itself. It’s not enough if you capture a large amount of data; it’s still not enough if you capture quality data. In most cases, you cannot guarantee data of high quality. There will always be some noise in the data. 

But more than volume and quality of data, what is more important is whether the data that you’re capturing is relevant for the particular use-case you’re trying to solve. We want to use the data that is helpful in solving security use-cases.

Traceable.ai built a platform from the ground up to cater to those security use cases. Right from the foundation, we began looking at the specific type of data required to solve modern API-based application security use cases. That’s the first challenge that we address, it’s very important, and brings strength to the product.

Seek differences in APIs

Once you address the proper data issue, the next is about how you learn from it. What are the challenges around learning? What kind of algorithms do we use? What is the scenario when we deploy that in a customer location?

We realized that every customer is completely different and has a completely different set of APIs, too, and those APIs behave differently. The data that goes in and out is different. Even if you take two e-commerce customers, they’re doing the same thing. They’re allowing you to look at products, and they’re selling you products. But the way the applications have been built, and the API architecture — everything is different.

We realized it’s no use to build supervised approaches. We needed to come up with an architecture where the day we deploy at the customer location; the algorithm then self-learns.

We realized it’s no use to build supervised approaches. We needed to come up with an architecture where the day we deploy at the customer location; the algorithm then self-learns. The whole concept of being able to learn on its own just by looking at data is the core to the way we build security using the AI algorithms we have. 

Finally, the last step is to look at how we deliver security use cases. What is the philosophy behind building a security product? We knew that rules-based systems are not going to work. The alternative is a system modeled around anomaly detection. Now, anomaly detection is a very old subject, and we have used it for various things. We have used it to understand whether machinery is going to go down, we have used it to understand whether the traffic patterns on the road are going to change, and we have used it for anomaly detection in security.

But within anomaly detection, we focused on behavioral anomalies. We realized that APIs and the people who use APIs are the two key entities in the system. We needed to model the behavior of these two groups — and when we see any deviation from this behavior, that’s when we’re able to capture the notion of an attack.

Behavioral anomalies are important because if you look at the attacks, they’re so subtle. You just can’t easily find the difference between the normal usage of an API and abnormal usage. But very deep inside the data and very deep into how the APIs are interacting, there is a deviation in the behavior. It’s very hard for humans to figure this out. Only algorithms can tease this out and determine that the behavior is different from a known behavior.

We have addressed this at all levels of our stack: the data-capture level, the choice of how we execute our AI, and the choice of how we deliver our security use cases. And I think that's what makes Traceable unique and holistic. We didn't just bolt things on, we built it from the ground up. That's why these three pieces gel well and work well together.

Gardner: I’d like to revisit the concept you brought up about the contextual use of the algorithms and the types of algorithms being deployed. This is a moving target, with so many different use cases and company by company.

How do you keep up with that rate of change? How do you remain contextual? 

Function over form delivers context 

Guntur: That’s a very good question. The notion of context is abstract. But when you dig deeper into what context is and how you build context, it boils down to basically finding all factors influencing the execution of a particular API. 

Let’s take an example. We have an API, and we’re looking at how this API functions. It’s just not enough to look at the input and output of the API. We need to look at something around it. We need to see who triggered that input. Where did the user come from? Was it a residential IP address that the user came in from? Was it a hosted IP address? Which geolocation is the user coming from? Did this user have past anomalies within the system?

You need to bring all these factors into the notion of context when we're dealing with API security. Now, the context is a moving target, because the data is constantly changing. There comes a moment when you have fixed this context, when you say that you know where the users are coming from, and you know what the users have done in the past. Then there is some amount of determinism to whatever detection you're performing on these APIs.

Let’s say an API takes in five inputs, and it gives out 10 outputs. The inputs and outputs are a constant for every user, but the values that go into the input varies from user to user. Your bank account is different from my bank account. The account number I put in there is different for you, and it’s different for me. If you build an algorithm that looks for an anomaly, you will say, “Hey, you know what? For this part of the field, I’m seeing many different bank account numbers.” 

It looks like there is some problem with this, but that's not true. That field is meant to have many variations in the account number, and that determination comes from context. Building a context engine is unique in our AI-based system. It helps us tease out false positives and learn that some variations are genuine.

That’s how we keep up with this constant changing environment, where the environment is changing not just because new APIs are coming in. It’s also because new data is coming into the APIs.

Gardner: Is there a way for the algorithms to learn more about what makes the context powerful to avoid false positives? Is there certain data and certain ways people use APIs that allow your model to work better?

Guntur: Yes. When we initially started, we thought of APIs as rigidly designed. We thought of an API as a small unit of execution. When developers use these APIs, they’ll all be focused on very precise execution between the APIs.

But we soon realized that developers bundle various additional features within the same API. We started seeing that they just provide a few more input options, and by triggering those extra input options you get completely different functionality from the same API.

We had to come up with algorithms that discover that a particular API can behave in multiple ways, depending on the inputs being transmitted. It's difficult to know in advance whether an API is going to change and keep changing. But when we built our algorithms, we assumed that an API is going to have multiple manifestations, and we need to figure out which manifestation is currently being triggered by looking at the data.

We solved it differently by creating multiple personas for the same API. Although it looks like a single API, we have an internal representation of an API with multiple personas.
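
A rough sketch of the persona idea, with purely hypothetical endpoints, is to key each observed call on the exact set of parameters it supplies, so one endpoint that is exercised in distinct ways ends up with distinct behavioral profiles:

```python
from collections import defaultdict

# Traffic to one endpoint that actually hides two behaviors behind optional inputs.
calls = [
    {"endpoint": "/orders", "params": {"customer_id", "page"}},
    {"endpoint": "/orders", "params": {"customer_id", "page"}},
    {"endpoint": "/orders", "params": {"customer_id", "export_format", "date_range"}},
]

personas = defaultdict(int)
for call in calls:
    # The persona key is the endpoint plus the exact set of parameters supplied.
    personas[(call["endpoint"], frozenset(call["params"]))] += 1

for (endpoint, params), count in personas.items():
    print(f"{endpoint} persona {sorted(params)}: seen {count} time(s)")
```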

Gardner: Interesting. Another thing that's fascinating to me about the API security problem is the way hackers often don't break the API at all. Instead, they mount subtle logic abuse attacks where they're basically doing what the API is designed to do but using it as a tool for their nefarious activities.

How does your model help fight against these subtle logic abuse attacks?

Logic abuse detection

Guntur: When you look at the way hackers are getting into distributed applications and APIs using these attacks – it is very subtle. We classify these attacks as business logic abuse. They are using the existing business logic, but they are abusing it. Now, figuring out abuse to business logic is a very difficult task. It involves a lot of combinatorial issues that we need to solve. When I say combinatorial issues, it’s a problem of scale in terms of the number of APIs, the number of parameters that can be passed, and the types of values that can be passed.

Learn More  

About Traceable.ai

When we built the Traceable.ai platform, it was not enough to just look at the front-facing APIs, which we call the external APIs. It’s also important for us to go deeper into the API ecosystem.

We have two classes of APIs: the external-facing APIs and the internal APIs. The internal APIs are not called by users sitting outside of the ecosystem; they’re called by other APIs within the system. The only way for us to identify the subtle logic attacks is to be able to follow the paths taken by those internal APIs.

Say an internal API reaches a resource like a database, touches a particular row and column within that database, and then returns the value. Only by following that path will you be able to figure out that there was a subtle attack. We’re able to figure this out only because of the capability to trace the data deep into the ecosystem.

If we had done everything at the API gateway, if we had done everything at external facing APIs, we would not have figured out that there was an attack launched that went deep into the system and touched a resource it should never have touched.

It’s all about how well you capture the data and how rich your data representation is to capture this kind of attack. Once you capture this, using tons of data, and especially graph-like data, you have no option but to use algorithms to process it. That’s why we started using graph-based algorithms to discover variations in behavior, discover outliers, and uncover patterns of outliers, and so on.
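A toy version of that graph idea, under invented trace and resource names, could learn which downstream services and resources each external API normally reaches and then flag traces that wander outside that baseline:

```python
from collections import defaultdict

class CallPathBaseline:
    """Learns which internal services and resources each external API
    normally reaches, then flags traces that touch anything outside
    that learned set."""

    def __init__(self):
        self.reachable = defaultdict(set)  # entry API -> downstream nodes seen

    def learn(self, trace):
        # trace: ordered list of nodes, e.g. ["GET /orders", "orders-svc", "db.orders"]
        entry = trace[0]
        self.reachable[entry].update(trace[1:])

    def flag(self, trace):
        entry = trace[0]
        baseline = self.reachable.get(entry, set())
        return [node for node in trace[1:] if node not in baseline]


baseline = CallPathBaseline()
baseline.learn(["GET /orders", "orders-svc", "db.orders"])
baseline.learn(["GET /orders", "orders-svc", "inventory-svc", "db.inventory"])

# A later trace for the same entry point suddenly reaches the users table.
suspicious = baseline.flag(["GET /orders", "orders-svc", "db.users.pii"])
print(suspicious)  # ['db.users.pii'] -> a resource this external API never touched before
```

A real system would, of course, weight edges, allow for new-but-legitimate paths, and look for patterns of outliers rather than single deviations, as Guntur notes.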

Gardner: To fully tackle this problem, you need to know a lot about data integration, a lot about security and the vulnerabilities, as well as a lot about algorithms, AI, and data science. Tell me about your background. How are you able to keep these big, multiple balls in the air at once when it comes to solving this problem? There are so many different disciplines involved.

Multiple skills in data scientist toolbox

Guntur: Yes, it’s been a journey for me. When I started out in 2005, I had just graduated from university. I had used a lot of mathematical techniques to solve key problems in natural language processing (NLP) as part of my thesis, and I realized that even security use cases can be modeled as a language. Take any operating system (OS): it typically has a few hundred system calls, maybe 200, maybe 400. All the programs running in that operating system use those few hundred system calls in different ways to build the different applications.

It’s similar to natural languages. In natural language, you have words, and you compose the words according to a grammar to get a meaningful sentence. Something similar happens in the security world. We realized we could apply techniques from statistical NLP to security use cases. Way back then, we discovered, for example, certain Solaris login buffer overflow vulnerabilities.
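The analogy can be made concrete with a very small statistical model: treat system calls as words and adjacent pairs as phrases, then score a new trace by how many "ungrammatical" pairs it contains. This bigram sketch is far cruder than real intrusion-detection work, and the call names are only placeholders, but it shows the NLP flavor Guntur describes.

```python
from collections import Counter

class SyscallBigramModel:
    """Treats a process's system-call stream like text: learn which
    bigrams (adjacent call pairs) are 'grammatical', then score new
    traces by the fraction of unseen bigrams they contain."""

    def __init__(self):
        self.bigrams = Counter()

    def train(self, trace):
        for a, b in zip(trace, trace[1:]):
            self.bigrams[(a, b)] += 1

    def anomaly_score(self, trace):
        pairs = list(zip(trace, trace[1:]))
        unseen = sum(1 for p in pairs if p not in self.bigrams)
        return unseen / max(len(pairs), 1)


model = SyscallBigramModel()
model.train(["open", "read", "write", "close"])
model.train(["open", "read", "close"])

print(model.anomaly_score(["open", "read", "close"]))            # 0.0 -> a normal "sentence"
print(model.anomaly_score(["open", "mmap", "execve", "close"]))  # 1.0 -> an unusual one
```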

That’s how the journey began. I then went through multiple jobs and worked on different use cases. I learned if you want to be a good data scientist — or if you want to use ML effectively — you should think of yourself as a carpenter, as somebody with a toolbox with lots of tools in it, and who knows how to use those tools very well.

But to best use those tools, you also need the experience from building various things. You need to build a chair, a table, and a house. You need to build various things using the same set of tools, and that took me further along that journey.

While I began with NLP, I soon ventured into image processing and video processing, and I applied that to security, too. It furthered the journey. And through that whole process, I realized that almost all problems can be mapped to canonical forms. You can take any complex problem and break it down into simpler problems. Almost all fields can be broken down into simple mathematical problems. And if you know how to use various mathematical concepts, you can solve a lot of different problems.

We are applying these same principles at Traceable.ai as well. Yes, it’s been a journey, and every time you look at data you come up with different challenges. The only way to overcome that is to get your hands dirty and solve it. That’s the only way to learn, and the only way we could build this new class of algorithms — by taking a piece from here, a piece from there, putting it together, and building something different.

Gardner: To your point that complex things in nature, business, and technology can be reduced to elemental mathematical understandings: once you’ve attained that with APIs, applying it first to security makes sense, and rightfully so; it’s the obvious low-hanging fruit.

But over time, you also gain mathematical insights and understanding of more about how microservices are used and how they could be optimized. Or even how the relationship between developers and the IT production crews might be optimized.

Is that what you’re setting the stage for here? Will that mathematical foundation be brought to a much greater, and potentially very productive, set of problems to solve?

Something for everybody

Guntur: Yes, you’re right. If you think about it, we have embarked on that journey already. When we look at what we have achieved as of today, and at the foundations on which we have built it, we see that we have something for everybody.

For example, we have something for the security folks as well as for the developer folks. The Traceable.ai system gives insights to developers as to what happens to their APIs when they’re in production. They need to know that. How is it all behaving? How many users are using the APIs? How are they using them? Mostly, they have no clue.

And on the other side, the security team doesn’t know exactly what the application is. They can see lots of APIs, but how are the APIs glued together to form this big application? Now, the mathematical foundation under which all these implementations are being done is based on relationships, relationships between APIs. You can call them graphs, you can call them sequences, but it’s all about relationships. 

One aspect we are looking at is how to expose these relationships. Today these relationships are buried deep inside our implementations, inside our platform. How do you take them out and make them visual so that you can better understand what’s happening? What is this application? What happens to the APIs?

By looking at these visualizations, you can easily figure out, for example, whether there are bottlenecks within the application. Is one API constantly being hammered? If I always go through this API, but the same API also leads me to a search engine or a product catalog page, why does this API need to go through all these various functions? Can I simplify the API? Can I break it down into multiple pieces? These kinds of insights are now being made available to the developer community.

Gardner: For those listening or reading this interview, how should they prepare themselves for being better able to leverage and take advantage of what Traceable.ai is providing? How can developers, security teams, as well as the IT operators get ready?

Rapid insights result in better APIs

Guntur: The moment you deploy Traceable in your environment, the algorithms kick in and start learning the patterns of traffic in your environment. Within a few hours — or if your traffic has high volume, within 48 hours — you will receive insights into the API landscape of your environment. This insight starts with how many APIs there are in your environment. That’s a fundamental problem a lot of companies face today: they just don’t know how many APIs exist in their environment at any given point in time. Once you know how many APIs there are, you can figure out how many services there are. What are the different services, and which APIs belong to which services?

Traceable gives you the entire landscape within a few hours of deployment. Once you understand your landscape, the next interesting thing to see is your interfaces. You can learn how risky your APIs are. Are you exposing sensitive data? How many of the APIs are external facing? Which APIs use authentication to control access, and which do not? Why do some APIs not have authentication? Are you exposing APIs without authentication?

Learn More  

About Traceable.ai

All these questions are answered right there in the user interface. After that, you can look at whether your development team is in compliance. Do the APIs comply with the specifications in the requirements? Because development teams are usually churning out code rapidly, they almost never keep the API spec up to date. They will have a draft spec and build against it, but by the time you deploy, the implementation looks very different from the spec. Who knows it’s different? How do you know it’s different?

Traceable’s insights tell you whether your spec is compliant. You get to see that within a few hours of deployment. In addition to knowing what happened to your APIs and whether they are compliant with the spec, you start seeing various behaviors.
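A stripped-down version of that spec-compliance check is simply a diff between what the spec declares for each endpoint and what shows up in live traffic. The spec structure and endpoint below are hypothetical stand-ins, not Traceable.ai's actual format:

```python
def spec_drift(declared_spec, observed_traffic):
    """Compare declared parameters per endpoint against parameters seen
    in live traffic. Returns undocumented and unused parameters."""
    report = {}
    for endpoint, declared in declared_spec.items():
        seen = observed_traffic.get(endpoint, set())
        report[endpoint] = {
            "undocumented": sorted(seen - set(declared)),  # in traffic, not in spec
            "unused": sorted(set(declared) - seen),        # in spec, never seen
        }
    return report


# Hypothetical draft spec vs. what actually shows up on the wire.
spec = {"POST /v1/payments": ["amount", "currency", "recipient"]}
traffic = {"POST /v1/payments": {"amount", "currency", "recipient", "debug_mode"}}

print(spec_drift(spec, traffic))
# {'POST /v1/payments': {'undocumented': ['debug_mode'], 'unused': []}}
```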

People think that when you have 100 APIs deployed, all users use those APIs in the same way. But you’d be surprised to learn that users use apps in many different ways. Sometimes the APIs are accessed through computational means; sometimes they are accessed via user interfaces. The development team now gets insight into how users are actually using the APIs, which in itself is a great insight that helps build better APIs, which helps build better applications and simplifies application deployments.

All of these insights are available within a few hours of the Traceable.ai deployment. And I think that’s very exciting. You just deploy it and open the screen to look at all the information. It’s just fascinating to see how different companies have built their API ecosystems.

And, of course, you have the security use cases. You start seeing what’s at work. We have seen, for example, what Bingbot from Microsoft looks like. But how active is it? Is it coming from 100 different IP addresses, or is it always coming from one part of a geolocation?

You can see, for example, what search spiders’ activity looks like. What are they doing with your APIs? Why is a search engine starting to look at APIs that are internal and hold no information for it? Why is it crawling these APIs? All this information is available to you within a few hours. It’s really fascinating when you just deploy and observe.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Traceable.ai. 

Securing APIs demands tracing and machine learning to analyze behaviors and head off attacks

The burgeoning use of application programming interfaces (APIs) across cloud-native computing and digital business ecosystems has accelerated rapidly due to the COVID-19 pandemic.

Enterprises have had to scramble to develop and procure across new digital supply chains and via unproven business-to-business processes. Companies have also extended their business perimeters to include home workers as well as to reach more purely online end-users and customers.

In doing so, they may have given short shrift to protecting against the cybersecurity vulnerabilities inherent in the expanding use of APIs. The cascading digitization of business and commerce has unfortunately led to an increase in cyber fraud and data manipulation.

Stay with us for Part 2 in our series, where BriefingsDirect explores how APIs, microservices, and cloud-native computing require new levels of defense and resiliency.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about the latest innovations for making APIs more understood, trusted, and robust, we welcome Jyoti Bansal, Chief Executive Officer and Co-Founder at Traceable.ai. The interview is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Jyoti, in our last discussion, we learned how the exploding use of cloud-native apps and APIs has opened a new threat vector. As a serial start-up founder in Silicon Valley, as well as a tech visionary, what are your insights and experience telling you about the need for identifying and mitigating API risks? How is protecting APIs different from past cybersecurity threats?

Bansal: Protecting APIs is different in one fundamental way — it’s all about software and developers. APIs are created so that you can innovate faster. You want to empower your developers to move fast using DevOps and CI/CD, as well as microservices and serverless.

You want developers to break the code into smaller parts, and then connect those smaller pieces to APIs – internally, externally, or via third parties. That’s the future of how software innovation will be done.

Now, the way you secure these APIs is not by slowing down the developers. That’s the whole point of APIs. You want to unleash the next level of developer innovation and velocity. Securing them must be done differently. You must do it without hurting developers and by involving them in the API security process. 

Gardner: How has the pandemic affected the software development process? Is the shift left happening through a distributed workforce? How has the development function adjusted in the past year or so?

Software engineers at home

Bansal: The software development function in the past year has become almost completely work-from-home (WFH) and distributed. The world of software engineering was already on that path, but software engineering teams have become even more distributed and global. The pandemic has forced that to become the de facto way to do things.

Now, everything that software engineers and developers do will have to be done completely from home, across all their processes. Most times they don’t even use VPNs anymore. Everything is in the cloud. You have your source code, build systems, and CI/CD processes all in the cloud. The infrastructure you are deploying to is also in a cloud. You don’t really go through VPNs nor use the traditional ways of doing things anymore. It’s become a very open, connect-from-everywhere software development process.

Gardner: Given these new realities, Jyoti, what can software engineers and solutions architects do so that APIs can be made safer? How are we going to bring developers more of the insights and information they need to think about security in new ways?

Bansal: The most important thing is to have the insights. The fundamental problem is that people don’t even know what APIs are being used and which APIs have a potential security risk, or which APIs could be used by attackers in bad ways.

Learn More  

About Traceable.ai

And so, you want to create transparency around this. I call it turning on the lights. In many ways, developers are operating in the dark – and yet they’re building all these APIs.

Normally, these days you have software development teams of maybe five to 10 engineers each. If you are developing a large application built from many APIs and microservices, you might end up with 200 or 500 engineers overall. They’re all working on their own pieces, which are normally one or two microservices, and they’re all exposing them as APIs.

It’s very hard for them to understand what’s going on. Not only with their own stuff, but the bigger picture across all the engineering teams in the company and all the APIs and microservices that they’re building and using. They really have no idea.

For me, the first thing you must do is turn on the lights so that everyone knows what’s going on — so they’re not operating in the dark. They can then know which APIs are theirs and which APIs talk to other APIs. What are the different microservices? What has changed? How does the data flow between them? They can have a real-time view of all of this. That is the number one thing to begin with.

We like to call it a Google Maps kind of view, where you can see how all the traffic is flowing, where the red lights are, and how everything connects. It shows the different highways of data going from one place to another. You need to start with that. It then becomes the foundation for developers to be much more aware and conscious about how to design the APIs in a more secure way.
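To picture that Google Maps-style view, imagine aggregating every observed call into a weighted edge list between callers and callees; the map is then just a rendering of those edges. The record format and service names below are invented for illustration:

```python
from collections import Counter

def build_traffic_map(call_records):
    """Aggregate (caller, callee) observations into weighted edges,
    i.e. the 'highways of data' between services and APIs."""
    edges = Counter()
    for record in call_records:
        edges[(record["caller"], record["callee"])] += 1
    return edges


calls = [
    {"caller": "web-frontend", "callee": "GET /api/cart"},
    {"caller": "GET /api/cart", "callee": "cart-svc"},
    {"caller": "cart-svc", "callee": "pricing-svc"},
    {"caller": "web-frontend", "callee": "GET /api/cart"},
]

for (src, dst), weight in build_traffic_map(calls).most_common():
    print(f"{src} -> {dst}: {weight} calls")
# web-frontend -> GET /api/cart: 2 calls
# GET /api/cart -> cart-svc: 1 calls
# cart-svc -> pricing-svc: 1 calls
```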

Gardner: If developers benefit from such essential information, why don’t the older solutions like web application firewalls (WAFs) or legacy security approaches fit the bill? Why do developers need something different?

Bansal: They need something that’s designed to understand and secure APIs. If you look at a WAF, it was designed to protect systems against attacks on legacy web apps, like a SQL injection.

Normally a WAF will just look at whether you have a form field on your website where someone can type in a SQL query and use it to steal some data. WAFs will do that, but that’s not how attackers steal data from APIs. Those are completely different kinds of attacks.

Most WAFs work to protect against legacy attacks but they have had challenges of how to scale, and how to make them easy and simple to use.

But when it comes to APIs, WAFs really don’t have any kind of solution to secure APIs.

Gardner: In our last discussion, Jyoti, you mentioned how the burden for API security falls typically on the application security folks. They are probably most often looking at point solutions or patches and updates.

But it sounds to me like the insights Traceable.ai provides are more of a horizontal or one-size-fits-all solution approach. How does that approach work? And how is it better than spot application security measures?

End-to-end app security

Bansal: At Traceable.ai we take a platform approach to application security. We think application security starts with securing two parts of your app. 

One is the APIs your apps are exposing, and those APIs could be internal, external, and third-party APIs.

The second part is the clients that you yourselves build using those APIs. They could be web application clients or mobile clients that you’re building. You must secure those as well because they are fundamentally built on top of the same APIs that you’re exposing elsewhere for other kind of clients.

When we look at securing all of that, we think of it in a classic way. We think security is still about understanding and taking inventory of everything: What are all the things that are there? Then, once you have an inventory, you look at protecting those things. Thirdly, you look to do it more proactively: instead of just protecting the apps and services, can you go in and fix things where and when the problem was created?

Our solution is designed as an end-to-end, comprehensive platform for application security that can do all three of these things. All three must be done in very different ways. Compared to the legacy web application firewalls or legacy Runtime Application Self Protection (RASP) solutions that security teams use, we take a very different approach. RASPs also have weaknesses that can introduce their own vulnerabilities.

Our fundamental approach builds a layer of tracing and instrumentation, and we make these tracing and instrumentation capabilities extremely easy to use, thanks to the lightweight agents we provide. We have agents that run in different programming environments, like Java, .NET, PHP, Node.js, and Python. These agents can also be put in application proxies or Kubernetes clusters. In just a few minutes, you can install these agents and not have to do any work.

We then begin instrumenting your runtime application code automatically and assess everything that is happening. First thing, in just a minute or two, based on your real-time traffic, we draw a picture of everything: the APIs in your system, all the external APIs, your internal microservices, and all the internal API endpoints on each of the microservices.

Learn More  

About Traceable.ai

This is how we assess the data flows from one microservice to a second and to a third. We begin to help you understand questions such as: What are the third-party APIs you’re invoking? What are the third-party systems you are invoking? And we’ll draw all of that in a Google Maps-like traffic picture in just a matter of minutes. It shows you how everything flows in your system.

The ability to understand and embrace all of that is the first part of the Traceable.ai solution, and it is very different from any kind of legacy RASP app security approach out there.

Once we understand that, the second part starts: our system creates a behavioral learning model around the actual use of your APIs and applications to help you answer questions such as: Which users are accessing which APIs? Which users are passing what data into them? What is the normal sequence of API calls, or of clicks in the web application, that users perform? What internal microservices are invoked by every API? What pieces of data are being transferred? What volume of data is being transferred?

All of that comes together into a very powerful machine learning (ML) model. Once that model is built, we learn the n-dimensional behavior around everything that is happening. There is often so much traffic, that it doesn’t take us long to build out a pretty accurate model.

Now, every single call that happens after that, we then compare it against the normal behavior model that we built. So, for example, normally when people call an API, they ask for data for one user. But if suddenly a call to the same API asks for data for 100,000 users, we will flag that — there is something anomalous about that behavior.

Next, we develop a scoring mechanism whereby we can figure out what kind of attack someone might be trying to do. Are they trying to steal data? And then we can create a remediation mechanism, such as blocking that specific user or blocking that particular way of invoking that API. Maybe we alert your engineering team to fix the bug there that allows this in the first place. 
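As a much-simplified illustration of that compare-against-normal step, the sketch below learns only one dimension, how many records a given API call normally returns, and turns a large deviation into a block decision. The thresholds, API name, and actions are assumptions for the example; the model described above is n-dimensional and far richer.

```python
import statistics

class VolumeBaseline:
    """Learns how many records an API call normally returns and scores
    new calls by how far they exceed that baseline."""

    def __init__(self, sigma_limit=4.0):
        self.samples = {}
        self.sigma_limit = sigma_limit

    def learn(self, api, records_returned):
        self.samples.setdefault(api, []).append(records_returned)

    def decide(self, api, records_returned):
        history = self.samples.get(api, [])
        if len(history) < 10:
            return "allow"  # not enough data to judge yet
        mean = statistics.mean(history)
        stdev = statistics.pstdev(history) or 1.0
        score = (records_returned - mean) / stdev
        return "block" if score > self.sigma_limit else "allow"


baseline = VolumeBaseline()
for _ in range(50):
    baseline.learn("GET /api/user-data", 1)   # normal calls fetch one user's data

print(baseline.decide("GET /api/user-data", 1))        # allow
print(baseline.decide("GET /api/user-data", 100_000))  # block -> looks like bulk data theft
```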

Gardner: It sounds like a very powerful platform — with a lot of potential applications. 

Jyoti, as a serial startup founder you have been involved with AppDynamics and Harness. We talked about that in our first podcast. But one of the things I’ve heard you talk about as a business person is the need to think big. You’ve said, “We want to protect every line of code in the world,” and that’s certainly thinking big.

How do we take what you just described as your solution platform, and extrapolate that to protecting every line of code in the world? Why is your model powerful enough to do that?

Think big, save the world’s code

Bansal: It’s a great question. When we began Traceable.ai, that was the mission we started with. We have to think big because this is a big problem.

If I fast-forward to 10 years from now, the whole world will be running on software. Everything we do will be through interconnected software systems everywhere. We have to make sure that every line of code is secure. The way we can ensure that is by doing a few fundamental things, which are hard to do but are simple in concept.

Can we watch every line of code when it runs in a runtime environment? If an engineer wrote a thousand lines of code, and it’s out there and running, can we watch the code as it is running? That’s where the instrumentation and tracing part comes in. We can find where that code is running and watch how it is run. That’s the first part.

The second part is, can we learn the normal behavior of how that code was supposed to run? What did the developer intend when they wrote the code? If we can learn that, that’s the second part.

And the third component is, if you see anything abnormal, you flag it or block it, or do something about it. Even if the world has trillions and trillions of lines of code, that’s how we operate.

Every single line of code in the world should have a safety net built around it. Someone should be watching how the code is used and learn what is the normal developer intent of that code. And if some attacker, hacker, or a malicious person is trying to use the code in an unintended way, you just stop it.

That to me is a no-brainer — if we can make it possible and feasible from a technology perspective. That’s the mission we are on at Traceable.ai: to make it possible and feasible.

Gardner: Jyoti, one of the things that’s implied in what we’ve been talking about that we haven’t necessarily addressed is the volume and speed of the data. It also requires being able to analyze it fast to stop a breach or a vulnerability before it does much damage.

You can’t do this with spreadsheets and sticky notes on a whiteboard. Are we so far into artificial intelligence (AI) and ML that we can take it for granted that this is going to be feasible? Isn’t a high level of automation also central to having the capability to manage and secure software in this fashion?

Let machines do what they do 

Bansal: I’m with you 100 percent. In some ways, we have to rely on machines to protect against these threats, because the amount of data and the volume of activity is very high. You can’t have a human, like a security operations center (SOC) person, sitting at a screen trying to figure out what is wrong.

That’s where the challenge is. The legacy security approaches don’t use the right kind of ML and AI — it’s still all about the rules. That generates numerous false positives. Every application security, bot security, RASP, and legacy app security approach defines rule sets to decide whether certain behaviors are bad, and that approach creates so many false positives and junk alerts that they drown the humans monitoring them. It’s just not possible for humans to go through it all. You must build a very powerful layer of learning and intelligence to figure it out.

Learn More  

About Traceable.ai

The great thing is that it is possible now. ML and AI are at a point where you can build the right algorithms to learn the behavior of how applications and APIs are used and how data flows through them. You can use that to figure out the normal usage behaviors and to stop activity when it veers off. That’s the approach we are bringing to the market.

Gardner: Let’s think about the human side of this. If humans can’t necessarily get into the weeds and deal with the complexity and scale, what is the role for people? How do you oversee such a platform and the horizontal capabilities that you’re describing?

Do we need a new class of security data scientist, or does this fit into a more traditional security management persona?

Bansal: I don’t think you need data scientists for APIs. That’s the job of products like Traceable.ai. We do the data science and convert it into actionable things. The technology behind Traceable.ai itself could be the data scientist inside.

But what is needed from the people side is the right model of organizing your teams. You hear about DevSecOps, and I do think that that kind of model is really needed. The core of DevSecOps is that you have your traditional SecOps teams, but they have become much more developer, code, and API aware, and they understand it. Your developer teams have become more security-aware than they have been in the past.

Both sides have to come together and bridge the gap. Unfortunately, what we’ve had in the past are developers who don’t care about security, and security people who don’t care about code and APIs. They care about networks, infrastructures, and servers, because that’s where they spend most of their time trying to secure things. From an organization and people perspective, we need to bridge that from both sides.

We can help, however, by creating a high level of transparency and visibility by understanding what code and APIs are there, which ones have security challenges, and which ones do not. You then give that data to developers to go and fix. And you give that data to your operations and security teams to manage risk and compliance. That helps bridge the gap as well.

Gardner: We’ve traditionally had cultural silos. A developer silo and a security silo. They haven’t always spoken the same language, never mind work hand-in-hand. How does the data and analytics generated from Traceable.ai help bind these cultures together?

Bridge the SecOps divide

Bansal: I will give you an example. There’s this new pattern of exposing data through GraphQL. It’s an API technology, and it’s very powerful because you can expose your data through GraphQL and different consumers can write queries directly against it.

Many developers who write these GraphQL APIs don’t understand the security implications. They write the API, and they don’t understand that if they don’t put in the right kind of checks, someone can attack them. The challenge is that most SecOps people don’t understand how GraphQL APIs work, or even that they exist.

So now we have a fundamental gap on both sides, right? A product like Traceable.ai helps bridge that gap by identifying your APIs and flagging, for example, GraphQL APIs with security vulnerabilities through which sensitive data could potentially be stolen.
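One small piece of that picture, spotting where a GraphQL response is carrying sensitive data, can be sketched as a recursive walk over the response payload that flags field names suggestive of PII. The field-name heuristics and the sample response are purely illustrative:

```python
SENSITIVE_HINTS = ("ssn", "password", "card_number", "dob", "email")

def find_sensitive_fields(payload, path=""):
    """Recursively walk a GraphQL JSON response and report paths whose
    field names suggest sensitive data is being exposed."""
    findings = []
    if isinstance(payload, dict):
        for key, value in payload.items():
            here = f"{path}.{key}" if path else key
            if any(hint in key.lower() for hint in SENSITIVE_HINTS):
                findings.append(here)
            findings.extend(find_sensitive_fields(value, here))
    elif isinstance(payload, list):
        for i, item in enumerate(payload):
            findings.extend(find_sensitive_fields(item, f"{path}[{i}]"))
    return findings


response = {
    "data": {
        "user": {"name": "Ada", "email": "ada@example.com", "ssn": "***"},
        "orders": [{"id": 1, "card_number": "****1234"}],
    }
}
print(find_sensitive_fields(response))
# ['data.user.email', 'data.user.ssn', 'data.orders[0].card_number']
```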

And we will also tell you if there is an attack happening. We will tell you that someone is trying to steal data. Once developers see that data in a dashboard, showing that the GraphQL APIs they built have 10 security vulnerabilities and alerting that two attacks are in progress, they become much more security-conscious.

And the SecOps team sees the same dashboard. They know which APIs were built, and from these attack patterns they know which attackers and hackers are trying to exploit them. Having that common, shared data in a shared dashboard between the developers and the SecOps team creates the visibility and the shared language on both sides, for sure.

Gardner: I’d like to address the timing of the Traceable.ai solution and entry into the market.

It seems to me we have a level of trust when it comes to the use of APIs. But with the vulnerabilities you’ve described that trust could be eroded, which could be very serious. Is there a race to put in the solutions that keep APIs trustworthy before that trust gets eroded?

A devoted API security solution

Bansal: We are in the middle of the API explosion. Unfortunately, when people adopt a new technology, they think about its operational elements first, and only after that about security, performance, and scalability. Once they start running into those problems, they start addressing them.

We are at a point of time where people are seeing the challenges that come with API security and the threat vectors that are being opened. I think the timing is right. People, the market, and the security teams understand the need and feel the pain.

We have already had some very high-profile attacks in the industry where attackers have stolen data through improperly secured APIs. So, it’s a good time to bring a solution into the market that can address these challenges. I also think that CI/CD and DevOps are being adopted at such a rapid pace that API security and securing cloud-native microservices architectures are becoming a major bottleneck.

In our last discussion, we talked about Harness, another company that I have founded, which provides the leading CI/CD platform for developers. When we talk to our customers at Harness and ask, “What is the blocker in your adoption of CI/CD? What is the blocker in your adoption of public cloud, or using two or more microservices, or more serverless architectures?”

They say that they are hesitant due to their concerns around application security, securing these cloud-native applications, and securing the APIs that they’re exposing. That’s a big part of the blocker.

Learn More  

About Traceable.ai

Yet this resistance to change and modernization is having a big business impact. It’s beginning to reduce their ability to move fast. It’s impacting the very velocity they seek, right? So, it’s kind of strange. They should want to secure the APIs – secure everything – so that they can gain risk mitigation, protect their data, and prevent all the things that can burn your users.

But there is another timing aspect to it. If they can’t soon figure out the security, the businesses really don’t have any option other than to slow down their velocity and slow down adoption of cloud-native architectures, DevOps, and microservices, all of which will have a huge business and financial impact.

 So, you really must solve this problem. There’s no other solution or way out.

Gardner: I’d like to revisit the concept of Traceable.ai as a horizontal platform capability.

Once you’ve established the ML-driven models and you’re using all that data, constantly refining the analytics, what are the best early use cases for Traceable.ai? Then, where do you see these horizontal analytics of code generation and apps production going next?

Inventory, protection, proactivity

Bansal: There’s a logical progression to it. The low-hanging fruit is to assume you may have risky APIs with improper authentication that can expose personally identifiable information (PII) and data. The API doesn’t have the right authorization control inside of it, for example. That becomes the first low-hanging fruit. Once you put Traceable.ai in your environment, we can look at the traffic, and the learning models will tell you very quickly whether you have these problems. We make it very simple for a developer to fix that. So that’s the first level.

The second level is, once you protect against those issues, you next want to look for things you may not be able to fix. These might be very sophisticated business logic abuses that a hacker is trying to insert. Once our models are built, and you’re able to compare how people are using the services, we also create a very simple model for flagging and attributing any bad behaviors to a specific user.

This is what we call a threat actor. It could be a bot, a particular authenticated user, or a non-authenticated user trying to do something that is not normal behavior. We see the patterns of such abuses around data theft or something that is happening around the data. We can alert you and we can block the threat actor. So that becomes the second part of the value progression.

The third part then becomes, “How do we become even more proactive?” Let’s say you have something in your API that someone is trying to abuse through a sophisticated business logic approach. It could be fraud, for example. Someone could create a fraudulent transaction because the business logic in the APIs allows for that. This is a very sophisticated hacker.

Once we can figure that abuse out, we can block it, but the long-term solution is for the developers to go and fix the code logic. That then becomes the more proactive approach. By Traceable.ai bringing in that level of learning, that a particular API has been abused, we can identify the underlying root cause and show it to a developer so that they can fix it. That’s becoming the more progressive element of our solution.

Eventually you want to put this into a continuous loop. As part of your CI/CD process, you’re finding things, and then in production, you are also finding things when you detect an attack or something abnormal. We can give it all back to the developers to fix, and then it goes through the CI/CD process again. And that’s how we see the progression of how our platform can be used.

Gardner: As the next decade unfolds, and organizations are even more digital in more ways, it strikes me that you’re not just out to protect every line of code. You’re out there to protect every process of the business.

Where do the use cases progress to when it comes to things like business processes and even performance optimization? Is the platform something that moves from a code benefit to a business benefit? 

Understanding your APIs

Bansal: Yes, definitely. We think that the underlying model we are building will understand every line of code and how it is being used. We will understand every single interaction between different pieces of code and the APIs, and we will understand the developer intent around those. How did the developers intend for these APIs and that piece of code to work? Then we can figure out anything that is abnormal about it.

So, yes, we are using the platform to secure the APIs and pieces of code. But we can also use that knowledge to figure out if these APIs are not performing in the right kinds of way. Are there bottlenecks around performance and scalability? We can help you with that.

What if the APIs are not achieving the business outcomes they are supposed to achieve? For example, you may build different pieces of code and have them interact with different APIs. In the end, you want a business process, such as someone applying for a credit card. But if the business process is not giving you the right outcome, you want to know why not? It may be because it’s not accurate enough, or not fast enough, or not achieving the right business outcome. We can understand that as well, and we can help you diagnose and figure out the root cause of that as well.

So, definitely, we think eventually, in the long-term, that Traceable.ai is a platform that understands every single line of code in your application. It understands the intent and normal behaviors of every single line of code, and it understands every time there is something anomalous, wrong, or different about it. You then use that knowledge to give you a full understanding around these different use cases over time.

Gardner: The lesson here, of course, is to know yourself by letting the machines do what they do best. It sounds like the horizontal capability of analyzing and creating models is something you should be doing sooner rather than later.

It’s the gift that keeps giving. There are ever-more opportunities to use those insights, for even larger levels of value. It certainly seems to lead to a virtuous adoption cycle for digital business.

Bansal: Definitely. I agree. It unlocks and removes the fear of moving fast by giving developers freedom to break things into smaller components of microservices and expose them through APIs. If you have such a security safety net and the insights that go beyond security to performance and business insights, it reduces the fear because you now understand what will happen.

When people start thinking of serverless, Functions, or similar technologies, the idea is that you take those 200 microservices and break them into 2,000 micro-functions. And those functions all interact with each other. You can ship them independently, and every function is just a few hundred lines of code at most.

So now, how do you start to understand the 2,000 moving parts? There is a massive advantage of velocity, and reusability, but you will be challenged in managing it all. If you have a layer that understands and reduces that fear, it just unlocks so much innovation. It creates a huge advantage for any software engineering organization. 

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Traceable.ai.

Rise of reliance on APIs brings new security vector — and need for novel defenses

Thinking of IT security as a fortress or a moat around your compute assets has given way to a more realistic and pervasive posture.

Such a cybersecurity perimeter, it turns out, was only an illusion. A far more effective extended-enterprise strategy protects business assets and processes wherever they are — and wherever they might reach.

As businesses align to new approaches such as zero trust and behavior-modeling to secure their data, applications, infrastructure, and networks, there’s a new, rapidly expanding digital domain that needs such pervasive and innovative protection.

The next BriefingsDirect security trends discussion explores how application programming interfaces (APIs), microservices, and cloud-native computing form a new frontier for cybersecurity vulnerabilities — as well as opportunities for innovative defenses and resilience.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about why your expanding use of APIs may be the new weak link in your digital business ecosystem, please welcome Jyoti Bansal, Chief Executive Officer and Co-Founder at Traceable.ai. The interview is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Jyoti, has the global explosion of cloud-native apps and services set us up for a new variety of security vulnerability? How serious is this new threat?

Bansal: Well, it’s definitely new and it’s quite serious. Every time we go through a change in IT architectures, we get a new set of security challenges. The adoption of cloud-native architectures brings challenges in a few areas.

One, you have a lot of APIs and these APIs are doors and entryways into your systems and your apps. If those are not secured properly, you have more opportunities for attackers to steal data. You want to open the APIs so that you can expose data, but attackers will try to exploit that. We are seeing more examples of that happening.

The second major challenge with cloud-native apps is around the software development model. Development now is more high-velocity, more Agile. People are using DevOps and continuous integration and continuous delivery (CI/CD). That creates the velocity. You’re changing things once every hour, sometimes even more often.

That creates new kinds of opportunities for developers to make mistakes in their apps and in their APIs, and in how they design a microservice; or in how different microservices or APIs interact with each other. That often creates a lot more opportunity for attackers to exploit.

Gardner: Companies, of course, are under a lot of pressure to do things quickly and to react to very dynamic business environments. At the same time, you have to always cover your backside with better security. How do companies face the tension between speed and safety?

Speed and safety for cloud-native apps

Bansal: That’s the biggest tension, in many ways. You are forced to move fast. The speed is important. The pandemic has been even more of a challenge for a lot of companies. They had to move to more of a digital experience much faster than they imagined. So speed has become way more prominent.

But that speed creates a challenge around safety, right? Speed creates two main things. One is that you have more opportunity to make mistakes. If you ask people to do something very fast because there’s so much business and consumer pressure, sometimes you cut corners and make mistakes.

Learn More  

About Traceable.ai

Not deliberately. It’s just as software engineers can never write completely bug-free code. But if you have more bugs in your code because you are moving very, very fast, it creates a greater challenge.

So how do you create safety around it? By catching these security bugs and issues much earlier in your software development life cycle (SDLC). If a developer creates a new API that could be exploited by a hacker — because there is a bug in that API’s security authentication check — you have to try to find it in your test cycle and your SDLC.
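In practice, "find it in your test cycle" can be as simple as adding negative tests to CI that call the new API without credentials, or with the wrong user's credentials, and assert that it refuses. The endpoint, base URL, and tokens below are hypothetical placeholders, not a real service:

```python
# test_api_authn.py -- a minimal, hypothetical pytest-style check that a
# newly added endpoint cannot be called without proper credentials.
import requests

BASE_URL = "https://staging.example.com"  # placeholder environment

def test_accounts_endpoint_requires_auth():
    # No Authorization header at all: the API must refuse.
    resp = requests.get(f"{BASE_URL}/api/v1/accounts/12345")
    assert resp.status_code in (401, 403)

def test_accounts_endpoint_rejects_other_users_token():
    # A valid token for user B must not unlock user A's record.
    headers = {"Authorization": "Bearer <token-for-user-b>"}  # placeholder token
    resp = requests.get(f"{BASE_URL}/api/v1/accounts/12345", headers=headers)
    assert resp.status_code in (403, 404)
```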

The second way to gain security is by creating a safety net. Even if you find things earlier in your SDLC, it’s impossible to catch everything. In the most ideal world, you’d like to ship software that has zero vulnerabilities and zero gaps of any kind when it comes to security. But that doesn’t happen, right?

You have to create a safety net so that, if vulnerabilities slip through because of the business pressure to move fast, that safety net can still block the exploitation and stop anyone trying to do things you didn’t intend with your APIs and applications.

Gardner: And not only do you have to be thinking about APIs you’re generating internally, but there are a lot of third-party APIs out there, along with microservices, when doing extended-enterprise processes. It’s a bit of a Wild West environment when it comes to these third-party APIs.

Bansal: Definitely. The APIs you are building and using internally through your microservices may also have an external consumer calling those APIs. Other microservices may also be calling them — and so there is exposure around that.

Third-party APIs manifest in two different ways. One is that you might be using a third-party API or library in your microservice. There might be a security gap there.

The second way comes when you’re calling third-party APIs. Almost everything is now exposed as an API: checking for some data somewhere, calling some other software-as-a-service (SaaS) or cloud service, or calling a payment service. Everything is an API, those APIs are not always called properly, and not all of them are secure, so your system fundamentally can become more insecure.

It is getting close to a wild, Wild West with APIs. I think we have to take API security quite seriously at this point.

Gardner: We’ve been talking about API security as a function of growing pains, that you’re moving fast, and this isn’t a process that you might be used to.

But there’s also malice out there. We’ve seen advanced, persistent threats in such things as zero-day exploits and with Microsoft Exchange Servers recently. We’ve certainly seen with the SolarWinds exploits how a supply chain can be made vulnerable.

Have we seen people take advantage of APIs, too, or is that something that we should expect?

API attacks a global threat

Bansal: Well, we should definitely expect that. We are seeing people take advantage of these APIs. If you look at data from Gartner, they stated that by 2022, API abuses will move from an infrequent to the most frequent attack vector. That will result in more data breaches in enterprises and web applications. That is the new direction because of how applications are consumed with APIs.

The API has naturally become a more frequent form of attack vector now.

Gardner: Do you expect, Jyoti, that this is going to become mission-critical? We’re only part way into the “software eats the world” thing. As we expect software to become more critical to the world, APIs are becoming more part of that. Could API vulnerabilities become a massive, global threat vector?

Bansal: Yes, definitely. We are, as you said, only partially into the software-eats-the-world trend. We are still not fully there. We are only 30 to 40 percent there. But as we see more and more APIs, those will create a new kind of attack vector.

For a long time, people didn’t think about APIs. People only thought about APIs as internal. External APIs were very few. Now, APIs are a major source of how other systems integrate across the globe. The traffic coming through APIs is significantly increasing.

It’s a matter of now taking these threats seriously. For a long time, people didn’t think about APIs. People only thought about APIs as internal APIs; that you will put internal APIs between your code and different internal services. The external APIs were very few. Most of your users were coming through a web application or a mobile application, and so you were not exposing your APIs as much to external applications.

If you look at banking, for example, most of the bank services software was about online banking. End users came through a bank web site, and then users came through mobile apps. They didn’t have to worry too much about APIs to do their business.

Now, that’s no longer the case. For any bank, APIs are a major source of how other systems integrate with them. Banks once only exposed their systems through the apps they built themselves, but now a lot of third-party apps are written on top of those APIs — from wallet apps, to different kinds of payment systems, to all sorts of things out there — because that’s what consumers are looking for. So now, as you start doing that, the amount of traffic coming through those APIs is not just from the web or mobile front-ends directly. It’s increasing significantly.

The general use of internal APIs is increasing. With the adoption of cloud-native and microservices architectures, the internal-to-external boundary is starting to blur. Internal APIs could become external at any point, because the same microservice that your engineering team wrote is being used by your other internal microservices inside your company, but it is also being exposed to your partners or other third-party systems to do something, right?

Learn More  

About Traceable.ai

More and more APIs are being exposed out there. We will see this continued explosion of APIs because that’s the nature of how modern software is built. APIs are the building block of modern software systems.

I think we have two options as an industry. Either we say, “Okay, APIs could be risky or someone could attack them, so let’s not use APIs.” But that to me is completely wrong because APIs are what’s driving the flexibility and fluidity of modern software systems and the velocity that we need. We have to just learn as an industry to instead secure APIs and be serious about securing them.

Gardner: Jyoti, your role there as CEO and co-founder at Traceable.ai is not your first rodeo. You’ve been a serial startup leader and a Silicon Valley tech visionary. Tell us about your other major companies, AppDynamics, in particular, and why that puts you in a position to recognize the API vulnerability — but also come up with novel ways of making APIs more secure.

History of troubleshooting

Bansal: At that time, we were starting to see a lot of service-oriented architectures (SOA). People were struggling when something was slow and users experienced slowdowns on their websites. How do you figure out where the slowdown is? How do you find the root cause?

That space eventually became what is called application performance management (APM). What we came up with was: “How about we instrument what’s going on inside the code in production? How about we trace the flow of code from one service to another service, or to a third service or a database? Then we can figure out where the slowdowns and bottlenecks are.”

By understanding what’s happening in these complex software systems, you can figure out where the performance bottleneck is. We were quite successful as a company. We were acquired by Cisco just a day before we were about to go IPO.

The approach we used there solves problems around performance – monitoring, diagnostics, and troubleshooting. The fundamental approach was about instrumenting and learning what was going on inside the systems.

That’s the same approach we at Traceable.ai apply to solving the problems around API security. We have all these challenges around APIs; they’re everywhere, and it’s the wild, Wild West of APIs.

So how do you get in control? You don’t want to ask developers to slow down and not do any APIs. You don’t want to reduce the velocity. The way you get control over it is fundamentally a very similar approach to what we used at AppDynamics for performance monitoring and troubleshooting. And that is by understanding everything that can be instrumented in the APIs’ environment.

That means for all external APIs, all internal APIs, and all the third-party APIs. It means learning how the data flows between these different APIs, which users call different APIs, what they are trying to achieve out of it, what APIs are changed by developers, and which APIs have sensitive data in them.

Once you automatically understand that — about all of the APIs – then you start to get in control of what is there. Once you are in control of what’s there, you can learn if some user is trying to use these APIs in a bad way. You know what seems like an attack, or if something wrong is happening. There might be a data breach or something. Then you can quickly go into prevention mode. You can then block that attack.

There are a lot of similarities from my experience at my last company, AppDynamics, in terms of how we solve challenges around API security. I also started a second company, Harness. It’s in a different space, targeting DevOps and software developers, and helping them with CI/CD. Harness is now one of the leading platforms for CI/CD or DevOps.

So I have a lot of experience, from that vantage point, of what modern software engineering organizations have to do from a CI/CD and DevOps perspective, and of the security challenges they start to run into.

We talk to Harness customers doing modern CI/CD about application and API security. And it almost always comes as one big challenge. They are worried about microservices, about cloud-native architectures, and about moving more to APIs. They need to get in control and to create a safety net around all of this.

Gardner: Does your approach of trace, monitor, and understand the behavior apply to what’s going on in operations as well as what goes on in development? Is this a one-size-fits-all solution? Or do you have to attack those problems separately?

One-size-fits-all advantages

Bansal: That’s the beauty of this approach. It is in many ways a one-size-fits-all approach. It’s about how you use the data that comes out of this trace-everything instrumentation. Fundamentally, it works in all of these areas.

It works because the engineering teams put in what we call a lightweight agent. That agent goes inside the runtime of the code itself, running in different programming languages, such as Java, PHP, and Python. The agents could also run in your application proxies in your environment.

You put the same kinds of instruments, lightweight agents, in for your external APIs, your internal microservices APIs, as well as the third-party APIs that you’re calling. It’s all the same.
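To make the idea of a lightweight, runtime-level agent concrete, here is a minimal sketch — assuming a Python WSGI service and a hypothetical local trace file, not Traceable.ai's actual agent — of middleware that records one trace event per API call:

```python
import json
import time
import uuid


class ApiTraceMiddleware:
    """Hypothetical WSGI middleware that records one trace event per API call.

    A real agent would capture far more context (callers, payload schemas,
    downstream calls) and ship events to a collector; this sketch just
    appends JSON lines to a local file for illustration.
    """

    def __init__(self, app, trace_file="api_traces.jsonl"):
        self.app = app
        self.trace_file = trace_file

    def __call__(self, environ, start_response):
        started = time.time()
        captured = {}

        def traced_start_response(status, headers, exc_info=None):
            captured["status"] = status
            return start_response(status, headers, exc_info)

        response = self.app(environ, traced_start_response)

        event = {
            "trace_id": uuid.uuid4().hex,
            "method": environ.get("REQUEST_METHOD"),
            "path": environ.get("PATH_INFO"),
            "status": captured.get("status"),
            "duration_ms": round((time.time() - started) * 1000, 2),
        }
        with open(self.trace_file, "a") as fh:
            fh.write(json.dumps(event) + "\n")

        return response
```

Wrapping a Flask or Django application's WSGI callable with this class would start producing a call-by-call record of every endpoint the service exposes; a full agent would also hook outbound HTTP clients to capture the third-party APIs being invoked.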

Learn More  

About Traceable.ai

When you have such instrumentation tracing, you can take the same approach everywhere. Ideally, you put the same in a pre-production environment while you are going through the software testing lifecycle in a CI/CD system. And then, after some testing, staging, and load testing, you start putting the same instrumentation into production, too. You want the same kind of approach across all of that.

In the testing cycle, we will tell you — based on all the instrumentation and tracing, and looking at all the calls made during your tests — which places are vulnerable, such as which APIs have gaps that could be exploited by someone.

Then, once you do the same approach in production, we tell you not only about the vulnerabilities but also where to block attacks that are happening. We say, “This is the place that is vulnerable, right now there is an attacker trying to attack this API and steal data, and this is how we can block them.” This happens in real-time, as they do it.

But it’s fundamentally the same approach that is being used across your full SDLC lifecycle.

Gardner: Let’s look at the people in these roles or personas, be it developer, operations, SecOps, and traditional security. Do you have any examples or metrics of where API vulnerabilities have cropped up? What vulnerabilities are these people already seeing?

Vulnerable endpoints to protect

Bansal: A lot of API vulnerabilities crop up around unauthenticated endpoints, such as exposing an API that doesn’t have the right kind of authentication. Second is around not using the right authorization, such as calling an API that is supposed to give you data as user 1, but the authorization has a flaw that can be exploited to take data — not just as user 1 but as someone else, a user 2, or maybe even a large number of users. That’s a common problem that happens too often with APIs.


There are also leaky APIs that give you more data than they should, such as it’s only supposed to give the name of someone, but it also includes more sensitive data.
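As a deliberately simplified illustration of those two failure modes — missing object-level authorization and a leaky response — here is a hypothetical Flask sketch (the endpoints and data are invented, not drawn from any incident mentioned here):

```python
from flask import Flask, abort, g, jsonify

app = Flask(__name__)

# Toy data store standing in for a real database.
ORDERS = {
    1: {"id": 1, "owner_id": 101, "item": "coffee", "card_number": "4111-1111-1111-1111"},
    2: {"id": 2, "owner_id": 102, "item": "tea", "card_number": "5500-0000-0000-0004"},
}


@app.route("/orders/<int:order_id>")
def get_order_vulnerable(order_id):
    # Vulnerable pattern: no check that the caller owns this order (broken
    # object-level authorization), and the raw record is returned verbatim,
    # leaking the stored card number (excessive data exposure).
    order = ORDERS.get(order_id) or abort(404)
    return jsonify(order)


@app.route("/v2/orders/<int:order_id>")
def get_order_safer(order_id):
    # Safer pattern: verify ownership against the authenticated caller and
    # return only the fields the caller actually needs. g.current_user_id
    # stands in for whatever your real authentication layer provides.
    order = ORDERS.get(order_id) or abort(404)
    if order["owner_id"] != getattr(g, "current_user_id", None):
        abort(403)
    return jsonify({"id": order["id"], "item": order["item"]})
```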

In the world of application security, we have the OWASP Top Ten list that the app security teams and the security teams have followed for a long time. And normally you would have things like SQL injection and cross-site scripting, and those were always in that list.

Now there’s an additional list called the OWASP API Security Top Ten, which lists the top threats when it comes to APIs. Some of the threats I described are key parts of it. And there are a lot of examples of these API-involved attacks these days.

Just recently in 2020, we had a Starbucks vulnerability in API calls, which potentially exposed 100 million customer records. It was around an authentication vulnerability. In 2019, Capital One was a high-profile example. There was an Amazon Web Services (AWS) configuration API that wasn’t secured properly and an attacker got access to it. It exposed all the AWS resources that Capital One had.


There was a very high-profile attack on T-Mobile in 2018, where an API was leaking more data than it was supposed to. Some 2.3 million customers’ data was stolen. In another high-profile incident, at Venmo, a public API exposed transaction data beyond the intended users, and data on some 200 million transactions was taken. As you can see from these examples, we are starting to see patterns emerge on the vulnerabilities attackers are exploiting in APIs.

Gardner: Now, these types of attacks and headlines are going to get the attention of the very top of any enterprise, especially now where we’re seeing GDPR and other regulations require disclosure of these sorts of breaches and exposures. This is not just nice to have. This sounds like potentially something that could make or break a company if it’s not remediated.

Bansal: Definitely. No one should take API security lightly these days. A lot of the traditional cybersecurity teams have put a lot of their focus and energy in securing the networks and infrastructure. And many of them are just starting to get serious about this next API threat vector. It’s a big mistake if companies are not getting to this faster. They are exposing themselves in a big way.

Gardner: The top lesson for security teams, as they have seen in other types of security vulnerabilities, is you have to know what’s there, protect it, and then be proactive. What is it about the way that you’re approaching these problems that set you up to be able to be proactive — rather than reactive — over time?

Know it, protect it, and go proactive

Bansal: Yes, the fundamentals of security are the same. You have to know what is there, you have to protect it, and then you become proactive about it. And that’s the approach we have taken in our solution at Traceable.ai.

Number one is all about API discovery and risk assessment. You put us in your environment and very quickly we’ll tell you what all the APIs are. It’s all about discovery and inventory as the very first thing. These are all your external APIs. These are all your internal APIs. These are all the third-party APIs that you are invoking. So it starts with discovery. You have to know what is there. And you create an inventory of everything.


The second part, once you create that inventory, is to give a risk score. We give every API a risk score: internal APIs, external APIs, and third-party APIs, all of them. The risk score is based on many dimensions, such as which APIs have sensitive data flowing through them, which APIs are exposed publicly versus not, which APIs have what kind of authentication, and which APIs internally use your critical database systems and read data from them. Based on all of these factors, we create a risk heat map of all of your APIs.
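As a rough sketch of how such a multi-dimensional score might be computed — the fields and weights below are illustrative assumptions, not Traceable.ai's actual model — consider:

```python
from dataclasses import dataclass


@dataclass
class ApiEndpoint:
    name: str
    externally_exposed: bool
    has_authentication: bool
    carries_sensitive_data: bool
    touches_critical_datastore: bool


def risk_score(api: ApiEndpoint) -> int:
    """Toy risk score on a 0-100 scale, built from a few illustrative factors."""
    score = 0
    if api.externally_exposed:
        score += 30
    if not api.has_authentication:
        score += 30
    if api.carries_sensitive_data:
        score += 25
    if api.touches_critical_datastore:
        score += 15
    return score


inventory = [
    ApiEndpoint("GET /v1/users/{id}", True, True, True, True),
    ApiEndpoint("POST /internal/reindex", False, False, False, True),
    ApiEndpoint("GET /v1/health", True, False, False, False),
]

# Order the inventory riskiest-first -- a crude version of a risk heat map.
for api in sorted(inventory, key=risk_score, reverse=True):
    print(f"{risk_score(api):3d}  {api.name}")
```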

The most important part of API security is to do this continuously. Because you’re living in the world of CI/CD, any kind of API discovery or assessment cannot be static, like doing it once a month, once a quarter, or even once a week. You have to do it dynamically, all the time, because the code is changing. Developers are continuously putting new code out there. So the APIs are changing, with new microservices. All of the discovery and risk assessment has to happen continuously. That’s really the first challenge we handle at Traceable.ai.

The second problem we handle is building a learning model. That learning model is based on a sophisticated machine learning (ML) approach to the normal usage behavior of each of these APIs. Which users call an API? In what sequence do the APIs get called? What kind of data passes through them? What kinds of data are they fetching, and from where? And so on.

We learn all of that automatically. Once you learn that, you start comparing every new API request with the normal model of how your APIs are supposed to be used.

Now, if you have an attacker trying to use an API to extract much more data than is normal for that API, you know that something is abnormal about it. You can flag it, and that’s a key part of how we think of the second part: how do you protect these APIs from bad behavior?
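A minimal sketch of that idea — assuming a single per-API signal (records returned per call) rather than the full set of behavioral dimensions described above — might look like this:

```python
import statistics


class ApiBaseline:
    """Toy behavioral baseline: learn the typical number of records an API
    returns per call, then flag calls that sit far above normal.

    A real system would model many signals (callers, sequences, data types);
    this sketch keeps a single per-API distribution for illustration.
    """

    def __init__(self, min_samples=30, threshold_sigmas=4.0):
        self.history = {}
        self.min_samples = min_samples
        self.threshold_sigmas = threshold_sigmas

    def observe(self, api_name, records_returned):
        self.history.setdefault(api_name, []).append(records_returned)

    def is_anomalous(self, api_name, records_returned):
        samples = self.history.get(api_name, [])
        if len(samples) < self.min_samples:
            return False  # not enough history to judge yet
        mean = statistics.fmean(samples)
        stdev = statistics.pstdev(samples) or 1.0
        return (records_returned - mean) / stdev > self.threshold_sigmas


baseline = ApiBaseline()
for _ in range(100):
    baseline.observe("GET /v1/users", 20)  # normal usage: roughly 20 records per call

print(baseline.is_anomalous("GET /v1/users", 22))     # False: within normal range
print(baseline.is_anomalous("GET /v1/users", 50000))  # True: looks like bulk extraction
```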

Learn More  

About Traceable.ai

That cannot be done with a traditional web application firewall (WAF) or runtime application self-protection (RASP), or those kinds of approaches. Those are rule-based, static approaches. For APIs, you have to build a behavioral, learning-based system. That’s what our solution is about. That’s how we get to a very high degree of protection for these APIs.

The third element of the solution is the proactive part. After a lot of this learning, we also examine the behavior of these APIs and the potential vulnerabilities, based on the models. The right way to proactively use our system is to feed that into your testing and development cycle. That brings the issues back to the developers to fix the vulnerabilities. We can help find them earlier in the lifecycle so you can integrate that into your application security testing processes. It closes the loop on all of this — only now you are doing it proactively.
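One plausible way to wire those findings into a CI/CD pipeline — the findings file format and severity levels here are assumptions for illustration, not a documented Traceable.ai interface — is a small gate script that fails the build when serious issues are found:

```python
import json
import sys


def gate_on_findings(findings_path="api_findings.json", fail_on=("critical", "high")):
    """Hypothetical CI step: read findings exported after an instrumented test
    run and fail the build if any are at or above the chosen severity."""
    with open(findings_path) as fh:
        findings = json.load(fh)

    blocking = [f for f in findings if f.get("severity") in fail_on]
    for finding in blocking:
        print(f"[{finding['severity'].upper()}] {finding['api']}: {finding['issue']}")

    if blocking:
        print(f"{len(blocking)} blocking API security finding(s); failing the build.")
        sys.exit(1)
    print("No blocking API security findings.")


if __name__ == "__main__":
    gate_on_findings()
```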

Gardner: Jyoti, what should businesses do to prepare themselves at an early stage for API security? Who should be tasked with kicking this off?

Build your app security team

Bansal: API security falls under the umbrella of app security. In many businesses, app security teams are now tasked to secure the APIs in addition to the traditional web applications.

The first thing every business has to do is to create responsibility around securing APIs. You have to bring in something to understand the inventory, because most teams don’t even know what all of their APIs are. Then you can start securing them and getting a better posture.

In many places, we are also seeing businesses create teams around what they call product security. If you are a company with FinTech products, your product essentially is an API, because it is primarily exposed through APIs. Then people start building out product security teams who are tasked with securing all of these APIs. In some cases, we see the software engineering team directly responsible for securing APIs.

The problem is they don’t even know what all of their APIs are. They may have 500 or 2,000 developers in the company. They are building all of these APIs, and can’t even track them. So most businesses have to get an understanding and some kind of control over the APIs that are there. Then you can start securing and getting a better security posture around those.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Traceable.ai.


Creating business advantage with technology-enabled flexible work

As businesses plan for a future where more of their workforce can be located just about anywhere, how should they rethink hiring, training, and talent optimization? This major theme for 2021 and beyond poses major adjustments for both workers and savvy business leaders.

The next BriefingsDirect modern workplace strategies discussion explores how a global business process outsourcing leader has shown how distributed employees working from a “Cloud Campus” are improving productivity and their end users’ experience. 

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about best practices and advantageous outcomes from a broadly dispersed digital workforce, we are now joined by José Güereque, Executive Vice President of Infrastructure and Nearshore Chief Information Officer at Teleperformance SE in Monterrey, Mexico; Lance Brown, Executive Vice President Global Network, Telecom, and Architecture at Teleperformance, and Tim Minahan, Executive Vice President of Business Strategy and Chief Marketing Officer at Citrix. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tim, when it comes to flexible and hybrid work models we often focus on how to bring the work to the at-home workforce. But this new level of flexibility also means that we can find and attract workers from a much broader potential pool of talent.

Are companies fully taking advantage of this decentralized talent pool yet? And what benefits are those who are not yet expanding their workforce horizons missing out on?

Pick your talent anywhere

Minahan: We are at a very interesting inflection point right now. If there is any iota of a silver lining in this global pandemic, it’s that it has opened people’s minds both to accelerating the digitization of their business and to new ways of work. It’s now been proven that work can indeed occur outside the office. Smart companies like Teleperformance are beginning to look at their entire workforce strategies — their work models — in different ways.

It’s not about should Sam or Susie work in the office or work at home. It’s, “Gee, now that I can enable everyone with the work resources they need, and in a secure workspace environment to do their best work wherever it is, does that allow me to do new things, such as tap into new talent pools that may not be within commuting distance of my work hubs?”

This now allows me to even advance sustainability initiatives or, in some cases, we have companies now saying, “Hey, now I can also reach workers that allow me to bring more diversity into my workforce. I can enable people to work from inner cities or other locations — rural locations — that I couldn’t reach before.”

This is the thought process that a lot of forward-thinking companies are going through right now. 

Gardner: It seems that a remote, hybrid, flexible work capability is the gift that keeps giving. In many cases we have seen projections of shortages of skilled workers and gaps between labor demand and supply. Are we in just the early innings of what we can expect from the benefits of remote work? 

Minahan: Yes. If you think way back in history, about a year ago, that’s exactly what the world was grappling with. There was a global shortage of skilled workers. In fact, McKinsey estimated that there was a global shortage of 95 million medium- to high-skilled workers. So managers were trying to hire amid all that. 

But, in addition, there was a shortage of the actual modern skills that a lot of companies need to advance their business, to digitize their business. And the third part is a lot of employees were challenged and frustrated with the complexity of their work environment.

Now, more flexible work models enabled by a digital workspace that ensures employees have access to all the work resources they need, wherever work needs to get done, begins to address each of those issues. Now you can reach into new areas to find new talent. You can reach skills that you couldn’t before because you were competing in a very competitive market.

Now you can enable your employees to work where and how they want in new ways that don’t limit them. They no longer have a long commute that adds stress to their lives. In fact, our research found that 80 percent of workers feel they are being as, if not more, productive working remotely than they could be in the office.

Gardner: Let’s find out from an organization that’s been doing this. José, at Teleperformance, tell us the types of challenges you faced in terms of the right fit between your workforce and your demands for work. How have you been able to use technology to help solve that?

Güereque: Our business was mostly a finite structure of brick-and-mortar operations. When COVID struck, we realized that we faced a challenge of not being able to move people to and from the work centers. So, we rushed to move all of our people, as much as possible, to work from home (WFH).

At-Home Workers May Explore Their Options. 

Here’s What They Will Be Looking For. 

Technically, the first challenge was to restructure our network, services, and all kinds of resources to move the workforce to WFH. As you can imagine, that went hand in hand with security measures. Security is one of the most important things we need to address and have in place. 

But while there were big challenges, big opportunities also arose for us. The new model allows us to be more flexible in how we look for new talent. We can now find that talent in places we didn’t search before.

Our team has helped expedite this work-at-home model for us. It was not embraced in the massive way it is right now. 

Gardner: Lance, tell us about Teleperformance, your workforce, your reach, and your markets.

Remote work: Simpler, faster, safer

Brown: Teleperformance is a global customer experience company based in France. We have more than 383,000 employees worldwide in 83 countries serving over 170 markets. So it’s a very large corporation. We have a number of agents who support many Fortune 500 companies all over the world, and our associates obviously have to be able to connect and talk [in over 265 languages and dialects] to customers. 

We sent more than 220,000 of these associates home in a very quick time frame at the onset of the pandemic.

Our company is all about being simpler, faster, and safer — and working with Citrix allowed us to meet all of our transition goals. Remote work is now a simpler, faster process — and it’s a safer process. All of our security that Citrix provides is on the back end. We don’t have to worry as much with the security on our endpoint as we would in other traditional models. 

Gardner: As José mentioned, you had to snap to it and solve some major challenges from the crisis. Now that you have been adjusting to this, do you agree that it’s the gift that keeps giving? Is flexible work here to stay from your perspective?


Brown: Yes, from Teleperformance’s perspective, we are fully working to keep a large percentage of our clients’ workforce at WFH. We don’t ever see the days of going back to 100 percent brick and mortar, or even mostly brick and mortar. We were at 90 percent on-site before the pandemic. Now, at the end of the day, that will become between 50 percent and 65 percent work at home.

Gardner: Tim, because they have 390,000 people, there is going to be a great diversity of how people will react to this. One of the nice things about remote work and digital workspaces is you can be dynamic. You can adjust, change, and innovate.

How are organizations such as Teleperformance breaking new ground? Are they finding innovation that goes beyond what they may have expected from flexible work at the outset? 

Minahan: Yes, absolutely. This isn’t just about can we enable ourselves to tap into new talent in some remote locations or for disenfranchised parts of the workforce. It’s about creating an agile workforce model. Teleperformance is on the frontlines of enabling that for its own workforce. But Teleperformance is also part of the solution, due to their business process outsourcing (BPO) solutions and how they serve their clients. You begin to rethink the workforce. 

We did a study as part of our Work 2035 Project, in which we went out over the past year-and-a-half and interviewed tens of thousands of employees, thousands of senior executives, and probed into what the world of work will look like in 2035. A lot of things we are talking about here have been accelerated by the pandemic.

One of those things is moving to a more agile workforce model, where you begin to rethink your workforce strategies, and maybe where you augment full-time employees with contractors or gig workers, so you have that agility to dial up your workforce. 

Maybe it’s due to seasonality, and you need a call center or other services to be able to dial up or back down. Or work locations shift due to certain needs or in response to certain catastrophes. And like I said, that’s what a lot of forward-thinking companies are doing.

What’s so exciting about Teleperformance is they are not only doing it for their own organization — but they are also providing the solution for their own clients.

Gardner: José, please describe for us your Cloud Campus concept. Why did you call it Cloud Campus and what does it do? 

Cloud Campus engages worldwide

Güereque: Enabling people to WFH is only part of what you need. You also need to guarantee that the processes in place perform as well as they used to in a brick-and-mortar environment. So our cloud solution pushes subsets of those processes and enables control — to maintain the operational procedures — at a level where our clients feel confident in how we are managing their operations. 

In the past, you needed to do a lot of things if you were an agent in our company. You needed to physically go to a central office to fulfill processes, and then you’d be commuting. Today, the Cloud Campus digitalizes these processes. Now a new employee, in many different countries, can be hired, trained, and coached — everything — on a remote basis.

We use video technology to do virtual face-to-face interactions, which we believe is important to be successful. We still are a very human-centric company. If we don’t have this face-to-face contact, we won’t succeed. So, the Cloud Campus, which is maintained by a really small team, guarantees the needed processes so people can WFH on a permanent basis. 

Gardner: Lance, it’s impressive to think about you dealing face-to-face virtually with your clients in 83 different countries and across many cultures and different ways of doing business. How have you been able to use the same technology across such a diversity of business environments? 

Brown: That’s an excellent question. As José said, the Teleperformance Cloud Campus gives us the flexibility and availability to do just that. For our employees, it just becomes a one-on-one human interaction. Our employees are getting the same coaching, counseling, and support from all aspects of the business – just as they were when they were in the brick-and-mortar office.

Planning a Post-Pandemic Workplace Strategy? 

These Timeless Insights Will Help. 

We are leveraging, like José said, video technology and other technologies to deliver the same user experience for our associates, which is key. Once we deliver that, then that translates out to our clients, too, because once we have a good associate experience, that experience is the same for all of the clients that the associate is handling. 

Gardner: Lance, when you are in a brick-and-mortar environment, a physical environment, you don’t always have the capability to gather, measure, and digitize these interactions. But when you go to a digital workspace, you get an audit trail of data.

Is that something you have been able to utilize, or how do you expect that to help you in the future? 

Digital workspaces offer data insights 

Brown: Another really good question. We continue to gather data, especially as the world is all digitized. And, like you said, we provide many digital solutions for our clients. Now we are taking those same solutions and leveraging them internally for our employees.

We continue to see a large amount of data that we can work with for our process improvements, for our technology, analysis, and process excellence (T.A.P.) teams, and for the transformation our agents deliver for our clients every day. 

Gardner: Tim, when it comes to translating the value through the workforce to the end user, are there ways we can measure that productivity benefit?

Minahan: One of the key things that came up early-on in the pandemic was a huge spike in worker productivity. Companies settled into a hybrid work model, and that phase was about unifying work and providing reliable access for employees in a remote environment to all the resources they needed.

The second part was, as José said, ensuring that all employees can safely access applications and information — that our corporate information remains secure.


Now we have moved into the simplify-and-optimize phase. A lot of companies are asking, “Gee, what are the tools I need to introduce to remove the noise from my employees’ day? How do I guide them to the right information and the right decisions? How do I support more collaboration or collaborative work execution, even in a distributed environment?”

If you have a foundation of a solid digital workspace environment that delivers all the work resources, that secures all the work resources, and then leverages things like machine learning (ML), virtual assistants, and new collaborative work management tools that we are introducing — it provides an environment where employees can perform at their best and can collaborate from the most remote locations.

Gardner: José, most businesses nowadays want to measure everything. With things like Net Promoter Scores (NPS) from your agents and employees, when it comes to looking for the metrics of whether your return on investment (ROI) or return on innovation is working, what have you found? Have you been able to verify what we have been talking about? Does this move beyond theory into practice, and can it be measured well?

Güereque: Yes, that’s very important. As I mentioned, being able to create a Cloud Campus concept, which has all the processes and metrics in place, allows us to compare apples with apples in a way that we can understand the behavior and the performance of an agent at home — same as in brick-and-mortar. We can compare across those models and understand exactly how they are performing. 

We found that a lot of our agents live in cities with a lot of traffic. Their commuting time, believe it or not, was around one-and-a-half hours — as much as two hours for some of them — just going to and from work. Now, all that commuting time is eliminated when they WFH.

At-Home Workers May Explore Their Options. 

Here’s What They Will Be Looking For. 

People started to place a lot of value on those things because they can spend their time smarter — or have more family time. So from a customer, client, and employee satisfaction standpoint, those employees are more motivated — and they’re performing great. Their scores are similar to before — and in some cases better. 

So, again, if you are able to measure everything through the digitalization of the processes, you can understand the small things you need to tweak in order to maintain better satisfaction and improve all scores across both clients and employees.


Gardner: Lance, over the past 30 years in IT, we’ve been very fortunate that we can often do more with less. Whether it’s the speed of the processor, or the size of the disk drive. I’m wondering if that’s translating into this new work environment.

Are you able to look at cost savings when it comes to the type of client devices for your users? Are your networks more efficient? Is there a similar benefit of doing more with less when we get to remote work and digital workspaces?

Cost savings accumulate via BYOD

Brown: Yes, especially for the endpoint device costs. It becomes an interesting conversation when you’re leveraging technology like Citrix. For that [thin client] endpoint, all of the compute is back in the data center or in the cloud.

Your overall total cost of ownership continues to go down because you’re not spending as much money on your endpoint, as you had in the past. The other thing is the technology allows us to take an existing PC and make it a thin client, too. That gives you a longer life of that endpoint, which, overall, reduces your cost.

It’s also much, much safer. I can’t stress the security benefits enough, especially in this current environment. It just makes you so much safer because your target environment and exposed landscape are reduced. Your data center is housing all the proprietary information. And your endpoint is just a dumb endpoint, for lack of a better word. It doesn’t have a large attack surface. So you really reduce your attack surface by leveraging Citrix and putting more IT infrastructure in your data center and in your cloud.

Güereque: There is another really important factor, which is to enable bring your own device (BYOD) to be a reality. With the pandemic, the manufacturers of equipment, the PCs and everything, their time to deliver has been longer.

What used to take them two to three weeks to deliver now takes up to 10 weeks. Sometimes the only way to be on time is to leverage the employees’ equipment and enable its use in a secure way. So, this is not just an economic perspective of avoiding the investment in the end device, but is an opportunity to enable them to work faster rather than waiting on the delivery time of new equipment.


Minahan: At Citrix, we’re seeing other clients do that, too. I was recently talking with the CIO of a financial services company. For them, as the world moved through the pandemic, they saw the demand for their digital banking services quadruple or more. They needed to hire thousands of new financial guidance agents to support that.

And, to José’s point, they couldn’t be bothered with sending each one a new laptop. So BYOD allowed them to gain a distributed digital workspace and to onboard these folks very quickly. They attained the resources they needed to service their end banking clients much faster.

Güereque: Just following on Tim’s comments, I want to give you an example. Two weeks ago we were contacted by a client who needed to have 1,200 people up and running within a week. At the beginning, we were challenged. We needed to put 1,200 new employees with equipment in place, and our team came back with a plan. I can tell you that last week they were all in production. So, without this flexibility, and these enablers like Citrix, we wouldn’t have been able to do it in such a short time frame.

Gardner: Lance, as we seek work-from-home solutions, we’re using words like “life” and “work balance.” We’re talking about employee behaviors and cultures. It sounds like IT is closer to human resources (HR) than ever.

Has the move to remote work using Citrix helped bond major parts of your organization — your IT capability and your HR capability, for example?

IT enables business innovation

Brown: Yes, now they’re seeing IT as an enabler. We are the enabler to allow those types of successes, from a work-life balance and human standpoint. We’re in constant contact with our operations team, our HR team, and our recruiting team. We are the enabler to help them deliver everything that we need to deliver to our clients.


In the old days, IT wasn’t viewed as an enabler. Now we’re viewed as an enabler, and José and I are at the table for every conversation that’s happening in the company. We come up with innovative solutions to enable the business to meet those business needs.

Gardner: Tim, I’m going to guess that this is a nice way of looking at the glass as half full. IT enabling such business innovation is going to continue. How do you expect in the future that we’re going to continue the trend of IT as an enabler? What’s in the pipeline, if you will, that’s going to help foster that?

Minahan: With the backdrop of the continued global shortage of skills, particularly the modern skills that are needed, companies such as Teleperformance are looking at what it means for their workforce strategies. What does it mean for their customer success strategies? Employee experience is certainly becoming a top priority to recruit the best talent, but also to ensure that they can perform at their best and deliver the best services to clients.

In fact, if you look at what employees are looking for going forward, there’s the salary thing and there’s the emergence of purpose. Is this company doing something that I believe in that’s contributing to the world, the environment?

Planning a Post-Pandemic Workplace Strategy? 

These Timeless Insights Will Help. 

But right behind that is, “What are the tools and resources? How effectively are they delivering them to me so I can perform at my best?” And so IT, to Lance’s point, is a critical pillar, a key enabler, of ensuring that every company can work on making employee experience a competitive advantage.

Gardner: José, for other companies trying to make the most of a difficult situation and transitioning to more flexible work models, what would you recommend to them now that you’ve been through this at such a large, global scale? What did you learn in the process that you think they should be mindful of?

Change, challenge, partner up

Güereque: First of all, be able to change, and to challenge yourself. We can do much more than we believe sometimes. That’s definitely something that one can be skeptical of, because of the legacy we have been working through over many years. Today, we have been challenged to reinvent ourselves.

The second one is, there are tons of public information sources that we can leverage to find successful use cases and learn from them. And the third one is, approach a consultant or partner that has experience in putting all these things in place. Because, as I mentioned, it is not just a matter of enabling people to WFH; it’s a matter of putting the whole security environment in place, along with all of the tools required to perform as a team so you can deliver the results.


Brown: I’ll add one thing to that. It was about a year ago that I was visiting with Tim, and the pandemic was starting to take hold. It had started overseas and was rapidly moving toward the US and other parts of the world.

I met with Tim at Citrix and I said, “I’m not sure exactly what’s going to happen. I don’t know if this is going to be 100 people that go home or 300,000 people. But I know we need a partner to work with, and I know we have to partner through this process.”

So the big thing is that Citrix was that partner for us. You have to rely on your partners to do this because you just can’t simply do it by yourself.

Gardner: Tim, it sounds like an IT organization within Teleperformance is much more of an enabler to the rest of the organization, but you, at Citrix, are the enabler to the IT department at Teleperformance.

Minahan: Dana, to borrow a phrase, “It takes an ecosystem.” You move up that chain. We certainly partner with Teleperformance to enable their vision for a more agile workforce.

But, again, I’ll repeat that they’re doing that for their clients, allowing them to dial up and dial down resources as they need, to work-shift around the globe. So it is a true kind of agile workforce value chain that we’re creating together.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Citrix.


Disaster Recovery to Cyber Recovery–The New Best Future State

The clear and present danger facing businesses and governments from cybersecurity threats has only grown more clear and ever-present in 2021. As the threats from runaway ransomware attacks and state-sponsored backdoor access to networks deepen, too many businesses have a false sense of quick recovery using traditional business continuity and backup measures.

That’s because the criminals are increasingly compromising vulnerable backup systems and data first — before they attack. As a result, visions of flipping a switch to get back to a ready state may be a dangerous illusion that keeps leaders under a false sense of business as usual. 

The next BriefingsDirect security strategies discussion explores new ways of protecting backups first and foremost so that cyber recovery becomes an indispensable tool in any IT and business security arsenal. We will now uncover how Unisys and Dell Technologies are elevating what was data redundancy to protect against natural disasters into something much more resilient and powerful.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about the latest in rapid cyber recovery strategies and technologies, please welcome Andrew Peters, Director of Global Business Development for Security at Unisys, and David Finley, Director of Information Assurance and Security in the Data Protection Division at Dell Technologies. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: David, what’s happened during the last few years — and now especially with the FireEye and SolarWinds attacks — that makes cyber recovery as opposed to disaster recovery (DR) so critical?

Best defense is good offense

Finley: I have been asked that question a few times just in the last few weeks, as you might imagine. And there are a couple of things to note with these attacks, SolarWinds and FireEye.

One, especially with FireEye, it demonstrated to the entire world something that we didn’t really have our eyes on, so to speak: even folks with really good security — where the chief information security officer (CISO) and the security team sit back and say, “We have really good security, we spent a lot of money, we have done a lot of things, we feel pretty good about what we have done” — can be caught out. That’s all great, but what was demonstrated with FireEye is that even the best can be compromised. 

If you have a nation state-led attack or you are targeted by a cybercrime family, then all bets could be off. They can get in and they have demonstrated that with these latest attacks. 

The other thing is, they were able to steal tools. Nothing worse can happen than the bad guys having new toolsets that they can actually use. We believe that with the increased threat from the bad actors because of these things, we really, really need the notion of a cyber vault or the third copy, if you will. Think about the 3-2-1 rule — three copies, two different locations, one off-site or offline. This is really where we need to be. 

Gardner: Andrew, it sounds like we have to assume that we are going to be or are already attacked. Just having a good defense isn’t enough. What’s the next level that we need to attain? 

Peters: A lot of times organizations think their security and their defenses are strong enough to mitigate virtually anything that happens to the organization. But what’s been proven now is that the bad guys are clever and are finding ways in. With SolarWinds, they found a backdoor into organizations and came in as a trusted entity. Just because you have signed Security Assertion Markup Language (SAML) tokens and signed certificates that you trust, you are still letting them in. It’s just been proven that you can’t exactly trust them. And when they come inside an organization and they win, what do you do next? What do you do when you lose? The concept here is to plan to win, but at the same time prepare to lose.

Gardner: David, we have also seen an uptick in the success of ransomware payouts. How is that also changing the landscape for how we protect ourselves? 

Finley: I was recently thinking about that, and I saw something written on security — it might have been a Wall Street Journal article. It said CISOs in organizations have a decision to make after these kinds of attacks. The decision really becomes pretty simple: Do they pay the ransom or do they not pay the ransom? 

We would all like to say, “Don’t pay the ransom.” The FBI says don’t pay the ransom, because of the obvious reasons. If you pay it, they may come back, they are going to want more, and it sets a bad precedent, all those things. But the reality is when this actually happens to a company, they have to sit down and make the hard decision: Do I pay or do I not pay? It’s based upon getting the business running again. 

We want to position ourselves together with Unisys to create a cyber vault that is secured in a way that our customers will never have to pay the ransom.


If we have a protected set of data that is the most important data to the firm – the stuff that they have to have tomorrow morning to actually run the business — and it’s in a protected vault secured by zero trust, through Unisys Stealth software, to be able to secure it and get it back out and put it back into play, that’s the best answer.

So that means not paying the ransom and still having the data available to bring the business back into play the next day. A lot of these attacks, as we know, are not only stealing data, like they did recently with FireEye, but also encrypting, deleting, and destroying the data.

Gardner: Another threat vector these days is that more people are working remotely, so there are more networks involved and more vulnerable endpoints. People are having to be their own IT directors in their own homes, in many cases. How does the COVID-19 work-from-home (WFH) trend impact this, Andrew? 

Work from home opens doors 

Peters: There are far more points of entry. Whereas you might have had anywhere from 10 percent to 15 percent of your workforce remotely accessing the network, and that access was fairly controllable, now you have up to 100 percent of your knowledge workers working remotely and accessing the network. There are more points of entry. From a security perspective, more rules need to be addressed to control access into the network and into operations. 

Then one of the challenges an organization has is that once the bad guys are inside these big, flat networks, they can map the network. They learn the systems that are there, they learn the operations extremely well and manipulate them, taking advantage of zero-day vulnerabilities in the systems, and so they operate within that environment without even being discovered. Once again, going back to SolarWinds, they were operating for about eight months before they were eventually discovered.


Gardner: So, going on 30 years of using wide area networks (WANs), are we still under a false sense of security? David, do we not understand the threats around us?

Finley: There is the notion within our organizations and within the public sector that we believe what we have done is good enough. And good enough can be our enemy. I can’t tell you the number of times I have spoken with folks during incident response or after incident response from a cyberattack where they said, “We thought we were secured. We didn’t know that this could happen to us, but it did happen to us.”

That false sense of security is very real, evidenced by these high-level attacks on firms that we never thought it would happen to. It’s not just FireEye and it’s not just SolarWinds. We have had attacks on COVID-19 clinical trial providers, we have had attacks on our own government entities. Some of these attacks have been successful. And a lot of these attacks don’t even get publicized.


Here is the most dangerous thing in this false sense of security we are talking about. I ask customers what percentage of the attacks do you actually believe you have visibility into within your own region? And the answer, the honest answer, is usually probably less than 20 percent.

Because I do this every day for a living, as does Andrew, we probably have visibility into maybe 50 percent, because a lot of times these attacks happen and they get swept under the rug. They quietly get cleaned up, right? So we don’t know what’s happening. That also leads us to a false sense of security.

So again, I believe that we do everything we can upfront to secure our systems, but in the event that something does get through, we need to make sure that we have a secure offline copy of these backups and of our data.

Be prepared to resist ransom

Peters: An interesting dynamic I have noticed since the pandemic is that organizations, while they recognize it’s important to have that cyber recovery third copy to bring themselves back from the brink of extinction, say they can’t afford to do it right now. The pandemic has squeezed them so much. 

Well, we know that they are invested in backup. We know they are invested in DR, but they say, “Okay, we may table this one because it’s something that is a bit too expensive right now.”

However, on the other side, there are organizations that are picking up on this at this time, saying, “You know what? We see this is way more critical because we know the attacks are picking up.”


But for the organizations that are feeling squeezed, that feel they can’t afford to invest in a solution like this, the question is: can they afford not to invest in this, given all the exposure of the threats to their organizations? And we keep going back to SolarWinds, which is a big wake-up call.

But if we go back to other attacks that happened to organizations in the recent past — such as the WastedLocker backdoor and the procedures the bad guys are using to get into organizations, learn how they operate, find additional backdoors, and even learn to avoid the security technologies that were put in there specifically to detect such breaches — they can operate with impunity within that environment. Then they eventually learn that environment well enough to shut the company down to the point that it has two choices. That company can either pay the ransom or go out of business. 

And if you are a bad guy, what would be your goal? Do you want to expose the company’s information and embarrass them? No, you want to make money. And if they are in the process of making money, how do they do it? You have to squeeze an organization as much as possible. And that’s what ransomware and these backdoors are designed to do — squeeze an organization enough to where they are forced to pay the ransom.

Gardner: So we need a better, fuller digital insurance policy. Yet many organizations have insurance in the form of DR designed for business continuity, but that might not be enough.

So what are we talking about when we make this shift from business continuity to cyber recovery, David? What are the fundamental challenges organizations need to overcome to make that transition? 

Cyber more likely than natural disaster

Finley: The number-one challenge I have seen over the past four or five years is that we need to realize that DR — and all the tenets of DR — will not cover us in the event of a cyber disaster. So those are two very different things, right? 

Oftentimes I challenge people with the notion of how they differ. And just to paint a picture, we have been doing DR basically the same way for many decades. The way it normally works is we have our key systems and their data connected to another site outside of a disaster radius, such as for earthquakes, floods, tornados, and hurricanes. We copy that data through a wide-open pipe to the other side on a regular basis. It’s an always-open circuit to the other side, and we have been doing it that way for 40 years.

What I often ask customers is based on that, how much do you spend every year to do DR? What does it really cost? Do you test? What are the real costs for DR for you? And there is usually a tangible answer.


With that in mind, the next question is, “If you look at the probability of something happening to you in the future, what do you think is more probable — a natural disaster event or a cyber disaster?” And the answer is unanimous — it has been 100 percent in recent years — it’s going to be a cyber disaster. Of course, the next question is, “How do you deal with cyber recoveries, and is it a function of DR within your organization?” And the answer usually is, “Well, we don’t deal with it very well.”

So the IT infrastructure and security groups have in the last year been making cyber recovery part of DR planning — and it’s taken a long time to get there. When you think about that, if the probability of cyber events is much higher than disaster events — and we spend $1 million a year on DR — how much do we spend for cyber recovery? The answer historically has been that they spend very little on true cyber recovery.

That’s what has to change. We have to change how we approach this. We have to bring the security and risk folks into those decisions on protecting data. We need to look at it through the lens of a cyber event destroying all of the data, just as a hurricane may destroy all of the data. 

Peters: You know, Dave, in talking to a lot of organizations on what exactly they are going to do if they have a ransomware meltdown, we ask, “How are you going to recover?” They say, “We are going to go to our DR.” 

Hmm, okay. But what if you discover in your recovery process those files are polluted? That’s going to be a bad situation. Then they may go find some tapes and stuff. I ask, “Okay, do you have a runbook for this?” They say, “No.” Then how will they know exactly what to do?


And then the corollary to that is, how long is this recovery going to take? How long can you sustain your operations? How long can you sustain your company, and what kinds of losses are you prepared to sustain? 

Wow, and you are going to figure this all out when you are going through the process of trying to bring your organization back after a meltdown? That’s usually the tipping point where you are going to say, like other organizations have said, “You know what? We are just going to have to pay the ransom.”

Finley: Yes, and that also raises a question we often see folks miss. And that is, “Do you believe that your CEO and/or your board of directors — the folks who don’t do IT as an everyday job, the folks who are running the business — understand the difference between DR and cyber recovery?”

If I were to ask people on the board of any organization if they were secure in their DR plans, most of them would say, “Yes, that’s what we pay our teams to do.”

If I were to ask them, “Well, do you believe that being able to recover from cyber disasters is included in that and done well?” The answer would also be, “Yes.” But oftentimes that is simply not the truth.

They don’t understand the difference between DR and cyber recovery. The data can all be gone from a cyber event just as easily as it can be gone from a hurricane or a flood. We have to approach it from that perspective and start thinking through these things.

We have to take that to our boards and have them understand, “You know what? We’ve spent a lot of money for 40 years on DR, but we really need to start spending money on cyber recovery.”

Yet we still get a lot of pushback from customers saying, “Well, yes, of course making a third copy and storing it somewhere secure in a way that we can always get it back — that’s a great idea — but that costs money.”

Well, you have been spending millions of dollars on DR, so make cyber recovery part of that effort.

Gardner: To what degree are the bad guys already targeting this discrepancy? Do they recognize a capability to go in and compromise the backups, the DR, in such a way that there is no insurance policy? How clever have the bad guys become at understanding this vulnerability?

Bad guys targeting backups

Peters: What would you do if you were the bad guy and you wanted to extort money from an organization? If you know they have any way of quickly recovering, then it’s going to be pretty hard to extort from them. It’s going to be hard to squeeze them. 

These guys are not broke, they are often professional organizations. There’s a lot of focus on the GRU, the former KGB operation that’s in Russia, and Cozy Bear and a number of these different organizations are well-funded. They have very clever people there. They are able to obtain technologies, reverse engineer them, understand how the security technologies operate, and understand how to build tools to avoid them. They want to get inside of organizations and learn how the operation runs and learn specifically what’s key and critical to an organization. 


The second thing, while they want to take out the primary systems, they also want to make sure you are not able to restore them. This is not rocket science. 

So, of course they are going to target backups. Are they going to pollute the files that you are going to actually put in your backups, so that if an organization tries to recover, they create a situation that is as bad as, if not worse than, it was previously? What would you do? You have to figure that this is exactly what the bad guys are doing in organizations — and they are getting better at it. 

Finley: Andrew, they are getting better at it. We have been watching this pretty closely for the last year now. If you go out to any of the pundits or subscribe to folks like Bleeping Computer, Security Today, CIO.com, or CISO, you see the same thing. They talk about it getting worse. It’s getting worse on a regular basis. 

They are targeting backups. We are finding it actually written in the code. The first part of what they are going to do when they drop this on the network is they are going to go seek out security tools to disable them. Then they are going to seek out shadow copies to link to them and seek out backup catalogs and link to them. 

And this is the one that a lot of people miss. I recently read something the FDIC published to their member banks. They said DR has been done well for a number of decades. You copy information from one bank to another or from one banking location to another, and you are able to recover from disasters and spin up applications and data in a secondary location. That’s all great. 

But realize that if you have malware attacking you in your primary location, it very often will make its way to your DR location, too. The FDIC said this point-blank: “And you will get infected in both locations.”

A lot of people don’t think about that. I had a conversation last year with a CISO who said that if an attack gets to your production environment, the attackers can manage to move laterally and get to your DR site. And then the data is gone. And this particular CISO said, “You know, we call that an ‘Oh, crap’ moment because there is nothing we can do.”

That’s what we now need to protect against. We have to have a third copy. I can’t stress it nearly enough.

Gardner: We have talked about this third copy concept quite a bit. Let’s hear more about the Dell-Unisys partnership. What’s the technology and strategy for getting in front of this so that cyber recovery becomes your main insurance policy, not your afterthought insurance policy?

Essential copy keeps data dynamic

Finley: We want everyone to understand the reality. The bad guys can get in, and they can destroy DR data; we have seen it too many times. It is real. These backups can be encrypted, deleted, or exfiltrated. That is a fact, so why not have that insurance policy of a third copy?

There’s only one way to truly protect this information. If the bad guys can see it, get to the machines that hold it, and get to the data — whether the data is locked on disk or not — they can destroy it. It’s a really simple proposition. 


We identified many years ago that the only way to really, truly protect against that is to make a copy of the data and get it offline. That is evidenced today by the guidance being given to us by the US federal government, the Homeland Security agency, and the FBI. Everybody is giving us the same guidance. They are saying take the backups, the copies of your data, and store them somewhere away from the data that you are protecting — and ideally on the other side of an air gap and offline. 

When we create this third copy with our Dell solution for cyber recovery, we take the data that we back up every day and move that key data to another site, across an air gap. The idea is that the connection between the two locations is dark until we run a job to actually move the data from production to a cyber recovery vault.

With that in mind, there is no way in until we bring up that connection. And that connection is secured through Unisys Stealth, with key exchanges and certificate exchanges, so that the bad guys can’t get across it. In other words, if you have a vault that’s going to hold all your important data, the bad guys can’t get through the door. Even though we open a connection, they can’t use that connection to ride into our vault.

And with that in mind, we can take that third copy, store it in this cyber vault, and keep it safe. Now, getting the data there and having the systems outside the vault communicate with the machines inside the vault – making sure all of that is secure – is something we partnered with Unisys on. I will let Andrew tell you how that works.
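
To make the mechanics of that scheduled, air-gapped copy easier to picture, here is a minimal sketch of how such a replication job might be orchestrated. The host name, certificate path, and the vault-link/vault-copy/vault-lock commands are hypothetical placeholders, not the Dell or Unisys tooling.

```python
# Hypothetical sketch of a scheduled vault-replication job. The link to the
# vault stays down except during the copy window. Host name, certificate path,
# and the vault-link/vault-copy/vault-lock commands are placeholders only.
import logging
import subprocess

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("vault-replication")

VAULT_HOST = "vault.example.internal"       # assumed address of the vault-side target
CLIENT_CERT = "/etc/vault/replication.pem"  # assumed client certificate for mutual TLS


def run(cmd: list) -> None:
    """Run one step and raise if it fails, so a failed step aborts the job."""
    log.info("running: %s", " ".join(cmd))
    subprocess.run(cmd, check=True)


def replicate_to_vault(dataset: str) -> None:
    """Open the link, copy one protected dataset into the vault, then drop the link."""
    try:
        # 1. Bring up the secured tunnel only for the duration of the job.
        run(["vault-link", "up", "--host", VAULT_HOST, "--cert", CLIENT_CERT])
        # 2. Push the latest backup copy of the dataset across the tunnel.
        run(["vault-copy", "--dataset", dataset, "--target", VAULT_HOST])
        # 3. Retention-lock the copy on the vault side so it cannot be altered.
        run(["vault-lock", "--dataset", dataset, "--mode", "compliance"])
    finally:
        # 4. Tear the link down even if a step failed; the vault must be
        #    unreachable outside the copy window.
        subprocess.run(["vault-link", "down", "--host", VAULT_HOST])


if __name__ == "__main__":
    replicate_to_vault("color-matching-db")  # e.g., a firm's single most critical dataset
```

The point of the pattern is simply that the path to the vault exists only inside the job window, and each copy is locked before the link drops again.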

Secure data swiftly in cyber vault

Peters: Okay. First off, Dave, you are not talking about putting all of the data into the vault, right? Specifically people are looking at only the data that’s critical to an operation, right?

Finley: Yes. And a quick example of that, Andrew, is an unnamed company in the paint industry. They create paint around the world and one of their key assets is their color-matching databases. That’s the data they put into the cyber vault, because they have determined that if that proprietary data is gone, they can lose $1 million per day.

Another example is an investment firm we work with. This investment firm puts their trade databases inside the cyber vault because they have discerned that if those databases are infected, deleted, or encrypted – and they go down – they lose multiple millions of dollars per hour. So, to your point, Andrew, it’s usually about the critical business systems and essential information, things like that. But we also have to be concerned with the critical IT materials on your networks, right?

Peters: That’s right, other key assets like your Active Directory and your domain servers. If you are a bad guy, what are you going to attack? If they want to cripple you so much that even if you had that essential data, you couldn’t use it. They are going to try and stop you in your tracks. 

From a security perspective, there are a few things that are important – and one is data efficacy. First is knowing what I am going to protect. Next, how best am I going to securely move that critical data to a cyber vault? There is going to be automation so I am not depending on somebody to do this. This should happen automatically. 

So, to be clear, I am going to move it into the secure vault, and I want that vault to be air gapped. I want it to be abstracted from the network and the environment so bad guys can’t find it. Even if they could find it, they can’t see anything, and they can’t talk to it. 

The second thing I want is to make sure that the data I’m moving has high efficacy. I want to know that it’s not been polluted because bad guys are going to want to pollute that data. Typically, the things you put into the backup – you don’t know, is it good, is it bad, has it been corrupted? So if it’s going to be moved into the vault, we want to know if it’s good or if it’s bad. That way, if we are going to be going into a recovery, I can select the files that I know are good and I can separate them from the bad.

This is really important. That’s one of the critical things when you’re going into any form of cyber recovery. Typically you aren’t going to know what’s good data unless you have a system designed to discern good from bad.
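
One generic way to picture the “good copy versus bad copy” check Peters describes is an entropy scan: files that ransomware has encrypted tend to look like random bytes, while most documents and databases do not. The sketch below illustrates only that heuristic; it is not how the CyberSense analytics mentioned later in this discussion work.

```python
# Illustrative entropy check for spotting backup files that may have been
# encrypted by ransomware. Encrypted data has near-maximal byte entropy
# (close to 8 bits per byte); most documents and databases score lower.
# Legitimately compressed formats also score high, so this is only a
# first-pass signal, not a product feature.
import math
from collections import Counter
from pathlib import Path


def shannon_entropy(data: bytes) -> float:
    """Return the Shannon entropy of a byte string, in bits per byte."""
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())


def flag_suspect_files(backup_dir: str, threshold: float = 7.5) -> list:
    """Flag files in a backup set whose entropy suggests they were encrypted."""
    suspects = []
    for path in Path(backup_dir).rglob("*"):
        if path.is_file():
            sample = path.read_bytes()[:1_000_000]  # sample the first 1 MB
            if shannon_entropy(sample) > threshold:
                suspects.append(path)
    return suspects


# Example: check a staged copy before promoting it into the vault.
# suspects = flag_suspect_files("/vault/staging/2021-06-01")
# if suspects:
#     print(len(suspects), "files look encrypted; keep the last known good copy instead")
```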

You don’t want to be rebuilding your domain server and have the thing find out that it’s been polluted, that it’s locked, and that it has ransomware embedded in it. Bad guys are clever. You have to ask, “What would I do if I were a clever bad guy?” Sometimes it’s hard to think like that unless you put your bad guy hat on. 

There’s another important element here, too. The element of time. How quickly am I going get to this protected data? I have all of this data, these files and these applications, and they’re in my protected vault. Now, how am I going to move them back into my production environment?

But my production environment actually might still be polluted. I might still have IT and security personnel trying to clean up that environment. At the same time, I have to get my services back up and running, but I have a compromised network. And what’s the problem? The problem is time.

Ultimately, all of this comes down to business continuity and time. How quickly can I continue my critical operations? How quickly am I going to be able to get them up and running – despite the fact that I still have a lot of issues with ransomware and with hackers inside my IT operations?

From a security and rapid recovery perspective, there are some unique things that we can do with a cyber recovery approach. A cyber recovery solution automates the movement of your critical data into a secure vault, then analyzes it for data efficacy to determine if the data has been compromised. It also provides you with a runbook so you know how you’re going to get that data back out and get those systems operating so you can get users back online.

So even with a zero-day attack, by being able to use things like cryptography, cloaking, and basically hiding things from the rest of the network, I can get cryptographic micro-segmentation to restore the operations of critical services and get users back up on those services. Even if my network is compromised, I can start doing that very, very quickly.

When you put the whole cyber recovery solution that we have together – automation and security built in to get to the critical data on a daily basis, move it into a vault, analyze it, and then a runbook capability – you can quickly move it all back out and get those critical services back up and running.

Manage, monitor, and restore data

Finley: One of the things that I hope everyone understands is that we can create a secure vault, put information in it, and do all of that securely. But as Andrew was saying, most folks also want the ability to monitor, manage, and update that secure vault from their security operations center (SOC) or their network operations center (NOC).

When we first began our relationship with Unisys, around the Stealth software, I was very excited. For a couple years before that, we were working with folks to show them how to use firewalls to protect information going in and out of our cyber vault, or how to configure virtual private networks (VPNs) to make that happen.

But when we got together a few years ago and I looked at the Unisys Stealth software from a zero trust network perspective – agents on the machines rather than just firewalls – the vault becomes invisible.

When I first saw how secure those tunnels Unisys creates to our Dell vault are, I quickly realized that not only does it allow us to manage everything from outside in a new way – we can also monitor everything from outside. It allows us to take what we know is clean data inside the vault and restore it quickly through one of those secure Stealth tunnels back to the outside.

That is hugely important. We all know there are various ways to secure communications like this. Probably the least secure nowadays are VPNs, or remote access, if you will. The next most secure, quite frankly, is firewall or port-based access, and then the most secure is, I believe, zero trust software like we get with Unisys Stealth.

Peters: It’s not that I want to beat down on firewalls, because firewalls and ancillary technologies are very effective in protecting organizations – but they’re not 100 percent effective. If they were, we wouldn’t be talking about ransomware at all. The reason that we are is because breaches occur. The bad guys go after the low-hanging fruit, and they’re going to hit those organizations first. Then they’re going to get better at their craft and they’re going to go after more-and-more organizations.

Even when organizations have excellent security, you can’t always protect against the things that people do. Or now, with SolarWinds, you can’t even trust the software that you’re supposed to trust. There are more avenues into an organization. There are more means of compromise. And the bad guys can monetize what they are doing through Bitcoin in these ransom demands.

So, at the end of the day, the threats to organizations are changing. They’re evolving, and even with the best defenses an organization has, you’re probably going to have to plan on being compromised. When the compromise happens, you have to ask, “What do we do now?”

Gardner: Are there any examples that you can point to and show how well recovery can work? Do we have use cases or actual customer stories that we can relate to show how zero trust cyber recovery works when it’s done properly?

Get educated on recovery processes

Finley: Sure, one happened not too long ago at a school system in California, one of the biggest in that part of the state. That school system worked with us to procure the cyber recovery solution, created a cyber vault with the third copy, and secured all of it. We installed it, got it up and running, and moved data into the vault on a Thursday. Then, over that weekend, the school system had a cyber event — and they had just gotten the vault up and running and copied all of the critical data into it.

The data in the vault was secure. They were able to recover it as soon as the FBI said they forensically could, because the data was secure. It saved a lot of time, effort, and money.

Now, contrast that with a couple of other major attacks on companies in the last 120 days. In one, they had no cyber vault. The customer data was attacked in production, and a lot of the DR data was attacked as well. That particular set of events came through a whole series of social engineering, but systems were taken down and encrypted, and a lot of the data was destroyed.

It took them days, if not weeks, to begin the recovery process because of a lot of things that we all need to be aware of. If you don’t have data that you know is secured somewhere else and is clean, you’re going to have to verify that it’s clean before you can recover it. You’re going to have to do test recoveries to systems and make sure you’re not restoring malware. That’s going to take a long time. And you’re not even going to be able to do that until law enforcement tells you that you can.

Also, when you’re in the middle of an incident response, regardless of who you are, the last thing you’re going to do is connect to the Internet. So if your data is stuck somewhere in a public cloud or clouds, you’re not going to be able to get it while you’re in the middle of an incident response.

The FBI characterizes your systems as a crime scene, right? They put up yellow tape around the crime scene, which is your network. They are not going to allow anybody in or out until they’re satisfied they’ve gathered all the data to be able to figure out what happened. A lot of folks don’t know that, but it is simply true.

So having your critical data accessible offline, on the other side of that crime scene, and having it scrubbed every day to make sure it is absolutely clean, is very important.

In the case of that second company, it took days, if not weeks, before they could recover information.

There is a third example. The IT people there told me the cyber vault saved their company, and “saved our butts,” they said. In this particular case, the data was encrypted in all of their systems. They were using backup software to write to a virtual client, and they were copying that data from virtual clients into our cyber vault.

They also had our physical appliances, called Data Domain from Dell, in production and writing into the cyber vault. They did not have our analytics software to scrub the data and make sure it was clean, because it was an older implementation. At the end of the day, everything in production was gone. But they went to the vault and realized that the data there was all still good.

The bad guys couldn’t get there. They couldn’t see the cyber vault, didn’t know how to get there, and so there was no way they could get to that information. In this case, they were able to spin up and restore it rather quickly.

In another incident, the cyber vault included our CyberSense software, which does cyber analytics on the data being stored. We can verify the data is clean at a 99.7 percent effectiveness level and tell the customer the data is restorable and clean. In this case, the FBI got involved.

The FBI actually used the information from our CyberSense software to help them ascertain the who, what, when, and where of what happened. Once they knew that, and knew the stored data was clean, we were able to do a more rapid recovery.

Plan ahead with precise processes

Peters: What’s important too is knowing what to do. For example, what applications are you going to recover first? What do you need to do to get your operations running? Where are you going to find the needed files? Who’s going to actually do the work? What systems you are going to recover them onto?

Have a plan of action versus, “Okay, we’re going to figure this out right now.” Have a pre-prescribed runbook that’s going to take you through the processes, procedures, and decisions that need to be made. Where is the data going to be recovered from? What’s going to be determined? How is it recovered? Who’s going to get access to it?

All of these things. There’s a whole plan that goes into this. This is different from DR. This is different from backup; it’s way different, it’s its own animal. And this is another place where Dell expertise comes in: being able to do the consulting work with an organization to define the plan, or the runbook, so that they can recover.
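
As a concrete illustration of what “defining the runbook” can mean in practice, here is a minimal sketch of a recovery plan captured as data rather than as tribal knowledge. The application names, vault paths, targets, and owners are invented for the example.

```python
# Minimal sketch of a recovery runbook captured as data, so the order of
# recovery, the clean sources, the targets, and the owners are decided before
# an incident rather than during one. All names here are invented examples.
from dataclasses import dataclass


@dataclass
class RunbookStep:
    order: int        # recovery priority (1 = restore first)
    application: str  # what is being restored
    source: str       # where the verified-clean copy lives in the vault
    target: str       # isolated environment to restore into first
    owner: str        # team that executes and signs off on this step


RUNBOOK = [
    RunbookStep(1, "Active Directory and DNS", "vault://rebuild-materials", "clean-room-esx01", "identity-team"),
    RunbookStep(2, "Trade database", "vault://trade-db/last-verified", "clean-room-db01", "dba-team"),
    RunbookStep(3, "Order-entry application", "vault://apps/order-entry", "clean-room-app01", "app-team"),
]


def print_runbook(steps) -> None:
    """Print the recovery sequence so responders can follow it under pressure."""
    for step in sorted(steps, key=lambda s: s.order):
        print(f"{step.order}. Restore {step.application} from {step.source} "
              f"to {step.target} (owner: {step.owner})")


if __name__ == "__main__":
    print_runbook(RUNBOOK)
```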

Finley: I also wanted to point out a consideration about ransomware payments. Making the payment is not always a clean option, because of the U.S. Treasury’s Office of Foreign Assets Control (OFAC). If an organization pays the ransom, and the recipients of that payoff are considered a threat to the United States, the organization may be breaking another law by paying them.

So that needs to be taken into consideration if an organization is breached for ransom. If they pay the ransom off, they may be breaking a federal law.

Gardner: Do the Dell cyber recovery vault and Unisys Stealth technologies enable a crawl, walk, and run approach to cyber recovery? Can you identify those corporate jewels and intellectual property assets, and then broaden it from there? Is there a way to create a beachhead and then expand?

Build the beachhead first

Finley: Yes, we like to protect what we call critical rebuild materials first. Build the beachhead around those critical materials, then get materials like Active Directory and DNS zone tables into the vault.

Next, put the settings for networks, security logs, and event logs into the vault — the materials from your production environment that you would need to pull out of the vault to make everything work again.

If you have studied the Maersk attack in 2017, they didn’t have any of that, and it was a very bad day. They finally found copies in Africa, but if they hadn’t, it would have been a very bad month or year. It has happened to many organizations besides them; theirs just happened to be the most publicized.

So with that in mind, get those materials into the vault as a beachhead, if you will. Let’s build the notion of this third location together, secure it with Unisys Stealth and with an air gap wrapped in Stealth, and protect all of the connections in and out of the vault with Stealth using zero trust. Let’s take those critical materials and build that beachhead. I’ve seen great success in doing that, and then gathering maybe a total of three to five of the most critical business applications that a firm may have and concentrating on them first.
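
A rough sketch of that phased beachhead, expressed as a simple protection plan, might look like the following. The asset names are examples only, not a recommended configuration.

```python
# Sketch of a phased "beachhead" protection plan for the vault: critical
# rebuild materials first, then a handful of key applications, then the long
# tail over time. The entries are illustrative examples only.
PROTECTION_PHASES = {
    "phase_1_rebuild_materials": [
        "active-directory-system-state",
        "dns-zone-tables",
        "network-and-firewall-configs",
        "security-and-event-logs",
    ],
    "phase_2_critical_apps": [
        # the three to five applications whose loss stops the business
        "color-matching-db",
        "trade-db",
        "order-entry",
    ],
    "phase_3_expand_over_time": [],  # remaining applications, added as the program matures
}

for phase, assets in PROTECTION_PHASES.items():
    print(phase, "->", ", ".join(assets) or "(to be defined)")
```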

Here’s what we don’t want to do. I see no success in sitting down and saying, “Okay, we’re going to go through 150 different applications, with all of their dependencies, and we’re going to decide which of those pieces go into the cyber vault.”

It can be done, it has been done, and we have consulting that can help do that between Dell and Unisys, but let’s not start that way. Let’s instead start like we did recently with a big, big company in the U.S. We started with critical materials, we chose five major applications first, and for the first six months that’s what we did.

We protected that environment and those five major applications. And as time goes on, we will move other key applications into that cyber vault. But we decided not to boil the ocean and not to look at 2,000 different applications and put all of that data into the vault.

I recently talked to a firm that does pharmaceuticals. Intellectual property is huge for them. Putting their intellectual property into the cyber vault is really key. It doesn’t mean all of their systems. It means they want intellectual property in the vault, those critical materials. So build the beachhead and then you can move any number of things into it over time.

Peters: We have a demonstration to show what this whole thing looks like. We can show what it looks like to make things disappear on your network through cloaking, to move data from a production environment into a vault and retention-lock it, to analyze the data and find out if something bad is in it, and to select the last known good copy of data and start to rebuild systems in your production environment.

If somehow you had an environment you’re recovering and malware manages to slip inside of it, we can detect that and shut it down in about 10 to 15 seconds. For organizations interested in seeing this working in real time, we have a live demo.

Finley: That’s a powerful, powerful demo for all of the folks who are listening. You can see this thing work from beginning to end to see how the buttons are put in and how the data essentially moves out of scrubbing of the data to make sure it’s clean. It was fascinating for me the first time I saw this. It was great. 

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Unisys and Dell Technologies.


Rethinking employee well-being means innovative new support for the digital work-life balance

The tumultuous shift over the past year to nearly all-digital and frequently at-home work has amounted to a rapid-fire experiment in human adaptability.

While there are many successful aspects to the home-exodus experiment, as with all disruption to human behavior, there are also some significant and highly personal downsides.

The next BriefingsDirect work-from-home strategies discussion explores the current state of employee well-being and examines how new pressures and complexity from distance working may need new forms of employer support, too.

 Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To learn more about coping in an age of unprecedented change in blended work and home life, we’re now joined by Carolina Milanesi, Principal Analyst at Creative Strategies and founder at The Heart of Tech; Amy Haworth, Senior Director, Employee Experience, at Citrix; and Ray Wolf, Chief Executive Officer at A2K Partners. The panel discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Amy, how are predominantly digital work habits adding to employee pressures and complexities? And at this point, is it okay not to be okay with all of these issues and new ways of doing things?

Haworth: Thanks, Dana. It’s such an important question. What we have witnessed in the last 12 months is an unfolding of the humanness of a very powerful transformational experience in the world. It is absolutely okay not to be okay. To be able to come alongside those who are courageous enough to admit it is one of the most important roles that organizations are being called upon to play in the lives of our employees. 

Oftentimes, I think about what’s happened in 2020 and 2021. It’s as if the tide went out, exposing fissures in our connectedness, in the way organizations operate — even in the support systems we have in place for employees.

We’ve learned that unless employees are okay, our organizational health is at risk, too. Taking care of employees and enabling employees to take care of themselves shifts the conversation to new, innovative ways of doing that.

The last 12 months have shown us that we’ve never faced something like this before, so it’s only natural that we lacked a lot of the support systems and mechanisms to enable us to get through it.

There has been some amazing innovation to help close that gap. But it’s also been as if we’ve been flying the plane, while also figuring out how to do this all better. So, absolutely, yes, there are new challenges — but also a lot of growth. Being able to come alongside and being able to raise the white flag when needed makes it worth doing.

Gardner: Carolina, the idea for corporations of where their responsibility is has shifted a great deal. It used to be that employees would drive out of the parking lot — and they’d be off on their way, and there was no further connection. But when they’re working at home and facing new forms of fatigue or emotional turmoil, the company is part of that process, too. Do you see companies recognizing that?

Milanesi: Absolutely. To be honest with you, it’s been a long time in coming because although I might drive away from the parking lot — for a lot of employees — that’s not when the work stops.

Either because you’re working across different time zones or because you’re on call, if you’re a knowledge worker, chances are that your days are not a nine-to-five kind of experience. That had not been fully understood. The balance that people have to find in working and their private life has been under strain for quite some time.

Now that we’ve been at home, there’s no escape [from work]. That’s the realization companies have come to — that we are in this changed world and we are all at home. It’s not just that I decided to be a remote worker, and it’s just me. It’s me and whoever else is living with me — a partner, or maybe parents that I’m looking after, and children, all co-sharing apartments and all of that.

So, the stress is not just mine. It’s the stress of all of the people living with me. That is where more attentiveness needs to come in, to understand the personal situations that individuals are in — especially for under-represented groups.

For example, if you think about women and how they feel about talking — or not talking — about their children or caregiver responsibilities, they often shy away from talking about it. They may think it reflects badly on them.

All of those stresses were there before, but they became exacerbated during the pandemic. This has made organizations realize how much weight is on the shoulders of their employees, who are human beings after all.

Gardner: Ray at A2K Partners, you probably find yourself between the companies and their employees, helping with the technology that joins them and makes them productive. How are you seeing the reaction of both the employees and the businesses? Are they coming together around this — or are we just starting that process?

Wolf: I think we’re only in the second inning here, Dana. In our conversations with chief human resources officers (CHROs), they come to the conversation saying, “Ray, is there a better way? Do we really need to live with the way things are for our employees, particularly with the way they interface with technology and the applications that we give them to get their jobs done?”

We’re able to reassure them that, yes, there is a better way. The level of dissatisfaction and anxiety that employees have working with technology doesn’t have to be there anymore. What’s different now is that people are not accepting the status quo. They’re asking for a better way forward. The great news — and we’ll get into this a little bit later — is there are a lot of things that can be done.

The concept of work-life balance, right? It’s no longer two elements at the end of a see-saw that’s in balance. It looks more like a puzzle, where you’re shifting in and out — often in 15-minute or 30-minute intervals — between your personal life and your work life.

So how can technology better facilitate that? How can we put people into their flow state so they have a clear cognitive view of what they need to get done, set the priorities, and lead them into a good state when they need to return to their family activities and duties?

Gardner: Amy, what hasn’t changed is the fundamental components of this are people, process, and technology. The people part, the human resources (HR) part, perhaps needs to change because of what we’ve seen in the last year.

Do you see the role of HR changing? Is it even being elevated in importance within the organization?

Empowered employees blend life, work 

Haworth: The role of HR really has elevated. I see it as an amplification of employee voice. HR is the employee advocate and the employee’s voice into the organization.

It’s one thing to be the voice when no one’s listening. It’s much more interesting to be the voice when people are listening and to steer the organization in the direction that puts talent at the center, with talent first.

We’re having discussions and dialog about what’s needed to create the most powerful employee experience, one where employees are seen or heard and feel included in the path forward. One thing that’s so clear is we are shaping this all together, collectively. We are together shaping the future in which we will all live.

Being able to include that employee voice as we craft what it means to go to work or to do work in the years ahead means in many ways that it’s an open canvas. There are many ways to do hybrid work, which clearly seems to be the direction most organizations are going. Hybrid is quite possibly the future direction education is heading, too.

A lot of rethinking is happening. As we harness that collective voice, HR’s leadership is bringing that to the table, bringing it into decisions, and entering into a more experimental mindset. Where we are looking to in the future and how we find ways to innovate around hybrid work is increasingly important.

Gardner: Carolina, when we look at the role of technology in all of this, how should an HR organization such as Amy’s use technology to help — rather than contribute to the problem?

Milanesi: That’s the key question, right? Technology cannot come as another burden that I have to deal with when it comes to employees.

I love Ray’s analogy of the puzzle of the life we live. I stopped talking about work-and-life balance years ago and started talking instead about working-life-blend because if you blend there’s room to maneuver and change. You can compromise and put less stress on one area versus the other.

So, technology needs to come in to help us create that blend – and it has to be very personal. The most important thing for me is that one size doesn’t fit all. We’re all individuals, we’re all different. And although we might share some commonalities, the way that my workflow is set up is very different from yours. It has to speak to me, because otherwise it becomes another burden.

So, one part is helping with that blend. Another part for technology to play is not making me feel that the tool I’m using is an overseer. There are a lot of concerns when it comes to remote working, that organizations are giving you tools to manage you — versus help you. That’s where the difference lies, right? For me, as an employee, I need to make sure that the tool is there to just help me do my work.

It doesn’t have to be difficult. It has to be straightforward. It keeps me in the flow, and helps me with my blended life. I also think that the technology needs to be context-aware. For example, what I need in the office is different from what I need when I’m at home or when I’m at the airport — or wherever I might be to doing work.

The idea that your task is dependent or is influenced by the context you’re in is important as well. But simplicity, security, and my privacy are all three components that are important to me and should be important to my organization.

Gardner: Ray, Carolina just mentioned a few key words: context, feelings, and the idea of an experience rather than fitting into what the coder had in mind. It wasn’t that long ago that applications pretty much forced people to behave in certain ways in order to fit set processes. 

What I’m hearing, though, is that we have to have more adaptable processes and technologies to account for a person’s experiences and feelings. Is that not possible? Or is it pie-in-the-sky to bring the human factor and the technology together?

Technology helps workers work better

Wolf: Dana, the great news is the technology is here today with the capability to do that. The sad part is the benchmark is still pretty low. The fact is, when it comes to providing technology to enable workers to get their jobs done, there is really very little forethought as to how it’s architected and orchestrated.

People are often simply given login information to the multiple applications that they need to use to get things done during the day. The most that we do in terms of consideration for these employees is create a single sign-on. So, for the first five minutes of your day, we have a streamlined, productive, and secure way to log in — but then it’s a free-for-all. Processes are standard across employee types. There’s no consideration for how the individual employee wants to get work done, or what works best for them.

In addition, we subject very highly talented and creative people to a lot of low-value, repetitive tasks. One of the things that CHROs bring up to me all the time is, “How can I get my employees working at the top of their skills range, as opposed to the bottom of their skills range?”

Today there are platforms such as Citrix Workspace that allow you to automate out those mundane tasks, take into consideration where employees should be spending their time, and allow them to contribute more to the critical business needs of an organization.

Gardner: Amy, to that point of the way employees perceive of their work value, are you seeing people mired in doing task-based work? Or are you seeing the opportunity for people to move past that and for the organization to support them so that they can do the work they feel most empowered by? How are organizations helping them move past task to talent?

Haworth: Great question, and I love how you phrase that move from task to talent. So a couple things come to mind. Number one, organizations are looking to take friction out of the work-day. That friction is energy, and that energy could be better spent for an employee doing something they love to do — something that is their core skill set or why they were hired into that organization to start with.

A recent statistic I heard was that average workflow tends to involve at least four different stops along an application’s path. Think about what it takes to submit an expense report.

As much as possible, we’re looking for ways that take friction out of those interactions so employees get a sense of progress at the end of the day. The energy they’re expending in their jobs and roles should feel like it’s coming back threefold.

Ray touched on the idea of flow, but the conversation in 2021, based on the data we’ve seen, shows that employees feel fatigued because of the workload. What emerged from a lot of the survey work across multiple research firms last year was this sense of fatigue. You know, “My workload doesn’t match the hours that I have in the day.”

So, in HR circles, we’re beginning to think about, “Well, what do we do about that?” Is this a conversation more about energy and energetic spend? Initially [in the pandemic] there was a lot of energy spent just transforming how things were done. And now we get to think about when things are done. When do I have the most energy to do that hard thing? And then, “How is the technology helping me to do it? And is it telling me when it’s probably time to take a break?”

At Citrix we’ve recently introduced some really interesting notifications to help with this idea of well-being so that integration of technology into the workday helps as an employee manages their energy – to take, for example, a five-minute meditation break because they have been working solid for three hours. That might be a really good idea rather than that cup of coffee, for example.

So we’re starting to see a combination of the helpfulness of technology in a way that’s invited by employees. Carolina makes a great point about the privacy concerns, and so it comes in a way that’s invited by employees. That ultimately enables a state of flow and that feeling of progress and good use of the talent that each employee brings into the organization.

Gardner: Carolina, when we think about technology of 10 or more years ago, oftentimes developers would generate a set of requirements, create the application, and throw it over the wall. People would then use it. 

But what I just heard from Amy was much more about the employee having agency in how they use the technology, maybe even designing the flow and processes based on what works for them.

Have we gotten to the point where the technology is adaptive and people have a role in how services — maybe micro-services — are assembled? Are people becoming more like developers, rather than waiting for developers to give them the technology to use?

Optimize app workflows

Milanesi: Absolutely. Not everybody is in that kind of no-code environment yet to create their own applications from scratch, but certainly a lot of people are using micro-apps that come together into a workflow in both their private and work lives. 

Smartphone growth marked the first time that each of us started to be more in control of the applications that create workflows in a private way. The arrival of bring-your-own-device in the enterprise also meant bringing your own applications into the enterprise.

As you know, it was a bit of the Wild West for a while, and then we harnessed that. Organizations that are most successful are the ones that stopped fighting this change and actually embraced it. To Amy’s point, there are ways to diminish and lower the friction that we feel as employees when we want to work in a certain way and to use all of the applications and tools, even ones that an IT department may not want us to. 

There is more friction and time loss in someone trying to go around that problem and creating back doors that bypass IT than for IT to empower me to do that work, as long as my assets and data are secure. As long as it’s secure, I should have a list of applications and tools that I can choose from and create my own best workflows.

Gardner: Ray, how do you see that balance between employee-agency and -agility and what the IT department allows? How do we keep the creativity flowing from the user, but at the same time put in the necessary guardrails?

Wolf: You can achieve both. This is not employee workflow at the sacrifice of security. That’s the state of technology today. Just in terms of where to get started with the idea of employees designing their workflows, this is exactly how we’re going about it with many customers today.

I mean, what an ironic thought: To actually ask the people involved in the day-to-day work what’s working for them and what’s not. What’s causing you frustration and is high-value to the company? So you can easily identify five places to go get started to automate and streamline.

And the beautiful thing about it is when you ask the worker where that frustration is, and you solve it, two things happen. One, they have ownership, and the adoption is very high, as opposed to leadership-driven decisions. And we see this happening every day. It’s kind of the “smart guy in the room” syndrome, where the people who don’t actually have to do the work are telling everybody what the workers want and how they should get things done. It doesn’t work that way.

The second is, once employees see — with as little as two to three changes in their daily workflow — what’s possible, their minds open up in terms of all the automation capabilities, all the streamlining that can occur, and they feel invigorated and energized. They become a much more productive and engaged member of the team. And we’re seeing this happen. It’s really an amazing process overall.

We used to think of work as 9 am to 5 pm — eight hours out of your awake hours. Today, work occurs across every waking hour. This is something that remote workers have known for a long time. But now some 45 percent to 50 percent of the workforce is remote. Now it’s coming to light. Many more people are feeling like they need to do something about it.

So we need to sense what’s going on with those employees. Some of the technology that we’re working on is evaluating and looking at someone’s schedule. How many back-to-back meetings have they had? And then enforcing a cognitive break in their schedules so people can take a breather — maybe take care of something in their personal lives.

And then, even beyond that — with today’s technology such as smart watches — we could look at things such as blood pressure and heart rates and decide if the anxiety level is too high or if an employee is in their proper flow. Again, we can then make adjustments to schedules, block out times on their calendars — or, you know, even schedule some well-being visits with someone who could help them through the stresses in their lives.
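
As a rough sketch of the kind of schedule check Wolf describes, the snippet below scans a list of meetings for long back-to-back runs and suggests a break. The event data is made up; a real implementation would read from a calendar or wearable API, which is not shown here.

```python
# Rough sketch of the schedule check described above: find long runs of
# back-to-back meetings and suggest a cognitive break. The event data is made
# up; a real version would read from a calendar or wearable API (not shown).
from datetime import datetime, timedelta


def needs_break(meetings, max_back_to_back=timedelta(hours=3)) -> bool:
    """Return True if meetings run back-to-back for longer than the allowed span."""
    if not meetings:
        return False
    meetings = sorted(meetings)
    run_start, run_end = meetings[0]
    for start, end in meetings[1:]:
        if start <= run_end:          # back-to-back or overlapping: extend the run
            run_end = max(run_end, end)
        else:                         # a gap resets the run
            run_start, run_end = start, end
        if run_end - run_start >= max_back_to_back:
            return True
    return run_end - run_start >= max_back_to_back


# Example: three back-to-back one-hour meetings trigger a suggested break.
day = datetime(2021, 6, 1)
schedule = [(day.replace(hour=h), day.replace(hour=h + 1)) for h in (9, 10, 11)]
if needs_break(schedule):
    print("You have been in meetings for three hours straight; take a five-minute break.")
```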

Gardner: Amy, building on Ray’s point of enhancing well-being, if we begin using technology to allow employees to be productive, in their flow, but also gain inference information to support them in new ways — how does that change the relationship between the organization and the employee? And how do you see technology becoming part of the solution to support well-being?

Trust enhances well-being

Haworth: There’s so much interesting data coming out over the last year about how the contract between employees and the organization is changing. There has been, in many cases, a greater level of trust. 

According to the research, many employees have trusted what their organizations have been telling them about the pandemic — more than they trusted state and local governments or even national governments. I think that’s something we need to pay attention to.

Trust is that very-hard-to-quantify organizational benefit that fuels everything else. When we think about attraction, retention, engagement, and commitment — some in HR believe that higher organizational commitment is the real driver to discretionary effort, loyalty, and tenure.

As we think about the role of the organization when it comes to well-being and how we build on trust where it’s healthy — how can we uphold that with high regard? How can we better bridge that into a different employer-employee relationship — perhaps one that’s better than we’ve ever seen before?

If we stand up and say, “Our talent is truly the human capital that will be front-and-center to helping organizations achieve their goals,” then we need to think about this. What is our new role? According to Maslow’s hierarchy of needs, it’s hard to think about being a high-performing employee if things are falling apart on the home front, and if we’re not able to cope.

For our organization, at Citrix, we are thinking about not only our existing programs and bolstering those, but we’re also looking for other partners who are truly experts in the well-being space. We can perhaps bring that new information into the organization in a way that integrates with and intersects into an employee’s day.

For us at Citrix, that is done through Citrix Workspace, and in many cases with the support of a manager. That’s because we know so much of the trust relationship is between the employee and the manager, and it is human first and foremost.

Then we also need to think about how we continue to evolve and learn as we go. So much of this is uncharted. We want to make sure we’re open to learning. We’re continuing to invest. We’re measuring how things are working. And we’re inviting that employee voice in — to help co-create.

Gardner: Carolina, from what we just heard from Amy, it sounds like there’s a richer, more intertwined relationship between the talent pool and the organization. And that they are connected at the hip, so to speak, digitally. It sounds like there’s a great opportunity for new companies and a solutions ecosystem to develop around this employee well-being category.

Do you see this as a growth opportunity for new companies and for organizations within existing enterprises? It strikes me that there’s a huge new opportunity.

Tech and the human touch

Milanesi: I do think there’s a huge opportunity. And that’s good and bad in my view because obviously, when there’s a lot of opportunity, there also tends to be fragmentation.

Many different things are going to be tried. And not everybody has the expertise to help. There needs to be an approach from the organization’s perspective so that these solutions are vetted.

But what is exciting is the role that companies like Citrix are taking on to become a platform for that. So there might be a start-up that has a great idea and then leverages the Citrix Workspace platform to deliver that idea.

Then you have the opportunity to use the expertise that Citrix brings to the table. They have been focused on workflows and employee empowerment for many years. What I’m excited to see is organizations that come out and offer that platform to make the emerging ecosystem even richer.

I also love what Amy said about human trust being first and foremost. That’s where I caution people: don’t make it all about technology. Technology should not be a crutch, where it comes in to try to make you suffer less but still does not solve the problem. And technology should not be the only solution you adopt.

I might have a technological check-in that tells me that I’m taking on too many meetings or that I should take a break, but there is nothing better than a manager giving you a call or sending you an email to let you know you are seen as a human, that your work is seen by other humans.

I love what you were saying earlier about the difference between the task and the talent. That’s another part where — if we have more technology that helps us with the mundane stuff and we can focus on what we enjoy doing — that also helps us showcase the value that we bring as employees, the value of the talent, not just the output.

A lot of times, some of these technology solutions that are delivered are about making me more productive. I don’t know about you guys, but I don’t wake up in the morning and say, “I want to be more productive today.” I wake up and want to get through the day. I want to enjoy myself; I want to make a contribution and to feel that I make a difference for the company I’m working for.

And that’s what technology should be able to do: Come in and take away the mundane, take away the repetitive, and help me focus on what makes a difference — and what makes me feel like I’m contributing to the success within my company.

Gardner: Ray, I would like to visit the idea of consequences of the remote-work era. Just as people can work from anywhere, that also means they can work for just about anyone.

If you’re working for a company that doesn’t seem to have your well-being as a priority and doesn’t seem to be interested in your talents as much as your tasks, you can probably find a few other employers quite easily from the very same spot that you’re in.

How has the competitive landscape shifted here? Do companies do this because it’s a nice thing to do? Or will they perhaps find themselves lacking the talent if the talent wants to work for someone who better supports them?

Employees choose work support

Wolf: Dana, that ultimately is the consequence. Once we get through this immediate situation from the pandemic, and digest the new learning about working remote, we will have choices.

Employers are paying attention to this in a number of ways. For example, I was just on the phone with a CHRO from a Fortune 50 company. They have added a range of well-being applications that help take care of their employees.

But there are also some cultural changes that need to occur. This CHRO was explaining to me that even though they have all these benefits – including 12 hours off a month, or more, for so-called mental health days – they are struggling with some of the managers. They are having trouble getting managers, some of whom may be later in their careers, to actually model these new behaviors and give employees and workers permission to take advantage of the benefits from these well-being applications.

So we have a way to go. But the ones who evolve culturally, and who pay attention to this change, are ultimately going to be the winners. It may be another 6 or 18 months, but we’ll definitely get there. In the interim, though, workers can do something for themselves.

There are a lot of ways to stay in tune with how you’re feeling, give yourself a break, and schedule your time better. I know we would like to have technology that forces that into the schedule, but you can do that for yourself now as an interim step. And I think there are a lot of possibilities here — and more not that far in the future.

There are things that could be done immediately to bring a little bit of relief, help people see what’s possible, and then encourage them to continue working down this path of the intersection of well-being and employee workflow.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Citrix.


How HPE Pointnext Tech Care changes the game for delivering enhanced IT solutions and support

The next BriefingsDirect Voice of Innovation discussion explores how services and support for enterprise IT have entered a new era.

For IT technology providers, the timing of the news couldn’t be better. That’s because those now consuming tech support are demanding higher-order value — such as improved worker productivity from hybrid services delivered across many remote locations.

At the same time, the underlying technologies and intelligence to enhance traditional help desk support are blossoming to deliver proactive — and even consultative — enhancements.

Stay with us here to examine how Hewlett Packard Enterprise (HPE) Pointnext Services has developed new solutions to satisfy those enhanced expectations for the next era of IT support. HPE’s new generation of readily-at-hand IT expertise, augmented remote services, and ongoing product-use guidance together propel businesses to exploit their digital domains — better than ever.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to share the Pointnext vision for the future of advanced IT operational services are Gerry Nolan, Director of Operational Services Portfolio at HPE Pointnext Services, and Rob Brothers, Program Vice President, Datacenter and Support Services, at IDC. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. 

Here are some excerpts:

Gardner: Rob, what are enterprise IT leaders and their consumers demanding of tech support in early 2021? How are their expectations different from just a year or two ago?

Brothers: It’s a great question, Dana. I want to jump back a little bit further than just a year or so ago. That’s because support has really evolved so much over the past five, six, or seven years.

If you think about product support and support in general back in the day, it was just that. It was an add-on. It was great for fix services. It was about being able to place a phone call to get something fixed.

But that evolved over the past few years due to the fact that we have more intelligent devices and customers are looking for more proactive, predictive capabilities, with direct access to experts and technicians. And now that all has taken a fast-track trajectory during the pandemic as we talk about digital transformation.

During COVID-19, customers need new ways to work with tech-support organizations. They need even more technical assistance. So, we see that a plethora of secure, remote-support capabilities have come out. We see more connected devices. We see that customers look for expertise over the phone — as well as via chat or via augmented reality. Whatever the channel, we see a trajectory and growth that has spurred on a lot of innovation — and not just the innovation itself, but the consumption of that innovation.

Those are a couple of the big differences I’ve seen in just the past couple of years. It’s about the need for newer support models, and a different way of receiving support. It’s also about using a lot of the new, proactive, and predictive capabilities built inside of these newer systems — and really getting connected back to the vendor.

Those enterprises that connect back to their vendors are getting that improved experience and can then therefore pass that better experience to their customers. That’s the important part of the whole equation — making sure that better IT experiences translate to those enterprise customers. It’s a very interesting time.

Gardner: I sense this is also about more collective knowledge. When we can gather and share how IT systems are operating, it just builds on itself. And now we have the tools in place to connect and collaborate better. So this is an auspicious time — just as the demand for these services has skyrocketed.

Brothers: Yes, without a doubt. I find the increased use of augmented reality (AR) to deliver support extremely interesting, too, and a great use case during a pandemic.

If you can’t send an engineer to a facility in-person, maybe you can give that engineer access to the IT department using Google Glass or some other remote-access technology. Maybe you can walk them through something that they may not have been able to do otherwise. With all of the data and information the vendor collects, they can more easily walk them through more issues. So that’s just one really cool use case during this pandemic.

Gardner: Gerry, do you agree that there’s been auspicious timing when it comes to the need for these innovative support services and the capability to deliver them technically?

Pandemic accelerates remote services

Nolan: Yes, there’s no question. I totally agree with Rob. We saw a massive spike with the pandemic in terms of driving to remote access. We already had significant remote capabilities, but many of our customers all of a sudden have a huge remote workforce that they have to deal with.

They have to keep their IT running with minimal on-site presence, and so you have to start quickly innovating and delivering things such as AR and virtual reality (VR), which is what we did. We already have that solution.

But it’s amazing how something like a pandemic can elevate that use to our thousands and thousands of technical engineers around the world who are now using that technology and solution to virtually join customer sites and help them triage, diagnose, and even do installations. It’s allowing them to keep their systems and their businesses running during a very tough period.

Another insight is we’ve seen customers struggling, even before the pandemic, with having enough technical personnel bandwidth. You know, how they need more people resources and skills as more new technologies hit the streets.

To Rob’s point, it’s difficult for customers to keep pace with the speed of change in IT. There’s more hunger for partners who can go deep on expertise across a wide plethora of technologies. So, there’s a variety of new support activities going on.

Brothers: Yes, around those technical capabilities, one of the biggest things I hear from enterprises is just trying to find that talent pool. You need to get employees to do some of the technical pieces of the equation on a lot of these new IT assets. And they’re just not out there, right?

They need programmers and big data scientists. Getting folks to come in to assist at that level is more and more difficult. Hence, working with the vendor for a lot of these needs and that technical expertise really comes in handy now.

Gardner: Right, when you can outsource — people do outsource. That’s been a trend for 10 or 15 years now.

What challenges do enterprises — and the IT vendors and providers — have in closing that skills gap?

DX demands collaboration

Brothers: I actually did a big study around digital transformation. One of the big issues I’ve seen within enterprises is a lot of siloed structures. The networking team is not talking to the storage team, or not talking to the server team, and protecting their turf.

As an alternative, you can have the vendor come in and say, “Look, we can do this for you in a simpler fashion. We can do it a little bit faster, too, and we can keep downtime out of your environment.”

But trying to get the enterprise convinced [on the outsourcing] can sometimes be tricky and difficult. So I see that as one of the inhibitors to getting some of these great tech services that the vendors have into these environments.

A second big challenge I see is around the big, legacy IT environments. This goes back to that connectedness piece I talked about. A lot of these legacy systems are mixed in with the newer systems. This is where you see a struggle within enterprises. They are asking, “Okay, well, how do I support this older equipment and still migrate to this new platform that I want to do a lot of cloud-based computing with and become more operationally efficient?” The vendors can assist with that, but it’s still the stovepipe silos you sometimes see in enterprises that can make transitions very difficult.

Gardner: Right. The fact is we have hybrid everything, and now we have to adjust our support and services to that as well.

Gerry, around these challenges, it seems we also have some older thinking around how you buy these tech services. Perhaps it has been through a warranty or a bolt-on support plan. Do we need to change the way we think about acquiring these services?

Customer experience choice 

Nolan: Yes, customers are all about experiences these days. Think about pretty much every part of your life — whether you’re going to the bank, booking a vacation, or even buying an electric car. They’ve totally transformed the experience in each of those areas.

IT is no different. Customers are trying to move beyond, as Rob was saying, that legacy IT thinking. Even if it’s contacting a support provider for a break-fix issue, they want the solution to come with an end-to-end experience that’s compelling, engaging, and in a way that they don’t need to think about all the various bits and pieces. The fewer decisions a customer has to make and the more they can just aim for a particular outcome, the more successful we’re going to be.

Brothers: Yes, when a customer invested $1 million in a solution set, the old mindset was that after three or four years it would be retired and they would buy a new one — but that’s completely changed.

Now, you’re looking at this technology for a longer term within your environment. You want to make sure you’re getting all the value out of it, so that support experience becomes extremely important. What does the system look like from a performance perspective? Did I get the full dollar value out of it?

That kind of experience is not just between the vendor and with my own internal IT department, but also in how that experience correlates out to my end-user customer. It becomes about bringing that whole experience circle around. It’s really about the experience for everybody in the environment — not just for the vendor and not just for the enterprise. But it’s for the enterprise’s customers. 

Gardner: Rob, I think it behooves the seller of the IT goods if they’ve moved from a CapEx to an OpEx model so that they can make those services as valuable as possible and therefore also apply the right and best level of support over time. It locks the customer in on a value basis, rather than a physical basis.

Brothers: Yes, that’s one great mindset change I’ve seen over the past five years. I did a study about six years ago, and I asked customers how they bought support. Overwhelmingly they said they just bought a blanket support contract. It was the same contract for all of the assets within the environment.

But just recently, in the past couple of years, that’s completely changed. They are now looking at the workloads. They’re looking at the systems that run those workloads and making better decisions as to the best type of support contract on that system. Now they can buy that in an OpEx- or CapEx-type manner, versus that blanket contract they used to put on it.

It’s really great to see how customers have evolved to look at their environments and say, “I need different types of support on the different assets I have, and which provide me different experiences.” That’s been a major change in just the past couple of years.

Nolan: We’re also seeing customers seek the capability to evolve and move from one support model to another. You might have a customer environment where they have some legacy products where they need help. And they’re implementing some new technologies and new solutions, and they’re developing new apps.

It’s really helpful for that customer if they can work with a single vendor — even if they have multiple, different IT models. That way they can get support for their legacy, deploy new on-premises technologies, and integrate that together with their legacy. And then, of course, having that consumption-as-a-service model that Rob just talked about, they also have a nice easy way of transitioning workloads over to hybrid models where appropriate.

I think that’s a big benefit, and it’s what the customers seem to be looking for more and more these days.

Gardner: Gerry, what’s the vision now behind HPE to deliver on that? What’s Pointnext Services doing to provide a new generation of tech support that accommodates these new and often hybrid environments?

Tech Care’s five steps toward support

Nolan: We’re very excited to launch a new support experience called HPE Pointnext Tech Care. It’s all about delivering on much of what’s just been said in terms of moving beyond a product break-fix experience to helping customers get the most out of that product — all the way from purchasing through its lifecycle to end-of-life.

Our main goal for HPE Pointnext Tech Care is to help customers maximize and expose all the value from that product. We’re going to do that with HPE Pointnext Tech Care through five key elements.

Products are going to be embedded with a support experience called HPE Pointnext Tech Care. It’s a very simple experience. It has some choices on the SLA side, but it’s going to dramatically simplify the buying and owning experience at HPE.

The first is to make it a very simple experience. Today, we have four different choices when you’re buying a product as to which experience you want to go with. Now with HPE Pointnext, products are going to be sold embedded with a support experience called HPE Pointnext Tech Care. It’s a very simple experience. It has some choices on the service-level-agreement (SLA) side, but it’s going to dramatically simplify the buying and owning experience for our HPE customers.

The second aspect is the digital-transformation component that we see everywhere in life. That means we’re embedding a lot of data telemetry into the products. We have a product called HPE InfoSight that’s now embedded in our technology being deployed.

InfoSight collects all that data and sends it back to the mother ship, which allows our support experts to gain all of those insights and help the customer with mitigation, prediction, capacity planning, and keeping that system running and optimized at all times. So, that’s one element of the digital component.
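
The discussion doesn't spell out InfoSight's actual interfaces, but the flow Nolan describes (collect health data on the device, send it to a central analytics service, and let support experts act on the resulting insights) can be illustrated with a minimal, hypothetical sketch. The endpoint, function names, and payload fields below are assumptions for illustration only, not the real InfoSight API.

```python
# Hypothetical sketch of the telemetry flow described above: gather a
# health snapshot from a device and post it to a central analytics service.
# Endpoint, field names, and payload shape are illustrative assumptions,
# not the actual HPE InfoSight interface.
import json
import time
import urllib.request

ANALYTICS_ENDPOINT = "https://analytics.example.com/ingest"  # placeholder URL

def collect_health_snapshot(device_id: str) -> dict:
    """Gather a minimal health snapshot; real telemetry would be far richer."""
    return {
        "device_id": device_id,
        "timestamp": time.time(),
        "cpu_utilization_pct": 42.5,   # stand-in values for illustration
        "disk_wear_pct": 7.1,
        "firmware_version": "1.2.3",
    }

def send_snapshot(snapshot: dict) -> int:
    """POST the snapshot as JSON and return the HTTP status code."""
    request = urllib.request.Request(
        ANALYTICS_ENDPOINT,
        data=json.dumps(snapshot).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return response.status

if __name__ == "__main__":
    print(send_snapshot(collect_health_snapshot("array-0042")))
```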

The other aspect is a very rich support portal, a customer engagement platform. We’ve already redone our support center on hpe.com and customers will see it’s completely changed. It has a new look and feel. Over the coming quarters, there will be more and more new capabilities and functionality added. Customers will be able to see dashboards, personalized views of their environments, and their products. They’ll get omni-channel access to our experts, which is the third element we are providing.

We have all this great expertise. Traditionally, you would connect with them over the telephone. But going forward, you’re going to have the capability, as Rob mentioned, for customers to do chat. They may also want to watch videos of the experts. They may want to talk to their peers. So we have a moderated forum area where customers can communicate with each other and with our experts. There’s also a whole plethora of white papers and Tech Tip videos. It’s a very rich environment.

Then the fourth HPE Pointnext Tech Care element touches on a key trend that Rob mentioned, which goes beyond break-fix. With HPE Pointnext Tech Care, you’ll have the capability to communicate with experts beyond just talking about a broken part of your system. This will allow you to contact us and talk about things such as using the product, or capacity planning, or configuration information that you may have questions about. This general tech guidance feature of HPE Pointnext Tech Care, we believe, is going to be very exciting for customers, and they’re going to really benefit from it. 

And lastly, the fifth component is about a broader spectrum of full lifecycle help that our customers want. They don’t just want a support experience around buying the product, they want it all the way through its lifetime. The customer may need help with migration, for example, or they may need help with performance, training their people, security, and maybe even retiring or sanitizing that asset. 

With HPE Pointnext Tech Care, customers will have a nice, easy mechanism: a very robust, warm-blanket type of support that comes with the product and can easily be augmented with other menu choices. We’re very excited about the launch of HPE Pointnext Tech Care, and it comes with those five key elements. It’s going to transform the support experience and help customers get the most from their HPE products.

Gardner: Rob, how much of a departure do you sense the HPE Pointnext Tech Care approach is from earlier HPE offerings, such as HPE Foundation Care? Is this a sea change or a moderate change? How big of a deal is this?

Proactive, predictive capabilities

Brothers: In my opinion, it’s a pretty significant change. You’re going to get proactive, predictive capabilities at the base level of the HPE Pointnext Tech Care service that a lot of other vendors charge a real premium for.

I can’t stress enough how important it is for those proactive, predictive capabilities to come with these environments. A survey I did not long ago fed into a cost-of-downtime study. In that study, customers saw approximately 700 hours of downtime per year across their environments, covering servers, storage, networking, and security, and taking human error into account. When customers enabled proactive, predictive capabilities, they saved approximately 200 of those hours. That’s because of what those proactive, predictive capabilities can do at that base layer. They allow you to do the one big thing that prevents downtime — and that’s patch management and patch planning.
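
A few lines of arithmetic make the scale of those figures concrete. The cost-per-hour value below is a placeholder assumption for illustration; it does not come from the study.

```python
# Back-of-the-envelope view of the downtime figures Brothers cites.
# The cost-per-hour value is an assumed placeholder for illustration only.
annual_downtime_hours = 700      # approximate downtime across the environment, per the study
hours_saved_proactive = 200      # approximate savings with proactive, predictive support
assumed_cost_per_hour = 10_000   # placeholder figure; actual cost varies widely by business

savings_pct = hours_saved_proactive / annual_downtime_hours * 100

print(f"Downtime avoided: {hours_saved_proactive} h (~{savings_pct:.0f}% of the total)")
print(f"Illustrative cost avoided: ${hours_saved_proactive * assumed_cost_per_hour:,}")
```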

Now, those technical experts that Gerry talked about can access all of this beautiful, feature-rich information and data. They can feed it back to the customer and say, “Look, here’s how your environment looks. Here’s where we see some areas that you can make improvements, and here’s a patch plan that you can put in place.”

Now technical experts can access all of this beautiful, feature-rich information and data. They can feed it back to the customer to make improvements. That’s precious information and data.

Then all of the data comes back from enterprises, saying, “If I do a better job of that patching and patch planning, it takes a copious amount of planned and unplanned downtime out of my environment.” That’s precious information and data.

That’s the big fundamental change. They’re showing the real value to the customer so they don’t have to buy some of those premium levels. They can get that kind of value in the base level, which is extremely important and provides that higher-order experience to end-user customers. So I do think that’s a huge fundamental shift, and definitely a new value for the customers.

Gardner: Rob, correct me if I’m wrong, but having this level of proactive, baked-in-from-the-start support comes at an auspicious time, too, because people are also trying to do more automation with their security operations. It seems to me that we’re dovetailing the right approaches for patching and proactive maintenance along with what’s needed for security. So, there’s a security benefit here as well?

Brothers: Oh, massive. Especially in this day and age, with a lot of the security breaches we’ve had over the past year due to new remote access to so many systems. Yes, it definitely plays a major factor in how enterprises should be thinking about their patching and patch planning.

Gardner: Gerry, just to pull on that thread again about data and knowledge sharing, the more you get the relationship that you’re describing with HPE Pointnext Tech Care — the more back and forth of the data and learning what the systems are doing — and you have a virtuous cycle. Tell us how the machine learning (ML) and data gathering works in aggregate and why that’s an auspicious virtuous cycle.

Nolan: That’s an excellent question and, of course, you’re spot-on. The combination is of the telemetry built into the actual products through HPE InfoSight, our back-end experts, and the detailed knowledge management processes. We also have our experts who are watching, listening, and talking to customers as they deal with issues.

That means you have two things going on. You have the software learning over time and we have rules being built in there so that when it spots an issue it can go and look for all the other similar environments and then help those customers mitigate and predict ahead of time.

That means that customers will immediately get the benefit of all of this knowledge. It might be a Tech Tip video. It might be a white paper. It might be an item or an article in a moderated forum. There’s this rich back-and-forth between what’s available in the portal and what’s available in the knowledge that the software will build over time. And all of this just comes to bear in a richer experience for the customer, where they can help either self-solve or self-serve. But if they want to engage with our experts, they’re available in multiple different channels and in multiple different ways.
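
As a rough illustration of that loop (a rule spots an issue, the service finds other environments matching the same signature, and the relevant knowledge assets are surfaced to them), here is a small hypothetical sketch. The rule, fleet data, and knowledge titles are invented for the example.

```python
# Hypothetical sketch of the "rule spots an issue, then finds similar
# environments and the relevant knowledge" loop described above.
# Signatures, environments, and asset titles are illustrative assumptions.
RULES = [
    {
        "signature": {"model": "array-x", "firmware": "1.2.3"},
        "issue": "known latency regression",
        "knowledge": ["Tech Tip video #101", "White paper: firmware 1.2.4 upgrade"],
    },
]

FLEET = [
    {"env": "customer-a", "model": "array-x", "firmware": "1.2.3"},
    {"env": "customer-b", "model": "array-x", "firmware": "1.2.4"},
    {"env": "customer-c", "model": "array-y", "firmware": "1.2.3"},
]

def environments_at_risk(rule: dict, fleet: list) -> list:
    """Return environments whose attributes match the rule's signature."""
    signature = rule["signature"]
    return [e["env"] for e in fleet
            if all(e.get(key) == value for key, value in signature.items())]

for rule in RULES:
    matches = environments_at_risk(rule, FLEET)
    if matches:
        print(f"{rule['issue']}: notify {matches} with {rule['knowledge']}")
```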

Gardner: Rob, another area where 2+2=5 is when we can take those ML and data-driven insights that Gerry described across a larger addressable market of installed devices. And then, we can augment that with MyRoom-type technologies and the VR and AR capabilities that you described earlier.

What’s the new sum value when we can combine these insights with the capability to then deliver the knowledge remotely and richly? 

Autonomous IT reduces human error 

Brothers: That’s a really great point. The whole idea is to attain what we call autonomous IT. That means to have IT systems that are more on the self-repair side, and that have product pieces shipped prior to things going wrong. 

One of the biggest and most-costly pieces of downtime is from human error. If we can pull the human touch and human interaction out of the IT environment, we save each company hundreds of thousands of dollars a year. That’s what all this data and information will provide to the IT vendors. They can then say, “Look, let’s take the human interactions out of it. We know that’s one of the most-costly sides of the equation.”

If we can pull the human touch and interaction out of the IT environment we save money and reduce human error. We can optimize systems. It gets to the point where we’re relying on the intelligence of the systems to do more. That’s the direction we’re heading in. 

If we can do that in an autonomous fashion — where we’re optimizing systems on a regular basis, equipment is being shipped to the facility prior to anything breaking, we can schedule any downtime during quiet times, and make sure that workloads are moved properly — then that’s the endgame. It gets to the point where the human factor gets more removed and we’re relying more on the intelligence of the systems to do more.

That’s definitely the direction we’re moving in, and what we’re seeing here is definitely heading in that direction.
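
As a toy sketch of that autonomous loop (predict a failure, ship the replacement part ahead of time, and schedule the swap for a quiet window), the following is illustrative only; the risk threshold, quiet-hours window, and plan fields are assumptions rather than any vendor's implementation.

```python
# Toy sketch of the "autonomous IT" loop: predict a failure, ship the part
# ahead of time, and schedule the swap for a quiet window. The probability
# threshold and quiet-hours window are illustrative assumptions.
from typing import Optional

FAILURE_THRESHOLD = 0.7
QUIET_HOURS = range(2, 5)  # assumed low-traffic window, 02:00-04:59

def plan_maintenance(device: str, predicted_failure_prob: float) -> Optional[dict]:
    """Return a maintenance plan when predicted risk crosses the threshold."""
    if predicted_failure_prob < FAILURE_THRESHOLD:
        return None
    return {
        "device": device,
        "action": "ship replacement part",
        "window_start_hour": QUIET_HOURS.start,
        "note": "migrate workloads before the swap",
    }

print(plan_maintenance("drive-bay-7", 0.82))  # plan created
print(plan_maintenance("drive-bay-8", 0.10))  # None: no action needed
```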

Gardner: Yes, and in that case, you’re not necessarily buying IT support, you’re buying IT insurance.

Brothers: Yes, exactly. That gets back to the consumption models. HPE is one of the leaders in that space with HPE GreenLake. They were one of the pioneers to come up with a solution such as that, which takes the whole IT burden off of IT’s plate and puts it back on the vendor.

Nolan: We have a term for that concept that one of my colleagues uses. They call it invisible IT. That’s really what a lot of customers are looking for. As Rob said, we’re still some ways from that. But it’s a noble goal, and we’re all in to try and achieve it.

Gardner: So we know what the end-goal is, but we’re still in the progression to it. But in the meantime, it’s important to demonstrate to people value and return on investment (ROI).

Do we have any HPE Pointnext Tech Care examples, Gerry? Rob already mentioned a few of his studies that show dramatic improvements. But do we have use cases and/or early-adoption patterns? How do we measure when you do this well, and what do you get?

Benefits already abound

Nolan: There are a ton of benefits. For example, we already have extensive Tech Tip video libraries. We have chat implemented. We have the moderated forums up and running. We have lots of different elements of the experience already live in certain product areas, especially in storage.

Of course, many HPE products are already connected through HPE InfoSight or other tools, which means those systems are being monitored on a 24 x 7 basis. The software already monitors, predicts, and mitigates issues before they occur, as well as provides all sorts of insights and recommendations. This allows both the customer and our support experts to engage and take remediation action before anything bad happens. 

Customers seem to love this richer experience approach. Yes, there’s a lot more data and a lot more insights. But to have those experts on hand, able to gain insights and build an action plan from all of that data, is really important.

Now, in terms of some of the benefits that we’re seeing in the storage space, those customers that are connected are seeing 73 percent fewer trouble tickets and 69 percent faster time-to-resolution. To date, since InfoSight was first deployed in that storage environment alone, we’ve measured about 1.5 million hours of saved productivity time.

So there are real benefits when you combine being connected with ML tools such as InfoSight. When the rich value available in HPE Pointnext Tech Care comes together, it further reduces downtime, improves performance, and helps reach the end-goal that Rob talked about, the autonomous IT or invisible IT.

Gardner: Rob, we started our conversation about what’s changed in tech support. What’s changed when it comes to the key performance indicators (KPIs) for evaluating tech support and services?

Brothers: The big, new KPIs that we’re seeing do not just evaluate the experience that the enterprise has with the IT vendors. Although that’s obviously extremely important, it’s also about how that experience correlates to the experiences my end-users are receiving.

You’re beginning to see those measurements come to the fore. An enterprise has its own SLAs and KPIs with its end-users. How do those match the KPIs and SLAs I have back to my IT vendors? You’re beginning to see those merge and come together. You’re beginning to see new metrics put in place where you can evaluate the vendor through how well you’re delivering user experiences to your own end-users.

It takes a bit of time and energy to align that because it’s a fairly complex measurement to put in place. But we’re beginning to see enterprises seek that level of value from the vendors. And the vendors are stepping up, right? They’re beginning to show dashboards back to the enterprise that say, “Hey, here’s the SLA, here are the KPIs, here are the performance metrics that we’re collecting, and they should correlate fairly well to what you’re providing to your end-user customers.”
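
A minimal sketch of that alignment, assuming a couple of illustrative metrics and thresholds (the names and numbers are examples, not a standard schema), might compare vendor-side KPIs with the SLAs an enterprise owes its own end users:

```python
# Illustrative sketch of lining up vendor-side KPIs with the SLAs an
# enterprise owes its own end users. Metric names and thresholds are
# assumptions for the example.
vendor_kpis = {
    "time_to_resolution_hours": 4.0,   # measured by the IT vendor
    "availability_pct": 99.95,
}

end_user_slas = {
    "time_to_resolution_hours": 8.0,   # what the enterprise promises its users
    "availability_pct": 99.9,
}

def sla_supported(metric: str) -> bool:
    """Lower is better for resolution time; higher is better for availability."""
    if metric.endswith("_hours"):
        return vendor_kpis[metric] <= end_user_slas[metric]
    return vendor_kpis[metric] >= end_user_slas[metric]

for metric in end_user_slas:
    status = "supports" if sla_supported(metric) else "puts at risk"
    print(f"Vendor KPI for {metric} {status} the end-user SLA")
```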

Gardner: Gerry, if we properly align these values, it better fits with digital transformation because people have to perceive the underlying digital technologies as an enabler, not as a hurdle. Is HPE Pointnext Tech Care an essential part of digital transformation when we think about that change of perception?

Incident management transforms

Nolan: It totally is. One of our early Pointnext customers is a large, US retailer. They’ve gone through a situation where they had a bunch of technology. Each one had its own individual support contract. And they’ve moved to a more centralized and simpler approach where they have one support experience, which we actually deliver across each of their different products — and they’re seeing huge benefits.

They’ve gone from firefighting, with their small IT team predominantly focused on dealing with issues and support calls around hardware- and update-type problems. They were measuring themselves on incidents — how many incidents — and they were trying to keep that at a manageable level.

One large, US retailer has moved to a more centralized and simpler approach where they have one support experience — and they’re seeing huge benefits. 

Well, now, they’ve totally changed. The incidents have almost disappeared — and now they’re focused on innovation. How fast can they get new applications to their business? How fast can they get new projects to market in support of the business?

They’re just one customer who has gone through this transformation where they’re using all of the things we just talked about and it’s delivering significant benefits to them and to their IT group. And the IT group, in turn, are now heroes to their business partners around the US.

Gardner: I want to close with some insights into how organizations should prepare themselves. Rob, if you want to gain this new level of capability across your IT organization, and you want the consumers of IT in your enterprise to look to IT for solutions and innovation, what should you be thinking about now? What should you put in place to take advantage of the offerings that organizations such as HPE are providing with HPE Pointnext Tech Care?

Evaluating vendor experiences

Brothers: It all starts with the deployment process. When you’re looking at and evaluating vendors, it’s not just, “Hey, how is the product? Is the product going to perform and do its task?”

Some 99 percent of the time, the stand-alone IT system you’re procuring is going to solve the issue you’re looking to solve. The key is how well that vendor is going to get the system up and running in your environment, connected to everything it needs to be connected to, and then support and optimize it for the long run.

It’s really more about that lifecycle experience. So, as an enterprise, you need to think differently about how you want to engage with your IT vendor. You need to think about all the different performance KPIs, and match those back to your end-user customers.

The thought process of evaluating vendors, in my opinion, is shifting. It’s more about the type of experience I get with this vendor than about the product and its job. That’s one of the big transitional phases I’m seeing right now. Enterprises are thinking more about the experience they have with their partners than about whether the product is doing the job.

Gardner: Gerry, what do you recommend people do in order to get prepared to take advantage of such offerings as HPE Pointnext Tech Care?

Nolan: Following on from what Rob said, customers can already decide what experience they would like. HPE Pointnext Tech Care will be the embedded support experience that comes with their HPE products. It’s going to be very easy to buy because it’s going to be right there embedded with the product when the product is being configured and when the quote is being put together. 

HPE Pointnext Tech Care is a very simple, easy, and fully integrated experience. Customers are buying a full product experience, not a product where they then choose their support experience on the side. If they want something broader than just a product experience — what I call the warm blanket around their whole enterprise environment — we have another experience called Datacenter Care that provides that.

We also have other experiences. We can, for example, manage the environment for them using our management capabilities. And then, of course, we have our HPE GreenLake as-a-service on-premises experience. We’ve designed each of these experiences so they can totally live together and work together. You can also move and evolve from one to the other. You can buy products that come with HPE Pointnext Tech Care and then easily move to a broader Datacenter Care to cover the whole environment.

We can take on and manage some of that environment and then we can transition workloads to the as-a-service model. We’re trying to make it as easy and as fast as possible for customers to onboard through any and all of these experiences.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise Pointnext Services.


Work from anywhere unlocks once-hidden productivity and creativity talent gems

Now that hybrid work models have been the norm for a year, what’s the long-term impact on worker productivity? Will the pandemic-induced shift to work from anywhere agility translate into increased employee benefits — and better business outcomes — over time?

The next BriefingsDirect workspace innovation discussion explores how a bellwether UK accounting services firm has shown how consistent, secure, and efficient digital work experiences lead to heightened team collaboration and creative new workflows.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about the ways that distributed work models fuel innovation, please welcome our guests, Chris Madden, Director of IT and Operations for Kreston Reeves, LLP in the UK, and Tim Minahan, Executive Vice President of Business Strategy and Chief Marketing Officer at Citrix. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tim, we’ve been in a work-from-anywhere mode for a year. Is this panning out as so productive and creative that people are considering making it a permanent feature of their businesses?

Minahan: Dana, if there’s one small iota of a silver lining in this global crisis we’ve all been going through together it’s that it has shone a light on the importance of flexible and remote work models.

Companies are now rethinking their workforce strategies and work models — as well as the role the office will play in this new world of work. And employees are, too. They’re voting with their feet, moving out of high-cost, high-rent districts like San Francisco and New York because they realize they can not only do their work effectively remotely, but they can also be more productive and have a better work life.

A few data points that are important: This isn’t a temporary shift. The pandemic has opened folks’ eyes to what’s possible with remote work. In fact, in a recent Gartner study, 82 percent of executives surveyed plan to make remote work and flexible work a more permanent part of their workforce and cost-management strategies — and it’s for very good business reasons.

As the pandemic has proven, this distributed work model can significantly lower real estate and IT costs. But more importantly, the companies that we talk to, the most forward-looking ones, are realizing that flexible work models make them more attractive as an employer. And that prompts them to rethink their staffing strategies because they have access to new pools of talent and in-demand skills of workers that live well beyond commuting distance to one of their work hubs.

Businesses work from anywhere

Such flexible work models can also advance other key corporate initiatives like sustainability and diversity, which are increasingly becoming board-level priorities at most companies. Those companies that remain laggards — that are still somewhat reluctant to embrace remote work or flexible work as a more permanent part of their strategies — may soon be forced to change as their employees look for more flexible work approaches.

We’ve heard about the mass exodus from some of those large metropolitan areas to more suburban — and even rural locales. At Citrix, our own research of thousands of workers and IT and business executives finds that more than three-quarters of workers now prefer to shift to a more remote and flexible work model — even if it means taking a pay cut. And 80 percent of workers say that flexible work arrangements will be a top selection criterion when evaluating employers in the future.

Gardner: Chris, based on your experience at Kreston Reeves, do you agree that these changes to a more flexible and hybrid work location model are here to stay?

Learn How Digital Workspaces Help

Companies Support Hybrid Work Models

Madden: I would. At Kreston Reeves, we are expecting to move permanently to two or three days a week in an office, with the remaining time working from home and away from the office. That’s for many of the reasons already covered, such as reduced commuting time, reduced commuting cost, more time at home with family, a better work-life balance, and a lot less impact on the environment because people are travelling less and all those greenhouse gases are not going up into the atmosphere.

Gardner: We certainly hear how there are benefits to the organization. But how about the end users, the customers? Have your experiences at Kreston Reeves led you to believe that you can maintain the quality of service to your customers and consumers?

Madden: It’s probably ultimately going to be a balance. I don’t think it will shift totally one way or go back to how it was. I think for our customers and clients, there are distinct advantages, depending on the type of work. There isn’t always a need to go and have a face-to-face meeting that can take a lot of time for people, time that they could spend elsewhere in their business.

Depending on the nature of the interactions, quite a lot will shift to video calling, which has become the norm over the last year even as in the past people may have thought it impersonal.

Depending on the nature of the interactions, quite a lot will shift to video calling, which has become the norm over the last year even as in the past people may have thought it impersonal. So I think that will become a lot more accepted, and face-to-face meetings will be then kept for those meetings that really require everybody to sit down together.

Gardner: It sounds like we’re into a more fit-for-purpose approach. If it’s really necessary, that’s fine, we can do it. But if it’s not necessary, there are benefits to alleviating the pressure on people.

Tell us, Chris, about how your organization operates and how you reacted to the pandemic.

Madden: Yes, we began the best part of 10 years ago, when we moved to Citrix as the platform to distribute computing services to our users. Over the years, we have upgraded that and added on the remote-access solutions. And so, when it came to early 2020 and the pandemic, we were ready to take off. We could see where we were heading in terms of lockdowns and the pandemic, so we closed two or three of our offices — just to see how the system coped.

It was designed to do that, but would it really work when we actually closed the offices and everybody worked from home? Well, it worked brilliantly, and was very easy to deal with. And then a few days after that, the UK government announced the first national lockdown and everybody had to work from home within a day.

From our point of view, it worked really well. The only wrinkles in the whole process were to get everybody the appropriate apps on their phones to make sure they could have remote access using multifactor authentication. But otherwise, it was very seamless; the system was designed to cope with everybody working from anywhere — and it did.

Gardner: Chris, we often hear that there is a three-legged stool when it comes to supporting business process — as in people, technology, and process. Did you find that any of those three was primary? What led you to succeed in making such a rapid transition when it comes to the three pillars?

A new world of flexible work

Madden: I think it’s all three of those things. The technology is the enabler, but the people need to be taken with you, and the processes have to adapt for new ways of working. I don’t think any one of those three would lead. You have to do all three together.

Gardner: Tim, how does Citrix enable organizations to keep all three of those plates in the air spinning, if you will, especially on that point about the right applications on the right device at the right time?

Learn to Deliver Superior

Employee Work Environments

Minahan: What’s clear in our research — and what we’re seeing from our customers — is that we’re accelerating to a new world of work. And it’s a more hybrid and flexible world where that employee experience becomes a key differentiator.

To the point Chris was making, success is going to go to those organizations that can deliver a consistent and secure work experience across any and all work channels — all the Slacks, all of the apps, all the Teams, and in any work location.

Whether work needs to be done in the office, on the road, or at home, delivering that consistent and secure work experience — so employees have secure and reliable access to all their work resources — needs to come together to service end customers regardless of where they’re at.

Kreston Reeves is not alone in what they have experienced. We’re seeing this across every industry. In addition to the change in work models, we are also seeing a rapid acceleration of their digitization efforts, whether it is in the financial services sector, or other areas such as retail and healthcare. They may have had plans to digitize their business, but over the past year they’ve out of necessity had to digitize their business.

Kreston Reeves is not alone in what they have experienced. We’re seeing this across every industry. In addition to the change in work models, we are also seeing a rapid acceleration of digitization efforts. Over the past year out of necessity they have had to digitize their businesses.

For example, there’s the healthcare provider in your neck of the woods, up in the Boston area, Dana, that has seen a 27-times increase in monthly telemedicine visits. During the COVID crisis, they went from 9,000 virtual visits per month to over 250,000 per month — and they don’t think they’re ever going to go back.

In the financial services sector, we consistently hear of customers hiring thousands of new advisors and loan officers in order to handle the demand — all in a remote and digital environment. What’s so exciting, as I said earlier, is that as companies begin to use these approaches as key enablers, it liberates them to rethink their workforce strategies and reach for new skills and new talent well beyond commuting distance to one of their work hubs.

It’s not just about, “Should Sam or Suzy come back and work in the office full time?” That’s a component of the equation. It’s not even about, “Do Sam and Suzy perform at their best even when they’re working at home?” It’s about, “Hey, what should our workforce look like? Can we now reach skills and talent that were previously inaccessible to us because we can empower them with a consistent work experience through a digital workspace strategy?”

Gardner: How about that, Chris? Have you been simply repaving work-in-the-office paths with a different type of work from home? Or are you reinventing and exploring new business processes and workflows as a result of the flexibility?

Remote work retains trust, security

Madden: There is much more willingness amongst businesses and the people working in businesses to move quickly with technology. We’re past being cautious. With the pandemic, and the pressure that that brings, people are more willing to move faster — and be less concerned about understanding everything that they may want to know before embracing technology.

The other thing is with relationships with clients. There is a balance, to not go as far as some industries. Some never see their clients any longer because everything is done remotely, and everything is automated through apps and technology.

And the correct balance that we will be mindful of as we embrace remote working — and as we have more virtual meetings with clients — is that we still need to maintain the relationship of being a trusted advisor to the client — rather than commoditizing our product.

Gardner: I suppose one of the benefits of the way the technology is designed is that you can turn the knobs. You can experiment with those relationships. Perhaps one client will place a high value on in-person, face-to-face engagements. Another might not. But the fact is the technology can accommodate that dynamic shift. It gives us, I think, some powerful tools.

Madden: Absolutely. The key is that for those clients who really want to embrace the modern world and do everything digitally, there is a solution. If a client would still like to be very traditional and have lots of invoices and things on paper and send those into their accountant, that, too, can be accommodated.

But it is about moving the industry forward over time. And so, gradually I can see that technology will become a bigger contributor to the overall service that we provide and will probably do the basic accountancy work, producing an end result that a human then looks at and provides the answer back to the client.

Gardner: Now, of course, the materials that you’re dealing with are often quite sensitive and there are business regulations. How did the reaction of your workforce and your customer base come down on the issues of privacy, control, and security?

Madden: The clients trust that we will get it right and therefore look to us to provide the secure solution for them. So, for example, there are clients who have an awful lot of information to send us and cannot come into an office to hand over whatever that is.

We can get them onto new technologies that they haven’t used in the past, such as Citrix ShareFile, to share those documents with us securely and efficiently, and in a way that allows us to bring those documents into our systems and into the software we need to use to produce the accounts and the audits for the clients.

Gardner: Tim, you mentioned earlier that sometimes when people are forced into a shift in behavior, it’s liberating. Has that been the case with people’s perceptions around privacy and security as well?

Learn How Digital Workspaces Help

Companies Support Hybrid Work Models

Minahan: If you’re going to provide a consistent and secure work experience, the other thing folks are beginning to see as they embrace hybrid and more distributed work models is that their security posture needs to evolve too. People aren’t all coming into the office every day to sit at their desk on the corporate network, which had much better-defined parameters and arguably was easier to secure.

Now, in a truly distributed work environment, you need to not only provide a digital workspace that gives employees access to all the work resources they need — and that is not just their virtual desktops, but all of their software-as-a-service (SaaS) apps or web apps or mobile apps — it needs to be all in one unified experience that’s accessible across any location.

That is another dynamic we’re seeing. Companies are accelerating their embrace of new more contextual zero trust access security models as they look forward to a post-pandemic world.

It also needs to be secure. It needs to be wrapped in a holistic and contextual security model that fosters not just zero trust access into that workspace, but ongoing monitoring and app protection to detect and proactively remediate any access anomalies, whether initiated by a user, a bot, or another app.

And so, that is another dynamic we’re seeing. Companies are accelerating their embrace of new more contextual zero trust access security models as they look forward to preparing themselves for how they’re going to operate in a post-pandemic world.
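
As a rough illustration of that kind of ongoing, contextual monitoring, the sketch below flags workspace access that deviates from a user's usual pattern. The baseline profile and the rule are deliberately simplistic assumptions for illustration, not how Citrix implements its analytics.

```python
# Illustrative sketch of ongoing access monitoring: flag workspace access
# from an unusual country or outside usual working hours. The baseline
# profile and the rule are simplistic assumptions for the example.
from datetime import datetime

USUAL = {"alice": {"countries": {"GB"}, "hours": range(7, 20)}}

def access_anomalous(user: str, country: str, when: datetime) -> bool:
    """Flag access from an unusual country or outside usual working hours."""
    profile = USUAL.get(user)
    if profile is None:
        return True  # unknown user: treat as anomalous until verified
    return country not in profile["countries"] or when.hour not in profile["hours"]

print(access_anomalous("alice", "GB", datetime(2021, 3, 1, 9)))   # False: normal pattern
print(access_anomalous("alice", "US", datetime(2021, 3, 1, 3)))   # True: investigate
```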

Gardner: Chris, I suppose another challenge has been the heterogeneity of the various apps and data across the platforms and sources that you’re managing. How has working with a digital Workspace environment helped you provide a singular view for your employees and end customers? How do workspace environments help mitigate what had been a long-term integration issue for IT consumption?

Madden: For us, whether we are working from home remotely or are in an office, we are consuming the same desktop with the same software and apps as if we were sitting in an office. It’s really exactly the same. From a colleague’s point of view, whether they are working from home in a pandemic or sitting in their office in Central London, they are getting exactly the same experience with exactly the same tools.

And so for them, it’s been a very easy transition. They’re not having to learn the technology and different ways to access things. They can focus instead on doing the client work and making sure that their home arrangement is sorted out.

Gardner: Tim, regardless of whether it’s a SaaS app, cloud app, on-premises data — as long as that workspace is mobile and flexible — the complexity is hidden?

Workspace unifies and simplifies tasks

Minahan: Well, there is another challenge that the pandemic has shone a light on, which is this dirty little secret of the business world. And that is our work environment is too complex. For the past 30 years, we’ve been giving employees access to new applications and devices. And more recently, chat and collaboration tools — all with the intent to help get work done.

While on an independent basis, each of these tools adds value and efficiency, collectively they’ve created a highly fragmented and complex work environment that oftentimes interrupts, distracts, and stresses out employees. It keeps them possibly from getting their actual work done.

Just to give you a sense, with some real statistics: On any given workday, the typical employee uses more than 30 critical apps to get their work done, oftentimes needing to navigate four or more just to complete a single business process. They spend more than 20 percent of their time searching across all of these apps and all of these collaboration channels to find the information they need to make decisions to do their jobs.

Learn to Deliver Superior

Employee Work Environments

To make matters worse, now we’ve empowered these apps and these communication and collaboration channels. They’re all vying for our attention throughout the day, shouting at us about things we need to get done, and oftentimes distracting us from our core work. By some estimates, all of these notifications, chats, texts, and other disruptions interrupt us from our core work about every two minutes. That means the typical employee gets interrupted and forced to switch context between apps, emails, and other chat channels more than 350 times each day. Not surprisingly, what we are seeing is a huge productivity gap — and it is turning our top talent into task rabbits.

As companies think through this next phase of work, how do they provide a consistent and secure work experience and a digital workspace environment for employees no matter where they’re working? It not only needs to be unified — giving them access to everything they need — and secure, ensuring that corporate information, applications, and networks remain secure no matter where employees are doing the work, but it also needs to be intelligent.

Leveraging intelligent capabilities such as machine learning (ML), artificial intelligence (AI) assistants, bots, and micro apps personalizes and simplifies work execution. It’s what I call adding an experience layer between an employee and their work resources. This simplifies their interactions and work execution across all of the enterprise apps, content, and other resources so employees are not overwhelmed and can perform at their best no matter where work needs to get done.

Gardner: Chris, are you interested in elevating people from task rabbits to a higher order of value to the business and their end users and customers? And is the digital environment and workplace a part of that?

Madden: Absolutely. There are lots of processes across many firms and multiple campuses. They have grown up over the years, and they’ve always been done that way. This is a perfect time to reappraise how we do those things smarter digitally, using robotic process automation (RPA) tools and AI to take out a lot of the rework and move data from one system into another to produce the end result for the client.

We want to free up our people to do more value-added work — and it would be more interesting work for those people. It will give a better quality role for people, which will help us to attract better talent.

There is a lot of that on our radar for the coming year or two. We want to free our people up to do more value-added work — and it would be more interesting work for those people. It will give a better quality of role for people, which will help us to attract better talent. And given the fact that people now have a taste of a different work-life balance, there will be a lot of pressure on new recruits to our business to continue with that.

Gardner: Chris, now that your organization has been at this for a year — really thrust into much more remote flexible work habits — were there any unexpected and positive spins? Things that you didn’t anticipate, but you could only find out with 20–20 hindsight?

Virtual increases overall efficiency

Madden: Yes. One is the speed at which our clients were happy to switch to video meetings and virtual audits. Previously, on audits, we would send a team of people to a client’s premises and they would look through the paperwork, look at the stock in a warehouse, et cetera, and perform the audit physically. We were able to move quickly to doing that virtually.

For example, if we’re looking in a warehouse to check that a certain amount of stock is actually present, we can now do that by a video call and walk around the warehouse and explain what we’re looking for and see that on the screen and say, “Yes, okay, we know that that stock is actually available.” It was a really big shift in mindset for our regulators, for ourselves, and for our clients, which is a great positive because it means that we can become much more efficient going forward.

The other one that sticks out in my mind is the efficiency of our people. When you’re at home, focusing on the work and without the distractions of an office, the noise, and the conversations, people are generally more efficient. There is still the need for a balance because we don’t want everybody just sitting at home in silence staring at a screen. We miss out on some of the richness of business relationships and conversations with colleagues, but it was interesting how productivity generally increased during the lockdown.

Gardner: Tim, is that what you’re finding more generally around the globe among the Citrix installed base, that productivity has been on the uptick even after a 20- or 30-year period where, in many respects and measurements, productivity has been flat?

Minahan: Yes, that is a trend we have been seeing for decades. Despite the introduction of more technology, employee productivity continued to trend down, ironically, until the pandemic. We talked with employees, executives, and through our own research and it shows that more than 80 percent of employees feel that they’re as, if not more, productive when working from home — for a lot of the reasons that Chris mentions. What they’ve seen at Kreston Reeves has continued to be sustained.

It’s introduced the need for more collaborative work management tools in the work environment in order to foster and facilitate that higher level of engagement and that more efficient execution that we mentioned earlier. But overall, whether it’s the capability to avoid the lengthy commute or the ability to avoid distractions, employees are indeed seeing themselves as more productive.

In fact, we’re seeing a lot of customers now talk about how they need to rethink the very role of the office. Where it’s not just a place where people come to punch their virtual time cards, but is a place that’s more purpose-built for when you need to get together with a client or with other teammates to foster collaboration. You still keep the flexibility to work remotely to focus on innovation, creativity, and work execution that oftentimes, as Chris indicated, can be distracting or difficult to achieve strictly in an office environment.

Gardner: Chris, what’s interesting to me about your business is you’re in a relationship with so many client companies. And you were forced to go digital very rapidly — but so were they. Is there a digital transformation accelerant at work here? Because they all had to go digital at the same time, is there a network effect?

Because your customers have gone digital, Chris, could you then be better digital providers in your relationships together?

Collaborative communication

Madden: To an extent. It depends on the type of client industry that they’re in. In the UK, certain industries have been shut for a long time and therefore, they are not moving digitally. They are just stuck waiting until they are able to reopen. In the meantime, there’s probably very little going on in those businesses.

Those businesses that are open and working are very much embracing modern technology. So, one of the things that we’ve done for our audit clients, particularly, is providing different ways in which they can communicate with us. Previously, we probably had a straightforward, one-way approach. Now, we are giving clients three or four different ways they can communicate and collaborate with us, which helps everybody and moves things along a lot more quickly.

It is going to be interesting post-pandemic. Will people intrinsically go back to what they were always doing? Will what drove us forward keep us creating and becoming more digital or will the instinct be to go back to how it was because that’s how people are more comfortable?

Gardner: Yes, it will be interesting to see if there’s an advantage for those who embrace digital methods more and whether that causes a competitive advantage that the other organizations will have to react to. So we’re in for an interesting ride for a few more years yet.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Citrix.


How to gain advanced cyber resilience and recovery across the IT ops and SecOps divide

Cyber attacks are on the rise, harming brands and supply chains while fomenting consumer and employee distrust — as well as leading to costly interruptions and service blackouts.

At the same time, more remote workers and extended-enterprise processes due to the pandemic demand higher levels of security across all kinds of business workflows.

Stay with us now as the next BriefingsDirect discussion explores why comprehensive cloud security solutions need to go beyond on-premises threat detection and remediation to significantly strengthen extended digital business workflows.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about ways to shrink the attack surface and dynamically isolate process security breaches, please join Karl Klaessig, Director of Product Marketing for Security Operations at ServiceNow, and E.G. Pearson, Security Architect at Unisys. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Karl, why are digital workflows so essential now for modern enterprises, and why are better security solutions needed to strengthen digital businesses?

Klaessig: Dana, you touched on cyber attacks being on the rise. It’s a really scary time if you think about MGM Resorts and some of the really big attacks in 2020 that took us all by surprise. And 23 percent of consumers have had their email or social media accounts hacked, taken over, or used. These are all huge threats to our everyday life as businesses and consumers.

And when we look at so many of us now working from home, this huge new attack surface space is going to continue. In a recent Gartner chief financial officer (CFO) survey, 74 percent of companies have the intent to shift employees to work from home (WFH) permanently.

These are huge numbers, indicating a mad dash to build and scale remote-worker infrastructures. At the end of the day, the teams that E.G. and I represent, as vendors, strive hard to support these businesses as they seek to scale and address the explosive impact on cyber resilience and cyber operations in their enterprises.

Gardner: E.G., we have these new, rapidly evolving adoption patterns around extended digital businesses and workflows. Do the IT and security personnel, who perhaps cut their teeth in legacy security requirements, need to think differently? Do they have different security requirements now?

IT security requirements rise

Pearson: As someone who did cut their teeth in the legacy parts, I say, “Yes,” because things are new. Things are different.

The legacy IT world was all about protecting what they know about, and it’s hard to change. The new world is all about automation, right? It impacts everything we want to do and everything that we can do. Why wouldn’t we try to make our jobs as simple and easy as possible?

When I first got into IT, one of my friends told me that the easiest thing you can do is script everything that you possibly can, just to make your life simpler. Nowadays, with the way digital workflows are going, it’s not just about automating the simple things — now we’re able to easily automate the complex ones, too. We’re making it so anybody can jump in and get this automation going as quickly as possible.
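
A trivial example of that "script everything" habit: a repeatable health check that can be run by hand today and dropped into a scheduler or workflow engine tomorrow. The threshold and path are illustrative assumptions.

```python
# A small, repeatable health check in the "script everything you can" spirit.
# Threshold and path are illustrative assumptions.
import shutil

def disk_usage_ok(path: str = "/", max_used_pct: float = 85.0) -> bool:
    """Return True when disk usage at `path` is below the threshold."""
    usage = shutil.disk_usage(path)
    used_pct = usage.used / usage.total * 100
    print(f"{path}: {used_pct:.1f}% used (threshold {max_used_pct}%)")
    return used_pct < max_used_pct

if __name__ == "__main__":
    checks = [disk_usage_ok("/")]
    print("All checks passed" if all(checks) else "Attention needed")
```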

Gardner: Karl, now that we’re dealing with extended digital workflows and expanded workplaces, how has the security challenge changed? What are we up against?

Klaessig: The security challenge has changed dramatically. What’s the impact of Internet of things (IoT) and edge computing? We’ve essentially created a much larger attack surface area, right?

What’s changed in a very positive way is that this expanded surface has driven automation and the capability to not only secure workflows but to collaborate on those workflows.

We have to have the capability to quickly detect, respond, and remediate. Let’s be honest, we need automated security for all of the remote solutions now being utilized – virtually overnight – by hundreds of thousands of people. Automation is going to be the driver. It’s what really rises to the top to help with this.

Gardner: E.G., one of the good things with the modern IT landscape is that we can do remote access for security in ways that we couldn’t before. So, for IoT, as Karl mentioned, we’re talking about branch offices — not just sensors or machines.

We increasingly have a very distributed environment, and we can get in there with our security teams in a virtual sense. We have automation, but we also have the virtual capability to reach just about everywhere. 

Pearson: Nowadays, IoT is huge. Operational technology (OT) is huge. Data is huge. Take your pick, it’s all massive in scope nowadays. Branch offices? Nowadays, all of us are our own branch office sitting at our homes.

Now, everybody is a field employee. The world changed overnight. And the biggest concern is how do we protect every branch office and every individual who's out there? It used to be simpler: you created a site-to-site virtual private network (VPN), or you had communications that could be easily taken care of.

Now the communication is open to everybody because your kids want to watch Disney in the living room while you're trying to work in your office and your wife is doing her own work three rooms down. The world is different.

The networks that we have to work through are different. Now, instead of trying to protect an all-encompassing environment, it’s about moving to more individual or granular levels of security, of protecting individual endpoints or systems.

I now have smart thermostats and a smart doorbell. I don’t want anybody attaching to those. I don’t need somebody talking to my kids through those things. In the same vein, I don’t need somebody attaching to my company’s OT environment and doing something silly inside of there. So, in my opinion, it’s less about the overarching IT environment, and more about how to protect the individuals.

Gardner: To protect all of those vulnerable individuals then, what are the new solutions? How are Unisys Stealth and the ServiceNow Platform coming together to help solve these issues?

Collaborate to protect individuals

Klaessig: Well, there are a couple of areas I’ll touch on. One is that Unisys has an uncanny capability to do isolation and initially contain a breach or threat. That is absolutely paramount for our customers. We need to get a very quick handle on how to investigate and respond. Our teams are all struggling to scale faster and faster with higher volume. So, every minute bought is a huge minute gained. Right out of the gate, between Unisys and ServiceNow, that buys us time — and every second counts. It’s invaluable.

Another thing that’s driving our solutions are the better ties between IT and security; there’s much more collaboration. For a long time they tended to be in separate towers, so to speak. But the codependences and collaborative drivers between Unisys and ServiceNow mean that those groups are so much more effective. The IT and security teams collaborate thanks to the things we do in the workloads and the automation between both of our solutions. It becomes extremely efficient and effective.

Gardner: E.G., why is your technology, Unisys Stealth for Dynamic Isolation, a good fit with ServiceNow? Why is that a powerful part of this automation drive?

Pearson: The nice part about dynamic isolation is it’s just a piece of what we can do as a whole with Unisys Stealth. Our Stealth core product is doing identity-based microsegmentation. And, by nature, it flows into software-defined networking, and it’s based on a zero trust model.

The reason that’s important is, in software-defined networking, we’re gathering tons of information about what’s happening across your network. So, in addition to what’s happening at the perimeter with firewalls, you are able to get really good, granular information about what’s happening inside of your environment, too.

We’re able to gather that and send all of that fantastic information over the ServiceNow Platform to your source, whatever it may be. ServiceNow is a fantastic jumping point for us to be able to get all that information into what would have been separate systems. Now they can all talk together through the ServiceNow Platform. 

Klaessig: To add to that, this partnership solves the issues around security data volume so you can prioritize accurately because you’re not inundated. E.G. just described the perfect scenario, which is that the right data gets into the right solution to enable effective assessment and understanding to make prioritizations on threat responses and threat actions based on business impact.

That huge but managed amount of data that comes in is invaluable. It’s what drives everything to get to prioritizing the right incidents. 

Gardner: The way you’re describing how the solutions work together, it sounds like the IT people can get better awareness about security priorities. And the security people can perhaps get insights into making sure that the business-wide processes remain safe.

Critical care for large communities

Klaessig: You’re absolutely right because the continuous threat prioritization and breach protection means that the protective measures have to go through both IT and security. That collaboration and automation enables not just the operational resilience that IT is driving for, but also the cyber resilience that the security teams want. It is a handshake.

Those shared data and workloads are part of security, but they reflect actual IT processes, and vice versa. That makes both more effective.

Gardner: E.G., anything more to offer on this idea of roles, automation, and how your products come together?

Pearson: I wholeheartedly agree with Karl. IT and security can’t be siloed anymore. They can’t be separate organizations.

IT relies on what security operations puts in play, and security operations can’t do anything unless IT mitigates what security finds. So they can’t act individually any more. Otherwise, it’s like telling a football player to lace up their ice skates and go score a couple of goals.

Gardner: As we use microsegmentation and zero trust to attend to individual devices and users, can we provide a safer environment for sets of users or applications?

Pearson: Yes, we have to do this in smaller and smaller groups. It’s about being able to understand what those communities need and how to dynamically protect them. 

As we adjust to the pandemic and the humungous security breaches like we found at the end of 2020, protecting large communities can’t be done as easily. It’s so much easier to break those down into smaller chunks that can be best protected.

We can group things out based on use and the impact to the business. And again, this all contributes to the prioritization and the response when we coordinate between the two solutions, Unisys and ServiceNow.

Gardner: So it’s an identity-driven model but on steroids. It’s not just individual people. It’s critical groups.

Klaessig: Well said.

Pearson: Yes.

Gardner: How can people consume this, whether they're in IT, security, or even end users? If you're trying to protect yourself, how do you avail yourself of what ServiceNow and Unisys have put together?

Speed for bad-to-worse scenarios

Klaessig: The key is we target enterprises. That’s where we work together and that’s where ServiceNow workflows go. But to your point, nowadays I’m essentially a lone, solo office person, right? With that in mind, we need to remember those new best practices.

The appropriate workflows and processes within our collective solutions must reflect the actual individual users and processes. It goes back to our comments a couple of minutes ago, which is what do you use most? How often do you use it? When do you use it, and how critical is it? Also, who else is involved?

That’s something we haven’t touched on up until now — who else will be impacted? At the end of the day, what is the impact? In other words, if someone just had a credential stolen, I need the quick isolation from Unisys based on the areas of IT impacted. I can do that in ServiceNow, and then the appropriate response puts a workflow out and it’s automated into IT and security. That’s critical. And that’s the starting point for the other processes and workflows.

Gardner: We now need to consider what happens when you inevitably face some security issues. How do the ServiceNow Security Incident Response Platform and Unisys Stealth come together to help isolate, reduce, and stifle a threat rapidly?

Pearson: The reason such speed is important is that many of you all have already been impacted by ransomware. How many of you all have actually seen what ransomware will do if left unchecked for even just 30 minutes inside of a network? It's horrible. That, to me, is your biggest need.

Whether it is just a regular end-user or if it’s a full-scale, enterprise-level-type workflow, speed is a huge reason that we need a solution to work and to work well. You have to be fast to keep bad things from going really, really wrong.

One of the biggest reasons we have come together, with Stealth doing microsegmentation, building small communities, and protecting them, is to watch the flow of what happens, and with whom, across ports and protocols, because it is identity-based. Who's trying to access certain systems? We're able to watch those things.

As we’re seeing that information, we’re able to say if something bad is happening on a specific system. We’re able to show that weird or bad traffic flow is occurring, send that to ServiceNow and allow the automated operations to protect an end point or a server.

Because the process is automated, it brings the response down to less than 10 seconds, using automated workflows within ServiceNow. With dynamic isolation, we're able to isolate that specific system and cut it off from doing anything else bad within a larger network.

That’s huge. That gives us the capability to take on something fast that could bring down an entire system. I have seen ransomware go 30 minutes unchecked, and it will completely ravage an entire file server, which brings down an entire company for three days until everything can be brought back up from the backups. Nobody has time for that. Nobody has time for the 30 minutes it took to do something silly to cost you three days of extra work, not to mention what else may come from that.

With our combined capabilities, Unisys Stealth provides the information we're able to send to the ServiceNow Platform to have protection put in place to isolate and start to remediate within 10 seconds. That's best for everybody because 10 seconds' worth of damage is a whole lot easier to mitigate than 30 minutes' worth.
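
The detect, notify, and isolate loop described here can be pictured with a short sketch. The function names, event format, and host name below are hypothetical placeholders for the general pattern, not the actual Unisys Stealth or ServiceNow integration.

```python
# Illustrative sketch of the detect -> notify -> isolate loop described above.
# Function names and the event format are hypothetical placeholders, not the
# actual Unisys Stealth or ServiceNow APIs.
import json
import time

def send_to_workflow(event: dict) -> None:
    """Stand-in for forwarding the event to an incident-response workflow;
    a real integration would POST this payload to the platform's API."""
    print("workflow event:", json.dumps(event))

def isolate(host: str) -> None:
    """Stand-in for a dynamic-isolation action that cuts the host off from
    every segment except the management plane."""
    print(f"isolating {host}")

def handle_flow(host: str, anomalous: bool) -> None:
    if anomalous:
        send_to_workflow({
            "host": host,
            "detail": "unexpected lateral traffic",
            "observed_at": time.time(),
        })
        isolate(host)  # automated response in seconds, not the 30 minutes ransomware needs

handle_flow("file-server-07", anomalous=True)
```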

Klaessig: Really well-said, E.G.

Gardner: I can see why 2+2=6 when it comes to putting your solutions together. ServiceNow gets the information from Stealth that something is wrong, but then you could put the best of what you do together to work.

Resolve to scale with automation

Klaessig: We do. And this leads us to do even more automation. How can you get to that discovery point faster, and what does that mean to resolve the problem?

And there’s another angle to this. Our listeners and readers are probably saying, “I know we need to respond quickly, and, yes, you’re enabling me to do so. And, yes, you’re enabling me to isolate and do some orchestration that ties things up to buy me time. But how do I scale the teams that are already buried beyond belief today to go ahead and address that?”

That’s a bit overwhelming. And here’s another added wrinkle. E.G. mentioned ransomware, and the scary part is in 2020 ransomware was paid 50 percent of the time versus one-third of the time in 2019. Even putting aside the pandemic and natural disasters, this is what our teams our facing.

It again goes back to what you heard E.G. and me touch on, which is that automation of security and IT is what's critical here. Not only can you respond consistently quicker, but you'll be able to scale your teams and skills — and that's where the automation further kicks in.

Businesses can’t take on this type of volume around security management with the teams they have in place today. That’s why automation is so critical. As attacks escalate, they can’t just go and add more people in time, right?

In other words, businesses can’t take on this type of volume around security management with the teams they have in place today. That’s why automation is so critical. Comprehensive tooling increases detection on the Unisys side, and that gives them not only more time to respond but allows them to be more effective as well. As attacks escalate, they can’t just go ahead and add more people in time, right? This is where they need that automation to be able to scale with what they have.

It really pays off. We've seen customers benefit from a dollars-and-cents perspective, where they saw a 74 percent improvement in time-to-identify. And now 46 percent of their incidents are handled by automation, saving more than 8,700 hours annually for their teams. Just wrap your head around that. I mean, that's just a huge advantage from putting these pieces together and automating and orchestrating like E.G. has been talking about.

Gardner: Is it too soon, Karl, to talk about bots and more automation where the automation is a bit more proactive? What’s going to happen when the data and the speed get even more useful, but more compressed when it comes to the response time? How smart are these systems going to get?

Get people to do the right thing

Klaessig: The reality is, we’re already going there. When you think of machine learning (ML) and artificial intelligence (AI), we’re already doing a certain amount of that in the products.

As we leverage more of the great data from Unisys, it drives who can resolve those vulnerabilities based on their demonstrated history of dealing with those types of vulnerabilities. That's just an example of being able to use ML to align the right people to the right resolution. Because, at the end of the day, it still comes down to certain people doing certain things, and it always will. But we can use that ML and AI to put those together very quickly, very accurately, and very efficiently. So, again, it takes that time to respond down to seconds, as E.G. mentioned.
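
As a rough illustration of that idea, here is a toy, history-based assignment heuristic. The analyst names, categories, and scoring rule are invented for the example and stand in for the much richer ML-driven routing described above; they are not the products' actual logic.

```python
# Toy illustration of history-based incident routing. Names, categories, and
# the scoring rule are invented; this stands in for the ML-driven assignment
# described above, not the products' actual logic.
from collections import Counter, defaultdict

# Hypothetical resolution history: (analyst, vulnerability category)
history = [
    ("alice", "sql-injection"),
    ("alice", "sql-injection"),
    ("bob", "phishing"),
    ("bob", "sql-injection"),
    ("carol", "phishing"),
]

def build_profiles(events):
    """Count how many incidents of each category every analyst has resolved."""
    profiles = defaultdict(Counter)
    for analyst, category in events:
        profiles[analyst][category] += 1
    return profiles

def assign(category, profiles):
    """Route a new incident to whoever has resolved the most of that category."""
    return max(profiles, key=lambda analyst: profiles[analyst][category])

profiles = build_profiles(history)
print(assign("sql-injection", profiles))  # alice
print(assign("phishing", profiles))       # bob (first of the tied analysts)
```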

Gardner: Are we going to get to a point where we simply say, “J.A.R.V.I.S., clean up the network”?

Pearson: I hope so! Going back to my old days of being an admin, I was an extremely lazy admin. If I could have just said, “J.A.R.V.I.S., remediate my servers,” I would have been all over it.

I don’t think there’s any way we can’t move toward more automation and ML. I don’t necessarily want us to get to the point where Skynet is not going to delete the virus, saying, “I am the virus.” We don’t need that.

But being able to automate helps overcome the mundane, such as resetting somebody’s password and being able to pull a system offline that’s experiencing some sort of weird whatever it may be. Automating those types of things helps everybody go faster through their day because if you’re working a helpdesk, you’ve already gotten 19 people with their hair on fire begging for your attention.

If you could cut off five of those people by automating and very easily allowing some AI to do the work for you, why wouldn’t you? I think their time is more valuable than the few dollars it’s going to cost to automate those processes.

Klaessig: That’s going to be the secret to success in 2021 and going forward. You can scale, and the way you’re going to scale is to take out those mundane tasks and automate all of those different things that can be automated.

As I mentioned, 46 percent of the security incidents became automated for our customer. That’s a huge advantage. And at the end of the day, putting J.A.R.V.I.S. aside, the more ML we can get into it, the better and more repeatable the processes and the workflows will be — and that much faster. That’s ultimately what we’re driving toward as well.

Gardner: Now that we understand the context of the problem, the challenges organizations face, and how these solutions come together, I'm curious how this actually gets embedded into organizations. Is this something that security people do, that the IT people do, that the helpdesk people do? Is it all of the above?

Everybody has a role to reap benefits

Pearson: The way we usually get this going is there needs to be buy-in from everybody because it’s going to touch a lot of folks. I’m willing to bet Karl’s going to say similar things. It’s nice to have everybody involved and to have everybody’s buy-in on this.

It usually starts for us at Unisys with what we’re doing with microsegmentation and with a networking and security group. They need to talk to be able to get this rolled out. We also need the general IT folks because they’re going to have to install and get this rolled out to endpoints. And we need the server admins involved as well.

At the end of the day, this goes back to being a collaborative opportunity … for IT and security to join together. These solutions benefit both teams and can piggyback on investments they have already made elsewhere.

When it comes down to it, everybody’s going to have to be involved a little bit. But it generally starts with the security folks and the networking folks, saying, “How can I protect my environment just a little bit more than I was before?” And then it rolls from there.

And that’s a big advantage as well. Going forward, I strongly believe in — and I’ve seen the results of this — being a driver toward greater collaboration. It is that type of deployment and should be done in that manner. And then quite frankly, both organizations reap the benefits.

Pearson: Wholeheartedly.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Unisys and ServiceNow.

How HPE Pointnext ‘Moments’ provide a proven critical approach to digital business transformation

The next edition of the BriefingsDirect Voice of Innovation video podcast series explores new and innovative paths for businesses to attain digital transformation.

Even as a vast majority of companies profess to be seeking digital business transformation, few proven standards or broadly accepted methods stand out as the best paths to take.

And now, the COVID-19 pandemic has accelerated the need for bold initiatives to make customer engagement and experience optimization an increasingly data-driven and wholly digital affair.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. View the video.

Stay with us here to welcome a panel of experts as they detail a multi-step series of "Moments" that guide organizations on their transformations. Here to share the Hewlett Packard Enterprise (HPE) view on helping businesses effectively innovate for a new era of pervasive digital business are Craig Partridge, Yara Schuetz, Aviviere Telang, Christian Reichenbach, and Amos Ferrari of HPE.

The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Craig, while some 80 percent of CEOs say that digital transformation initiatives are under way, and they’re actively involved, how much actual standardization — or proven methods — are available to them? Is everyone going at this completely differently? Or is there some way that we can help people attain a more consistent level of success?

Partridge: A few things have emerged that are becoming commonly agreed upon, if not commonly executed upon. So, let’s look at those things that have been commonly agreed-upon and that we see consistently in most of our customers’ digital transformation agendas.

The first principle would be — and no shock here — focusing on data and moving toward being a data-driven organization to gain insights and intelligence. That leads to being able to act upon those insights for differentiation and innovation.

It’s true to say that data is the currency of the digital economy. Such a hyper-focus on data implies all sorts of things, not least of all, making sure you’re trusted to handle that data securely, with cybersecurity for all of the good things come that out of that data.

Another thing we’re seeing now as common in the way people think about digital transformation is that it’s a lot more about being at the edge. It’s about using technology to create an exchange value as they transact value from business-to-business (B2B) or business-to-consumer (B2C) activities in a variety of different environments. Sometimes those environments can be digitized themselves, the idea of physical digitization and using technology to address people and personalities as well. So edge-centric thinking is another common ingredient.

These may not form an exact science, in terms of a standardized method or industry standard benchmark, but we are seeing these common themes now iterate as customers go through digital transformation.

Gardner: It certainly seems that if you want to scale digital transformation across organizations, there needs to be consistency, structure, and common understanding. On the other hand, if everyone does it the same way, you don't necessarily generate differentiation.

How do you best attain a balance between standardization and innovation?

Partridge: It’s a really good question because there are components of what I just described that can be much more standardized to deliver the desired outcomes from these three pillars. If you look, for example, at cloud-use-enablement, increasingly there are ways to become highly standardized and mobilized around a cloud agenda.

And that doesn’t vary much from industry to industry. Moving toward containerization, for example, and leveraging microservices or developing with an open API mindset — these principles are pervasive in almost every industry. IT has to bring its legacy environment to play in that discussion at high velocity and high agility. So there is standardized on that side of it.

The variation kicks in as you pivot toward the edge and in thinking about how to create differentiated digital products and services, as well as how you generate new digital revenue streams and how you use digital channels to reach your customers, citizens, and partners. That’s where we’re seeing a high degree of variability. A lot of that is driven by the industry. For example, if you’re in manufacturing you’re probably looking at how technology can help pinpoint pain or constraints in key performance indicators (KPIs), like overall equipment effectiveness, and in addressing technology use across the manufacturing floor.

If you’re in retail, however, you might be looking at how digital channels can accelerate and outpace the four-walled retail experiences that companies may have relied on pre-pandemic.

Gardner: Craig, before we drill down into the actual Moments, were there any visuals that you wanted to share to help us appreciate the bigger picture of a digital transformation journey?

Partridge: Yes, let me share a couple of observations. As a team, we engage in thousands of customer conversations around the world. And what we’re hearing is exactly what we saw from a recent McKinsey report.

There are a number of reasons why seven out of 10 respondents in this particular survey say they are stalled in attaining digital execution and gaining digital business value. Those are centered around four key areas. First of all, communication. It sounds like such a simple problem statement, but it is sometimes so hard to communicate what is a quite complex agenda in a way that is simple enough for as many people as possible — key stakeholders — to rally behind and to make real inside the organization. Sometimes it's a simple thing of, "How do I visualize and communicate my digital vision?" If you can't communicate really clearly, then you can't build that guiding coalition behind you to help execute.

A second barrier to progress centers on complexity, so having a lot of suspended, spinning plates at the same time and trying to figure out what's the relationship and dependencies between all of the initiatives that are running. Can I de-duplicate or de-risk some of what I'm doing to get that done quicker? That tends to be a major barrier.

The third one you mentioned, Dana, which is, “Am I doing something different? Am I really trying to unlock the business models and value that are uniquely mine? Am I changing or reshaping my business and my market norms?” The differentiation challenge is really hard.

The fourth barrier is when you do have an idea or initiative agenda, then how to lay out the key building blocks in a way that’s going to get results quickly. That’s a prioritization question. Customers can get stuck in a paralysis-by-analysis mode. They’re not quite sure what to establish first in order to make progress and get to that minimum valuable product as quickly as possible. Those are the top four things we see.

To get over those things, you need a clear transformation strategy and clarity on what it is you're trying to do. As I always say, before the digital transformation — everything from the edge and the business model, to how you engage with customers and clients, through to the technology assembled to deliver those experiences and differentiation — you have to have a distinctive transformation strategy. It leads to an acceleration capability, getting beyond the barriers, and planning the digital capabilities in the right sequence.

You asked, Dana, at the opening if there are emerging models to accomplish all of this. We have established at HPE something called Digital Next Advisory. That's our joint customer engagement framework, through which we diagnose and pivot beyond the barriers that we commonly see in customers' digital ambitions. So that's a high-level view of where we see things going, Dana.

Gardner: Why do you call your advisory service subsets “Moments,” and why have you ordered them the way you did?

Moments create momentum for digital

Partridge: We called them Moments because in our industry if you start calling things services then people believe, “Oh, well, that sounds like just a workshop that I’ll pay for.” It doesn’t sound very differentiated.

We also like the way it expresses co-innovation and co-engagement. A moment is something to be experienced with someone else. So there are two sides to that equation.

In terms of how we sequence them, actually they’re not sequenced. And that’s key. One of the things we do as a team across the world is to work out where the constraint points and barriers are. So think of it as a methodology.

And as with any good methodology, there are a lot of tools in the toolkit. The key for us as practitioners in the Digital Next Advisory service is to know what tool to bring at the right point to the customer.

Sometimes that’s going to mean a communication issue, so let’s go solve for that particular problem first. Or, in some cases, it’s needing a differentiated technology partner, like HPE, to come in and create a vision, or a value proposition, that’s going to be different and unique. And so we would engage more specifically around that differentiation agenda.

There’s no sequencing; the sequencing is unique to each customer. And the right Moment is to make sure that the customer understands it is bidirectional. This is a co-engagement framework between two parties.

Gardner: All right, very good. Let’s welcome back Yara.

Schuetz: To reiterate what Craig mentioned, when we engage with a customer in a complex phenomenon such as digital transformation, it’s important to find common ground where we can and then move forward in the digital transformation journey specific to each of our customers.

Common core beliefs drive outcomes

We have three core beliefs. One is being edge-centric. And on the edge-centric core belief we believe that there are two business goals and business outcomes that our customers are trying to achieve.

In the top left, we have the human edge-centric journey, which is all about redefining customer experiences. In this journey, for example, the corporate initiatives could address the experiences of two personas: the customer or the employees.

These initiatives are designed to increase revenues and productivity via such digital engagements as new services, such as mobile apps. And to complement this human edge journey we have the physical journey, or the physical edge. Gaining insight and control means dealing with the physical edge. It's about using, for example, Internet of things (IoT) technology for the environment the organization works in, operates in, or provides services in. So the business objective in this journey consists of improving efficiency by means of digitizing the edge.

Complementary to the edge-centric side, we also have the core belief that the enterprise of the future will be cloud-enabled. By being cloud-enabled, we again separate the cloud-enabled capabilities into two distinct journeys.

The bottom right-hand journey is about modernization and optimization. In this journey, initiatives address how IT can modernize its legacy environment with, for example, multi-cloud agility. It also includes, for example, optimization and management of services delivery, and where different workloads are best hosted. We're talking about on-premises as well as different cloud models to focus the IT journey. That also includes software development, especially accelerating development.

This journey also involves development improvements around personas. The aim is to speed up time-to-value with cloud-native adoption. For example, calling out microservices or containerization to shift innovation quickly over to the edge, using certain platforms, cloud platforms, and APIs.

The third core belief that the enterprise of the future should strive for is the data-driven, intelligence journey, which is all about analyzing and using data to create intelligence to innovate and differentiate from competitors. As a result, they can better target, for example, business analytics and insights using machine learning (ML) or artificial intelligence (AI). Those initiatives generate or consume data from the other journeys.

And complementary to this aspect is bringing trust to all of the digital initiatives. It's directly linked to the intelligence journey because the data generated or consumed by the four journeys needs to be handled in a connected organization, with resiliency and cybersecurity playing leading roles, resulting in trust for internal as well as external stakeholders.

At the center is the operating model. And that journey really builds the center of the framework because skills, metrics, practices, and governance models have to be reshaped, since they dictate the outcomes of all digital transformation efforts.

These components form the enabling considerations when you're pursuing different business goals such as driving revenues, building productivity, or modernizing existing environments via multi-cloud agility. What many companies are really asking for right now is to put all of that in the context of everything-as-a-service.

Everything-as-a-service does not just belong to, for example, the cloud-enabled side. It’s not only about how you’re consuming technology. It also applies to the edge side for our customers, and in how they deliver, create, and monetize their services to their customers.

Gardner: Yara, please tell us how organizations are using all of this in practice. What are people actually doing?

Communicate clearly with Activate

Schuetz: One of the core challenges we’ve experienced together with customers is that they have trouble framing and communicating their transformation efforts in an easily understandable way across their entire organizations. That’s not an easy task for them.

Communication tension points tend to be, for example, how to really describe digital transformation. Is there any definition that really suits my business? And how can I visualize, easily communicate, and articulate that to my entire organization? How does what I’m trying to do with technology make sense in a broader context within my company?

So within the Activate Moment, we familiarize them with the digital journey map. This captures their digital ambition and communicates a clear transformation and execution strategy. The digital journey map is used as a model throughout the conversations. This tends to improve how an abstract and complex phenomenon like digital transformation can be delivered as something visual and simple to communicate.

Besides simplification, the digital journey map in the Activate Moment also helps describe an overview and gives a structure of various influencing categories and variables, as well as their relationship with each other in the context of digital transformation. It provides our customers guidance on certain considerations, and, of course, all the various possibilities of the application of technology in their business.

For example, at the edge, when we bring the digital journey map into the customer conversation in our Activate Moment, we don’t just talk about the edge generally. We refer to specific customer needs and what their edge might be.

In the financial industry, for example, we talk about branch offices as their edge. In manufacturing, we're talking about production lines as their edges. In retail, where you have public customers, we talk about the venues as the edge and how — in times like this and the new normal — they can redefine experiences and drive value for their customers there.

Of course, this also serves as inspiration for internal stakeholders. They might say, “Okay, if I link these initiatives, or if I’m talking about this topic in the intelligence space, [how does that impact] the digitization of research and development? What does that mean in that context? And what else do I need to consider?”

Such inspiration means they can tie all of that together into a holistic and effective digital transformation strategy. The Activate Moment engages more innovation on the customer-centric side, too, by bringing insights into the different and various personas at a customer’s edge. They can have different digital ambitions and different digital aspirations that they want to prosper from and bring into the conversation.

Gardner: Thanks again, Yara. On the thinking around personas and the people, how does the issue of defining a new digital corporate culture fit into the Activate Moment?

Schuetz: It fits in pretty well because we are addressing various personas with our Activate Moment. For the chief digital officer (CDO), for example, the impact of the digital initiatives on the digital backbone are really key. She might ask, “Okay, what data will be captured and processed? And which insights will we drive? And how do we make these initiatives trusted?”

Gardner: We’re going to move on now to the next Moment, Align, and orchestrating initiatives with Aviviere. Tell us more about the orchestrating initiatives and the Align Moment, please.

Align with the new normal and beyond

Telang: The Align Moment is designed to help organizations orchestrate their broad catalog of digital transformation initiatives. These are the core initiatives that drive the digital agenda. Over the last few years, as we’ve engaged with customers in various industries, we have found that one of the most common challenges they encounter in this transformation journey is a lack of coordination and alignment between their most critical digital initiatives.

And, frankly, that slows their time-to-market and reduces the value realized from their transformation efforts. Especially now, with the new normal that we find ourselves in, organizations are rapidly scaling up and broadening out their digital agendas.

As these organizations rapidly pivot to launching new digital experiences and business models, they need to rapidly coordinate their transformation agenda against an ever-increasing set of stakeholders — who sometimes have competing priorities. These stakeholders can be the various technology teams sitting in an IT or digital office, or perhaps the business units responsible for delivering these new experience models to market. Or they can be the internal functions that support internal operations and supply chains of the organizations.

We have found that these groups are not always well-aligned to the digital agenda. They are not operating as a well-oiled machine in their pursuit of that singular digital vision. In this new normal, speed is critical. Organizations have to get aligned to the conversation and execute on all of the digital agenda quickly. That’s where the Align Moment comes in. It is designed to generate deep insights that help organizations evaluate a catalog of digital initiatives across organizational silos and to identify an execution strategy that speeds up their time-to-market.

So what does that actually look like? During the Align Moment, we bring together a diverse set of stakeholders that own or contribute to the digital agenda. Some of the stakeholders may sit in the business units, some may sit in internal functions, or maybe even in the digital office. But we bring them together to jointly capture and evaluate the most critical initiatives that drive the core of the digital agenda.

The objective is to jointly blend our own expertise and experience with that of our customers to jointly investigate and uncover the prerequisites and interdependencies that so often exist between these complex sets of enterprise-scale digital initiatives.

During the Align Moment, you might realize that the business units need to quickly recalibrate their business processes in order to meet the data security requirements coming in from the business unit or the digital team. For example, one of our customers found out during their own Align Moment that before they got too far down the path of developing their next generation of digital product, they needed to first build in data transparency and accessibility as a core design principle in their global data hub.

The methodology in the Align Moment significantly reduces execution risk as organizations embark on their multi-year transformation agendas. Quite frankly, these agendas are constantly evolving because the speed of the market today is so fast.

Our goal here is to drive a faster time-to-value for the entire digital agenda by coordinating the digital execution strategy across the organization. That’s what the Align Moment helps our customers with. That value has been brought to different stakeholders that we’ve engaged with.

The Align Moment has brought tremendous value to the CDO, for example. The CDO now has the ability to quickly make sense of — and in some cases even coordinate — the complex web of digital initiatives running across their organizations, regardless of which silos they may be owned within. They can identify a path to execution that speeds up the realization of the entire digital agenda. I think of it as giving the CDO a dashboard through which they can now see their entire transformation on a singular framework.

We’ve also found that the Align Moment delivers a lot of value for digital initiative owners. Because we jointly work across silos to de-risk, the execution pass implements that initiative whether it’s a technology risk, process risk, or governance risk. That helps to highlight the dependencies between these competing initiatives and competing priorities. And then, sequencing the work streams and efforts minimizes the risk of delays or mismatched deliverables, or mismatched outputs, between teams.

And then there is the chief information officer (CIO). This is a great tool for the CIO to take IT to the next level. They can elevate the impact of IT in the business, and in the various functions in the organization, by establishing agile, cross-functional work streams that can speed up the execution of the digital initiatives.

That’s in a nutshell what the Align Moment is about, helping our customers rapidly generate deep insights to help them orchestrate their digital agenda across silos, or break down silos, with the goal to speed up execution of their agendas.

Advance to the next big thing

Gardner: We’re now moving on to our next Moment, around stimulating differentiation, among other things. We now welcome back Christian to tell us about the Advance Moment.

Reichenbach: The train of thought here is that digital transformation is not only about optimizing businesses by using technology. We also want to emphasize that digital technology can be used to transform businesses themselves.

That means that we are using technology to differentiate the value propositions of our customers. And differentiation means, for example, new experiences for the customers of our customers, as well as new interactions with digital technology.

Further, it’s about establishing new digital business models, gaining new revenue streams, and expanding the ecosystem in a much broader sense. We want to leverage technology to differentiate the value propositions of our customers, and differentiation means you can’t do whatever one is doing by just copycatting, looking to your peers, and replicating what others are doing. That will not differentiate the value proposition.

Therefore, we specifically designed the Advance Moment, where we co-innovate and co-ideate together with our customers to find their next big thing and drive technology toward a much more differentiated value proposition.

Gardner: Christian, tell us more about the discrete steps that people need to take in order to stimulate that differentiation.

Reichenbach: Differentiation comes from having new ideas and doing something different than in the past. That’s why we designed the Advance Moment to help our customers differentiate their unique value proposition.

The Advance Moment is designed as a thinking exercise that we do together with our customers across their diverse teams, meaning product owners, technology designers, engineers, and the CDO. This is a diverse team thinking about a specific problem they want to solve, but they shouldn’t think about it in isolation. They should think about what they do differently in the future to establish new revenue streams with maybe a new digital ecosystem to generate the new digital business models that we see all over the place in the annual reports from our customers.

Everyone is in the race to find the next big thing. We want to help them because we have the technology capabilities and experience to explain and discuss with our customers what is possible today with such leading technology as from HPE.

We can prove that we’ve done that. For example, we sit down with Continental, the second largest automotive part supplier in the world, and ideate about how we can redefine the experience of a driver who is driving along the road. We came up with a data exchange platform that helps our co-manufacturers to exchange data between each other so that the driver who’s sitting in the car gets new entertainment services that were not possible without a data exchange platform.

Our ideation and our Advance Moment are focused on redefining the experience and stimulating new ideas that are groundbreaking — and are not just copycatting what their peers are doing. And that, of course, will differentiate the value propositions from our customers in a unique way so that they can create new experiences and ultimately new revenue streams.

We’re addressing particular personas within our customer’s organization. That’s because today we see that the product owners in a company are powerful and are always asking themselves, “How can I bring my product to the next level?”

We’re addressing particular personas within our customer’s organization. That’s because today we see that the product owners in a company are powerful and are always asking themselves, “How can I bring my product to the next level? How can I differentiate my product so that it is not easily comparable with my peers?”

And, of course, the CDOs in the customer organizations are looking to orchestrate these initiatives, support the product owners and engineers, and build up the innovation engine with the right initiatives and right ideas. And, of course, when we're talking about digital business transformation, we end up in the IT department because it has to operate somewhere.

So we bring in the experts from the IT department as well as the CIO to turn ideas quickly into realization. And turning ideas quickly into something meaningful for our customers is what we designed the Accelerate Moment for.

Gardner: We will move on next to Amos to learn about the Accelerate Moment, and moving toward the larger digital transformation value.

Accelerate from ideas into value

Ferrari: When it comes to realizing digital transformation, let me ask you a question, Dana. What do you think is the key problem our customers have?

Gardner: Probably finding ways to get started and then finding realization of value and benefits so that they can prove their initiative is worthwhile.

Ferrari: Yes. Absolutely. It’s a problem of prioritization of investment. They know that they need to invest, they need to do something, and they ask, “Where should I invest first? Should I invest in the big infrastructure first?”

But these decisions can slow things down. Yet time-to-market and speed are the keys today. We all know that this is what is driving the behavior of the people in their transformations. And so the key thing is the Accelerate Moment. It's the Moment where we engage with our customers via workshops.

We enable them to extrapolate from their digital ambition and identify what will enable them to move into the realization of their digital transformation. “Where should I start? What is my journey’s path? What is my path to value?” These are the main questions that the Accelerate Moment answers.

As you can see, this is a part of the entire HPE Digital Next Advisory service, and it enables the customer to move quickly to the realization of benefits. In this engagement, you start with the decisions about the use cases and the technology. There are a number of key elements and decisions that the customer is making. And this is where we're helping them with the Accelerate Moment.

To deliver an Accelerate Moment, we use a number of steps. First, we frame the initiative by having a good discussion about their KPIs. How are you going to measure them? What are the benefits? Because the business is what is driving this. We know that. And we understand how the technology is the link to the business use case. So we frame the initiative, understand the use cases, and scope out the use cases that advance the key KPIs that are essential for the customer. That is a key step in the Moment.

Another important thing to understand is that in a digital transformation, a customer is not alone. No customer is really alone in that. It's not successful if they don't think holistically about their digital ecosystems. A customer is successful when they think about the complete ecosystem, including not only the key internal stakeholders but the other stakeholders surrounding them. Together they can build new digital value and enable customer differentiation.

The next step is understanding the depth of technology across our digital journey map. And the digital journey map helps customers to see beyond just one angle. They may have started only from the IT point of view, or only from the developer point of view, or just the end user point of view. The reality is that IT now is becoming the value creator. But to be the value creator, they need to consider the entire technology landscape of the company.

They need to consider edge-to-cloud, and data, as a full picture. This is where we can help them through a discussion about seeing the full technology that supports the value. How can you bring value to your full digital transformation?

The last step that we consider in the Accelerate Moment is to identify the elements surrounding your digital transformation that are the key building blocks and that will enable you to execute immediately. Those building blocks are key because they create what we call the minimum value product.

They should build up a minimum value product and surround it with the execution to realize the value immediately. They should do that without thinking, "Oh, maybe I need two or three years before I realize that value." They need to change to asking, "How can I do that in a very short time by creating something simple and straightforward, putting the key building blocks in place?"

This shows how everything is linked and how we need to best link it together. How? We link everything together with stories. And the stories are what help our key stakeholders realize what they need to create. The stories are about the different stakeholders and how they see themselves in the future of digital transformation. This is the way we show them how this is going to be realized.

The end result is that we will deliver a number of stories that are used to assemble the key building blocks. We create a narrative to enable them to see how the applied technology enables them to create value for their company and achieve key growth. This is the Accelerate Moment.

Gardner: Craig, as we’ve been discussing differentiation for your customers, what differentiates HPE Pointnext Services? Why are these four Moments the best way to obtain digital transformation?

Partridge: Differentiation is key for us, as well as for our customers, across a complex and congested landscape of partners that customers can choose from. Some of the differentiation we've touched on here. There is no one else in the market, as far as I'm aware, that has the edge-to-cloud digital journey map, which is HPE's fundamental model. It allows us to holistically paint the story of not only digital transformation and digital ambition, but also to show how to do that at the initiative level and how to plug in those building blocks.

I’m not saying that anybody with just the maturity of an edge-to-cloud model can bring digital ambition to life, to visualize it through the Activate Moment, orchestrate it through the Align Moment, create differentiation through the Advance Moment, and then get to quicker value with the Accelerate Moment.

Gardner: Craig, for those organizations interested in learning more, how do they get started? Where can they go for resources to gain the ability to innovate and be differentiated?

Partridge: If anybody viewing this has seen something that they want to grab on to, that they think can accelerate their own digital ambition, then simply pick up the phone and call your HPE sales rep. We have sales organizations ranging from dedicated enterprise managers at some of the biggest customers around the world, on through to our inside-sales organization for small- to medium-sized businesses. Call your HPE sales rep and say the magic words: "I want to engage with a digital adviser and I'm interested in Digital Next Advisory." That should be the flag that triggers a conversation with one of our digital advisers around the world.

Finally, there’s an email address, digitaladviser@hpe.com. If worse comes to worst, throw an email to that address and then we’d be able to get straight back to you. So, it should make it as easy as possible and just reach out to HPE advisors in advance.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. View the video. Sponsor: Hewlett Packard Enterprise.

How global data availability accelerates collaboration and delivers business insights

The next BriefingsDirect data strategy insights discussion explores the payoffs when enterprises overcome the hurdles of disjointed storage to obtain global data access.

By leveraging the latest in container and storage server technologies, the holy grail of inclusive, comprehensive, and actionable storage can be obtained. And such access extends across all deployment models – from hybrid cloud, to software-as-a-service (SaaS), to distributed data centers, and edge.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Stay with us here to examine the role that comprehensive data storage plays in delivering the rapid insights businesses need for digital business transformation with our guest, Denis Kennelly, General Manager, IBM Storage. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Denis, in our earlier discussions in this three-part series we learned about IBM’s vision for global consistent data, as well as the newest systems forming the foundation for these advances.

But let’s now explore the many value streams gained from obtaining global data access. We hear a lot about the rise of artificial intelligence (AI) adoption needed to support digital businesses. So what role does a modern storage capability — particularly with a global access function and value — play in that AI growth? 

Kennelly: As enterprises become increasingly digitally transformed, the amount of data they are generating is enormous. IDC predicts that something like 42 billion Internet of things (IoT) devices will be sold by 2025, and so the role of storage is not only centralized to data centers. It needs to be distributed across this entire hybrid cloud environment.

Discover and share AI data

For actionable AI, you want to build models on all of the data that’s been generated across this environment. Being able to discover and understand that data is critical, and that’s why it’s a key part of our storage capabilities. You need to run that storage on all of these highly distributed environments in a seamless fashion. You could be running anywhere — the data center, the public cloud, and at edge locations. But you want to have the same software and capabilities for all of these locations to allow for that essential seamless access.

That’s critical to enabling an AI journey because AI doesn’t just operate on the data sitting in a public cloud or data center. It needs to operate on all of the data if you want to get the best insights. You must get to the data from all of these locations and bring it together in a seamless manner.

Gardner: When we’re able to attain such global availability of data — particularly in a consistent context – how does that accelerate AI adoption? Are there particular use cases, perhaps around DevOps? How do people change their behavior when it comes to AI adoption, thanks to what the storage and data consistency can do for them?

Kennelly: First it’s about knowing where the data is and doing basic discovery. And that’s a non-trivial task because data is being generated across the enterprise. We are increasingly collaborating remotely and that generates a lot of extended data. Being able to access and share that data across environments is a critical requirement. It’s something that’s very important to us. 

Then — as you discover and share the data – you can also bring that data together into use by AI models. You can use it to actually generate better AI models across the various tiers of storage. But you don’t want to just end up saying, “Okay, I discovered all of the data. I’m going to move it to this certain location and then I’m going to run my analytics on it.”

Part 1 in the IBM Storage innovation series

Part 2 in the series 

Instead, you want to do the analytics in real time and in a distributed fashion. And that's what's critical about the next level of storage.

Coming back to what's hindering AI adoption, number one is data discovery, because enterprises spend a huge amount of time just finding the data. And when you do get access to it, that access needs to be seamless. And then, of course, as you build your AI models you need to infuse those analytics into the applications and capabilities that you're developing.

And that leads to your question around DevOps, to be able to integrate the processes of generating and building AI models into the application development process so that we make sure the application developers can leverage those insights for the applications they are building.

Gardner: For many organizations, moving to hybrid cloud has been about application portability. But when it comes to the additional data mobility we gain from consistent global data access, there’s a potential greater value. Is there a second shoe to fall, if you will, Denis, when we can apply such data mobility in a hybrid cloud environment?

Access data across hybrid cloud 

Kennelly: Yes, and that second shoe is about to fall. The first part of our collective cloud journey was all about moving everything to the public cloud and building applications with cloud-based data.

What we discovered in doing that is that life is not so simple. For many reasons, we are really now in a hybrid cloud world, and that is why we need the hybrid cloud approach.

The need for more cloud portability has led to technologies like containers to get portability across all of the environments — from data centers to clouds. As we roll out containers into production, however, the whole question of data becomes even more critical.

You can now build an application that runs in a certain environment, and containers allow you to move that application to other environments very quickly. But if the data doesn’t follow — if the data access doesn’t follow that application seamlessly — then you face some serious challenges and problems.

And that is the next shoe to drop, and it’s dropping right now. As we roll out these sophisticated applications into production, being able to copy data or get access to data across this hybrid cloud environment is the biggest challenge the industry is facing.

Gardner: When we envision such expansive data mobility, we often think about location, but it also impacts the type of data – be it file, block, and object storage, for example. Why must there be global access geographically — but also in terms of the storage type and across the underlying technology platforms? 

Kennelly: We really have to hide that layer of complexity, the storage type and platform, from the application developer. At the end of the day, the application developer is looking for a consistent API through which to access the data services, whether that's file, block, or object. They shouldn't have to care about that level of detail.

It's important that there's a focus on consistent access via APIs for the developer. The storage subsystem then has to take care of the federated, global access to the data. Also, as data is generated, the storage subsystem should scale horizontally.

These are the design principles we have put into the IBM Storage platform. Number one, you get seamless and consistent access, be it file, object, or block storage. And we can scale horizontally as you generate data across that hybrid cloud environment.
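To make that design principle concrete, here is a minimal, hypothetical sketch of what a consistent data-service API can look like to a developer. The class and method names are invented for illustration and are not IBM product interfaces.

```python
# Hypothetical sketch of the "consistent API" idea described above: application
# code calls one interface, while backend classes hide whether the bytes land
# in object or file storage. Names are illustrative, not IBM product APIs.
from abc import ABC, abstractmethod


class DataService(ABC):
    """Uniform data service the application developer codes against."""

    @abstractmethod
    def write(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def read(self, key: str) -> bytes: ...


class ObjectBackend(DataService):
    """Stores blobs in an object store (stand-in for a real bucket client)."""

    def __init__(self):
        self._store = {}

    def write(self, key, data):
        self._store[key] = data

    def read(self, key):
        return self._store[key]


class FileBackend(DataService):
    """Stores blobs as files on a (possibly globally mounted) file system."""

    def __init__(self, root="/mnt/global-data"):   # hypothetical mount point
        self.root = root

    def write(self, key, data):
        with open(f"{self.root}/{key}", "wb") as f:
            f.write(data)

    def read(self, key):
        with open(f"{self.root}/{key}", "rb") as f:
            return f.read()


def persist_model(service: DataService, name: str, blob: bytes) -> None:
    # Application code never branches on the storage type behind the service.
    service.write(name, blob)
```

The point of the sketch is only that the calling code stays identical whichever backend satisfies the request; the storage layer owns the federation and scaling underneath.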

Gardner: The good news is that global data access enablement can now be done with greater ease. The bad news is the global access enablement can be done anywhere, anytime, and with ease.

And so we have to also worry about access, security, permissions, and regulatory compliance issues. How do you open the floodgates, in a sense, for common access to distributed data, but at the same time put in the guardrails that allow for the management of that access in a responsible way?

Global data access opens doors

Kennelly: That’s a great question. As we introduce simplicity and ease of data access, we can’t just open it up to everybody. We have to make sure we have good authentication as part of the design, using things like two-factor authentication on the data-access APIs.

But that’s only half of the problem. In the security world, the unfortunate acceptance is that you probably are going to get breached. It’s in how you respond that really differentiates you and determines how quickly you can get the business back on its feet.

And so, when something bad happens, the third critical role for the storage subsystem to play is in access control to the persistent storage. At the end of the day, that is what attackers are after. Being able to understand the typical behavior of those storage systems, and how data is usually being stored, forms a baseline against which you can recognize when something out of the ordinary is happening.

Part 1 in the IBM Storage innovation series

Part 2 in the series 

Clearly, if you're under a malware or CryptoLocker attack, you see a very different input/output (IO) pattern than you would normally see. We can detect that in real time, understand when it happens, and make sure you have protected copies of the data so you can get back to business and back online quickly.

Why is all of that important? Because we live in a world where it's not a case of if it will happen, it's really when it will happen. How we respond is critical.
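As a rough illustration of baselining IO behavior to spot something like a CryptoLocker sweep, here is a hedged sketch. The metric, window, and threshold are arbitrary choices for the example, not how IBM's data-protection features are actually implemented.

```python
# Minimal sketch: learn a baseline of normal write activity, then flag IO
# patterns that deviate sharply, as a ransomware encryption sweep typically
# would. Thresholds and metrics are illustrative only.
from statistics import mean, stdev


def is_anomalous(write_ops_history, current_write_ops, sigma=4.0):
    """Return True if the current write rate is far outside the baseline."""
    if len(write_ops_history) < 10:
        return False  # not enough history to form a baseline yet
    mu = mean(write_ops_history)
    sd = stdev(write_ops_history) or 1.0
    return (current_write_ops - mu) / sd > sigma


# Example: a steady workload of ~500 writes/sec, then a sudden 20x burst.
baseline = [480, 510, 495, 505, 520, 490, 500, 515, 485, 505]
print(is_anomalous(baseline, 10_000))  # True -> alert and verify protected copies
```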

Gardner: Denis, throughout our three-part series we’ve been discussing what we can do, but we haven’t necessarily delved into specific use cases. I know you can’t always name businesses and reference customers, but how can we better understand the benefits of a global data access capability in the context of use cases?

In practice, when the rubber hits the road, how does global data storage access enable business transformation? Is there a key metric you look for to show how well your storage systems support business outcomes? 

Global data storage success

Kennelly: We’re at a point right now when customers are looking to drive new business models and to move much more quickly in their hybrid cloud environments.

There are enabling technologies right now facilitating that. There’s a lot of talk about edge with the advent of 5G networks, which enable a lot of this to happen. When you talk about seamless access and the capability to distribute data across these environments, you need the underlying network infrastructure to make that happen.

As we do that, we’re looking at a number of key business measures and metrics. We have done some independent surveys and analysis looking at the business value that we drive for our clients with a hybrid cloud platform and things like portability, agility, and seamless data access.

In terms of business value, we have four or five measures. For example, we can drive roughly 2.5 times more business value for our clients — everything from top-line growth to operational savings. And that’s something that we have tested with many clients independently.

One example that's very relevant in the world we live in today is a cloud provider that needed more federated access to its global data, but also wanted to distribute that data through edge nodes in a consistent manner. That's just one example of this in action.

Gardner: You know, some of the major consumers of analytics in businesses these days are data scientists, and they don’t always want to know what’s going on underneath the covers. On the other hand, what goes on underneath the covers can greatly impact how well they can do their jobs, which are often essential to digital business transformation.

For you to address a data scientist specifically about why global access for data and storage modernization is key, what would you tell them? How do you describe the value that you’re providing to someone like a data scientist who plays such a key role in analytics?

Kennelly: Well, data scientists talk a lot about data sets. They want access to data sets so they can test their hypothesis very quickly. In a nutshell, we surface data sets quicker and faster than anybody else at a price performance that leads the industry — and that’s what we do every day to enable data scientists.

Gardner: Throughout our series of three storage strategy discussions, we’ve talked about how we got here and what we’re doing. But we haven’t yet talked about what comes next.

These enabling technologies not only satisfy business imperatives and requirements now but set up organizations to be even more intelligent over time. Let’s look to the future for the expanding values when you do data access globally and across hybrid clouds well. 

Insight-filled future drives growth

Kennelly: Yes, you get to critically look at current and new business models. At the end of the day, this is about driving business growth. As you start to look at these environments — and we’ve talked a lot about analytics and data – it becomes about getting competitive advantage through real-time insights about what’s going on in your environments.

You become able to better understand your supply chain, what’s happening in certain products, and in certain manufacturing lines. You’re able to respond accordingly. There’s a big operational benefit in terms of savings. You don’t have to have excess capacity in the environment.

Part 1 in the IBM Storage innovation series

Part 2 in the series 

Also, in seeking new business opportunities, you will detect the patterns needed to gain insights you hadn't had before by applying analytics and machine learning to what's critical in your systems and markets. If you move your IT environment and centralize everything in one cloud, for example, that really hinders that progress.

By being able to do that with all of the data as it's generated in real time, you get unique insights that provide competitive advantage.

Gardner: And lastly, why IBM? What sets you apart from the competition in the storage market for obtaining these larger goals of distributed analytics, intelligence, and competitiveness?

Kennelly: We have shown over the years that we have been at the forefront of many transformations of businesses and industries. Going back to the electronic typewriter, if we want to go back far enough, or now to our business-to-business (B2B) or business-to-employee (B2E) models in the hybrid cloud — IBM has helped businesses make these transformations. That includes everything from storage to data and AI through to hybrid cloud platforms, with Red Hat Enterprise Linux, and right out to our business service consulting.

IBM has the end-to-end capabilities to make that all happen. It positions us as an ideal partner who can do so much.

I love to talk about storage and the value of storage, and I spend a lot of time talking with people in our business consulting group to understand the business transformations that clients are trying to drive and the role that storage has in that. Likewise, with our data science and data analytics teams that are enabling those technologies.

The combination of all of those capabilities as one idea is a unique differentiator for us in the industry. And it’s why we are developing the leading edge capabilities, products, and technology to enable the next digital transformations.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: IBM Storage.

How consistent storage services across all tiers and platforms attain data simplicity, compatibility, and lower cost

This BriefingsDirect Data Strategies Insights discussion series, Part 2, explores the latest technologies and products delivering common data services across today’s hybrid cloud, distributed data centers, and burgeoning edge landscapes.

New advances in storage technologies, standards, and methods have changed the game when it comes to overcoming the obstacles businesses too often face when seeking pervasive analytics across their systems and services. 

Stay with us now as we examine how IBM Storage is leveraging containers and the latest storage advances to deliver inclusive, comprehensive, and actionable storage.  

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To learn more about the future of storage strategies that accelerate digital transformation, please welcome Denis Kennelly, General Manager, IBM Storage. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: In our earlier discussion we learned about the business needs and IBM’s large-scale vision for global, consistent data. Let’s now delve beneath the covers into what enables this new era of data-driven business transformation. 

In our last discussion, we also talked about containers — how they had been typically relegated to application development. What should businesses know about the value of containers more broadly within the storage arena as well as across other elements of IT?

Containers for ease, efficiency

Kennelly: Sometimes we talk about containers as being unique to application development, but I think the real business value of containers is in the operational simplicity and cost savings. 

When you build applications on containers, they are container-aware. When you look at Kubernetes and the controls you have there as an operations IT person, you can scale up and scale down your applications seamlessly. 
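As a concrete, hedged example of that operational control, the sketch below scales a Deployment using the official Kubernetes Python client. The deployment name, namespace, and replica counts are illustrative assumptions, not anything from the interview.

```python
# Sketch of scaling an application up and down with the Kubernetes API.
# Assumes the `kubernetes` Python client is installed, a kubeconfig is
# available, and a Deployment named "web" already exists.
from kubernetes import client, config


def scale_deployment(name: str, namespace: str, replicas: int) -> None:
    config.load_kube_config()                 # or load_incluster_config()
    apps = client.AppsV1Api()
    apps.patch_namespaced_deployment_scale(
        name=name,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},
    )


# Scale up for a spike in demand, then back down when it passes.
scale_deployment("web", "default", replicas=10)
scale_deployment("web", "default", replicas=2)
```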

As we think about that, and about storage, we have to include storage under that umbrella. Traditionally, storage did a lot of its work independently. Now we are in a much more integrated environment where you have cloud-like behaviors. And you want to deliver those cloud-like behaviors end-to-end, be it for the applications, for the data, for the storage, and even for the network, right across the board. That way you can have a much more seamless, easier, and operationally efficient way of running your environment.

Containers are much more than just an application development tool; they are a key enabler to operational improvement across the board.

Gardner: Because hybrid cloud and multi-cloud environments are essential for digital business transformation, what does this container value bring to bridging the hybrid gap? How do containers lead to a consistent and actionable environment, without integrations and complexity thwarting wider use of assets around the globe?

Kennelly: Let's talk about what a hybrid cloud is. To me, a hybrid cloud is the ability to run workloads on a public cloud, on a private cloud or traditional data center, and even right out to edge locations in your enterprise where there are no IT people whatsoever.

Being able to do that consistently across that environment — that’s what containers bring. They allow a layer of abstraction above the target environment, be it a bare-metal server, a virtual machine (VM), or a cloud service – and you can do that seamlessly across those environments.

That’s what a hybrid cloud platform is and what enables that are containers and being able to have a seamless runtime across this entire environment.

And that’s core to digital transformation, because when we start to think about where we are today as an enterprise, we still have assets sitting on the data center. Typically, what you see out there are horizontal business processes, such as human resources or sales, and you might want to move those more to a software as a service (SaaS) capability while still retaining your core, differentiating business processes.

For compliance or regulatory reasons, you may need to keep those assets in the data center. Maybe you can move some pieces. But at the same time, you want to have the level of efficiency you gain from cloud-like economics. You want to be able to respond to business needs, to scale up and scale down the environment, and not design the environment for a worst-case scenario. 

That’s why a hybrid cloud platform is so critical. And underneath that, why containers are a key enabler. Then, if you think about the data in storage, you want to seamlessly integrate that into a hybrid environment as well.

Gardner: Of course, the hybrid cloud environment extends these days more broadly with the connected edge included. For many organizations the edge increasingly allows real-time analytics capabilities by taking advantage of having compute in so many more environments and closer to so many more devices.

What is it about the IBM hybrid storage vision that allows for more data to reside at the edge without having to move it into a cloud, analyze it there, and move it back? How are containers enabling more data to stay local and still be part of a coordinated whole greater than the sum of the parts?

Data and analytics at the edge

Kennelly: As an industry, we swing from centralized to decentralized, in what I call a pendulum movement every few years. If you think back, we were on the mainframe, where everything was very centralized. Then we went to distributed systems and decentralized everything.

With cloud we began to recentralize everything again. And now we are moving our clouds back out to the edge for a lot of reasons, largely because of egress and ingress challenges and to seek efficiency in moving more and more of that data. 

When I think about edge, I am not necessarily thinking about Internet of things (IoT) devices or sensors, but in a lot of cases this is about branch and remote locations. That’s where a core part of the enterprise operates, but not necessarily with an IT team there. And that part of the enterprise is generating data from what’s happening in that facility, be it a manufacturing plant, a distribution center, or many others.

As you generate that data, you also want to generate the analytics that are key to understanding how the business is reacting and responding. Do you want to move all that data to a central cloud to run analytics, and then take the result back out to that distribution center? You can do that, but it’s highly inefficient — and very costly. 

What our clients are asking for is to keep the data out at these locations and to run the analytics locally. But, of course, with all of the analytics you still want to share some of that data with a central cloud.

So, what’s really important is that you can share across this entire environment, be it from a central data center or a central cloud out to an edge location and provide what we call seamless access across this environment. 

With our technology, with things like IBM Spectrum Scale, you gain that seamless access. We abstract the data access so that it's as if you are accessing the data locally, even though it could be back in the cloud. In terms of the application, it really doesn't care. That seamless access is core to what we are doing.

Gardner: The IBM Storage portfolio is broad and venerable. It includes flash, disk, and tape, which continues to have many viable use cases. So, let’s talk about the products and how they extend the consistency and commonality that we have talked about and how that portfolio then buttresses the larger hybrid storage vision.

Storage supports all environments 

Kennelly: One of the key design points of our portfolio, particularly our flash line, is being able to run in all environments. We have one software code base across our entire portfolio. That code runs on our disk subsystems and disk controllers, but it can also run on your platform of choice. So, we absolutely support all platforms across the board. So that’s one design principle. 

Secondly, we embrace containers very heavily. And being able to run on containers and provide data services across those containers provides that seamless access that I talked about. That’s a second major design principle.

Yet as we look at our storage portfolio, we also want to make sure we optimize the storage, and optimize the customer's spend, through tiered storage and the ability to move data across those different tiers of storage.

You mentioned tape storage. And so, for example, at times you may want to move from fast, online, always-on, and high-end storage to a lower tier of less expensive storage such as tape, maybe for data retention reasons. You’ll then need an air gap solution and you’ll want to move to cold storage, as we call it, i.e. on tape. We support that capability and we can manage your data across that environment. 
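Here is an illustrative sketch of such a tiering policy: data untouched beyond a retention window is flagged for migration to the cold, air-gapped tier. The tier names and the commented-out move_to() call are hypothetical placeholders, not IBM Spectrum APIs.

```python
# Sketch of a simple last-access tiering policy; real data movers are policy
# engines with many more dimensions (cost, compliance, workload type).
from datetime import datetime, timedelta
from typing import Optional

COLD_AFTER = timedelta(days=90)   # illustrative retention window


def choose_tier(last_access: datetime, now: Optional[datetime] = None) -> str:
    now = now or datetime.utcnow()
    return "tape-cold" if now - last_access > COLD_AFTER else "flash-hot"


def rebalance(catalog):
    """catalog: iterable of (object_id, last_access, current_tier) tuples."""
    for object_id, last_access, current_tier in catalog:
        target = choose_tier(last_access)
        if target != current_tier:
            print(f"move {object_id}: {current_tier} -> {target}")
            # move_to(object_id, target)  # placeholder for the real data mover
```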

There are three core design principles to our IBM Storage portfolio. Number one, we can run seamlessly across these environments. Number two, we provide seamless access to the data across those environments. And number three, we support optimization of the storage for the use case at hand, such as being able to tier the storage to your economic and workload needs.

Gardner: Of course, what people are also interested in these days is FlashSystem performance. Tell us about some of the latest and greatest when it comes to FlashSystem. You have the new 5200 and the high-end 9200, and those also complement some of your other products, like the ESS 3200.

Flash provides best performance

Kennelly: Yes, we continue to expand the portfolio. With the FlashSystems, and some of our recent launches, some things don’t change. We’re still able to run across these different environments.

But in terms of price-performance, especially with the work we have done around our flash technology, we have optimized our storage subsystems to use standard flash technologies. In terms of price for throughput, when we look at this against our competitors, we offer twice the performance for roughly half the price. And this has been proven as we look at our competitors’ technology. That’s due to leveraging our innovations around what we call the FlashCore Module, wherein we are able to use standard flash in those disk drives and enable compression on the fly. That’s driving the roadmap in terms of throughput and performance at a very, very competitive price point.

Gardner: Many of our readers and listeners, Denis, are focused on their digital business transformation. They might not be familiar with some of these underlying technological advances, particularly end-to-end Non-Volatile Memory Express (NVMe). So why are these systems doing things that just weren’t possible before?

Kennelly: A lot of it comes down to where the technology is today and the price points that we can get from flash from our vendors. And that’s why we are optimizing our flash roadmap and our flash drives within these systems. It’s really pushing the envelope in terms of performance and throughput across our flash platforms.

Gardner: The desired end-product for many organizations is better and pervasive analytics. And one of the great things about artificial intelligence (AI) and machine learning (ML) is it’s not only an output — it’s a feature of the process of enhancing storage and IT.

How are IT systems and storage using AI inside these devices and across these solutions? What is AI bringing to enable better storage performance at a lower price point?

Kennelly: We continue to optimize what we can do in our flash technology, as I said. But when you embark on an AI project, something like 70 to 80 percent of the spend is around discovery, gaining access to the data, and finding out where the data assets are. And we have capabilities like IBM Spectrum Discover that help catalog and understand where the data is and how to access that data. It’s a critical piece of our portfolio on that journey to AI.

We also have integrations with AI services like Cloudera out of the box so that we can seamlessly integrate with those platforms and help those platforms differentiate using our Spectrum Scale technology.

But in terms of AI, we have some really key enablers to help accelerate AI projects through discovery and integration with some of the big AI platforms.

Gardner: And these new storage platforms are knocking off some impressive numbers around high availability and low latency. We are also seeing a great deal of consolidation around storage arrays and managing storage as a single pool. 

On the economics of the IBM FlashSystem approach, these performance attributes are also being enhanced by reducing operational costs and moving from CapEx to OpEx purchasing.

Storage-as-a-service delivers

Kennelly: Yes, there is no question we are moving toward an OpEx model. When I talked about cloud economics and cloud-like flexibility behavior at a technology level, that’s only one side of the equation. 

On the business side, IT is demanding cloud consumption models, OpEx-type models, and pay-as-you-go. It’s not just a pure financial equation, it’s also how you consume the technology. And storage is no different. This is why we are doing a lot of innovation around storage-as-a-service. But what does that really mean? 

It means you ask for a service. “I need a certain type of storage with this type of availability, this type of performance, and this type of throughput.” Then we as a storage vendor take care of all the details behind that. We get the actual devices on the floor that meet those requirements and manage that. 

As those assets depreciate over a number of years, we replace and update those assets in a seamless manner to the client.
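A hedged sketch of that request-driven model might look like the following: the consumer declares capacity, performance, and availability targets, and the provider decides what hardware backs them. All field names and tiers are invented for illustration.

```python
# Illustrative storage-as-a-service request: the consumer states intent, the
# provider maps it to an actual configuration behind the scenes.
from dataclasses import dataclass


@dataclass
class StorageServiceRequest:
    capacity_tb: int
    performance_tier: str      # e.g. "balanced" or "extreme" (invented tiers)
    availability_pct: float    # e.g. 99.99
    billing: str = "pay-as-you-go"


def fulfill(request: StorageServiceRequest) -> str:
    """Pick a backing configuration for the request (the provider's problem)."""
    if request.performance_tier == "extreme":
        return f"provision NVMe flash array, {request.capacity_tb} TB"
    return f"provision hybrid flash pool, {request.capacity_tb} TB"


print(fulfill(StorageServiceRequest(capacity_tb=200,
                                    performance_tier="extreme",
                                    availability_pct=99.99)))
```

The consumer never names a model number; asset refresh and depreciation stay on the provider's side of the line, which is the point Kennelly makes above.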

As the storage sits in the data center, maybe the customer says, “I want to move some of that data to a cloud instance.” We also offer a seamless capability to move the data over to the cloud and run that service on the cloud. 

We already have all the technology to do that and the platform support for all of those environments. What we are working on now is making sure we have a seamless consumption model and the business processes of delivering that storage-as-a-service, and how to replace and upgrade that storage over time — while making it all seamless to the client. 

I see storage moving quickly to this new storage consumption model, a pure OpEx model. That’s where we as an industry will go over the next few years.

Gardner: Another big element of reducing your total cost of ownership over time is in how well systems can be managed. When you have a common pool approach, a comprehensive portfolio approach, you also gain visibility, a single pane of glass when it comes to managing these systems.

Intelligent insights via storage

Kennelly: That's an area we continue to invest in heavily. Our IBM Storage Insights platform provides tremendous insight into how the storage subsystems are running operationally. It also provides insight into the storage itself, in terms of where you have space constraints or where you may need to expand.

But that’s not just a manual dashboard that we present to an operator. We are also infusing AI quite heavily into that platform and using AIOps to integrate with Storage Insights to run storage operations at much lower costs and with more automation.

And we can do that in a consistent manner right across the environments, whether it's a flash storage array, mainframe-attached storage, or a tape device. It's all seamless across the environment. You can see those tiers and storage as one platform, and so you are able to respond quickly to events and understand them as they are happening.
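As a simplified illustration of the kind of capacity insight described, the sketch below fits a naive growth trend to recent usage samples and projects when a pool will fill. Real Storage Insights and AIOps analytics are far more sophisticated; the numbers here are made up.

```python
# Naive capacity projection: average daily growth over a window, then days
# remaining until the pool is full. Purely illustrative.
def days_until_full(used_tb_samples, capacity_tb):
    """used_tb_samples: daily used-capacity readings, oldest first."""
    if len(used_tb_samples) < 2:
        return None
    daily_growth = (used_tb_samples[-1] - used_tb_samples[0]) / (len(used_tb_samples) - 1)
    if daily_growth <= 0:
        return None  # flat or shrinking usage, no projected fill date
    return (capacity_tb - used_tb_samples[-1]) / daily_growth


# Usage grows about 5 TB/day against a 500 TB pool already at 420 TB -> ~16 days.
print(days_until_full([400, 405, 410, 415, 420], capacity_tb=500))  # 16.0
```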

Gardner: As we close out, Denis, for many organizations hybrid cloud means that they don’t always know what’s coming and lack control over predicting their IT requirements. Deciding in advance how things get deployed isn’t always an option.

How do the IBM FlashSystems, and your recent announcements in February 2021, provide a path to a crawl-walk-run adoption approach? How do people begin this journey regardless of the type of organization and the size of the organization?

Kennelly: We are introducing an update to our FlashSystem 5200 platform, which is our entry-point platform. That consistent software platform runs our storage software, IBM Spectrum Virtualize. It's the same software as in our high-end arrays at the very top of our pyramid of capabilities.

As part of that announcement, we are also supporting other public cloud vendors. So you can run the software on our arrays, or you can move it out to run on a public cloud. You have tremendous flexibility and choice due to the consistent software platform.

And, as I said, it’s our entry point so the price is very, very competitive. This is a part of the market where we see tremendous growth. You can experience the best of the IBM Storage platform at a low-cost entry point, but also get the tremendous flexibility. You can scale up that environment within your data center and right out to your choice of how to use the same capabilities across the hybrid cloud.

There has been tremendous innovation by the IBM team to make sure that our software supports this myriad of platforms, but also at a price point that is the sweet spot of what customers are asking for now.

Gardner: It strikes me that we are on the vanguard of some major new advances in storage, but they are not just relegated to the largest enterprises. Even the smallest enterprises can take advantage and exploit these great technologies and storage benefits.

Kennelly: Absolutely. When we look at the storage market, the fastest growing part is at that lower price point — where it’s below $50K to $100K unit costs. That’s where we see tremendous growth in the market and we are serving it very well and very efficiently with our platforms. And, of course, as people want to scale and grow, they can do that in a consistent and predictable manner.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: IBM Storage.

How storage advances help businesses digitally transform across a hybrid cloud world

The next BriefingsDirect data strategies insights discussion explores how consistent and global storage models can best propel pervasive analytics and support digital business transformation.

Decades of disparate and uncoordinated storage solutions have hindered enterprises’ ability to gain common data services across today’s hybrid cloud, distributed data centers, and burgeoning edge landscapes.

Yet only a comprehensive data storage model that includes all platforms, data types, and deployment architectures will deliver the rapid insights that businesses need.

Stay with us to examine how IBM Storage is leveraging containers and the latest storage advances to deliver the holy grail of inclusive, comprehensive, and actionable storage.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about the future promise of the storage strategies that accelerate digital transformation, please welcome Denis Kennelly, General Manager, IBM Storage. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Clearly the world is transforming digitally. And hybrid cloud is helping in that transition. But what role specifically does storage play in allowing hybrid cloud to function in a way that bolsters and even accelerates digital transformation?

Kennelly: As you said, the world is undergoing a digital transformation, and that is accelerating in the current climate of a COVID-19 world. And, really, it comes down to having an IT infrastructure that is flexible, agile, has cloud-like attributes, is open, and delivers the economic value that we all need.

That is why we at IBM have a common hybrid cloud strategy. A hybrid cloud approach, we now know, is 2.5 times more economical than a public cloud-only strategy. And why is that? Because as customers transform — and transform their existing systems — the data and systems sit on-premises for a long time. As you move to the public cloud, the cost of transformation has to overcome other constraints such as data sovereignty and compliance. This is why hybrid cloud is a key enabler.

Hybrid cloud for transformation

Now, underpinning that, the core building blocks of the hybrid cloud platform are containers and Kubernetes, using our OpenShift technology. That's the key enabler of the hybrid cloud architecture and of how we move applications and data within that environment.

As the customer starts to transform and looks at those applications and workloads as they move to this new world, being able to access the data is critical and being able to keep that access is a really important step in that journey. Integrating storage into that world of containers is therefore a key building block on which we are very focused today.

Storage is where you capture all that state, where all the data is stored. When you think about cloud, hybrid cloud, and containers — you think stateless. You think about cloud-like economics as you scale up and scale down. Our focus is bridging those two worlds and making sure that they come together seamlessly. To that end, we provide an end-to-end hybrid cloud architecture to help those customers in their transformation journeys.

Gardner: So often in this business, we’re standing on the shoulders of the giants of the past 30 years; the legacy. But sometimes legacy can lead to complexity and becomes a hindrance. What is it about the way storage has evolved up until now that people need to rethink? Why do we need something like containers, which seem like a fairly radical departure?

Kennelly: It comes back to the existing systems. You know, I think storage at the end of the day was all about the applications, the workloads that we ran. It was storage for storage’s sake. You know, we designed applications, we ran applications and servers, and we architected them in a certain fashion.

And, of course, they generated data and we wanted access to that data. That’s just how the world happened. When you get to a hybrid cloud world — I mean, we talk about cloud-like behavior, cloud-like economics — it manifests itself in the ability to respond.

If you’re in a digitally transformed business, you can respond to needs in your supply chain rapidly, maybe to a surge in demand based on certain events. Your infrastructure needs to respond to those needs versus having the maximum throughput capacity that would ever be needed. That’s the benefit cloud has brought to the industry, and why it’s so critically important.

Now, maybe traditionally storage was designed for the worst-case scenario. In this new world, we have to be able to scale up and scale down elastically, just as these workloads do, in a cloud-like fashion. That's what has fundamentally changed and what we need to change in those legacy infrastructures. Then we can deliver more of an as-a-service, consumption-type model to meet the needs of the business.

Gardner: And on that economic front, digitally transformed organizations need data very rapidly, and in greater volumes — with that scalability to easily go up and down. How will the hybrid cloud model supported by containers provide faster data in greater volumes, and with a managed and forecastable economic burden?

Disparate data delivers insights

Kennelly: In a digitally transformed world, data is the raw material to a competitive advantage. Access to data is critical. Based on that data, we can derive insights and unique competitive advantages using artificial intelligence (AI) and other tools. But therein lies the question, right?

When we look at things like AI, a lot of our time and effort is spent on getting access to the data and being able to assemble that data and move it to where it is needed to gain those insights.

Being able to do that rapidly and at a low cost is critical to the storage world. And so that’s what we are very focused on, being able to provide those data services — to discover and access the data seamlessly. And, as required, we can then move the data very rapidly to build on those insights and deliver competitive advantage to a digitally transformed enterprise.

Gardner: Denis, in order to have comprehensive data access and rapidly deliver analytics at an affordable cost, the storage needs to run consistently across a wide variety of different environments — bare-metal, virtual machines (VMs), containers — and then to and from both public and private clouds, as well as the edge.

What is it about the way that IBM is advancing storage that affords this common view, even across that great disparity of environments?

Kennelly: That’s a key design principle for our storage platform, what we call global access or a global file system. We’re going right back to our roots of IBM Research, decades ago where we invented a lot of that technology. And that’s the core of what we’re still talking about today — to be able to have seamless access across disparate environments.

Access is one issue, right? You can get read-access to the data, but you need to do that at high performance and at scale. At the same time, we are generating data at a phenomenal rate, so you need to scale out the storage infrastructure seamlessly. That’s another critical piece of it. We do that with products or capabilities we have today in things like IBM Spectrum Scale.

But another key design principle in our storage platforms is being able to run in all of those environments: bare-metal servers, VMs, containers, and right out to the edge footprints. So we are making sure our storage platform is designed for and capable of supporting all of those platforms. It has to run on them as well as support the data services — the access services, the mobility services, and the like — seamlessly across those environments. That's what enables the hybrid cloud platform at the core of our transformation strategy.

Gardner: In addition to the focus on the data in production environments, we also should consider the development environment. What does your data vision include across a full life-cycle approach to data, if you will?

Be upfront with data in DevOps

Kennelly: It's a great point, because the business requirements drive the digital transformation strategy. But a lot of these efforts run into inertia when you have to change. The development teams within the organization have traditionally done things in a certain way. Now, all of a sudden, they're building applications for a very different target environment — this hybrid cloud environment, from the public cloud, to the data center, and right out to the edge.

The economics we're trying to drive require flexible platforms across the DevOps tool chain so you can innovate very quickly. That's because digital transformation is all about how quickly you can innovate via such new services. The next question is about the data.

As you develop and build these transformed applications in a modern, DevOps cloud-like development process, you have to integrate your data assets early and make sure you know the data is available — both in that development cycle as well as when you move to production. It’s essential to use things like copy-data-management services to integrate that access into your tool chain in a seamless manner. If you build those applications and ignore the data, then it becomes a shock as you roll it into production.

This is the key issue. A lot of times we can get an application running in one scenario and it looks good, but as you start to extend those services across more environments — and haven’t thought through the data architecture — a lot of the cracks appear. A lot of the problems happen.

You have to design in the data access upfront in your development process and into your tool chains to make sure that’s part of your core development process.

Gardner: Denis, over the past several years we’ve learned that containers appear to be the gift that keeps on giving. One of the nice things about this storage transition, as you’ve described, is that containers were at first a facet of the development environment.

Developers leveraged containers first to solve many problems for runtimes. So it’s also important to understand the limits that containers had. Stateful and persistent storage hadn’t been part of the earlier container attributes.

How technically have we overcome some of the earlier limits of containers?

Containers create scalable benefits

Kennelly: You’re right, containers have roots in the open-source world. Developers picked up on containers to gain a layer of abstraction. In an operational context, it gives tremendous power because of that abstraction layer. You can quickly scale up and scale down pods and clusters, and you gain cloud-like behaviors very quickly. Even within IBM, we have containerized software and enabled traditional products to have cloud-like behaviors.

We were able to move to a scalable, cloud-like platform very quickly using container technology, which is a tremendous benefit as a developer. We then moved containers into operations to respond to business needs, such as when there's a spike in demand and you need to scale up the environment. Containers are amazing in how quick and simple that is.

Now, with all of that power and the capability to scale up and scale down workloads, you also have a storage system sitting at the back end that has to respond accordingly. That’s because as you scale up more containers, you generate more input/output (IO) demands. How does the storage system respond?

Well, we have managed to integrate containers into the storage ecosystem. But, as an industry, we have some work to do. The integration of storage with containers is not just a simple IO channel to the storage. It also needs to be able to scale out accordingly, and to be managed. It's an area we at IBM are focused on, working closely with our friends at Red Hat to make sure it's a very seamless integration that gives you consistent, global behavior.
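To show the shape of that integration from the container side, here is a minimal sketch that requests shared, persistent storage through a Kubernetes PersistentVolumeClaim using the official Python client. The storage class name and size are placeholder assumptions, not specific IBM or Red Hat identifiers.

```python
# Sketch: a containerized workload asks for persistent, shared storage by
# filing a PersistentVolumeClaim; the storage layer (via a CSI driver)
# satisfies it. Assumes the `kubernetes` client and a kubeconfig are available.
from kubernetes import client, config


def request_shared_volume(name: str, namespace: str, size: str = "100Gi"):
    config.load_kube_config()
    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name=name),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteMany"],             # shared across pods
            storage_class_name="global-file-storage",   # hypothetical class
            resources=client.V1ResourceRequirements(
                requests={"storage": size}
            ),
        ),
    )
    client.CoreV1Api().create_namespaced_persistent_volume_claim(
        namespace=namespace, body=pvc
    )


request_shared_volume("analytics-data", "default")
```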

Gardner: With security and cyber-attacks being so prominent in people’s minds in early 2021, what impacts do we get with a comprehensive data strategy when it comes to security? In the past, we had disparate silos of data. Sometimes, bad things could happen between the cracks.

So as we adopt containers consistently is there an overarching security benefit when it comes to having a common data strategy across all of your data and storage types?

Prevent angles of attack

Kennelly: Yes. It goes back to the hybrid cloud platform, with potentially multiple public clouds, data center workloads, edge workloads, and all of the combinations thereof. The new core is containers, but with applications running across that hybrid environment, we've expanded the attack surface beyond the data center.

By expanding the attack surface, unfortunately, we’ve created more opportunities for people to do nefarious things, such as interrupt the applications and get access to the data. But when people attack a system, the cybercriminals are really after the data. Those are the crown jewels of any organization. That’s why this is so critical.

Data protection then requires understanding when somebody is tampering with the data or gaining access to data and doing something nefarious with that data. As we look at our data protection technologies, and as we protect our backups, we can detect if something is out of the ordinary. Integrating that capability into our backups and data protection processes is critical because that’s when we see at a very granular level what’s happening with the data. We can detect if behavioral attributes have changed from incremental backups or over time.

We can also integrate that into business process because, unfortunately, we have to plan for somebody attacking us. It’s really about how quickly we can detect and respond very quickly to get the systems back online. You have to plan for the worst-case scenario.

That’s why we have such a big focus on making sure we can detect in real time when something is happening as the blocks are literally being written to the disk. We can then also unwind to when we seek a good copy. That’s a huge focus for us right now.

Gardner: When you have a comprehensive data infrastructure and can go global and access data across all of these different environments, it seems to me that you have set yourself up for a pervasive analytics capability, which is the gorilla in the room when it comes to digital business transformation. Denis, how does the IBM Storage vision help bring more pervasive and powerful analytics to better drive a digital business?

Climb the AI Ladder

Kennelly: At the end of the day, that’s what this is all about. It’s about transforming businesses, to drive analytics, and provide unique insights that help grow your business and respond to the needs of the marketplace.

It’s all about enabling top-line growth. And that’s only possible when you can have seamless access to the data very quickly to generate insights literally in real time so you can respond accordingly to your customer needs and improve customer satisfaction.

This platform is all about discovering that data to drive the analytics. We have a phrase within IBM, we call it “The AI Ladder.” The first rung on that AI ladder is about discovering and accessing the data, and then being able to generate models from those analytics that you can use to respond in your business.

We’re all in a world based on data. And we’re using it to not only look for new business opportunities but for optimizing and automating what we already have today. AI has a major role to play where we can look at business processes and understand how they are operating and then, based on analytics and AI, drive greater automation. That’s a huge focus for us as well: Not only looking at the new business opportunities but optimizing and automating existing business processes.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: IBM Storage.

The future of work is happening now thanks to Digital Workplace Services

Businesses, schools, and governments have all had to rethink the proper balance between in-person and remote work. And because that balance is a shifting variable — and may well continue to be for years after the pandemic — it remains essential that the underlying technology be especially agile.

The next BriefingsDirect worker strategies discussion explores how a partnership behind a digital workplace services solution delivers a sliding scale for blended work scenarios. We’ll learn how Unisys, Dell, and their partners provide the time-proof means to secure applications intelligently — regardless of location.

We’ll also hear how an increasingly powerful automation capability makes the digital workplace easier to attain and support.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about the latest in cloud-delivered desktop modernization, please welcome Weston Morris, Global Strategy, Digital Workplace Services, Enterprise Services, at Unisys, and Araceli Lewis, Global Alliance Lead for Unisys at Dell Technologies. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Weston, what are the trends, catalysts, and requirements transforming how desktops and apps are delivered these days?

Morris: We’ve all lived through the hype of virtual desktop infrastructure (VDI). Every year for the last eight or nine years has supposedly been the year of VDI. And this is the year it’s going to happen, right? It had been a slow burn. And VDI has certainly been an important part of the “bag of tricks” that IT brings to bear to provide workers with what they need to be productive.

COVID sends enterprises to cloud

But since the beginning of 2020, we've all seen — because of the COVID-19 pandemic — VDI brought to the forefront, given the importance of having an alternative way of delivering a digital workplace to workers. This has been especially important in environments where enterprises had not invested in mobility or the cloud, or had not thought about making it possible for user data to reside outside of their desktop PCs.

Those enterprises had a very difficult time moving to a work-from-home (WFH) model — and they struggled with that. Their first instinct was, “Oh, I need to buy a bunch of laptops.” Well, everybody wanted laptops at the beginning of the pandemic, and secondly, they were being made in China mostly — and those factories were shut down. It was impossible to buy a laptop unless you had the foresight to do that ahead of time.

And that’s when the “aha” moment came for a lot of enterprises. They said, “Hey, cloud-based virtual desktops — that sounds like the answer, that’s the solution.” And it really is. They could set that up very quickly by spinning up essentially the digital workplace in the cloud and then having their apps and data stream down securely from the cloud to their end users anywhere. That’s been the big “aha” moment that we’ve had as we look at our customer base and enterprises across the world. We’ve done it for our own internal use.

Gardner: Araceli, it sounds like in some verticals and in certain organizations they may have waited too long to get into the VDI mindset. But when the pandemic hit, they had to move quickly.

What is about the digital workplace services solution that you all are factoring together that makes this something that can be done quickly?

Lewis: It’s absolutely true that the pandemic elevated digital workplace technology from being a nice-to-have, or a luxury, to being an absolute must-have. We realized after the pandemic struck that public sector, education, and more parts of everyday work needed new and secure ways of working remotely. And it had to become instantaneously available for everyone.

You had every C-level executive across every industry in the United States shifting to the remote model within two weeks to 30 days, and it was also needed globally. Who better than Dell on laptops and these other endpoint devices to partner with Unisys globally to securely deliver digital workspaces to our joint customers? Unisys provided the security capabilities and wrapped those services around the delivery, whereas we at Dell have the end-user devices.

What we've seen is that the digitalization of it all can be done in the comfort of everyone's home. You're seeing them looking at x-rays, or a nurse looking into someone's throat via telemedicine, for example. These remote users are also able to troubleshoot something that might be across the world using embedded virtual reality (VR) and wearables.

We merged and blended all of those technologies into this workspaces environment with the best alliance partners to deliver what the C-level executives wanted immediately.

Gardner: The pandemic has certainly been an accelerant, but many people anticipated more virtual delivery of desktops and apps as inevitable. That’s because when you do it, you get other timely benefits, such as flexible work habits. Millennials tend to prefer location-independence, for example, and there are other benefits during corporate mergers and acquisitions and for dynamic business environments.

So, Weston, what are some of the other drivers that reward people when they make the leap to virtual delivery of apps and desktops?

Take the virtual leap, reap rewards

Morris: I’m thinking back to a conversation I had with you, Araceli, back in March. You were excited and energized around the topic of business continuity, which obviously started with the pandemic.

But, Dana, there are other forces at work that preceded the pandemic and that we know will continue after the pandemic. And mergers and acquisition are a very big one. We see a tremendous amount of activity there in the healthcare space, for example, which was affected in multiple ways by the pandemic. Pharmaceuticals and life sciences as well, there are multiple merger activities going on there.

One of the big challenges in a merger or acquisition is how to quickly get the acquired employees working as first-class citizens as quickly as possible. That’s always been difficult. You either give them two laptops, or two desktops, and say, “Here’s how you do the work in the new company, and here’s where you do the work in the old company.” Or you just pull the plug and say, “Now, you have to figure out how to do everything in a new way in web time, including human resources and all of those procedures in a new environment — and hopefully you will figure it all out.”

But with a cloud-based, virtual desktop capability — especially with cloud-bursting — you can quickly spin up as much capacity as you need and build upon the on-premises capabilities you already have, such as on Dell EMC VxRail, and then burst that into the cloud as needed using VMware Horizon on the Microsoft Azure cloud.
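
As a rough illustration of the placement logic behind that cloud-bursting pattern, here is a minimal sketch. The capacities, block size, and function names are hypothetical; they are not actual Horizon, Azure, or VxRail API calls.

```python
# Hypothetical sketch of a cloud-bursting placement decision.
# Pool sizes and block sizes are illustrative only; a real deployment
# would drive this through the Horizon and Azure management planes.

ON_PREM_CAPACITY = 500      # desktops the on-premises VxRail cluster can host
CLOUD_BLOCK_SIZE = 100      # desktops added per burst block in the cloud

def place_desktops(requested: int, on_prem_in_use: int) -> dict:
    """Decide how many desktops stay on-premises and how many burst to cloud."""
    on_prem_free = max(ON_PREM_CAPACITY - on_prem_in_use, 0)
    on_prem_alloc = min(requested, on_prem_free)
    overflow = requested - on_prem_alloc

    # Round any overflow up to whole capacity blocks in the cloud.
    cloud_blocks = -(-overflow // CLOUD_BLOCK_SIZE) if overflow else 0
    return {
        "on_premises": on_prem_alloc,
        "cloud_blocks": cloud_blocks,
        "cloud_capacity": cloud_blocks * CLOUD_BLOCK_SIZE,
    }

# Example: 650 desktops requested while 400 are already running on VxRail.
print(place_desktops(650, 400))
# -> {'on_premises': 100, 'cloud_blocks': 6, 'cloud_capacity': 600}
```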

That’s an example of providing a virtual desktop for all of the newly acquired employees so they can do their new corporate-citizen work while they keep their existing environment and continue to be productive doing the job you hired them to do when you made the acquisition. That’s a very big use case that we’re going to continue to see going forward.

Gardner: Now, there were a number of hurdles historically toward everyone adopting VDI. One of the major use cases was, of course, security and being able to control content by having it centrally located on your servers or in your cloud — rather than stored out on every device. Is that still a driving consideration, Weston? Are people still looking for that added level of security, or has that become passé?

Morris: Security has become even more important throughout the pandemic. In the past, to a large extent, the corporate firewall-as-secure-the-perimeter model has worked fairly well. And we’ve been punching holes in the firewall for several years now.

But with the pandemic — with almost everyone working from home — your office network just exploded. It now extends everywhere. Now you have to worry about how well secured any one person’s home network is. Have they changed the default password on their home router? Have they updated the firmware on it? A lot of these things are beyond what the average worker can be expected to worry about and think about.

But if we separate out the workload and put it into the cloud — so that you have the digital workplace sitting in the cloud — that is much more secure than a device sitting on somebody’s desk connected to a very questionable home network environment.

Gardner: Another challenge in working toward more modern desktop delivery has been cost, because it’s usually been capital-intensive and required upfront investment. But when you modernize via the cloud that can shift.

Araceli, what are some of the challenges that we’re now able to overcome when it comes to the economics of virtual desktop delivery?

Cost benefits of partnering

Lewis: The beautiful thing here is that in our partnership with Unisys and Dell Financial Services (DFS), we’re able to utilize different utility models when it comes to how we consume the technology.

We don’t have to have upfront capital expenditures. We basically look at different ways that we can do server and platform infrastructure. Then we can consume the technology in the most efficient manner, and that works with the books and how we’re going to depreciate. So, that’s extremely flexible.

And by partnering with Unisys, they secure those VDI solutions across all three core components: the VDI portion within the data center, the endpoint devices, and, of course, the software. By partnering with Unisys in our alliance ecosystem, we get the best of DFS, Dell Technologies, VMware software, and Unisys security capabilities.

Gardner: Weston, another issue that’s dogged VDI adoption is complexity for the IT department. When we think about VDI, we can’t only think about end users. What has changed for how the IT department deploys infrastructure, especially for a hybrid approach where VDI is delivered both from on-premises data centers as well as the cloud?

Intelligent virtual agents assist IT

Morris: Araceli and I have had several conversations about this. It’s an interesting topic. There has always been a lot of work to stand up VDI. If you’re starting from scratch, you’re thinking about storage, IOPS, and network capacity. Where are my apps? What’s the connectivity? How are we going to run it at optimal performance? After all, are the end users happy with the experience they’re getting? And how can I even know what their experience is?

And now, all that’s changed thanks to the evolving technology. One is the advent of artificial intelligence (AI) and the use of personal intelligent virtual assistants. At home, we’re used to that, right? We ask Alexa, Siri, or Cortana what’s going on with the weather or what’s happening in the news. We ask our virtual assistants all of these things and we expect to be able to get instant answers and help. Why is that not available in the enterprise for IT? Well, the answer is it is now available.

As you can imagine on the provisioning side, wouldn’t it be great if you were able to talk to a virtual assistant that understood the provisioning process? You simply answer questions posed by the assistant. What is it you need to provision? What load are you looking at? Do you have engineers who need to access virtual desktops? What types of apps might they need? What type of security?

Then the virtual assistant understands the business and IT processes to provision the infrastructure needed virtually in the cloud to make that all happen or to cloud-burst from your on-premises Dell VxRail into the cloud.

That is a very important game changer. The other aspect of the intelligent virtual agent is it now resides on the virtual desktop as well. I, as an at-home worker, may have never seen a virtual desktop before. And now, the virtual assistant pops up and guides the home worker through the process of connecting, explaining how their apps work, and saying, “I’m always here. I’m ready to give you help whenever possible.” But I think I’ll defer to the expert here.

Araceli, do you want to talk about the power of the hybrid environment and how that simplifies the infrastructure?

Multiple workloads managed

Lewis: Sure, absolutely. At Dell EMC, we are proud of the fact that Gartner rates us number one, as a leader in the category for pretty much all of the products that we’ve included in this VDI solution. When Unisys and my alliances team get the technology, it’s already been tested from a hyper-converged infrastructure (HCI) perspective. VxRail has been tested, tried-and-true as an automated system in which we combine servers, storage, network, and the software.

That way, Weston and I don’t have to worry about what size we’re going to use. We already have T-shirt sizes thought out for the number of VDI users that are needed. We have the graphics-intensive portion of it thought out. And we can basically deploy quickly and then put the workloads on them as we need to spin them up or spin them down or to add more.

We can adjust on the fly. That’s a true testament to our HCI being the backbone of the solution. And we don’t have to get into all of the testing, regression testing, automation, and self-healing of it, because a lot of that management would have had to be done by enterprise IT or by a managed services provider. Instead, it’s handled via the lifecycle management of the Dell EMC VxRail HCI solution.

That is a huge benefit, the fact that we deliver a solution from the value line and the hypervisor on up. We can then focus on the end users’ services, and we don’t have to be swapping out components or troubleshooting, because of all the refinement that Dell has done in that technology today.

Morris: Araceli, the first time you and your team showed me the cloud-bursting capability, it just blew me away. I know in the past how hard it was to expand any infrastructure. You showed me that every industry and every enterprise is going to have a core base of assumptions. So, why not put that under Dell VxRail?

Then, as you need to expand, cloud-burst into, in this case, Horizon running on Azure. And that can all be done now through a single dashboard. I don’t have to be thinking, “Okay, now I have this separate workload in the cloud, and this other workload on my on-premises cloud with VxRail.” It’s all done through one, single dashboard that can be automated on the back end through a virtual agent, which is pretty cool.

Gardner: It sure seems in hindsight that the timing here was auspicious. Just as the virus was forcing people to rapidly find a virtual desktop solution, you had put together the intelligence and automation along with software-defined infrastructure like HCI. And then you also gained the ease in hybrid by bursting to the cloud.

And so, it seems that the way that you get to a solution like this has never been easier, just when it was needed to be easy for organizations such as small- to medium-sized businesses (SMBs) and verticals like public sector and education. So, was the alliance and partnering, in fact, a positive confluence of timing?

Greater than sum of parts

Morris: Yes. The perfect storm analogy certainly applies. It was great when I got the phone call from Araceli, saying, “Hey, we have this business continuity capability.” We at Unisys had been thinking about business continuity as well.

We looked at the different components that we each brought. Unisys brought its security around Stealth and the capability to proactively monitor infrastructure and desktops, see what’s going on, and automatically fix issues via the intelligent virtual agent and automation. And we realized that this was really a great solution, a much better solution than the individual parts.

We could not make this happen without all of the cool stuff that Dell brings in terms of the HCI, the clients, and, of course, the very powerful VMware-based virtual desktops. And we added to that some things that we have become very good at in our digital workplace transformation. The result is something that can make a real difference for enterprises. You mentioned the public sector and education. Those are great examples of industries that really can benefit from this.

Gardner: Araceli, anything more to offer on how your solution came together, the partners and the constituent parts?

Lewis: Consistent infrastructure and operations, plus the help of our partner, Unisys, globally, deliver the services to the end users. This was just a partnership that had to come together.

We at Dell couldn’t do it alone. We needed those data center spaces. We needed the capabilities of their architects and teams to deliver for us. We were getting so many requests early during the pandemic, an overwhelming amount of demand from every C-level suite across the country, and from every vertical and industry. We had to rely on Unisys as our trusted partner not only in the public sector but in healthcare and banking. But we knew if we partnered with them, we could give our community what they needed to get through the pandemic.

Gardner: And among those constituent parts, how important a part is Horizon? Why is it so important?

Lewis: VMware Horizon is the glue. It streamlines desktop and app delivery in various ways. The first would be by cloud-bursting. It actually gives us the capability to do that in a very simple fashion.

Secondly, it’s a single pane of glass. It delivers all of the business-critical apps to any device, anywhere on a single screen. So that makes it simple and comprehensive for the IT staff.

We can also deliver non-persistent virtual desktops. The advantage here is that it makes software patching and distribution a whole lot easier. We don’t have all the complexity. If there were ever a security concern or issue, we simply blow away that non-persistent virtual desktop and start all over. It gets us back to our first phase, square one, whereas we would otherwise have to spend countless hours on backups and restores to get to where we are safe again. So, it pulls everything together for us: end users get a seamless interface, the IT staff don’t have the complexity, and it gives us the best of both worlds as we get out to the cloud.

Gardner: Weston, on the intelligent agents and bots, do you have an example of how it works in practice? It’s really fascinating to me that you’re using AI-enabled robotic process automation (RPA) tools to help the IT department set this up. And you’re also using it to help the end users learn how to onboard themselves, get going, and then get ongoing support.

Amelia AI ascertains answers

Morris: It’s an investment we began almost 24 months ago, branded as the Unisys InteliServe platform, which initially was intended to bring AI, automation, and analytics to the service desk. It was designed to improve the service desk experience and make it easier to use, make it scalable, and to learn over time what kinds of problems people needed help solving.

But we realized once we had it in place, “Wow, this intelligent virtual agent can almost be an enterprise personal assistant where it can be trained on anything, on any business process.” So, we’ve been training it on fixing common IT problems … password resets, can’t log in, can’t get to the virtual private network (VPN), Outlook crashes, those types of things. And it does very well at those sorts of activities.

But the core technology is also perfectly suited to be trained for IT processes as well as business processes inside of the enterprise. For example, for this particular scenario of supporting virtual desktops. If a customer has a specific process for provisioning virtual desktops, they may have specific pools of types of virtual desktops, certain capacities, and those can be created ahead of time, ready to go.

Then it’s just a matter of communicating with the intelligent virtual assistant to say, “I need to add more users to this pool,” or, “We need to remove users,” or, “We need to add a whole new pool.” The agent is branded as Amelia. It has a female voice, though it doesn’t have to be; in most cases, it is.

When we speak with Amelia, she’s able to ask questions that guide the user through the process. They don’t have to know what the process is. They don’t do this very often, right? But she can be trained to be an expert on it.

Amelia collects the information needed and submits it to the RPA that communicates with Horizon, Azure, and the VxRail platforms to provision the virtual desktops as needed. And this can happen very quickly, whereas in the past it may have taken days or weeks to spin up a new environment for a new project, for a merger and acquisition, or, in this case, for reacting to the pandemic and getting people able to work from home.
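
In highly simplified form, that conversational intake reduces to collecting structured answers and handing them to downstream automation. The sketch below is hypothetical: the field names, pool labels, and hand-off function stand in for the actual Amelia and RPA interfaces, which are not described here.

```python
# Illustrative sketch of turning a virtual-assistant conversation into a
# provisioning request. All names are hypothetical stand-ins.

from dataclasses import dataclass

@dataclass
class ProvisioningRequest:
    pool: str              # e.g., "engineering-gpu" or "knowledge-worker"
    user_count: int
    apps: list
    security_profile: str

def intake_conversation(answers: dict) -> ProvisioningRequest:
    """Turn the answers collected by the assistant into a structured request."""
    return ProvisioningRequest(
        pool=answers["pool"],
        user_count=int(answers["user_count"]),
        apps=answers.get("apps", []),
        security_profile=answers.get("security_profile", "standard"),
    )

def submit_to_automation(request: ProvisioningRequest) -> str:
    """Hand the structured request to downstream automation (placeholder)."""
    # In practice this is where the RPA layer would drive Horizon, Azure,
    # and the VxRail management plane; here we only model the hand-off.
    return f"queued {request.user_count} desktops in pool '{request.pool}'"

answers = {"pool": "engineering-gpu", "user_count": 25,
           "apps": ["CAD suite"], "security_profile": "high"}
print(submit_to_automation(intake_conversation(answers)))
```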

By the same token, when the end users open up their virtual desktops, they connect to the Horizon workspace, and there is Amelia. She’s there ready to respond to totally different types of questions: “How do I use this?” “Where’s my apps?” “This is new to me, what do I do? How do I connect?” “What about working from home?” “What’s my VPN connection working like, and how do I get that connected properly?” “What about security issues?” There, she’s now able to help with the standard end-user-type issues as well.

Gardner: Araceli, any examples of where this intelligent process automation has played out in the workplace? Do we have some ways of measuring the impact?

Simplify, then measure the impact

Lewis: We do. It’s given us, in certain use cases, the predictability and the benefit of a pay-as-you-grow linear scale, rather than the pay-by-the-seat type of solution. In the past, if we had a state or a government agency where they need, for example, 10,000 seats, we would measure them by the seat. If there’s a situation like a pandemic, or any other type of environment where we have to adjust quickly, how could we deliver 10,000 instances in the past?

Now, using Dell EMC ready-architectures with the technologies we’ve discussed — and with Unisys’ capabilities — we can provide such a rapid and large deployment in a pay-as-you-grow linear scale. We can predict what the pricing is going to be as they need to use it for these public sector agencies and financial firms. In the past, there was a lot of capital expenditures (CapEx). There was a lot of process, a lot of change, and there were just too many unknowns.

These modern platforms have simplified the management of the backends of the software and the delivery of it to create a true platform that we can quantify and measure — not only just financially, but from a time-to-delivery perspective as well.
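
To make the pricing contrast concrete, here is a small worked example. The per-desktop rate and seat counts are made up purely to show the pay-as-you-grow arithmetic; they are not actual DFS pricing.

```python
# Worked example of linear, consumption-based pricing with illustrative numbers.

RATE_PER_DESKTOP_MONTH = 40.0   # hypothetical unit cost, not a real rate

def monthly_cost(active_desktops: int) -> float:
    """Pay-as-you-grow cost: pay only for desktops actually in use that month."""
    return active_desktops * RATE_PER_DESKTOP_MONTH

# An agency scaling from 2,000 to 10,000 desktops during a surge, then back
# down, pays in proportion to what is running each month.
for month, seats in [("March", 2000), ("April", 10000), ("September", 6000)]:
    print(month, monthly_cost(seats))
```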

Morris: I have an example of a particular customer where they had a manual process for onboarding. Such onboarding includes multiple steps, one of which is, “Give me my digital workplace.”

But there are other things, too. The training around gaining access to email, for example. That was taking almost 40 hours. Can you imagine a person starting their job, and 40 hours later they finally get the stuff they need to be productive? That’s a lot of downtime.

After using our automation, that transition was down to a little over eight hours. What that means is a person starts filling out their paperwork with HR on day one, gets oriented, and then the next day they have everything they need to be productive. What a big difference. And in the offboarding — it’s even more interesting. What happens when a person leaves the company? Maybe under unfavorable circumstances, we might say.

In the past, the manual processes for this customer took almost 24 hours before everything was turned off. What does that mean? That means that an unhappy, disgruntled employee has 24 hours. They can come in, download content, get access to materials or perhaps be disruptive, or even destructive, with the corporate intellectual property, which is very bad.

Through automation, this offboarding process is now down to six minutes. I mean, that person hasn’t even walked out of the room and they’ve been locked out completely from that IT environment. And that can be done even more quickly if we’re talking about a virtual desktop environment, in which the switch can be thrown immediately and completely. Access is completely and instantly removed from the virtual environment.
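
To show why a virtual desktop environment allows near-instant revocation, here is a minimal, hypothetical sketch: disabling the account and terminating sessions becomes one control-plane action rather than a chain of manual steps. The data shapes and function are illustrative only.

```python
# Hypothetical offboarding sketch: revoke access and end all active sessions.

from datetime import datetime, timezone

def offboard(user_id: str, sessions: list) -> dict:
    """Disable the account and terminate every active virtual desktop session."""
    revoked_at = datetime.now(timezone.utc).isoformat()
    terminated = [s for s in sessions if s["user"] == user_id]
    # In a real environment these would be calls to the identity provider
    # and the VDI broker; here we only model the bookkeeping.
    return {
        "user": user_id,
        "account_disabled": True,
        "sessions_terminated": len(terminated),
        "revoked_at": revoked_at,
    }

sessions = [{"user": "jdoe", "desktop": "vd-017"},
            {"user": "asmith", "desktop": "vd-044"}]
print(offboard("jdoe", sessions))
```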

Gardner: Araceli, is there a best-of-breed, thin-client hardware approach that you’re using? What about use cases such as graphics-intense or computer-aided design (CAD) applications? What’s the end-point approach for some of these more intense applications?

Viable, virtual, and versatile solutions

Lewis: Being Dell Technologies, that was a perfect question for us, Dana. We understand the persona of the end users. As we roll out this technology, let’s say it’s for an engineering team that does CAD drawings. If you look at the persona, and we partner with Unisys and look at what each end user’s needs are, you can determine if they need more memory, more processing power, and whether they need a more graphics-intensive device. We can do that. Our Wyse thin clients can do that, the Wyse 3000s and the 5000s.

But I don’t want to pinpoint one specific type of device per user, because we could be talking about a doctor, or we could be talking about a nurse in an intensive care unit. She is going to need something more mobile. We can also provide end-user devices that are ruggedized, maybe for an oil field or a construction site. So, from an engineering perspective, we can adapt the end-user device to their persona and their needs, and we can meet all of those requirements. It’s not a problem.

Gardner: Weston, anything from your vantage point on the diversity and agility of those endpoint devices and why this solution is so versatile?

Morris: There is diversity at both ends. Araceli, you talked about being able, on the back end, to provision and scale up and down the capacity and capability of a virtual desktop to meet the personas’ needs.

And then there is the end-user side; you mentioned Millennials, Dana. They may want choice in how they connect. Am I connecting in through my own personal laptop at home? Do I want to have access to a thin client when I go back to work? Do I want to come in through a mobile? And maybe I want to do all three in the same day? And they don’t want to lose work in between. That is all entirely possible with this infrastructure.

Gardner: Let’s look to the future. We’ve been talking about what’s possible now. But it seems to me that we’ve focused on the very definition of agility: It scales, it’s fast, and it’s automated. It’s applicable across the globe.

What comes next? What can you do with this technology now that you have it in place? It seems to me that we have an opportunity to do even more.

Morris: We’re not backing down from AI and automation. That is here to stay, and it’s going to continue to expand. People have finally realized the power of cloud-based VDI. That is now a very important tool for IT to have in their bag of tricks. They can respond to very specific use cases in a very fast, scalable, and effective way.

In the future we will see that AI continues to provide guidance, not only in the provisioning that we’ve talked about, not only in startup and use on the end-user side — but in providing analytics as to how the entire ecosystem is working. That’s not just the virtual desktops, but the apps that are in the cloud as well and the identity protection. There’s a whole security component that AI has to play a role in. It almost sounds like a pipe dream, but it’s just going to make life better. AI absolutely will do that when it’s used appropriately.

Lewis: I’m looking to the future on how we’re going to live and work in the next five to 10 years. It’s going to be tough to go back to what we were used to. And I’m thinking forward to the Internet of Things (IoT). There’s going to be an explosion of edge devices, of wearables, and how we incorporate all of those technologies will be a part of a persona.

Typically, we’re going to be carrying our work everywhere we go. So, how are we going to integrate all of the wearables? How are we going to make voice recognition more adaptable? VR, AI, robotics, drones — how are we going to tie all of that together?

Nowadays, we tie our home systems and our cooling and heating to all of the things around us to interoperate. I think that’s going to go ahead and continue to grow exponentially. I’m really excited that we’ve partnered with Unisys because we wouldn’t want to do something like this without a partner who is just so deeply entrenched in the solutions. I’m looking forward to that.

Gardner: What advice would you give to an organization that hasn’t bitten off the virtual desktop from the cloud and hybrid environment yet? What’s the best way to get started?

Morris: It’s really important to understand your users, your personas. What are they consuming? How do they want to consume it? What is their connectivity like? You need to understand that, if you’re going to make sure that you can deliver the right digital workplace to them and give them an experience that matters.

Lewis: At Dell Technologies, we know how important it is to retain our top and best talent. And because we’ve been one of the top places to work for the past few years, it’s extremely important to make sure that technology and access to technology help to enable our workforce.

I truly feel that any of our customers or end users who haven’t looked at VDI haven’t yet realized the benefits in savings and in keeping a competitive advantage in this fast-paced world. They also need to retain their talent, too. To do that they need to give their employees the best tools and the best capabilities to be the very best. They have to look at VDI in some way, shape, or form. As soon as we bring it to them — whether technically, financially, or for competitive factors — it really makes sense. It’s not a tough sell at all, Dana.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Unisys and Dell Technologies.

Customer experience management has never been more important or impactful

The next BriefingsDirect digital business innovation discussion explores how companies need to better understand and respond to their markets one subscriber at a time. By better listening inside of their products, businesses can remove the daylight between their digital deliverables and their customers’ impressions.

Stay with us now as we hear from a customer experience (CX) management expert at SAP on the latest ways that discerning customers’ preferences informs digital business imperatives.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about the business of best fulfilling customer wants and needs, please welcome Lisa Bianco, Global Vice President, Experience Management and Advocacy at SAP Procurement Solutions. The interview is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What was the catalyst about five years ago that led you there at SAP Procurement to invest in a team devoted specifically to CX innovation?

Bianco: As a business-to-business (B2B) organization, we recognized that B2B was changing and it was starting to look and feel more like business-to-consumer (B2C). The days of leaders dictating the solutions and products that their end users were going to be leveraging for day-to-day business stuff — like procurement or finance — were ending. We found we were competing with what an end user’s experience would be with the products or applications they use in their personal life.

We all know this; we’ve all been there. We would go to work to use the tools, and there used to be those times we would use the printer for our kids’ flyers for their birthday because it was a much better tool than what we had at home. And that had shifted.

But then business leaders were competing with rogue employees using tools like Amazon.com instead of SAP Ariba’s solution for procurement to buy things for their businesses. And with that maverick spend, companies didn’t have the insights that they needed to make decisions. So, we knew that we had to ensure that the end-user experience at work replicated what they might feel at home. It reflected that shift in persona from a decision-maker to that of a user.

Gardner: Whether it’s B2B or B2C, there tends to be a group of people out there who are really good at productivity and will find ways to improve things if you only take the chance to listen and follow their lead, right?

Bianco: That’s exactly right.

Gardner: And what was it about B2B in the business environment that was plowing new ground when it came to listening rather than just coming up with a list of requirements, baking it into the software, and throwing it over the wall?

Leaders listen to customer experience

Bianco: The truth is, better listening in B2B resulted in a real shift for leaders. All of a sudden, a chief procurement officer (CPO) who made a decision on a procurement solution, or a chief information officer (CIO) who made a decision on an enterprise resource planning (ERP) solution, was beginning to get flak from cross-functional leaders who were end users and couldn’t actually do their functions.

In B2B we found that we had to start understanding the feelings of employees and the feelings of our customers. And that’s not really what you do in B2B, right? Marketing and branding at SAP now said that the future of business has feelings. And that’s a shock. I can’t tell you how many times I have talked to leaders who say, “I want to switch the word empathy in our mission statement because that’s not strong leadership in B2B.”

But the truth is we had to shift. Society was shifting to that place and understanding that feelings allow us to understand the experiences because the experiences were that of people. We can only make so many decisions based on our operational data, right? You really have to understand the why.

We did have to carve out a new path, and it’s something we still do to this day. Many B2B companies haven’t evolved to an experience management program, because it’s tough. It’s really hard.

Gardner: If we can’t just follow the clicks, and we can’t discern feelings from the raw data, we need to do something more. What do we do? How do we understand why people feel good or bad about what they are doing?

Bianco: We get over that hurdle by having a corporate strategy that puts the customer at the center of all we do. I like to think of it as having a customer-centric decision-making platform. That’s not to say it’s a product. It’s really a shift in mindset that says, “We believe we will be a successful company if our customers’ feelings are positive, if their experiences are great.”

If you look at the disruptors such as Airbnb or Amazon, they prioritize CX over their own objectives as a business and their own business success, things like net-new software sales or renewal targets. They focus on the experiences that their customers have throughout their lifecycle.

That’s a big shift for corporate America because we are so ingrained in producing for the board and we are so ingrained in producing for the investors that oftentimes putting that customer first is secondary. It’s a systemic shift in culture and thinking that tends to be what we see in the emerging companies today as they grab such huge market share. It’s because they shifted that thinking.

Gardner: Right. And when you shift the thinking in the age of social media — and people can share what their impressions are — that becomes a channel and a marketing opportunity in itself. People aren’t in a bubble. They are able to say and even demonstrate in real time what their likes are, what their dislikes are, and that’s obvious to many other people around them.

Customer feedback ecosystem

Bianco: Dana, you are pointing out risk. And it’s so true. And this year, the disrupter that COVID-19 has created is a tectonic shift in our digitalization of customer feedback. And now, via social media and Twitter, if you are not at the forefront of understanding what your customers’ feelings are — and what they may or may not say — and you are not doing that in a proactive way, you run the risk of it playing out socially in a public forum. And the longer that goes unattended to, you start to lose trust.

When you start to lose trust, it is so much harder to fix than understanding in the lifecycle of a customer the problems that they face, fixing those and making that a priority.

Gardner: Why is this specifically important in procurement? Is there something about procurement, supply chain, and buying that this experience focus is important? Or does it cut across all functions in business?

Bianco: It’s across all functions in business. However, if you look at procurement in the world today, it incorporates a vast ecosystem. It’s one of those functions in business that includes buyers and suppliers. It includes logistics, and it’s complex. It is one of the core areas of a business. When that is disrupted it can have drastic effects on your business.

We saw that in spades this year. It affects your supply chain, where you can have alternative opportunities to regain your momentum after a disruption. It affects your workforce and all of the tools and materials necessary for your company to function when it shifts and moves home. And so with that, we look from SAP’s perspective at these personas that navigate through a multitude of products in your organization. And in procurement, because that ecosystem is there for our customers, understanding the experience of all of those parties allows for customers to make better decisions.

A really good example is one of the world’s largest consulting firms. They took 500,000 employees in offices around the world and found that they had to immediately put them in their homes. They had to make sure they had the products they needed, like computers, green screens, or leisure wear.

They learned what looks good enough on a virtual Zoom meeting. Procurement had to understand what their employees needed within a week’s time so that they didn’t lose revenue deploying the services that their customers had purchased and relied on them for.

Understanding that lifecycle really helps companies, especially now. Having seen the recent disruption, they are able to understand exactly what they need to do and quickly make decisions to make experiences better and get their business back on track.

Gardner: Well, this is also the year or era of moving toward automation and using data and analytics more, even employing bots and robotic process automation (RPA). Is there something about that tack in our industry now that can be brought to CX management? Is there a synergy between not just doing this manually, but looking to automation and finding new insights using new tools?

Automate customer journeys

Bianco: It’s a really great insight into the future of understanding the experiences of a customer. A couple of things come to mind. We have all recognized the importance of having operational data: usage data, seeing where the clicks are throughout your product, and really documenting customer journey maps.

But if you automate the way you get feedback you don’t just have operational data; you need to get the feelings to come through with experience data. And that experience data can help drive where automation needs to happen. You can then embed that kind of feedback-loop-process in typical survey-type tools or embed them right into your systems.

And so that helps you understand some areas where we can remove steps from the process, especially as many companies look to procurement to create automation. And so the more we can understand where we have those repetitive flows that we can automate, the better.

Gardner: Is that what you mean by listening inside of the product or does that include other things, too?

Bianco: It includes other things. As you may know, SAP purchased a company called Qualtrics. They are experts in experience management, and we have been able to evolve from traditional net promoter score (NPS) surveys to looking at micro-moments to get customer feedback as they are doing a function. We have embedded certain moments inside of our product that allow us to capture feedback in real time.
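
As a rough sketch of what capturing one of those in-product micro-moments might involve, consider the following. The event fields and the JSON sink are assumptions for illustration; they are not the Qualtrics API.

```python
# Hypothetical sketch of recording a "micro-moment" of feedback right after
# a task inside the product. Field names and the sink are illustrative.

import json
from datetime import datetime, timezone

def capture_micro_moment(user_id: str, task: str, score: int, comment: str = "") -> str:
    """Record a small, in-context piece of experience data after a task."""
    event = {
        "user": user_id,
        "task": task,                     # e.g., "submit-purchase-requisition"
        "score": max(1, min(score, 5)),   # clamp to a 1-5 in-product scale
        "comment": comment,
        "captured_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(event)  # stand-in for posting to the feedback pipeline

print(capture_micro_moment("u-1021", "submit-purchase-requisition", 4, "Approval step unclear"))
```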

Gardner: Lisa, a little earlier you alluded to elements of what happens in the B2C world as individual consumers and what we can then learn and take into the B2B world. Is there anything top of mind for you that you have experienced as a consumer where you said, “Aha, I want to be able to do that or bring that type of experience and insight to my B2B world”?

Customer service is king in B2B

Bianco: Yes, you know what happened to me just this week as a matter of fact? There is a show on TV right now about chess. With all of us being at home, many of us are consuming copious amounts of content. And I went and ordered a chess set, it came, it was beautiful, it was from Wayfair, and one of the pieces was broken.

I snapped a little picture of the piece that had broken and they had an amazing app that allowed me to say, “Look, I don’t need you to replace the whole thing, it’s just this one little piece, and if you can just send me that, that would be great.”

And they are like, “You know what? Don’t worry about sending it back. We are just going to send you a whole new set.” It was like a $100 set. So I now have two sets because they were gracious enough to see that I didn’t have a great experience. They didn’t want me to deal with sending it back. They immediately sent me the product that I wanted.

I am, like, where is that in B2B? Where is that in the complex area of procurement that I find myself? How can we get that same experience for our customers when something goes wrong?

When I began this program, we would try to figure out what is that chess set. Other organizations use garlic knots, like at pizza restaurants. While you and your kids wait 25 minutes for the pizza to be made, a lot of pizza shops offer garlic knots to make you happy so the wait doesn’t seem so long. What is that equivalent for B2B?

It’s hard. What we learned early on, and I am so grateful for, is that in B2B many end users and customers know how difficult it is to make some of their experiences better, because it’s complex. They have a lot of empathy for companies trying to go down such a path, in this case, for procurement.

But with that, what their garlic knot is, what their free product or chess set is, is when we tell them that their voice matters. It’s when we receive their feedback, understand their experience against our operational data, and let them know that we have the resources and budget to take action on their feedback and to make it better.

Either we show them that we have made it better or we tell them, “We hear what you are saying, but that doesn’t fit into our future.” You have to be able to have that complete feedback loop, otherwise you alienate your customer. They don’t want to feel like you are asking for their feedback but not doing anything with it.

And so that’s one of the most important things we learned here. That’s the thing that I witnessed from a B2C perspective and tried to replicate in B2B.

Gardner: Lisa, I’m sensing that there is an opportunity for the CX management function to become very important for overall digital business transformation. The way that Wayfair was able to help you with the chess set required integration, cooperation, and coordination between what were probably previously siloed parts of their organization.

That means the helpdesk, the ordering and delivering, exception management capabilities, and getting sign-off on doing this sort of thing. It had to mean breaking down those silos — in process, data, and function. And that integration is often part of an all-important digital transformation journey.

So are you finding that people like yourself, who are spearheading the experience management for your customers, are in a catbird seat of identifying where silos, breakdowns, and gaps exist in the B2B supplier organizations?

Feedback fuels cross-training

Bianco: Absolutely. Here is what I have learned: I am going to focus on cloud, especially on companies that either are cloud companies or have been on-premises companies and are migrating to being cloud companies. SAP Ariba did this over the last 20 years. It has migrated from on-premises to cloud, so we have a great DNA understanding of that. SAP is doing the same thing; many companies are.

And what’s important to realize, at least from my perspective — it was an “Aha” moment — is that there is a tendency among leadership to say, “Look, I am looking at all this data and feedback around customers. Can’t we just go fix this particular customer issue, and they are going to be happy?”

What we found in the B2B data was that most of the issues our customers were facing were systemic. It was broad strokes of consistent feedback about something that wasn’t working. We had to recognize that these systemic issues needed to be solved by a cross-functional group of people.

That’s really hard because so many folks have their own budgets, and they lead only a particular function. To think about how they might fix something more broadly took our organization quite a bit of time to wrap our heads around. Because now you need a center of excellence, a governance model that says that CX is at the forefront, and that you are going to have accountability in the business to act on that feedback and those actions. And you are going to compose a cross-functional, multilevel team to get it done.

It was funny early on, when we received feedback that customer support was a problem. Support was the problem. The support function was awful. I remember the head of support was like, “Oh, my gosh. I am going to get fired. I just hate my job. I don’t know what to do.”

When you look at the root cause you find that quality is a root-cause issue, but quality wasn’t just in one or another product — it was across many products. That broader quality issue led to how we enabled our support teams to understand how to better support those products. That quality issue also impacted how we went to market and how we showed the features and functions of the product.

We developed a team called the Top X Organization that aggregated cross-functional folks, held them accountable to a standard of a better outcome experience for our customers, and then led a program to hit certain milestones to transform that experience. But all that is a heavy lift for many companies.

Gardner: That’s fascinating. So, your CX advocates — by having that cross-functional perspective by nature — became advocates for better processes and higher quality at the organization level. They are not just advocating for the customer; they are actually advocating for the betterment of the business. Are you finding that and where do you find the people that can best do that?

Responsibility of active listening

Bianco: It’s not an easy task, and the people who can do it are few and far between. Again, it takes a corporate strategy. Dana, when you asked me the question earlier on, “What was the catalyst that brought you here?” I oftentimes chuckle. There isn’t a leader on the planet who isn’t going to have someone come to them, like I did at the time, and say, “Hey, I think we should listen to our customers.” Who wouldn’t want to do that? Everyone wants to do that. It sounds like a really good idea.

But, Dana, it’s about active listening. If you watch movies, there is often a scene where there is a husband and wife getting therapy. And the therapist says, “Hey, did you hear what she said?” or, “Did you hear what he said?” And the therapist has them repeat it back. Your marriage or a struggle you have with relationships is never going to get better just by going and sitting on the couch and talking to the therapist. It requires each of you to decide internally that you want this to be better, and that you are going to make the changes necessary to move that relationship forward.

It’s not dissimilar to the desire to have a CX organization, right? Everyone thinks it’s a great idea to show in their org chart that they have a leader of CX. But the truth is you have to really understand the responsibility of listening. And that responsibility sometimes devolves into just taking a survey. I’m all for sending a survey out to our customers, let’s do it. But that is the smallest part of a CX organization.

It’s really wrapped up in what the corporate strategy is going to be: a customer-centric, decision-making model. If we do that, are we prepared to have a governance structure that says we are going to fund and resource making experiences better? Are we going to acknowledge the feedback, act on it, and make that a priority in the business, or not?

Oftentimes leaders get caught up in, “I just want to show I have a CX team and I am going to run a survey.” But they don’t realize the responsibility that gives them when now they have on paper all the things that they know they have an opportunity to make better for their customers.

Gardner: You have now had five years to make these changes. In theory this sounds very advantageous on a lot of levels and solves some larger strategic problems that you would have a hard time addressing otherwise.

So where’s the proof? Do you have qualitative, quantitative indicators? Maybe it’s one of those things that’s really hard to prove. But how do you rate the customer advocacy and CX role? What does it get you when you do it well?

Feelings matter at all levels

Bianco: Really good point. We just came off of our five-year anniversary this week. We just had an NPS survey and we got some amazing trends. In five years we have seen improvement, with an even greater improvement in the last 18 months — an 11-point increase in our customer feedback. And that not only translates into the survey, as I mentioned, but it also translates with influencers and analysts.
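
For context on how a score movement like that is computed, NPS is the percentage of promoters (scores of 9 or 10) minus the percentage of detractors (scores of 0 through 6). The two response sets below are made up solely to illustrate the arithmetic behind roughly a 10-point swing.

```python
# Standard NPS arithmetic; the sample survey responses are fabricated
# for illustration only.

def net_promoter_score(responses: list) -> float:
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return 100.0 * (promoters - detractors) / len(responses)

baseline = [9, 7, 6, 10, 8, 5, 9, 7, 4, 10]   # earlier survey wave -> NPS 10
latest   = [9, 8, 7, 10, 8, 5, 9, 7, 6, 10]   # later survey wave  -> NPS 20
print(net_promoter_score(baseline), net_promoter_score(latest))
```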

Gartner has noted the increase in our ability to address CX issues and make them better. We can see that in terms of the 11-point increase. We can see that in terms of our reputation within our analyst community.

And we also see it in the data. Customers are saying, “Look, you are much more responsive to me.” We see a 35-percent decrease in customers complaining in their open text fields about support. We see customers mentioning less the challenges they have seen in the area of integration, which is so incredibly important.

And we also hear less from our own SAP leaders who felt like NPS just exposed the fact that they might not be doing their job well, which was initially the experience we got from leaders who were like, “Oh my gosh. I don’t want you to talk about anything that makes it look like I am not doing my job.” We created a culture where we have been more open to feedback. We now relish that insight, versus feeling defensive.

And that’s a culture shift that took us five years to get to. Now you have leaders chomping at the bit to get those insights, get that data, and make the changes because we have proof. And that proof did start with an organizational change right in the beginning. It started with new leadership in certain areas like support. Those things translated into the success we have today. But now we have to evolve beyond that. What’s the next step for us?

Gardner: Before we talk about your next steps, for those organizations that are intrigued by this — that want to be more customer-centric and to understand why it’s important — what lessons have you learned? What advice do you have for organizations that are maybe just beginning on the CX path?

Bianco: How long is this show?

Gardner: Ten more minutes, tops.

Bianco: Just kidding. I mean gosh, I have learned a lot. If I look back — and I know some of my colleagues at IBM had a similar experience — the feedback is this. We started by deploying NPS. We just went out there and said we are going to do these NPS surveys and that’s going to shake the business into understanding how our customers are feeling.

We grew to understand that our customers came to SAP because of our products. And so I think I might have spent more time listening inside of the products. What does that mean? It certainly means embedding micro-moments that aggregate feedback in the product to help us understand, and to allow our developers to understand, what they need to do. But that needs to be done in a very strategic way.

It’s also about making sure that any time anyone in the company wants to listen to customers, you ensure that you have the budget and the resources necessary to make that change — because otherwise you will alienate your customers.

Another area is you have to have executive leadership. It has to be at the root of your corporate objectives. Anything less than that and you will struggle. It doesn’t mean you won’t have some success, but when you are looking at the root of making experience better, it’s about action. That action needs to be taken by the folks responsible for your products or services. Those folks have to be incented, or they have to be looped in and committed to the program. There has to be a governance model that measures the experience of the customer based on how the customer interprets it — not how you interpret it.

If, as a company, you interpret success as net-new software sales, you have to shift that mindset. That’s not how your customers view their own success.

Gardner: That’s very important and powerful. Before we sign off, five years in, where do you go now? Is there an acceleration benefit, a virtuous adoption pattern of sorts when you do this? How do you take what you have done and bring it to a step-change improvement or to an even more strategic level?

Turn feedback into action

Bianco: The next step for us is to embed the experience program in every phase of the customer’s journey. That includes every phase of our engagement journey inside of our organization.

So from start to finish, what are the teams providing that experience, whether it’s a service or product? That would be one. And, again, that requires the governance that I mentioned. Because action is where it’s at — regardless of the feedback you are getting and how many places you listen. Action is the most important piece to making their experience better.

Another is to move beyond just NPS surveys. Again, it’s not that this is a new concept, but as I watched the impact of COVID-19 on accelerating digital feedback, social forums, and public forums, we measured that advocacy. It’s not just the, “Will you recommend this product to a friend or colleague?” In addition it’s about, “Will you promote this company or not?”

That is going to be more important than ever, because we are going to continue in a virtual environment next year. As much as we can help frame what that feedback might be — and be proactive — is where I see success for SAP in the future.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: SAP Ariba.

How to industrialize data science to attain mastery of repeatable intelligence delivery

Businesses these days are quick to declare their intention to become data-driven, yet the deployment of analytics and the use of data science remains spotty, isolated, and often uncoordinated.

To fully reach their digital business transformation potential, businesses large and small need to make data science more of a repeatable assembly line — an industrialization, if you will — of end-to-end data exploitation.

The next BriefingsDirect Voice of Analytics Innovation discussion explores the latest methods, tools, and thinking around making data science an integral core function that both responds to business needs and scales to improve every aspect of productivity.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about the ways that data and analytics behave more like a factory — and less like an Ivory Tower — please welcome Doug Cackett, EMEA Field Chief Technology Officer at Hewlett Packard Enterprise. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Doug, why is there a lingering gap — and really a gaping gap — between the amount of data available and the analytics that should be taking advantage of it?

Cackett: That’s such a big question to start with, Dana, to be honest. We probably need to accept that we’re not doing things the right way at the moment. Actually, Forrester suggests that something like 40 zettabytes of data are going to be under management by the end of this year, which is quite enormous.

And, significantly, more of that data is being generated at the edge through applications, Internet of Things (IoT), and all sorts of other things. This is where the customer meets your business. This is where you’re going to have to start making decisions as well.

So, the gap is two things. It’s the gap between the amount of data that’s being generated and the amount you can actually comprehend and create value from. In order to leverage that data from a business point of view, you need to make decisions at the edge.

You will need to operationalize those decisions and move that capability to the edge where your business meets your customer. That’s the challenge we’re all looking for machine learning (ML) — and the operationalization of all of those ML models into applications — to make the difference.

Gardner: Why does HPE think that moving more toward a factory model, industrializing data science, is part of the solution to compressing and removing this gap?

Data’s potential at the edge

Cackett: It’s a math problem, really, if you think about it. If there is exponential growth in data within your business, if you’re trying to optimize every step in every business process you have, then you’ll want to operationalize those insights by making your applications as smart as they can possibly be. You’ll want to embed ML into those applications.

Because, correspondingly, there’s exponential growth in the demand for analytics in your business, right? And yet, the number of data scientists you have in your organization — I mean, growing them exponentially isn’t really an option, is it? And, of course, budgets are also pretty much flat or declining.

So, it’s a math problem because we need to somehow square away that equation. We somehow have to generate exponentially more models for more data, getting to the edge, but doing that with fewer data scientists and lower levels of budget.

Industrialization, we think, is the only way of doing that. Through industrialization, we can remove waste from the system and improve the quality and control of those models. All of those things are going to be key going forward.

Gardner: When we’re thinking about such industrialization, we shouldn’t necessarily be thinking about an assembly line of 50 years ago — where there are a lot of warm bodies lined up. I’m thinking about the Lucille Ball assembly line, where all that candy was coming down and she couldn’t keep up with it.

Perhaps we need more of an ultra-modern assembly line, where it’s a series of robots and with a few very capable people involved. Is that a fair analogy?

Industrialization of data science

Cackett: I think that’s right. Industrialization is about manufacturing where we replace manual labor with mechanical mass production. We are not talking about that. Because we’re not talking about replacing the data scientist. The data scientist is key to this. But we want to look more like a modern car plant, yes. We want to make sure that the data scientist is maximizing the value from the data science, if you like.

We don’t want to go hunting around for the right tools to use. We don’t want to wait for the production line to play catch up, or for the supply chain to catch up. In our case, of course, that’s mostly data or waiting for infrastructure or waiting for permission to do something. All of those things are a complete waste of their time.

As you look at the amount of productive time data scientists spend creating value, that can be pretty small compared to their non-productive time — and that’s a concern. Part of the non-productive time, of course, has been with those data scientists having to discover a model and optimize it. Then they would do the steps to operationalize it.

But maybe doing the data and operations engineering things to operationalize the model can be much more efficiently done with another team of people who have the skills to do that. We’re talking about specialization here, really.

But there are some other learnings as well. I recently wrote a blog about it. In it, I looked at the modern Toyota production system and started to ask what we could learn from what they have learned, if you like, over the last 70 years or so.

It was not just about automation, but also how they went about doing research and development, how they approached tooling, and how they did continuous improvement. We have a lot to learn in those areas.

An awful lot of the organizations that I deal with haven't had much experience with such operationalization problems. They haven't built that part of their assembly line yet. Automating supply chains and mistake-proofing things, what Toyota calls jidoka, are also really important. It's a really interesting area to be involved with.

Gardner: Right, this is what US manufacturing, in the bricks-and-mortar sense, went through back in the 1980s when it moved to business process reengineering, adopted kaizen principles, and embraced the quality emphasis that Deming had brought to the Japanese auto companies.

And so, back then there was a revolution, if you will, in physical manufacturing. And now it sounds like we’re at a watershed moment in how data and analytics are processed.

Cackett: Yes, that’s exactly right. To extend that analogy a little further, I recently saw a documentary about Morgan cars in the UK. They’re a hand-built kind of car company. Quite expensive, very hand-built, and very specialized.

And I ended up almost throwing things at the TV because they were talking about the skills of this one individual. They only had one guy who could actually bend the metal to create the bonnet, the hood, of the car in the way that it needed to be done. And it took two or three years to train this guy, and I'm thinking, "Well, if you just automated the process, and the robot built it, you wouldn't need to have that variability." I mean, it's just so annoying, right?

In the same way, with data science we're talking about laying bricks — not Michelangelo hammering out the figure of David. What I'm really trying to say is that a lot of the data science in our customers' organizations is fairly mundane. To get that through the door, get it done and dusted, and give them time to do the other bits of finesse using more skills — that's what we're trying to achieve. Both [the basics and the finesse] are necessary, and they can all be done on the same production line.

Gardner: Doug, if we are going to reinvent and increase the productivity generally of data science, it sounds like technology is going to be a big part of the solution. But technology can also be part of the problem.

What is it about the way that organizations are deploying technology now that needs to shift? How is HPE helping them adjust to the technology that supports a better data science approach?

Define and refine

Cackett: We can probably all agree that most of the tooling around MLOps is relatively young. Of the two types of company we see, the first are those that haven't yet gotten to the stage where they're trying to operationalize more models. In other words, they don't really understand what the problem is yet.

Forrester research suggests that only 14 percent of organizations that they surveyed said they had a robust and repeatable operationalization process. It’s clear that the other 86 percent of organizations just haven’t refined what they’re doing yet. And that’s often because it’s quite difficult.

Many of these organizations have only just linked their data science to their big data instances or their data lakes. They're using those both for the workloads and to develop the models, and therein lies the problem. Often they get stuck with simple things, like trying to have everyone use a uniform environment: all of your data scientists are sharing both the data and the compute environment.

And data scientists can often be very destructive in what they're doing, maybe overwriting data, for example. To avoid that, you end up replicating the data. And if you're going to replicate terabytes of data, that can take a long period of time. That also means you need new resources, maybe more compute power, which means approvals, and it might mean new hardware, too.

Often the biggest challenge is in provisioning the environment for data scientists to work on, the data that they want, and the tools they want. That can all often lead to huge delays in the process. And, as we talked about, this is often a time-sensitive problem. You want to get through more tasks and so every delayed minute, hour, or day that you have becomes a real challenge.

The other thing that is key is that data science is very peaky. You’ll find that data scientists may need no resources or tools on Monday and Tuesday, but then they may burn every GPU you have in the building on Wednesday, Thursday, and Friday. So, managing that as a business is also really important. If you’re going to get the most out of the budget you have, and the infrastructure you have, you need to think differently about all of these things. Does that make sense, Dana?

Gardner: Yes. Doug, how is HPE Ezmeral being designed to give data scientists more of what they need, how they need it, and to help close the gap between the ad hoc approach and the right kind of assembly-line approach?

Two assembly lines to start

Cackett: Look at it as two assembly lines, at the very minimum. That’s the way we want to look at it. And the first thing the data scientists are doing is the discovery.

The second is the MLOps processes. There will be a range of people operationalizing the models. Imagine that you’re a data scientist, Dana, and I’ve just given you a task. Let’s say there’s a high defection or churn rate from our business, and you need to investigate why.

First you want to find out more about the problem because you might have to break that problem down into a number of steps. And then, in order to do something with the data, you’re going to want an environment to work in. So, in the first step, you may simply want to define the project, determine how long you have, and develop a cost center.

You may next define the environment: Maybe you need CPUs or GPUs. Maybe you need them highly available and maybe not. So you’d select the appropriate-sized environment. You then might next go and open the tools catalog. We’re not forcing you to use a specific tool; we have a range of tools available. You select the tools you want. Maybe you’re going to use Python. I know you’re hardcore, so you’re going to code using Jupyter and Python.

And the next step, you then want to find the right data, maybe through the data catalog. So you locate the data that you want to use and you just want to push a button and get provisioned for that lot. You don’t want to have to wait months for that data. That should be provisioned straight away, right?

You can do your work, save it all away into a virtual repository, and save the data so it's reproducible. You can then also check things like model drift and data drift. You can save away the code and the model parameters as well. And then you can put that on the backlog for the MLOps team.
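
To make that hand-off step concrete, here is a minimal sketch of what "saving the work away so it's reproducible" can look like, using MLflow tracking as one illustrative tool (MLflow comes up later in the discussion). The experiment name, data file, column names, and model parameters below are hypothetical, not taken from the interview.

```python
# A minimal, hypothetical sketch of logging a discovery run so the MLOps
# team can reproduce it and later compare production behavior against it.
import mlflow
import mlflow.sklearn
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

mlflow.set_experiment("customer-churn-discovery")    # hypothetical project name

df = pd.read_csv("churn_snapshot.csv")               # data provisioned from the catalog
X, y = df.drop(columns=["churned"]), df["churned"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

with mlflow.start_run(run_name="baseline-rf"):
    params = {"n_estimators": 200, "max_depth": 8}
    model = RandomForestClassifier(**params).fit(X_train, y_train)

    # Save the parameters, a test metric, and the model artifact itself.
    # This is the baseline the MLOps team can check model and data drift against.
    mlflow.log_params(params)
    mlflow.log_metric("test_auc", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
    mlflow.sklearn.log_model(model, artifact_path="model")
```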

Then the MLOps team picks it up and goes through a similar process. They want to create their own production line now, right? And so, they're going to select a different set of tools. This time, they need continuous integration and continuous delivery (CICD), plus a whole bunch of data tooling to operationalize your model. They're going to define the way that model is going to be deployed. Let's say we're going to use Kubeflow for that. They might decide on, say, an A/B testing process. So they're going to configure that, do the rest of the work, and press the button again, right?
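
The A/B testing step can be sketched in a few lines as well. Rather than assume any particular Kubeflow configuration, the fragment below only illustrates the routing idea: a small share of scoring requests goes to the candidate model, the rest to the current champion, and each response records which version served it. The split, names, and model objects are hypothetical.

```python
# A hypothetical champion/challenger (A/B) routing sketch. In practice the
# serving layer (for example, Kubeflow's serving components) handles this;
# the point here is only the traffic-splitting idea.
import random

AB_SPLIT = 0.10  # send 10 percent of traffic to the candidate model

def score(request, champion_model, candidate_model):
    """Route one scoring request to one of two deployed model versions."""
    if random.random() < AB_SPLIT:
        version, model = "candidate", candidate_model
    else:
        version, model = "champion", champion_model
    prediction = model.predict([request])[0]
    # Recording the version lets the two arms be compared (for example,
    # churn-save rate per arm) before the candidate is promoted.
    return {"model_version": version, "prediction": prediction}
```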

Clearly, this is an ongoing process. Fundamentally that requires workflow and automatic provisioning of the environment to eliminate wasted time, waiting for stuff to be available. It is fundamentally what we’re doing in our MLOps product.

But in the wider sense, we also have consulting teams helping customers get up to speed, define these processes, and build the skills around the tools. We can also do this as-a-service via our HPE GreenLake proposition as well. Those are the kinds of things that we’re helping customers with.

Gardner: Doug, what you’re describing as needed in data science operations is a lot like what was needed for application development with the advent of DevOps several years ago. Is there commonality between what we’re doing with the flow and nature of the process for data and analytics and what was done not too long ago with application development? Isn’t that also akin to more of a cattle approach than a pet approach?

Operationalize with agility

Cackett: Yes, I completely agree. That's exactly what this is about for an MLOps process. It's analogous to the CICD, DevOps part of the IT business. But a lot of that tool chain is being taken care of by things like Kubeflow and MLflow Projects, some of these newer, open source technologies.

I should say that this is all very new, the ancillary tooling that wraps around the CICD. The CICD set of tools is also pretty new. What we're also attempting to do is allow you, as a business, to bring in these new tools and onboard them, so you can evaluate them and see how they might impact what you're doing as your process settles down.

The way we’re doing MLOps and data science is progressing extremely quickly. So you don’t want to lock yourself into a corner where you’re trapped in a particular workflow. You want to have agility. It’s analogous to the DevOps movement.

The idea is to put them in a wrapper and make them available so we get a more dynamic feel to this. The way we’re doing MLOps and data science generally is progressing extremely quickly at the moment. So you don’t want to lock yourself into a corner where you’re trapped into a particular workflow. You want to be able to have agility. Yes, it’s very analogous to the DevOps movement as we seek to operationalize the ML model.

The other thing to pay attention to is the changes that need to happen to your operational applications. You're going to have to change those so they can call the ML model at the appropriate place, get the result back, and then render that result in whatever way is appropriate. So changes to the operational apps are also important.

Gardner: You really couldn’t operationalize ML as a process if you’re only a tools provider. You couldn’t really do it if you’re a cloud services provider alone. You couldn’t just do this if you were a professional services provider.

It seems to me that HPE is actually in a very advantageous place to allow the best-of-breed tools approach where it's most impactful, but also to start to put some standard glue around this — the industrialization. How is HPE in an advantageous place to have a meaningful impact on this difficult problem?

Cackett: Hopefully, we’re in an advantageous place. As you say, it’s not just a tool, is it? Think about the breadth of decisions that you need to make in your organization, and how many of those could be optimized using some kind of ML model.

You’d understand that it’s very unlikely that it’s going to be a tool. It’s going to be a range of tools, and that range of tools is going to be changing almost constantly over the next 10 and 20 years.

This is much more to do with a platform approach because this area is relatively new. Like any other technology, when it's new it almost inevitably tends to be very technical in implementation. So using the early tools can be very difficult. Over time, the tools mature, with a mature UI and a well-defined process, and they become simple to use.

But at the moment, we’re way up at the other end. And so I think this is about platforms. And what we’re providing at HPE is the platform through which you can plug in these tools and integrate them together. You have the freedom to use whatever tools you want. But at the same time, you’re inheriting the back-end system. So, that’s Active Directory and Lightweight Directory Access Protocol (LDAP) integrations, and that’s linkage back to the data, your most precious asset in your business. Whether that be in a data lake or a data warehouse, in data marts or even streaming applications.

This is the melting point of the business at the moment. And HPE has had a lot of experience helping our customers deliver value through information technology investments over many years. And that’s certainly what we’re trying to do right now.

Gardner: It seems that HPE Ezmeral is moving toward industrialization of data science, as well as other essential functions. But is that where you should start, with operationalizing data science? Or is there a certain order by which this becomes more fruitful? Where do you start?

Machine learning leads change

Cackett: This is such a hard question to answer, Dana. It’s so dependent on where you are as a business and what you’re trying to achieve. Typically, to be honest, we find that the engagement is normally with some element of change in our customers. That’s often, for example, where there’s a new digital transformation initiative going on. And you’ll find that the digital transformation is being held back by an inability to do the data science that’s required.

There is another Forrester report that I’m sure you’ll find interesting. It suggests that 98 percent of business leaders feel that ML is key to their competitive advantage. It’s hardly surprising then that ML is so closely related to digital transformation, right? Because that’s about the stage at which organizations are competing after all.

So we often find that that’s the starting point, yes. Why can’t we develop these models and get them into production in time to meet our digital transformation initiative? And then it becomes, “Well, what bits do we have to change? How do we transform our MLOps capability to be able to do this and do this at scale?”

Often this shift is led by an individual in an organization. There develops a momentum in an organization to make these changes. But the changes can be really small at the start, of course. You might start off with just a single ML problem related to digital transformation.

We acquired MapR some time ago, which is now our HPE Ezmeral Data Fabric, and it underpins a lot of the work that we're doing. And so, we will often start with the data, to be honest with you, because a lot of the challenges in many of our customers' organizations have to do with the data. And as businesses become more real-time and want to connect more closely to the edge, that's where the strengths of the data fabric approach come into play.

So another starting point might be the data. A new application at the edge, for example, has new, very stringent requirements for data, and so we start there, building these data systems using our data fabric. That leads to a requirement to do the analytics, which brings us nicely to HPE Ezmeral MLOps, the data science proposition that we have.

Gardner: Doug, is the COVID-19 pandemic prompting people to bite the bullet and operationalize data science because they need to be fleet and agile and to do things in new ways that they couldn’t have anticipated?

Cackett: Yes, I’m sure it is. We know it’s happening; we’ve seen all the research. McKinsey has pointed out that the pandemic has accelerated a digital transformation journey. And inevitably that means more data science going forward because, as we talked about already with that Forrester research, some 98 percent think that it’s about competitive advantage. And it is, frankly. The research goes back a long way to people like Tom Davenport, of course, in his famous Harvard Business Review article. We know that customers who do more with analytics, or better analytics, outperform their peers on any measure. And ML is the next incarnation of that journey.

Gardner: Do you have any use cases of organizations that have gone to the industrialization approach to data science? What is it done for them?

Financial services benefits

Cackett: I’m afraid names are going to have to be left out. But a good example is in financial services. They have a problem in the form of many regulatory requirements.

When HPE acquired BlueData it gained an underlying technology, which we’ve transformed into our MLOps and container platform. BlueData had a long history of containerizing very difficult, problematic workloads. In this case, this particular financial services organization had a real challenge. They wanted to bring on new data scientists. But the problem is, every time they wanted to bring a new data scientist on, they had to go and acquire a bunch of new hardware, because their process required them to replicate the data and completely isolate the new data scientist from the other ones. This was their process. That’s what they had to do.

So as a result, it took them almost six months to do anything. And there's no way that was sustainable. It was a well-defined process, but it still involved a six-month wait each time.

So instead we containerized their Cloudera implementation and separated the compute and storage as well. That means we can now create environments on the fly, effectively within minutes. It also means we can take read-only snapshots of data. The read-only snapshot is just a set of pointers, so it's instantaneous.
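
The "snapshot is just a set of pointers" point is worth unpacking. The toy sketch below is not how HPE Ezmeral Data Fabric is implemented; it only illustrates why a pointer-based, read-only snapshot is effectively instantaneous and why later overwrites by a data scientist don't disturb it.

```python
# Toy illustration of a pointer-based snapshot: no data is copied, so the
# snapshot is effectively instantaneous, and later writes don't affect it.
class Volume:
    def __init__(self):
        self.blocks = {}                      # block_id -> data

    def write(self, block_id, data):
        self.blocks[block_id] = data          # rebind the block to new data

    def snapshot(self):
        return dict(self.blocks)              # copy references only, not data

vol = Volume()
vol.write("b1", "customer table, v1")
snap = vol.snapshot()                         # instantaneous: pointers only
vol.write("b1", "customer table, v2")         # a data scientist overwrites it

print(snap["b1"])                             # still "customer table, v1"
print(vol.blocks["b1"])                       # now  "customer table, v2"
```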

They were able to scale out their data science without scaling up their costs or the number of people required. Interestingly, they have recently moved that on further as well, and are now doing all of that in a hybrid cloud environment. And they only have to change two lines of code to push workloads into AWS, for example, which is pretty magical, right? And that's where they're doing the data science.

Another good example that I can name is GM Finance, a fantastic example of how, having started in one area of the business — all about risk and compliance — they've been able to extend the value to things like credit risk.

But doing credit risk and risk in terms of insurance also means that they can look at policy pricing based on dynamic risk. For example, for auto insurance based on the way you’re driving. How about you, Dana? I drive like a complete idiot. So I couldn’t possibly afford that, right? But you, I’m sure you drive very safely.

But in this use case, because they have the data science in place, it means they can know how a car is being driven. They are able to look at the value of the car at the end of that lease period and create more value from it.

These are the types of detailed business outcomes we're talking about. This is about giving our customers the means to do more data science. And because the data science becomes better, you're able to do even more data science and create momentum in the organization, which means you can do increasingly more data science. It's really a very compelling proposition.

Gardner: Doug, if I were to come to you in three years and similarly ask, "Give me an example of a company that has done this right and has really reshaped itself," describe what you think a truly analytics-driven company will be able to do. What is the end state?

A data-science driven future

Cackett: I can answer that in two ways. One relates to talking to an ex-colleague who worked at Facebook. And I’m so taken with what they were doing there. Basically, he said, what originally happened at Facebook, in his very words, is that to create a new product in Facebook they had an engineer and a product owner. They sat together and they created a new product.

Sometime later, they would ask a data scientist to get involved, too. That person would look at the data and tell them the results.

Then they completely changed that around. What they now do is first find the data scientist and bring him or her on board as they’re creating a product. So they’re instrumenting up what they’re doing in a way that best serves the data scientist, which is really interesting.

The data science is built-in from the start. If you ask me what’s going to happen in three years’ time, as we move to this democratization of ML, that’s exactly what’s going to happen. I think we’ll end up genuinely being information-driven as an organization.

That will build the data science into the products and the applications from the start, not tack them on to the end.

Gardner: And when you do that, it seems to me the payoffs are expansive — and perhaps accelerating.

Cackett: Yes. That’s the competitive advantage and differentiation we started off talking about. But the technology has to underpin that. You can’t deliver the ML without the technology; you won’t get the competitive advantage in your business, and so your digital transformation will also fail.

This is about getting the right technology with the right people in place to deliver these kinds of results.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

How remote work promises to deliver new levels of engagement, productivity, and innovation

The way people work has changed more in 2020 than the previous 10 years combined — and that’s saying a lot. Even more than the major technological impacts of cloud, mobile, and big data, the COVID-19 pandemic has greatly accelerated and deepened global behavioral shifts.

The ways that people think about where and how to work may never be the same, and new technology alone could not have made such a rapid impact.

So now is the time to take advantage of a perhaps once-in-a-lifetime disruption for the better. Steps can be taken to make sure that such a sea change comes less with a price and more with a broad boon — to both workers and businesses.

The next BriefingsDirect work strategies panel discussion explores research into the future of work and how unprecedented innovation could very well mean a doubling of overall productivity in the coming years.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

We’re joined by a panel to hear insights on how a remote-first strategy leads to a reinvention of work expectations and payoffs. Please welcoming our guests, Jeff Vincent, Chief Executive Officer at Lucid Technology ServicesRay Wolf, Chief Executive Officer at A2K Partners, and Tim Minahan, Executive Vice President of Business Strategy and Chief Marketing Officer at Citrix. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tim, you’ve done some new research at Citrix. You’ve looked into what’s going on with the nature of work and a shift from what seems to be from chaos to opportunity. Tell us about the research and why it fosters such optimism.

Minahan: Most of the world has been focused on the here-and-now, with how to get employees home safely, maintain business continuity, and keep employees engaged and productive in a prolonged work-from-home model. Yet we spent the bulk of the last year partnering with Oxford Analytica and Coleman Parkes to survey thousands of business and IT executives and to conduct qualitative interviews with C-level executives, academia, and futurists on what work is going to look like 15 years from now — in 2035 — and predict the role that technology will play.

Certainly, we’re already seeing an acceleration of the findings from the report. And if there’s any iota of a silver lining in this global crisis we’re all living through, it’s that it has caused many organizations to rethink their operating models, business models, and their work models and workforce strategies.

Work has no doubt forever changed. We're seeing an acceleration of companies embracing new workforce strategies, reaching pools of talent in remote locales using technology, and opening up access to skill sets that were previously too costly near their office and work hubs.

Now they can access talent anywhere, enabling and elevating the skill sets of all employees by leveraging artificial intelligence (AI) and machine learning (ML) to help them perform as their best employees. They are ensuring that they can embrace entirely new work models, possibly even the Uber-fication of work by tapping into recent retirees, work-from-home parents, and caregivers who had opted-out of the workforce — not because they didn’t have the skills or expertise that folks needed — but because traditional work models didn’t support their home environment.

We’re seeing an acceleration of companies liberated by the fact that they realize work can happen outside of the office. Many executives across every industry have begun to rethink what the future of work is going to look like when we come out of this pandemic.

Gardner: Tim, one of the things that jumped out at me from your research was a majority feel that technology will make workers at least twice as productive by 2035. Why such a newfound opportunity for higher productivity, which had been fairly flat for quite a while? What has changed in behavior and technology that seems to be breaking us out of the doldrums when it comes to productivity?

Work 2035: Citrix Research Reveals a More Intelligent Future

Minahan: Certainly, the doubling of employee productivity is a factor of a couple things. Number one, new more flexible work models allow employees to work wherever they can do their best work. But more importantly, it is the emergence of the augmented worker, using AI and ML to help not just offer up the right information at the right time, but help employees make more informed decisions and speed up the decision-making process, as well as automating menial tasks so employees can focus on the strategic aspects of driving creativity and innovation for the business. This is one of the areas we think is the most exciting as we look forward to the future.

Gardner: We’re going to dig into that research more in our discussion. But let’s go to Jeff at Lucid Technology Services. Tell us about Lucid, Jeff, and why a remote-first strategy has been a good fit for you.

Remote service keeps SMBs safe

Vincent: Lucid Technology Services delivers what amounts to a fractional chief information officer (CIO) service. Small- to medium-sized businesses (SMBs) need CIOs but don’t generally have the working capital to afford a full-time, always-on, and always-there CIO or chief technology officer (CTO). That’s where we fill the gap.

We bring essentially an IT department to SMBs, everything from budgeting to documentation — and all points in between. And one of the big things that has taught us to look forward is looking backward. In 1908, Henry Ford gave us the modern assembly line, which promptly gave us the Model T. And so horse-drawn buggy whip factories and buggy accessories suddenly became obsolete.

Something similar happened in the early 1990s. It was a fad called the Internet, and it revolutionized work in ways that could not have been foreseen up to that point in time. We firmly believe that we're on the precipice of another revolution of work just like then. The technology is mature at this point. We can move forward with it, using things like Citrix.

Gardner: Bringing a CIO-caliber function to SMBs sounds like it would be difficult to scale, if you had to do it in-person. So, by nature, you have been a pioneer in a remote-first strategy. Is it effective? Some people think you can’t be remote and be effective.

Vincent: Well, that’s not what we’ve been finding. This has been an evolution in my business for 20 years now. And the field has grown as the need has grown. Fortunately, the technology has kept pace with it. So, yes, I think we’re very effective.

Previously, let’s say a CPA firm of 15 providers, or a medical firm of three or four doctors with another 10 or so administrative and assistance staff on site all of the time, they had privileged information and data under regulation that needs safeguarding.

Well, if you are Arthur Andersen, a large, national firm, or Kaiser Permanente, or some really large corporation that has an entire team of IT staff on-site, then that isn’t really a problem. But when you’re under 25 to 50 employees, that’s a real problem because even if you were compromised, you wouldn’t necessarily know it.

We leverage monitoring technology, such as next-generation firewalls, and a team of people looking after that network operation center (NOC) and help desk to head those problems off at the pass. If problems do develop, we can catch them when they’re still small. And with such a light, agile team that’s heavy on tech and the infrastructure behind it, a very few people can do a lot of work for a lot of people. That is the secret sauce of our success.

Gardner: Jeff, from your experience, how often is it the CIO who is driving the remote work strategy?

Vincent: I don’t think remote work prior to the pandemic could have been driven from any other any other seat than the CIO/CTO. It’s his or her job. It’s their entire ethos to keep the finger on pulse of technology, where it’s going, and what it’s currently capable of doing.

In my experience, anybody else on the C-suite team has so much else going on. Everybody is wearing multiple hats and doing double-duty. So, the CTO is where that would have been driven.

But now, what I’ve seen in my own business, is that since the pandemic, as the CTO, I’m not generally leading the discussion — I’m answering the questions. That’s been very exciting and one of the silver linings I’ve seen through this very trying time. We’re not forcing the conversation anymore. We are responding to the questions. I certainly didn’t envision a pandemic shutting down businesses. But clearly, the possibility was there, and it’s been a lot easier conversation [about remote work] to have over the past several months.

The nomadic way of work

Gardner: Ray, tell us about A2K Partners. What do you have in common with Jeff Vincent at Lucid about the perceived value of a remote-first strategy?

Wolf: A2K Partners is a digital transformation company. Our secret sauce is we translate technology into the business applications, outcomes, and impacts that people care about.

Our company was founded by individuals who were previously in C-level business positions, running global organizations. We were the consumers of technology. And honestly, we didn’t want to spend a lot of time configuring the technologies. We wanted to speed things up, drive efficiency, and drive revenue and growth. So we essentially built the company around that.

We focus on work redesign, work orchestration, and employee engagement. We leverage platforms like Citrix for the future of work and for bringing in productivity enhancements to the actual processes of doing work. We ask, what’s the current state? What’s the future state? That’s where we spend a lot of our time.

As for a remote-first strategy, I want to highlight that our company is a nomadic company. We recruit people who want to live and work from anywhere. We think there’s a different mindset there. They are more apt to accept and embrace change. So untethered work is really key.

What we have been seeing with our clients — and in the conversations that we're having today — is that the leaders of every organization, at every level, are trying to figure out how to come out of this pandemic better than when they went in. Some actually feel like victims, and we're encouraging them that this is really an opportunity.

Some statistics from the last three economic downturns: One very interesting one is that companies that went into the downturn in the bottom 20 percent emerged in the top 20 percent after the downturn. And you ask yourself, "How does a mediocre company all of a sudden rise to the top through a crisis?" This is where we've been spending time, figuring out what plays they are running and how to better help them execute on it.

As Work Goes Virtual, Citrix Research Shows Companies Need to Follow Talent Fleeing Cities

The companies that have decided to use this as a period to change the business model, change the services and products they’re offering, are doing it in stealth mode. They’re not noisy. There are no press releases. But I will tell you that next March, June, or September, what will come from them will create an Amazon-like experience for their customers and their employees.

Gardner: Tim, in listening to Jeff and Ray, it strikes me that they look at remote work not as the destination — but the starting point. Is that what you’re starting to see? Have people reconciled themselves with the notion that a significant portion of their workforce will probably be remote? And how do we use that as a starting point — and to what?

Minahan: As Jeff said, companies are rethinking their work models in ways they haven't since Henry Ford. We just did OnePoll research with thousands of US-based knowledge workers. Some 47 percent have either relocated out of big metropolitan areas or are in the process of doing that right now. They can do so primarily because they've proven to themselves that they can be productive when not necessarily in the office.

Similarly, some 80 percent of companies are now looking at making remote work a more permanent part of their workforce strategy. And why is that? It is not just merely should Sam or Sally work in the office or work at home. No, they’re fundamentally rethinking the role of work, the workforce, the office, and what role the physical office should play.

And they’re seeing an opportunity, not just from real estate cost-reduction, but more so from access to talent. If we remember back nine months ago to before the great pandemic, we were having a different discussion. That discussion was the fact that there was a global talent shortage, according to McKinsey, of 95 million medium- to high-skilled workers.

That hasn’t changed. It was exacerbated at that time because we were organized around traditional work-hub models — where you build an office, build a call center, and you try like heck to hire people from around that area. Of course, if you happen to build in a metropolitan area right down the street from one of your top competitors — you can see the challenge.

In addition, there was a challenge around attaining the right skillsets to modernize and digitize your businesses. We’re also seeing an acceleration in the need for those skills because, candidly, very few businesses can continue to maintain their physical operations in light of the pandemic. They have had to go digital.

And so, as companies are rethinking all of this, they’re reviewing how to use technology to embrace a much more flexible work model, one that gives access to talent anywhere, just as Ray indicated. I like the nomadic work concept.

Now, how do I use technology to even further raise the skillsets of all of my employees so they perform like the very best. This is where that interesting angle of AI and ML comes in, of being able to offer up the right insights to guide employees to the right next step in a very simple way. At the same time, that approach removes the noise from their day and helps them focus on the tasks they need to get done to be productive. It gives them the space to be creative and innovative and to drive that next level of growth for their company.

Gardner: Jeff, it sounds like the remote work and the future of work that Tim is describing sets us up for a force-multiplier when it comes to addressable markets. And not just addressable markets in terms of your customers, who can be anywhere, but also that your workers can be anywhere. Is that one of the things that will lead to a doubling of productivity?

Workers and customers anywhere

Vincent: Certainly. And the thing about truth is that it’s where you find it. And if it’s true in one area of human operations, it’s going to at least have some application in every other. For example, I live in the Central Valley of California. Because of our climate, the geology, and the way this valley was carved out of the hillside, we have a disproportionately high ability to produce food. So one of the major industries here in the Central Valley is agriculture.

You can’t do what we do here just anywhere because of all those considerations: climate, soil, and rainfall, when it comes. The fact that we have one of the tallest mountain ranges right next to us gives us tons of water, even if it doesn’t rain a lot here in Fresno. But you can’t outsource any of those things. You can’t move any of those things — but that’s becoming a rarity.

If you focus on a remote-first workplace, you can source talent from anywhere; you can locate your business center anywhere. So you get a much greater recruiting tool both for clientele and for talent.

Another thing that has been driven by this pandemic is that people have been forced to go home, stay there, and work there. Either you’re going to figure out a way to get around the obstacles of not being able to go to the office or you’re going to have to close down, and nobody wants to do that. So they’ve learned to adapt, by and large.

And the benefits that we’re seeing are just manifold. They go into everything. Our business agility is much greater. The human considerations of your team members improve, too. They have had an artificial dichotomy between work responsibilities and home life. Think of a single parent trying to raise a family and put bread on the table.

Work Has Changed Forever, So That Experience Must Be Empowered to Any Location

Now, with the remote-first workplace, it becomes much easier. Your son, your daughter, they have a medical appointment; they have a school need; they have something going on in the middle of the day. Previously you had to request time off, schedule around that, and move other team members into place. And now this person can go and be there for their child, or their aging parents, or any of the other hundreds of things that can go sideways for a family.

With a cloud-based workforce, that becomes much less of a problem. You have still got some challenges you’ve got to overcome, but there are fewer of them. I think everybody is reaping the benefits of that because with fewer people needing to be in the office, that means you can have a smaller office. Fewer people on the roads means less environmental impact of moving around and commuting for an hour twice a day.

Gardner: Ray Wolf, what is it about technology that is now enabling these people to be flexible and adaptive? What do you look for in technology platforms to give those people the tools they need?

Do more with less

Wolf: First, let’s talk about the current technology situation. The average worker out there has eight applications and 10 windows open. The way technology is provisioned to some of our remote workers is working against them. We have these technologies for all. Just because you give someone access to a customer relationship management (CRM) system or a human resources (HR) system doesn’t necessarily make them more productive. It doesn’t take into consideration how they like to do work. When you bring on new employees, it leaves it up to the individual to figure out how to get stuff done.

With the new platforms, Citrix Workspace with intelligence, for example, we're able to take those mundane tasks and lock them into muscle memory through automation. And so, what we do is free up time and energy using the Citrix platform. Then people can start moving and essentially upskilling, taking on higher cognitive tasks, and building new products and services.

That’s what we love about it. The other side is it’s no code and low code. The key here is just figuring out where to get started and making sure that the workers have their fingerprints on the plan because your worker today knows exactly where the inefficiencies are. They know where the frustration is. So we have a number of use cases that in the matter of six weeks, we were able to unlock almost a day per week worth of productivity gains, of which one of our customers in the sale spaces, a sales vice president, coined the word “proactivity.”

For them, they were taking that one extra day a week and starting to be proactive by pursuing new sales and leads and driving revenue where they just didn’t have the bandwidth before.

Through our own polling of about 200 executives, we discovered that 50 percent of companies are scaling down on their resources because they are unsure of the future. And that leaves them with the situation of doing more with less. That's why the automation platforms are ideal for freeing up time and energy so they can deal with a reduced workforce, but still gain the bandwidth to pursue new services and products. Then they can come out and be in that top 20 percent after the pandemic.

Gardner: Tim, I’m hearing Citrix Workspace referred to as an automation platform. How does Workspace not just help people connect, but helps them automate and accelerate productivity?

Keep talent optimized every day

Minahan: Ray put his finger on the pulse of the third dynamic we were seeing pre-pandemic, and it’s only been exacerbated. We talked first about the global shortage of medium- to high-skills talent. But then we talked about the acute shortage of digital skills that those folks need.

The third part is, if you’re lucky enough to have that talent, it’s likely they are very frustrated at work. A recent Gallup poll says 87 percent of employees are disengaged at work, and that’s being exacerbated by all of the things that Ray talked about. We’ve provided these workers with all of these tools, all these different channels, Teams and Slack and the like, and they’re meant to improve their performance in collaboration. But we have reached a tipping point of complexity that really has turned your top talent into task rabbits.

What Citrix does with our digital Workspace technology is it abstracts away all of that complexity. It provides unified access to everything an employee needs to be productive in one experience that travels with them. So, their work environment is this digital workspace — no matter what device they are on, no matter what location they are at, no matter what work channel they need to navigate across.

The second thing is it wraps that in security, both secure access on the way in (I call it the bouncer at the front door), as well as ongoing contextual application of security policies. I call that the bodyguard who follows you around the club to make sure you stay out of trouble. And that gives IT the confidence that those employees can indeed work wherever they need to, and from whatever device they need to, with a level of comfort that their company's information and assets are made secure.
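
The "bodyguard" idea, that is, contextual security policy re-evaluated throughout a session, can be pictured with a small sketch. This is not Citrix's API; the attributes and rules below are purely illustrative assumptions.

```python
# A purely illustrative sketch of contextual policy evaluation: the same
# action may be allowed or blocked depending on the session's current context.
def allowed(action, context):
    if action == "download" and not context.get("device_managed", False):
        return False        # block downloads from unmanaged devices
    if context.get("location_risk") == "high" and action in {"download", "share"}:
        return False        # tighten policy when the location looks anomalous
    return True

session = {"device_managed": False, "location_risk": "low"}
print(allowed("view", session))       # True
print(allowed("download", session))   # False: unmanaged device
```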

But what gets exciting now is the intelligence components. Infusing this with ML and AI automates away and guides an employee through their work day. It automates away those menial tasks so they can focus on what’s important.

And that’s where folks like A2K come in. They can bring in their intellectual property and understanding of the business processes — using those low- to no-code tools — to actually develop extensions to the workspace that meet the needs of individual functions or individual industries and personalize the workspace experience for every individual employee.

Ray mentioned sales force productivity. They are also doing call center optimization. So, very discrete solutions that before required users to navigate across multiple different applications but are now handled through a micro app layer that simplifies the engagement model for the employee, offering up the right insights and the right tasks at the right time so that they can do their very best work.

Gardner: Jeff Vincent, we have been talking about this in terms of worker productivity. But I’m wondering about leadership productivity. You are the CEO of a company that relies on remote work to a large degree. How do you find that tools like Citrix and remote-first culture works for you as a leader? Do you feel like you can lead a company remotely?

Workspace enhances leadership

Vincent: Absolutely. I’m trying to take a sip out of a fire hose, because everything I am hearing is exactly what we have been seeing — just put a bit more eloquently and with a bit more data behind it — for quite a long time now.

Leading a remote team really isn’t any different than leading a team that you look at. I mean, one of the aspects of leadership, as it pertains to this discussion, is having everybody know what is expected of them and when the due date is, enabling them with the tools they need to get the work done on time and on budget, right?

And with Citrix Workspace technology, the workflows automate expense report approvals, they automate calendar appointments, and automate the menial tasks that take up a lot of our time every single day. They now become seamless. They happen almost without effort. So that allows the leaders to focus on, “Okay, what does John need today to get done the task that’s going to be due in a month or in a quarter? Where are we at with this prospect or this leader or this project?”

And it allows everybody to take a moment, reflect on where they are, reflect on where they need to be, and then get more surgical with our people on getting there.

Gardner: Ray, also as a CEO, how do you see the intersection of technology, behavior, and culture coming together so that leaders like yourself are the ones going to be twice as productive?

Wolf: This goes to a human capital strategy, where you’re focusing on the numerator. So, the cost of your resources and the type of resource you need fit within a band. That’s the denominator.

The numerator is what productivity you get out of your workforce. There’s a number of things that have to come into play. It’s people, process, culture, and technology — but not independent or operating in a silo.

And that’s the big opportunity Jeff and Tim are talking about here. Imagine when we start to bring system-level thinking to how we do work both inside and outside of our company. It’s the ecosystem, like hiring Ray Wolf as the individual contributor, yet getting 13 Ray Wolfs; that’s great.

But what happens if we orchestrate the work between finance, HR, the supply chain, and procurement? And then we take it an even bigger step by applying this outside of our company with partners?

How Lucid Technology Services Adapts to the Work-from-Home Revolution

We’re working with a very large distributor right know with hundreds of resellers. In order to close deals, they have to get into the other partner’s CRM system. Well, today, that happens with about eight emails over a number of days. And that’s just inefficient. But with Citrix Workspace you’re able to cross-integrate processes inside and outside of your company in a secure manner, so that entire ecosystems work seamlessly. As an example, just think about the travel reservation systems, which are not owned by the airlines, but are still a heart-lung function for them, and they have to work in unison.

We’re really jazzed about that. How did we discover this? Two things. One, I’m an aerospace engineer by first degree, so I saw this come together in complex machines, like jet engines. And then, second, by running a global company, I was spending 80 hours a week trying to reconcile disparate data: One data set says sales were up, another that productivity was up, and then my profit margins go down. I couldn’t figure it out without spending a lot of hours.

And then we started a new way of thinking, which is now accelerated with the Citrix Workspace. Disparate systems can work together. It makes clear what needs to be done, and then we can move to the next level, which is democratization of data. With that, you’re able to put information in front of people in synchronization. They can see complex supply chains complete, they can close sales quicker, et cetera. So, it’s really awesome.

I think we’re still at the tip of the iceberg. The innovation that I’m aware of on the product roadmap with Citrix is just awesome, and that’s why we’re here as a partner.

Gardner: Tim, we’re hearing about the importance of extended enterprise collaboration and democratization of data. Is there anything in your research that shows why that’s important and how you’re using that understanding of what’s important to help shape the direction of Citrix products?

Augmented workers arrive

Minahan: As Ray said, it’s about abstracting away that lower-level complexity, providing all the integrations, the source systems, the service security model, and providing the underlying workflow engines and tools. Then experts like Lucid and A2K can extend that to create new solutions for driving business outcomes.

From the research, we can expect, number one, the emergence of the augmented worker. We're already beginning to see it with bots and robotic process automation (RPA) systems. But at Citrix we're going to be moving to a much higher level, doing things similar to what Ray and Jeff were describing, abstracting away a lot of the menial tasks that can be automated. We can also perform tasks at a much more informed and rapid pace through the use of AI, which can compress and analyze massive amounts of data that would take us a very long time individually. ML can adapt and personalize that experience for us.

Secondly, the research indicates that while robots will replace some tasks and jobs, they will also create many new jobs. And, according to our Work 2035 research, you’ll see a rise in demand for new roles, such as a bot or AI trainer, a virtual reality manager, advanced data scientists, privacy and trust managers, and design thinkers such as the folks at A2K and Lucid Technology Solutions are already doing. They are already working with clients to uncover the art of the possible and rethinking business process transformation.

Importantly, we also identified the need for flexibility of work: shifting your mindset from thinking about a workforce in terms of full-time equivalents (FTEs) to thinking in terms of pools of talent. You understand the individual skillsets that you need, bring them together and assemble them rather quickly to address a certain project or issue using digital Citrix Workspace technology, and then disassemble them just as quickly.

But you’ll also see a change in leadership. AI is going to take over a lot of those business decisions and possibly eliminate the need for some middle management teams. The bulk of our focus can be not so much managing as driving new creative ideas and innovation.

Gardner: I’d love to hear more from both Jeff and Ray about how businesses prepare themselves to best take advantage of the next stages of remote work. What do you tell businesses about thinking differently in order to take advantage of this opportunity?

Imagine what’s possible to work

Vincent: Probably the single biggest thing you can do to get prepared for the future of work is to rethink IT and your human capital, your team members. What do they need as a whole?

A business calls me up and says, “Our server is getting old, we need to get a new server.” And previously, I’d say, “Well, I don’t know if you actually need a server on-site, maybe we talk about the cloud.”

So educate yourself as a business leader on what is possible out there. Then take that step: listen to your IT staff, listen to your IT director, whoever that may be, and talk to them about what is out there and what's really possible. The technology enabling remote work has grown exponentially, even in the last few months, in its adoption and capabilities.

If you looked at the technology a year or two ago, that world doesn’t exist anymore. The technology has grown dramatically. The price point has come down dramatically. What is now possible wasn’t a few years ago.

So listen to your technology advisers, look at what’s possible, and prepare yourself for the next step. Take capital and reinvest it into the future of work.

Wolf: What we’re seeing that’s working the best is people are getting started anyway, anyhow. There really wasn’t a playbook set up for a pandemic, and it’s still evolving. We’re experiencing about 15 years’ worth of change in every three months of what’s going on. And there’s still plenty of uncertainty, but that can’t paralyze you.

We recommend that people fundamentally take a look at what their core business is. What do you do for a living? And then everything that enables you to do that is kind of ancillary or secondary.

When it comes to your workforce — whether it’s comprised of contractors or freelancers or permanent employees — no matter where they are, have a get stuff done mentality. It’s about what you are trying to get done. Don’t ask them about the systems yet. Just say, “What are you trying to get done?” And, “What will it take for you to double your speed and essentially only put half the effort into it?”

And listen. And then define, configure, and acquire the technologies that will enable that to happen. We need to think about what’s possible at the ground level, and not so much thinking about it all in terms of the systems and the applications. What are people trying to do every day and how do we make their work experience and their work life better so that they can thrive through this situation as well as the company?

Gardner: Tim, what did you find most surprising or unexpected in the research from the Work 2035 project? And is there a way for our audience to learn more about this Citrix research?

Minahan: One of the most alarming things to me from the Work 2035 project, the one where we’ve gotten the most visceral reaction, was the anticipation that, by 2035, in order to gain an advantage in the workplace, employees would literally be embedding microchips to help them process information and be far more productive in the workforce.

I’m interested to see whether that comes to bear or not, but certainly it’s very clear that the role of AI and ML — we’re only scratching the surface as we drive to new work models and new levels of productivity. We’re already seeing the beginnings of the augmented worker and just what’s possible when you have bots sitting — virtually and physically — alongside employees in the workplace.

We’re seeing the future of work accelerate much quicker than we anticipated. As we emerge out the other side of the pandemic, with the guidance of folks like Lucid and A2K, companies are beginning to rethink their work models and liberate their thinking in ways they hadn’t considered for decades. So it’s an incredibly exciting time.

Gardner: And where can people go to learn more about your research findings at Citrix?

Minahan: To view the Work 2035 project, you can find the foundational research at Citrix.com, but this is an ongoing dialogue that we want to continue to foster with thought leaders like Ray and Jeff, as well as academia and governments, as we all prepare not just technically but culturally for the future of work.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Citrix.


How the Journey to Modern Data Management is Paved with an Inclusive Edge-to-Cloud Data Fabric

The next BriefingsDirect Voice of Analytics Innovation discussion focuses on the latest insights into end-to-end data management strategies.

As businesses seek to gain insights for more elements of their physical edge — from factory sensors, myriad machinery, and across field operations — data remains fragmented. But a Data Fabric approach allows information and analytics to reside locally at the edge yet contribute to the global improvement in optimizing large-scale operations.

Stay with us now as we explore how edge-to-core-to-cloud dispersed data can be harmonized with a common fabric to make it accessible for use by more apps and across more analytics.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To learn more about the ways all data can be managed for today’s data-rich but too often insights-poor organizations, we’re joined by Chad Smykay, Field Chief Technology Officer for Data Fabric at Hewlett Packard Enterprise (HPE). The interview is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Chad, why are companies still flooded with data? It seems like they have the data, but they’re still thirsty for actionable insights. If you have the data, why shouldn’t you also have the insights readily available?

Smykay: There are a couple of reasons for that. We still see challenges for our customers today. One is just having a common data governance methodology. That’s not just governing the security, the audits, and the techniques around that — but determining just what your data is.

I’ve gone into so many projects where they don’t even know where their data lives; they lack even a simple matrix of where the data is, where it lives, and why it’s important to the business. This is really the first step, and it’s one most companies just don’t take.

Gardner: What’s happening with managing data access when they do decide they want to find it? What’s been happening with managing the explosive growth of unstructured data from all corners of the enterprise?

Tame your data

Smykay: Five years ago, it was still the Wild West of data access. But we’re finally seeing some great standards and application programming interfaces (APIs) being deployed for that data access. Companies are now realizing there’s power in having one API to rule them all. In this case, we mostly see Amazon S3.

There are some other great APIs for data access out there, but just having more standardized API access into multiple datatypes has been great for our customers. It allows for APIs to gain access across many different use cases. For example, business intelligence (BI) tools can come in via an API. Or an application developer can access the same API. So that approach really cuts down on my access methodologies, my security domains, and just how I manage that data for API access.
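
To make the “one API to rule them all” point concrete, here is a minimal sketch, using the standard boto3 client against a hypothetical S3-compatible endpoint, of how a BI export job and an application developer could share the same access path. The endpoint, bucket, key, and credentials are invented for illustration.

```python
# Hedged illustration: one S3-compatible API serving both a BI export job and
# an application developer. The endpoint, bucket, and credentials are made up.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://data-fabric.example.internal:9000",  # hypothetical S3-compatible endpoint
    aws_access_key_id="EXAMPLE_KEY",
    aws_secret_access_key="EXAMPLE_SECRET",
)

# A BI pipeline and an application can use the exact same call path.
for obj in s3.list_objects_v2(Bucket="sales-landing").get("Contents", []):
    print(obj["Key"], obj["Size"])

body = s3.get_object(Bucket="sales-landing", Key="2020/10/orders.csv")["Body"].read()
print(body[:200])
```

Because both consumers go through the same S3 semantics, access policy and auditing live in one place instead of one per tool.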

Gardner: And when we look to get buy-in from the very top levels of businesses, why are leaders now rethinking data management and exploitation of analytics? What are the business drivers that are helping technologists get the resources they need to improve data access and management?

Smykay: The business gains when data access methods are as reusable as possible across the different use cases. It used to be that you’d have different point solutions, or different open source tools, needed to solve a business use case. That was great for the short term, maybe for some quarterly project or something you did that year.

But then, down the road, say three years out, they would say, “My gosh, we have 10 different tools across the many different use cases we’re using.” It makes it really hard to standardize for the next set of use cases.


So that’s been a big business driver, gaining a common, secure access layer that can access different types of data. That’s been the biggest driver for our HPE Data Fabric. That and having common API access definitely reduces the management layer cost, as well as the security cost.

Gardner: It seems to me that such data access commonality, when you attain it, becomes a gift that keeps giving. The many different types of data often need to go from the edge to dispersed data centers and sometimes dispersed in the cloud. Doesn’t data access commonality also help solve issues about managing access across disparate architectures and deployment models?

Smykay: You just hit the nail on the head. Having commonality for that API layer really gives you the ability to deploy anywhere. When I have the same API set, it makes it very easy to go from one cloud provider, or one solution, to another. But that can also create issues in terms of where my data lives. You still have data gravity issues, for example. And if you don’t have portability of the APIs and the data, you start to see some lock-in with either the point solution you went with or the cloud provider that’s providing that data access for you.

Gardner: Following through on the gift that keeps giving idea, what is it about the Data Fabric approach that also makes analytics easier? Does it help attain a common method for applying analytics?

Data Fabric deployment options

Smykay: There are a couple of things there. One, it allows you to keep the data where it may need to stay. That could be for regulatory reasons, or it could depend on where you build and deploy the analytics models. A Data Fabric helps you start separating out your computing and storage capabilities, but it also keeps them coupled for wherever the deployment location is.

No alt text provided for this image

For example, a lot of our customers today have the flexibility to deploy IT resources out at the edge. That could be a small cluster or system that pre-processes data. They typically trickle the data back slowly to one location, a core data center or a cloud location. Having these systems at the edge gives them the benefit of both pushing information out and continuing to process at the edge. They can choose to deploy as they want, and to make the data analytics solutions deployed at the core even better for reporting or modeling.
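
Here is a minimal sketch of that edge pattern: pre-process locally, then trickle only a compact summary back to the core. The sensor readings, site name, and core ingest endpoint are hypothetical, and a real deployment would more likely lean on the fabric’s own replication than on a hand-rolled upload like this.

```python
# Illustrative only: raw readings stay at the edge; a small summary goes to the core.
import statistics
import requests

def summarize_window(readings):
    """Reduce a window of raw sensor readings to a small summary record."""
    return {
        "count": len(readings),
        "mean": statistics.mean(readings),
        "max": max(readings),
        "min": min(readings),
    }

raw_readings = [71.2, 70.8, 73.5, 90.1, 72.0]   # collected and kept at the edge
summary = summarize_window(raw_readings)        # computed at the edge
requests.post("https://core.example.com/ingest/site-42", json=summary, timeout=5)  # hypothetical endpoint
```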

Gardner: It gets to the idea of act locally and learn globally. How is that important, and why are organizations interested in doing that?

Smykay: It’s just-in-time, right? We want everything to be faster, and that’s what this Data Fabric approach gets for you.

In the past, we’ve seen edge solutions deployed, but you weren’t processing a whole lot at the edge. You were pushing all the data back to a central, core location — and then doing something with that data. But we don’t have the time to do that anymore.

Unless you can change the laws of physics — last time I checked, they haven’t done that yet — we’re bound by the speed of light for these networks. And so we need to keep as much data and systems as we can out locally at the edge. Yet we need to still take some of that information back to one central location so we can understand what’s happening across all the different locations. We still want to make the rearview reporting better globally for our business, as well as allow for more global model management.

Gardner: Let’s look at some of the hurdles organizations have to overcome to make use of such a Data Fabric. What is it about the way that data and information exist today that makes it hard to get the most out of it? Why is it hard to put advanced data access and management in place quickly and easily?

Track the data journey

Smykay: It’s tough for most organizations because they can’t take the wings off the airplane while flying. We get that. You have to begin by creating some new standards within your organization, whether that’s standardizing on an API set for a single datatype or for multiple datatypes.

Then you need to standardize the deployment mechanisms within your organization for that data. With the HPE Data Fabric, we give the ability to just say, “Hey, it doesn’t matter where you deploy. We just need some x86 servers and we can help you standardize either on one API or multiple APIs.”

We now support more than 10 APIs, as well as the many different data types that these organizations may have.


Typically, we see a lot of data silos still out there today with customers – and they’re getting worse. By worse, I mean they’re now all over the place between multiple cloud providers. I may use some of these cloud storage bucket systems from cloud vendor A, but I may use somebody else’s SQL databases from cloud vendor B, and those may end up having their own access methodologies and their own software development kits (SDKs).

Next you have to consider all the networking in the middle. And let’s not even bring up security and authorization to all of them. So we find that the silos still exist, but they’ve just gotten worse and they’ve just sprawled out more. I call it the silo sprawl.

Gardner: Wow. So, if we have that silo sprawl now, and that complexity is becoming a hurdle, the estimates are that we’re going to just keep getting more and more data from more and more devices. So, if you don’t get a handle on this now, you’re never going to be able to scale, right?

Smykay: Yes, absolutely. If you’re going to have diversity of your data, the right way to manage it is to make it use-case-driven. Don’t boil the ocean. That’s where we’ve seen all of our successes. Focus on a couple of different use cases to start, especially if you’re getting into newer predictive model management and using machine learning (ML) techniques.

But, you also have to look a little further out to say, “Okay, what’s next?” Right? “What’s coming?” When you go down that data engineering and data science journey, you must understand that, “Oh, I’m going to complete use case A, that’s going to lead to use case B, which means I’m going to have to go grab from other data sources to either enrich the model or create a whole other project or application for the business.”

You should create a data journey and understand where you’re going so you don’t just end up with silo sprawl.

Gardner: Another challenge for organizations is their legacy installations. When we talk about zettabytes of data coming, what is it about the legacy solutions — and even the cloud storage legacy — that organizations need to rethink to be able to scale?

Zettabytes of data coming

Smykay: It’s a very important point. Can we just have a moment of silence? Because now we’re talking about zettabytes of data. Okay, I’m in.

Some 20 years ago, we were talking about petabytes of data. We thought that was a lot of data, but if you look out to the future, there are studies showing connected Internet of Things (IoT) devices generating zettabytes of data.

No alt text provided for this image

If you don’t get a handle on where your data points are going to be generated, how they’re going to be stored, and how they’re going to be accessed now, this problem is just going to get worse and worse for organizations.

Look, Data Fabric is a great solution. We have it, and it can solve a ton of these problems. But as a consultant, if you don’t get ahead of these issues right now, you’re going to be under the umbrella of probably 20 different cloud solutions for the next 10 years. So, really, we need to look at the datatypes that you’re going to have to support, the access methodologies, and where those need to be located and supported for your organization.

Gardner: Chad, it wasn’t that long ago that we were talking about how to manage big data, and Hadoop was a big part of that. NoSQL and other open source databases in particular became popular. What is it about the legacy of the big data approach that also needs to be rethought?

Smykay: One common issue we often see is the tendency to go either/or. By that I mean saying, “Okay, we can do real-time analytics, but that’s a separate data deployment. Or we can do batch, rearview reporting analytics, and that’s a separate data deployment.” But one thing that our HPE Data Fabric has always been able to support is both — at the same time — and that’s still true.

So if you’re going down a big data or data lake journey — I think the term now is a data lakehouse, that’s a new one — you basically need to be able to do real-time analytics as well as traditional BI or rearview mirror reporting. And that’s what we’ve been doing for over 10 years. That either/or trade-off is probably one of the biggest limitations we have seen.

But it’s a heavy lift to get that data from one location to another, just because of the metadata layer of Hadoop. And then you had dependencies on some of the NoSQL databases that ran on Hadoop, which caused performance issues. You can only get so much performance out of those databases, which is why we have NoSQL databases out of the box in our Data Fabric — and we’ve never run into any of those issues.

Gardner: Of course, we can’t talk about end-to-end data without thinking about end-to-end security. So, how do we think about the HPE Data Fabric approach helping when it comes to security from the edge to the core?

Secure data from edge to core

Smykay: This is near-and-dear to my heart because everyone always talks about these great solutions out there to do edge computing. But I always ask, “Well, how do you secure it? How do you authorize it? How does my application authorization happen all the way back from the edge application to the data store in the core or in the cloud somewhere?”

That’s what I call auth sprawl, where those issues just add up. If we don’t have one way to secure and manage all of our different data types, then what happens is, “Okay, well, I have this object-based system out there, and it has its own authorization techniques. It has its own authentication techniques. By the way, it has its own way of enforcing security in terms of who has access to what.” And I haven’t even talked about monitoring yet, right? How do we monitor this solution?

So, now imagine doing that for each type of data that you have in your organization — whether it’s a SQL database, because some application drives that requirement, or a file-based workload, or a block-based workload. You can see how this starts to steamroll and build up into a huge problem within an organization, and we see that all the time.


And, by the way, when it comes to your application developers, that becomes the biggest annoyance for them. Why? Because when they want to go and create an application, they have to go and say, “Okay, wait. How do I access this data? Oh, it’s different. Okay. I’ll use a different key.” And then, “Oh, that’s a different authorization system. It’s a completely different way to authenticate with my app.”

I honestly think that’s why we’re seeing a ton of issues today in the security space. It’s why we’re seeing people get hacked. It happens all the way down to the application layer, as you often have this security sprawl that makes it very hard to manage all of these different systems.
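
To illustrate the alternative being described, a single, common access layer, here is a purely hypothetical sketch in which one credential and one authorization check front every data type. The class, its methods, and its backends are invented for illustration and are not a real product API.

```python
# Hypothetical sketch of a common, secure access layer: one token, one place
# to authorize and audit, regardless of whether the data is a file, a table,
# or a stream. None of these names correspond to a real product API.
class UnifiedDataAccess:
    def __init__(self, token):
        self.token = token  # one credential for every data type

    def _authorize(self, resource, action):
        # One place to enforce and audit policy, instead of one per system.
        print(f"audit: token={self.token[:4]}... {action} {resource}")

    def read_file(self, path):
        self._authorize(path, "read")
        with open(path, "rb") as f:
            return f.read()

    def query_table(self, table, sql):
        self._authorize(table, "query")
        # Delegate to the SQL engine behind the layer (omitted in this sketch).
        raise NotImplementedError

    def consume_stream(self, topic):
        self._authorize(topic, "consume")
        # Delegate to the streaming backend behind the layer (omitted in this sketch).
        raise NotImplementedError
```

The point is not the implementation but the shape: an application developer gets one way in, and the security team gets one place to enforce and monitor policy.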

Gardner: This word sprawl has come up several times now. We’re sprawling with this, we’re sprawling with that; there’s complexity, and then there’s going to be even more scale demanded.

The bad news is there is quite a bit to consider when you want end-to-end data management that takes the edge into consideration and has all these other anti-sprawl requirements. The good news is a platform and standards approach with a Data Fabric forms the best, single way to satisfy these many requirements.

So let’s talk about the solutions. How does HPE Ezmeral generally — and the Ezmeral Data Fabric specifically — provide a common means to solve many of these thorny problems?

Smykay: We were just talking about security. We provide the same security domain across all deployments. That means having one web-based user interface (UI), or one REST API call, to manage all of those different datatypes.


We can be deployed across any x86 system. And having that multi-API access (we have more than 10 APIs) allows for multi-data access. It includes everything from storing data in files to storing data in blocks; we are soon going to be able to support blocks in our solution. And then we’ll be storing data into streams, such as Kafka, and into a NoSQL database as well.

Gardner: It’s important for people to understand that HPE Ezmeral is a larger family and that the Data Fabric is a subset. But the whole seems to be greater than the sum of the parts. Why is that the case? How has what HPE is doing in architecting Ezmeral been a lot more than data management?

Smykay: Whenever you have this “whole is greater than the sum of the parts,” you start reducing so many things across the chain. When we talk about deploying a solution, that includes, “How do I manage it? How do I update it? How do I monitor it?” And then back to securing it.

Honestly, there is a great report from IDC that says it best. We show a 567-percent, five-year return on investment (ROI). That’s not from us, that’s IDC talking to our customers. I don’t know of a better business value from a solution than that. The report speaks for itself, but it comes down to these paper cuts of managing a solution. When you start to have multiple paper cuts, across multiple arms, it starts to add up in an organization.

Gardner: Chad, what is it about the HPE Ezmeral portfolio and the way the Data Fabric fits in that provides a catalyst to more improvement?

All data put to future use

Smykay: One, the HPE Data Fabric can be deployed anywhere. It can be deployed independently. We have hundreds and hundreds of customers. We have to continue supporting them on their journey of compute and storage, but today we are already shipping a solution where we can containerize the Data Fabric as a part of our HPE Ezmeral Container Platform and also provide persistent storage for your containers.

The HPE Ezmeral Container Platform comes with the Data Fabric; it’s part of the persistent storage. That gives you full end-to-end management of the containers, not only of the application APIs. That means management as well as data portability.

So, now imagine being able to ship the data by containers from your location, as it makes sense for your use case. That’s the powerful message. We have already been down the compute and storage road; that road is not going away, we have many customers for it, and it makes sense for many use cases. We’ve also already been on the journey of separating out compute and storage, and we’re in general availability today. There are some other solutions out there that are still on a roadmap, as far as we know, but at HPE we’re there today. Customers have this deployed, and they’re going down their compute and storage separation journey with us.

Gardner: One of the things that gets me excited about the potential for Ezmeral is when you do this right, it puts you in a position to be able to do advanced analytics in ways that hadn’t been done before. Where do you see the HPE Ezmeral Data Fabric helping when it comes to broader use of analytics across global operations?

Smykay: One of our CMOs, Jack Morris, used to say it best: “If it’s going to be about the data, it better be all about the data.”


When you automate data management across multiple deployments — managing it, monitoring it, keeping it secure — you can then focus on the actual use cases. You can focus on the data itself, right? That’s what lives in the HPE Data Fabric. That is the higher-level takeaway. Our users are not spending all their time and money worrying about the data lifecycle. Instead, they can now go use that data for their organizations and for future use cases.

HPE Ezmeral sets your organization up to use your data instead of worrying about your data. We are set up to support newer use cases with the Data Fabric, separating out compute and storage and running it in containers, and we’ve been doing that for years.

Gardner: How about some of the technical ways that you’re doing this? Things like global namespaces, analytics-ready fabrics, and native multi-temperature management. Why are they important specifically for getting to where we can capitalize on those new use cases?

Smykay: The global namespace is probably the top feature we hear about from our customers. It allows them to gain one view of the data with the same common security model. Imagine you’re a lawyer sitting at your computer: you double-click on a Data Fabric drive, and you can literally see all of your deployments globally. That helps with discovery. That helps with onboarding your data engineers and data scientists. Over the years, one of the biggest challenges has been that they spend a lot of time building up their data science and data engineering groups and just discovering the data.

A global namespace means I’m reducing the discovery time needed to figure out where the data is. A lot of this analytics-ready value we’ve been supporting in the open source community for more than 10 years. There are a ton of Apache open source projects out there, like Presto, Hive, and Drill. Of course, we’re also Spark-ready, and we have been supporting Spark for many years. That’s pretty much the de facto standard we’re seeing when it comes to doing any kind of real-time processing or analytics on data.
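
As a hedged sketch of what “analytics-ready” looks like in practice, here is a standard PySpark job reading directly from a globally visible fabric path. A /mapr/<cluster> mount is a common convention for this kind of fabric, but the cluster name, path, and column used here are invented.

```python
# Illustrative PySpark job; the fabric mount path and schema are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("fabric-analytics").getOrCreate()

# The same path is visible from every site thanks to the global namespace.
events = spark.read.parquet("/mapr/global.fabric/telemetry/2020/10/")
events.groupBy("site_id").count().orderBy("count", ascending=False).show(10)
```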

As for multi-temperature, that feature allows you to decrease the cost of your deployment while still managing all your data in one location. There are a lot of different ways we do that. We use erasure coding, and we can tier off to Amazon S3-compliant devices to reduce the overall cost of deployment.

These features contribute to making it still easier. You gain a common Data Fabric, common security layer, and common API layer.
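
As a generic illustration of the multi-temperature idea, and not of the fabric’s built-in tiering mechanism, the sketch below moves files older than a cutoff to cheaper S3-compatible storage while hot data stays put. The directory, bucket, endpoint, and cutoff are hypothetical.

```python
# Illustrative tiering job: cold files move to an S3-compatible cold tier.
# Credentials are assumed to be configured in the environment.
import os
import time
import boto3

COLD_AFTER_DAYS = 90
s3 = boto3.client("s3", endpoint_url="https://cold-tier.example.internal")  # hypothetical endpoint

def tier_cold_files(hot_dir, bucket):
    cutoff = time.time() - COLD_AFTER_DAYS * 86400
    for name in os.listdir(hot_dir):
        path = os.path.join(hot_dir, name)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            s3.upload_file(path, bucket, name)   # copy to the cold tier
            os.remove(path)                      # free the hot tier

tier_cold_files("/data/hot/telemetry", "telemetry-cold")
```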

Gardner: Chad, we talked about much more data at the edge, how that’s created a number of requirements, and the benefits of a comprehensive approach to data management. We talked about the HPE Data Fabric solution, what it brings, and how it works. But we’ve been talking in the abstract.

What about on the ground? Do you have any examples of organizations that have bitten off and made Data Fabric core for them? As an adopter, what do they get? What are the business outcomes?

Central view benefits businesses 

Smykay: We’ve been talking a lot about edge-to-core-to-cloud, and the one example that’s just top-of-mind is a big, tier-1 telecoms provider. This provider makes the equipment for your AT&Ts and your Vodafones. That equipment sits out on the cell towers. And they have many Data Fabric use cases, more than 30 with us. 

But the one I love most is real-time antenna tuning. They’re able to improve customer satisfaction in real time and reduce the need to physically return to hotspots on an antenna. They do it via real-time data collection on the antennas and then aggregating that across all of the different layers that they have in their deployments.


They gain a central view of all of the data using a modern API for the DevOps needs. They still centrally process data, but they also process it at the edge today. We replicate all of that data for them. We manage that for them and take a lot of the traditional data management tasks off the table for them, so they can focus on the use case of the best way to tune antennas.
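
As a minimal sketch of the publish side of that kind of pipeline, the snippet below assumes the fabric exposes a Kafka-compatible stream endpoint; the broker address, topic name, and metric payload are invented for illustration.

```python
# Hedged illustration using the kafka-python client; names and values are made up.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="edge-gateway.example.internal:9092",   # hypothetical broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

reading = {"antenna_id": "cell-0042", "vswr": 1.31, "signal_dbm": -71.5}
producer.send("antenna-telemetry", reading)   # consumed centrally to drive tuning models
producer.flush()
```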

Gardner: They have the local benefit of tuning the antenna. But what’s the global payback? Are there quantitative or qualitative business returns for them in doing that?

Smykay: Yes, but they’re pretty secretive. We’ve heard that they’ve gotten a payback in the millions of dollars, but an immediate, direct payback for them is in reducing the application development spend everywhere across the layer. That reduction is because they can use the same type of API to publish that data as a stream, and then use the same API semantics to secure and manage it all. They can then take that same application, which is deployed in a container today, and easily deploy it to any remote location around the world.

Gardner: There’s that key aspect of the application portability that we’ve danced around a bit. Any other examples that demonstrate the adoption of the HPE Data Fabric and the business pay-offs?

Smykay: Another one off the top of my head is a midstream oil and gas customer in the Houston area. This one’s not so much about edge-to-core-to-cloud. This is more about consolidation of use cases.

We discussed earlier that we can support both rearview reporting analytics as well as real-time reporting use cases. And in this case, they actually have multiple use cases, up to about five or six right now. Among them, they are able to do predictive failure reports for heat exchangers. These heat exchangers are deployed regionally and they are really temperamental. You have to monitor them all the time.

But now they have a proactive model where they can do a predictive failure monitor on those heat exchangers just by checking the temperatures on the floor cameras. They bring in all real-time camera data and they can predict, “Oh, we think we’re having an issue with this heat exchanger on this time and this day.” So that decreases management cost for them.
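
A toy version of that kind of predictive check might look like the sketch below: flag a heat exchanger when its rolling average temperature drifts past a threshold. The threshold, IDs, and readings are invented, and a real model would be trained on the historical camera and sensor data described above.

```python
# Illustrative rolling-average alert; not the customer's actual model.
from collections import deque

WINDOW = 12          # readings in the rolling window
LIMIT_C = 85.0       # alert threshold, degrees Celsius

def make_monitor():
    window = deque(maxlen=WINDOW)
    def check(exchanger_id, temp_c):
        window.append(temp_c)
        avg = sum(window) / len(window)
        if len(window) == WINDOW and avg > LIMIT_C:
            print(f"ALERT {exchanger_id}: rolling avg {avg:.1f} C exceeds {LIMIT_C} C")
    return check

check = make_monitor()
for t in [78.0, 80.5, 83.2, 86.1, 88.4, 90.2, 91.0, 92.3, 93.1, 94.0, 95.2, 96.4]:
    check("HX-7", t)
```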

They also gain a dynamic parts management capability for all of the inventory in their warehouses. They can deliver faster, not only on parts, but they reduce their capital expenditure (CapEx) costs, too. They have also gained material measurement balancing. When they push oil across a pipeline, they can detect where that balance is off and where they’re losing money, because if they are not pushing oil across the pipe at x amount of psi, they’re losing money.

So they’re able to dynamically detect that and fix it along the pipe. They also have a pipeline leak detection that they have been working on, which is modeled to detect corrosion and decay.

The point is there are multiple use cases. But because they’re able to start putting those data types together and continue to build off of it, every use case gets stronger and stronger.

Gardner: It becomes a virtuous adoption cycle; the more you can use the data generally, then the more value, then the more you invest in getting a standard fabric approach, and then the more use cases pop up. It can become very powerful.

This last example also shows the intersection of operational technology (OT) and IT. Together they can start to discover high-level, end-to-end business operational efficiencies. Is that what you’re seeing?

Data science teams work together

Smykay: Yes, absolutely. A Data Fabric is kind of the Kumbaya set among these different groups. If they’re able to standardize on the IT and developer side, it makes it easier for them to talk the same language. I’ve seen this with the oil and gas customer. Now those data science and data engineering teams work hand in hand, which is where you want to get in your organization. You want those IT teams working with the teams managing your solutions today. That’s what I’m seeing. As you get a better, more common data model or fabric, you get faster and you get better management savings by having your people working better together.

Gardner: And, of course, when you’re able to do data-driven operations, procurement, logistics, and transportation, you get to what we generally refer to as digital business transformation.

Chad, how does a Data Fabric approach then contribute to the larger goal of business transformation?

Smykay: Depending on the size of the organization, you’re talking to three to five different groups, and sometimes 10 different people, just to put a use case together. But as you create a common data access method, you see an organization where it’s easier and easier not only for your use cases, but for your businesses, to work together on the goal of whatever you’re trying to do and use your data for.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


How COVID-19 teaches higher education institutes to embrace latest IT to advance remote learning

Like many businesses, innovators in higher education have been transforming themselves for the digital age for years, but the COVID-19 pandemic nearly overnight accelerated the need for flexible new learning models.

As a result, colleges and universities must rapidly redefine and implement a new and dynamic balance between in-person and remote interactions. This new normal amounts to more than a repaving of centuries-old, in-class traditions of higher education with a digital wrapper. It requires re-invention — and perhaps redefinition — of the very act of learning itself.

The next BriefingsDirect panel discussion explores how such innovation today in remote learning may also hold lessons for how businesses and governments interact with and enlighten their workers, customers, and ultimately citizens.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to share recent experiences in finding new ways to learn and work during a global pandemic are Chris Foward, Head of Services for IT Services at The University of Northampton in the UK; Tim Minahan, Executive Vice President of Business Strategy and Chief Marketing Officer at Citrix; and Dr. Scott Ralls, President of Wake Tech Community College in Raleigh, North Carolina. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Scott, tell us about Wake Tech Community College and why you’ve been able to accelerate your path to broader remote learning?

Ralls: Wake Tech is the largest college in North Carolina, one of the largest community colleges in the United States. We have 75,000 total students across all of our different program areas spread over six different campuses.

In mid-March, we took an early step in moving completely online because of the COVID-19 pandemic. But if we had just started our planning at that point, I think we would have been in trouble; it would have been a big challenge for us, as it has been for much of higher education.

The journey really began six years earlier with a plan to move to a more online-supported, virtual-blended world. For us, the last six months have been about sprinting. We are on a journey that hasn’t been so much about changing our direction or changing our efficacy, but really sprinting the last one-fourth of the race. And that’s been difficult and challenging.

But it’s not been as challenging as if you were trying to figure out the directions from the very beginning. I’ve been very proud of our team, and I think things are going remarkably well here despite a very challenging situation.

Education sprints online

Gardner: Chris, please tell us about The University of Northampton and how the pandemic has accelerated change for you.

Foward: The University of Northampton has invested very heavily in its campus. A number of years ago, we built a new one, called the Waterside campus. The Waterside campus was designed to work with active blended learning (ABL) as an approach to delivering all course work, and — similar to Wake Tech — we’ve faced challenges around how we deliver online teaching.

We were in a fortunate position because during the building of our new campus we implemented all-new technology from the ground up — from our plant-based systems right through to our backend infrastructure. We aimed at taking on new technologies that were either cloud-based or that allowed us to deliver teaching in a remote manner. That was done predominantly to support our ABL approach to delivery of education. But certainly the COVID-19 pandemic has sped up the uptake of those services.

Gardner: Chris, what was the impetus to the pre-pandemic blended learning? Why were you doing it? How did technology help support it?

Foward: The University of Northampton has been moving toward its current institutional approach to learning and teaching since 2014. We never conceived of this as a large-scale online learning or distance learning solution. But ABL does rely on fluent and thoughtful use of technologies for learning.


And this has stood the university in good stead in terms of how we actually deliver to our students. What our lecturers and teachers found is that the work they’ve done since 2014 really did stand us in good stead, as we were able to very quickly change from an on-campus-taught environment to a digital experience for our students.

Gardner: Scott, has technology enabled you to seek remote learning, or was remote learning the goal and then you had to go find the technology? What’s the relationship between remote learning and technology?

Ralls: For us, particularly in community colleges, it was more the latter, in that remote learning is an important priority for us because a majority of our students work. So the simple convenience of remote learning started community colleges in the United States down that path much more quickly than other forms of higher education. And that helped us years ago to start thinking about what technologies are required.

Our college has been very thoughtful about the equity issues in remote learning. Some students succeed on remote learning platforms, while others struggle with those solutions. It was much more about the need for remote learning to give working students those capacities and conveniences, and then looking at the technologies and best practices to achieve those goals.

Businesses learn from schools’ success

Gardner: Tim, when you hear Chris and Scott describing what they are doing in higher education, does it strike you that they are leaders and innovators compared generally to businesses? Should businesses pay attention to what’s going on in higher education these days, particularly around remote, balanced, and blended interactions?

Minahan: Yes, I certainly think they are leading, Dana. That leadership comes from having been prepared for this in advance. If there’s any silver lining to this global crisis we are all living through, it’s that it’s caused organizations and participants in all industries to rethink how they work, school, and live.

Employers, having seen that work can now actually happen outside of an office, are catching up similarly. They’re rethinking their long-term workforce strategies and work models. They’re embracing more flexible and hybrid work approaches for the long-term.

They’re seeing lower costs and improved productivity and engagement, and they’re gaining access to new pools of talent that were previously inaccessible to them in the traditional work-hub model, where you build a big office or call center and then hire folks to fill them. Now, they can remotely reach talent in any location, including retirees, stay-at-home parents, and caretakers, who can be reactivated into the workforce.

As Kids Do More Remote School,

Managers Have Extra Homework, Too

Similar to the diversity of the student body you’re seeing at Wake Tech, to do this businesses need a foundation, a digital workspace platform, that allows them to deliver consistent and secure access to the resources that employees or staff — or in this case, students — need to do their very best work across any channel or location. That can be in the classroom, on the road, or, as we’ve seen recently, in the home.

I think going forward, you’re going to see not just higher education, which we are hearing about here, but all industries begin to embrace this blended model for some very real benefits, both to their employees and their constituents, but to their own organizations as well.

Gardner: Chris, because Northampton put an emphasis on technology to accomplish blended learning, was the typical technology of a few years back — traditional, stack-based enterprise IT — a hindrance? Did you need to rethink technology as you were trying to accomplish your education goals?

Tech learning advances agility

Foward: Yes, we did. When we built our new campus, we looked at what new technologies were coming onto the market. We then moved toward a couple of key suppliers to ensure that we received best-in-class services as well as easy-to-use products. We chose partners like Microsoft for our software programs, like Office, and those sorts of productivity products.

We chose Cisco for networking and servers, and we also pulled in Citrix for delivery of our virtual applications and desktops from any location, anywhere, anytime. That allows the flexibility for our students to access the systems from a smartphone and see specific CAD-type models through the solutions we have. It allows our faculty of business and law to present some of the bespoke software that they use. We can tailor the solutions that students see within these environments to meet the educational needs of the courses that they are attending.

Gardner: Scott, at Wake Tech, as president of the university, you’re probably not necessarily a technologist. But how do you not be a technologist nowadays when you’re delivering everything as remote learning? How has your relationship with technology evolved? Have you had to learn a lot more tech?

Ralls: Oh, absolutely, yes. And even my own use of technology has evolved quite a bit. I was always aware and had broad goals. But, as I mentioned, we started sprinting very quickly, and when you are sprinting you want to know what’s happening.

We are very fortunate to have a great IT team that is both thoughtful in its direction and very urgent in their movement. So those two things gave me a lot of confidence. It’s also allowed us to sprint to places that we wouldn’t have been able to had these circumstances not come along.


I will use an example. We have six campuses. I would do face-to-face forums with faculty, staff, and students, so three meetings on every campus, but only once a semester. Now, I do those kinds of forums most days with students, faculty, or staff using the technology. Many of us have found, given the direction we were going, that there are greater efficiencies to be achieved in ways we would not have tried had it not been for the [pandemic] circumstances.

And I think, after we get past the issues we are facing with the pandemic, our world will be completely changed, because this has accelerated our movement in this direction and accelerated the utility we get from it as well.

Gardner: Tim, we have seen over the years that the intersection between business and technology is not always the easiest relationship. Is what we’re seeing now as a result of the pandemic helping organizations attain the agility that they perhaps struggled to find before?

Minahan: Yes, indeed, Dana. As you just heard, another thing the pandemic has taught us is that agility is key. Fixed infrastructure — whether it’s real estate, work-hub-centric models, data centers with loads of servers, or on-premises applications — has proven to be an anchor during the pandemic. Organizations that rely heavily on such fixed infrastructure have had a much more difficult time shifting to a remote work or remote learning model to keep their employees and students safe and productive.

In fact, as an anecdote, we had one financial services customer, a CIO, recently say, “Hey, we can’t add servers and capacity fast enough.” And so, similar to Scott and Chris, we’re seeing an increasing number of our customers adopt more variable operating models in everything they do. They are rethinking their real estate, staffing, and IT infrastructure. As a result, we’re seeing customers take their measured plans for a one- to three-year transition to the cloud and accelerate that to three months, or even a few weeks.

They’re also increasing adoption of digital workspaces so that they can provide a consistent and secure work or learning experience for employees or students across any channel or location. It really boils down to organizations building agility into their operations so they can scale up quickly in the face of the next inevitable, unplanned crisis — or opportunity.

Gardner: We’ve been talking about this through the lens of the higher education institute and the technology provider. But what’s been the experience over the past several months for the user? How are your students at Northampton adjusting to this, Chris? Is this rapid shift a burden or is there a silver lining to more blended and remote learning?

Easy-to-use options for student adoption

Foward: I’ll be honest, I think our students have yet to adopt it fully.

There are always challenges with new technology when it comes in. The uptake will be mainly driven in October when we see our mainstream student cohorts come onboard. I do think the types of technologies we have chosen are key, because making technology simple to use and easy to access will drive further adoption of those products.

What we have seen is that our staff’s uptake on our Citrix environment was phenomenal. And if there’s one positive to take from the COVID-19 situation it is the adoption of technology. Our staff has taken to it like ducks to water. Our IT team has delivered something exceptional, and I think our students will also see a massive benefit from these products, and especially the ease of use of these products.

So, yes, the key thing is making the products easily accessible and easy to use. If we overcomplicate it, you won’t get adoption and you won’t get an experience that customers need when they come to our education institutions.

Gardner: Dr. Ralls, have the students adjusted to these changes in a way that gives them agility as they absorb education?

Ralls: They have. All of us — whether we work, teach, or are students at Wake Tech — have gained more confidence in these environments than we had before. I have regular conversations with these students. There was a lot of uncertainty, just like for many of us working remotely. How would that all work?

And we’ve now seen that we can do it. Things will still change around the adjustments we need to make. And for many of our students, it isn’t just about how things will change in the class, but in all of the things that they need around that class. For example, we have tutoring centers in our libraries. How do we make those work remotely and by appointment? We all wondered how that would work. And now we’ve seen that it can work, and it does work; and there’s an ease of doing that.

Reimagining Education

In a Remote World

Because we are a community college, we’re an open-admissions college. Many of our students haven’t had the level of academic preparation or opportunity that others have had. And so for some of our students who have a sense of uncertainty or anxiety, we have found that there is a challenge for them to move to remote learning and to have confidence initially.

Sometimes we can see that in withdrawals, but we’ve also found that we can rally around our students using different tools. We have found the value of different types of remote learning that are effective. For example, we’re doing a lot of the HyFlex model now, which is a combination of hybrid and remote, online-based education.

Over time we have seen in many of our classes that where classes started as hybrid, students then shifted to more fully remote and online. So you see the confidence grow over time.

Gardner: Scott, another benefit of doing more online is that you gain a data trail. When it comes to retention, and seeing how your programs are working, you have a better sense of participation — and many other metrics. Does the data that comes along with remote learning help you identify students at risk, and are there other benefits?

Remote learning delivers data

Ralls: We’re a very data-focused college. For instance, even before we moved to more remote learning, every one of our courses had an online shell. We had already moved to where every course was available online. So we knew when our students were interacting.

One of the shifts we’ve seen at Wake Tech with more remote services is the expansion of service hours, as well as the ability to access counseling — and all of our services — remotely, through answer centers and other channels.

But that means we had to change our way of thinking. Before, we knew when students took our courses, because they took them when you scheduled the courses. Now, as they are working remotely, we can also tell when they are working. And we know from many of our students that they are more likely to be online and engaged in our coursework between the hours of 5 pm and 10 pm, as opposed to 8 am and noon. Most of our operations, when we just had physical sites, ran from 8 am to 5 pm. Consequently, we have had to move our hours, and I think that’s something that will always be different about us. That’s the kind of indication the data gives us.


One other thing that has been unique about us, because of who we are, is that we do so much technical education — that’s why we are called Wake Tech — and much of that is hands-on. You can’t do it fully remotely. But every one of our programs has found the value of remote-based access to support.

For example, we have a remarkable baking and pastry program. They have figured out how to help the students get all of their hands-on resources at home, in their own kitchens. They no longer have to come into the labs for what they do. Every program has found that value, the best aspects of their program being remote, even if the full program cannot be remote because of its hands-on requirements.

Gardner: Chris, is the capability to use the data that you get along the way at Northampton a benefit to you, and how?

Foward: Data is key for us in IT Services. We like to try and understand how people are using our systems and which applications they are using. It allows us to then fix the delivery of our applications more effectively. Our courses are also very data-driven. In our games art courses, for example, data allows us to design the materials more effectively for our students.

Gardner: Tim, when you are providing more value back through your technology, the data seems to be key as well. It’s about optimization and even reducing costs with better business and education outcomes. How does the data equation benefit Citrix’s customers, and how do you expect to improve on that?

Data enhances experiences

Minahan: Dana, data plays a major role in every aspect of what we do. When you think about the need to deliver digital workspaces by providing consistent and secure access to the resources — whether it’s employees or students — they need to be able to perform at their best wherever that work needs to get done. The data that we are gathering is applied in a number of different ways.

Number one is around the security model. I use the analogy of not just having security on the way in — the bouncer at the front door who makes sure you have authenticated and are on the list to access the resources you need — but also having the bodyguard who follows you around the club, if you will, to constantly monitor your behavior and apply additional security policies.

The data is valuable for that because we understand the behavior of the individual user, whether they are typically accessing from a particular device or location or via the types of information or applications they access.

The second area is around performance. If we move to a much more distributed model, or a flexible or a blended model, vital to that is ensuring that those employees or students have reliable access to the applications and information they need to perform at their best. Being able to constantly monitor that environment allows for increasing bandwidth, or moving to a different channel as needed, so they get the best experience.

And then the last one gets very exciting. It is literally about productivity. Being able to push the right information or the right tasks, or even automate a particular task or remove it from their work stream in real time is vital to ensuring that we are not drowning in this cacophony of different apps and alerts — and all the noise that gets in the way of us actually doing our best work or learning. And so data is actually vital to our overall digital workspace strategy at Citrix.

Gardner: Chris, to attain an improved posture around ABL, that can mean helping students pick up wherever they left off — whether in a classroom, their workplace, at a bakery or in a kitchen at home. It requires a seamless transition regardless of their network and end device. How important is it to allow students to not have to start from scratch or find themselves lost in this collaboration environment? How is Citrix an important part of that?

Foward: With our ABL approach, we have small collaborative groups that work together to deliver or gain their learning.

We also ensure that the students have face-to-face contact with tutors, whether through distance learning or while on campus. And with the technology, we store all of the academic materials in one location, called our mail site, which allows students to access them and learn as and when they need to.

Citrix plays a key part in that because we can deliver applications into that state quickly and seamlessly. It allows students to always be able to understand and see the applications they need for their specific courses. It allows them to experiment, discuss ideas, and get more feedback from our lecturers because they understand what materials are being stored and how to access them.

Gardner: Dr. Ralls, how do you at Wake Tech prevent learning gaps from occurring? How does the technology help students move seamlessly throughout their education process, regardless of the location or device?

Seamless tracking lets students thrive

Ralls: There are different types of gaps. In terms of courses, one of the things we found recently is our students are looking for different types of access. Many of our students are looking for additional types of access — perhaps replicating our seated courses to gain the value of synchronous experiences. We have had to make sure that all of our courses have that capacity, and that it works well.

Then, because many of our students are also in a work environment, they want an asynchronous capability. And so we are now working on making sure students know the difference and how to match those expectations.

Also, because we are an open access college — and as I like to say, we take the top 100 percent of our applicant students — for many of our students, gaps come not just within a course, but between courses or toward their goals. For many of our students who are first-generation students, higher education is new. They may have also been away from education for a period of time.


So we have to be much more intrusive, to help students and to monitor them, to make sure our students are making it from one place to the next. We need to make sure that learning makes sense to them and that they are making it to whatever their ultimate goals are.

We use technology to track that and to know when our students are getting close to leaving. We call that being like rumble strips on the side of the road. There are gaps that we are looking at, not just within courses, but between courses, on the way to our students’ academic goals.
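
A deliberately simple illustration of those “rumble strips” might look like the sketch below: flag students whose days since last online activity cross a threshold so an adviser can reach out. The student IDs, dates, and threshold are invented, and a real system would pull engagement events from the learning platform rather than a hard-coded dictionary.

```python
# Illustrative early-warning flag based on days since last online activity.
from datetime import date

AT_RISK_AFTER_DAYS = 10

last_activity = {
    "student-001": date(2020, 9, 28),
    "student-002": date(2020, 10, 14),
    "student-003": date(2020, 10, 2),
}

today = date(2020, 10, 15)
for student, last_seen in last_activity.items():
    idle_days = (today - last_seen).days
    if idle_days >= AT_RISK_AFTER_DAYS:
        print(f"{student}: {idle_days} days inactive, flag for outreach")
```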

Gardner: Tim, when I hear Chris and Scott describe these challenges in education, I think how impactful this can be for other businesses in general as they increasingly have blended workforces. They are going to face similar gaps too. What, from Citrix’s point of view, should businesses be learning from the experiences at University of Northampton and Wake Tech?

Minahan: I think Winston Churchill summed it up best: “Never let a good crisis go to waste.” Smart organizations are using the current crisis — not just to survive, but to thrive. They are using the opportunity to accelerate their digital transformation and rethink long-held work and operating models in ways they probably hadn’t before.

So as demonstrated both at Wake Tech and Northampton, and as Scott and Chris both said, for both school and work the future is definitely going to be blended.

We have, for example, another higher education customer, the University of Sydney, which was able to get 20,000 students and faculty transitioned to an online learning environment last March, literally within a week. But that’s not the real story; it’s where they are going next with this.

As they entered the new school year in Sydney, they now have 100 core and software as a service (SaaS) applications that students can access through the digital workspace regardless of the type of device or their location. And they can ensure they have a consistent, secure, and reliable experience with those apps. They say the student experience is as good as, and sometimes even better than, what a student would have when using a locally installed app on a physical computer.

And now the university, most importantly, has used this remote learning model as an opportunity to reach new students — and even new faculty — in locations that they couldn’t have supported before due to geographic limitations of largely classroom-based models.

These are the types of things that businesses also have to think through. And as we hear from Wake Tech and Northampton, businesses can take a page from the courseware of many forward-thinking higher education organizations that are already using a blended learning model and see how that applies to their own business.

Gardner: Dr. Ralls, when you look to the future, what comes next? What would you like to see happen around remote learning, and what can the technologists like Citrix do to help?

Blended learning without walls

Ralls: Right now, there is so much greater efficiency than we had before. I think there is a way to bring that greater efficiency even more into our classrooms. For years we have talked about a flipped classroom, which really means identifying those things that are better accomplished outside — in a lab or in a shop — and doing those outside of the classroom.

We have to all get to a place where the learning process just doesn’t happen within the walls of the classrooms. So the ability for students to go back and review work, to pick up on work, to use multiple different tools to add and supplement what they are getting through a classroom-based experience, a shop-based experience — I think that’s what we are moving to.

For Wake Tech, this really hit us about March 15, 2020 when we went fully remote. We don’t want to go back to the way we were in April. We don’t want to be a fully remote, online college. But we also don’t want to be where we were in February.

This pandemic crisis has accelerated us toward where we want to be, where we can be. It's what we aspire to be in terms of better education — not just more convenient access to education, but better educational opportunities through the many options technology brings to supplement the core work we have always done in our seat-based environment.

Gardner: Chris, at Northampton, what’s the next step for the technology enabling these higher goals that Dr. Ralls just described? Where would you like to see the technology take Northampton students next?

Foward: The technology is definitely key to what we are trying to do as education providers: giving students the right skill sets as they move from higher education into business. Certainly, by bringing the likes of Citrix — what was originally a commercially focused application — into our institution, we have allowed our students to gain access, understand how the system works, and understand how to use it.

And that’s similar with most of our technologies that we have brought in. It gives students more of a commercial feel for how operations should be running, how systems should be accessed, and the ways to use those systems.

Gardner: Tim, graduates from Wake Tech and the University of Northampton a year or two from now are going to be well-versed in these technologies and in this level of collaboration and seamless transition between blended approaches. How are the companies they go to going to anticipate these new mindsets? What should businesses be doing to take full advantage of what these students have already been doing in these universities?

Students become empowered employees

Minahan: That’s a great point, and it is certainly something that business is grappling with now as we move beyond hiring Millennials to the next generation of highly educated, grown-up-on-the-Internet students with high expectations who are coming out of universities today.

For the next few years, it all boils down to the need to deliver a superior employee experience, to empower employees to perform at their best, and to do the jobs they were hired to do. We should not burden them, as we have in a lot of corporate America, with a host of different distractions, apps, and rules and regulations that keep them away from doing their core jobs.

And key to that, not surprisingly, is a digital workspace environment that empowers employees and provides unified access to all of the resources and information they need to perform at their best across any work channel or location. They need a behind-the-scenes security model that ensures the security of the corporate assets, applications, and information — as well as the privacy of individuals — without getting in the way of work.

And then, at a higher level, as we talked about earlier, we need an intelligence model with more analytics built into that environment. It will then not just offer up a launch pad to access the resources you need, but will actually guide you through your day, presenting the right tasks and insights as you need them, and allowing you to get the noise out of your day so you can really create, innovate, and do your best work. And that will be whether work is in an office, on the road, or work as we have seen recently, in the home.

Gardner: I wouldn’t be surprised if the students coming out of these innovative institutes of higher learning are going to be the instigators of change and innovation in their employment environments. So they become the point on the arrow from education into the business realm.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Citrix.

The path to a digital-first enterprise is paved with an Emergence Model and Digital Transformation Playbook

The next BriefingsDirect digital business optimization discussion explores how open standards help support a playbook approach for organizations to improve and accelerate their digital transformation.

As companies chart a critical journey to become digital-first enterprises, they need new forms of structure to make rapid adaptation a regular recurring core competency. Stay with us as we explore how standards, resources, and playbooks around digital best practices can guide organizations through unprecedented challenges — and allow them to emerge even stronger as a result. 

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.  

Here to explain how to architect for ongoing disruptive innovation is our panel: Jim Doss, Managing Director at IT Management and Governance, LLC, and Vice Chair of the Digital Practitioner Work Group (DPWG) at The Open Group; Mike Fulton, Associate Vice President of Technology Strategy and Innovation at Nationwide and Academic Director of Digital Education at The Ohio State University; and Dave Lounsbury, Chief Technical Officer at The Open Group. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts: 

Gardner: Dave, the pressure from the COVID-19 pandemic response has focused a large, 40-year gathering of knowledge into a new digitization need. What is that new digitization need, and why are standards a crucial part of it?

Lounsbury: It’s not just digitization, but also the need to move to digital. That’s what we’re seeing here. The sad fact of this terrible pandemic is that it has forced us all to live a more no-contact, touch-free, and virtual life.

We’ve all experienced having to be on Zoom, or not going into work, or even when you’re out doing take-out at a restaurant. You don’t sign a piece of paper anymore; you scan something on your phone, and all of that is based on having the skills and the business processes to actually deliver some part of your business’s value digitally. 

This was always an evolution, and we’ve been working on it for years. But now, this pandemic has forced us to face the reality that you have to adopt digital in order to survive. And there’s a lot of evidence for that. I can cite McKinsey studies where the companies that realized this early and pivoted to digital delivery are reaping the business benefits. And, of course, that means you have to have both the technology, the digitization part, but also embrace the idea that you have to conduct some part of your business, or deliver your value, digitally. This has now become crystal clear in the focus of everyone’s mind.

Gardner: And what is the value in adopting standards? How do they help organizations from going off the rails or succumbing to complexity and chaos?

Lounsbury: There’s classically been a split between information technology (IT) in an organization and the people who are in the business. And something I picked up at one of the Center for Information Research meetings was that the minute an IT person talks about “the business,” you’ve gone off the rails.

If you’re going to deliver your business value digitally — even if it’s something simple like contactless payments or an integrated take-out order system — that knowledge might have been previously in an IT shop or something that you outsourced. Now it has to be in the line of business. 

Pandemic survival is digital

There has to be some awareness of these digital fundamentals at almost all levels of the business. And, of course, to do that quickly, people need a structure and a guidebook for what digital skills they need at different points of their organizational evolution. And that is where standards, complemented by education and training, play a big role.

Fulton: I want to hit on this idea of digitization versus digital. Dave made that point and I think it’s a good one. But in the context of the pandemic, it’s incredibly critical that we understand the value that digitization brings — as well as the value that digital brings.

When we talk about digitization, typically what we’re talking about is the application of technology inside of a company to drive productivity and improve the operating model of the company. In the context of the pandemic, that value becomes much more important. Driving internal productivity is absolutely critical.

We’re seeing that here at Nationwide. We are taking steps to apply digitization internally to increase the productivity of our organization and help us drive the cost down of the insurance that we provide to our customers very specifically. This is in response to the adjustment in the value equation in the context of the pandemic.

But then, the digital context is more about looking externally. Digital in this context is applying those technologies to the customer experience and to the business model. And that’s where the contactless experience, as Dave was talking about, is so critically important.

There are so many ways now to interact with our customers, and in ways that don’t involve human beings. How to get things done in this pandemic, or to involve human beings in a different way — in a digital fashion — that’s where both digitization and digital are so critically important in this current context.

Gardner: Jim Doss, as organizations face a survival-of-the-fittest environment, how do we keep this a business transformation with technology pieces — and not the other way around?

Project management to product journey

Doss: The days of architecting IT and the business separately, or as a pure cascade or top-down thing; those days are going. Instead of those “inside-out” approaches, “outside-in” architectural thinking now keenly focuses on customer experiences and the value streams aligned to those experiences. Agile Architecture promotes enterprise segmentation to facilitate concurrent development and architecture refactoring, guided by architectural guardrails, a kind of lightweight governance structure that facilitates interoperability and keeps people from straying into dangerous territory.

If you read works like Team Topologies, The Open Group Digital Practitioner Body of Knowledge™ (DPBoK), and the Open Agile Architecture™ Standard, they are designed around team cognitive load, whether those are IT teams or business teams. And doing things like the Inverse Conway Maneuver segments the organization into teams that deliver a product, a product feature, a journey, or a sub-journey.

Those are some really huge trends, and the project-to-product shift is going on in both business and IT. These trends have been building for a few years, but when it comes to undoing 30 or 40 years of project-management mentality in IT, we’re still at the beginning of the project-to-product shift. It’s massive.

To summarize what Dave was saying, the business can no longer outsource digital transformation. As a matter of fact, by definition, you can’t outsource digital transformation to IT anymore. This is a joint effort going forward.
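
As a rough illustration of the team segmentation Doss describes above — one persistent, stream-aligned team per product, journey, or sub-journey, sized against cognitive load — here is a hedged sketch in Python. The team names, products, and the "maximum domains per team" heuristic are invented for the example; Team Topologies and the DPBoK treat cognitive load qualitatively rather than as a hard number.

```python
# Illustrative mapping of stream-aligned teams to the products/journeys they own.
TEAM_OWNERSHIP = {
    "payments-team":   ["checkout journey", "refunds sub-journey"],
    "onboarding-team": ["sign-up journey", "identity verification", "welcome emails"],
    "catalog-team":    ["product search", "recommendations", "pricing", "inventory sync"],
}

MAX_DOMAINS_PER_TEAM = 3  # hypothetical cognitive-load guardrail

def overloaded_teams(ownership: dict[str, list[str]]) -> list[str]:
    """Flag teams whose ownership likely exceeds a sensible cognitive load."""
    return [team for team, domains in ownership.items()
            if len(domains) > MAX_DOMAINS_PER_TEAM]

print(overloaded_teams(TEAM_OWNERSHIP))  # ['catalog-team']
```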

Gardner: Dave, as we’re further defining digital transformation, this goes beyond just improving IT operations and systems optimization. Isn’t digital transformation also about organizations redefining their total value proposition?

Lounsbury: Yes, that’s a very good point. We may have brushed over this point, but when we say and use the word digital, at The Open Group we really mean a change in the mindset of how you deliver your business.

This is not something that the technology team does. It’s a reorientation of your business focus and how you think about your interactions with the customer, as well as how you deliver value to the customer. How do you give them more ways of interacting with you? How do you give them more ways of personalizing their experience and doing what they want to do?

This goes very deep into the organization — to how you think about your value chains, business model leverage, and things like that.

One of the things we see a lot of is people trying to do old processes faster. We have been doing that incremental improvement and efficiency forever, applying machines to do part of the value-delivery job. But the essential decision now is thinking about the customer’s view as being primarily a digital interaction, and to give them customization, web access, and let them do the whole value chain digitally. That goes right to the top of the company and to how you structure your business model and value delivery.

Balanced structure for flexibility

Gardner: Mike Fulton, more structure comes with great value in that you can manage complexity and keep things from going off the rails. But some people think that too much structure slows you down. How do you reach the right balance? And does that balance vary from company to company, or are there general rules for finding that nirvana between too much structure and too little?

Fulton: If we want to provide flexibility and speed, we have to move away from rules and start thinking more about guardrails, guidelines, and about driving things from a principled perspective.

That’s one of the biggest shifts we’re seeing in the digital space related to enterprise architecture (EA). Whereas, historically, architecture played a directional, governance role, what we’re seeing now is that architecture in a digital context provides guardrails for development teams to work within. And that way, it provides more room for flexibility and for choice at the lower levels of an organization as you’re building out your new digital products.

Those digital products still need to work in the context of a broader EA, and an architecture that’s been developed leveraging potentially new techniques, like what’s coming out of The Open Group with the Open Agile Architecture standard. That’s new, different, and critically important for thinking about architecture in a different way. But, I think, that’s where we provide flexibility — through the creation of guardrails.
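
One concrete way teams express the guardrails Fulton describes is as lightweight, automated checks that run in a delivery pipeline, leaving everything inside the guardrails to the team's discretion. The sketch below is a generic illustration in Python; the specific rules (approved runtimes, mandatory data classification) are hypothetical examples, not an Open Group prescription.

```python
# A hypothetical service descriptor a team might submit with each new digital product.
service = {
    "name": "quote-api",
    "runtime": "python3.12",
    "data_classification": "internal",
    "exposes_pii": False,
}

APPROVED_RUNTIMES = {"python3.12", "java21", "node20"}        # illustrative guardrail
ALLOWED_CLASSIFICATIONS = {"public", "internal", "restricted"}

def guardrail_violations(desc: dict) -> list[str]:
    """Return guardrail violations; an empty list means the team is free to proceed."""
    problems = []
    if desc.get("runtime") not in APPROVED_RUNTIMES:
        problems.append(f"runtime {desc.get('runtime')!r} is not on the approved list")
    if desc.get("data_classification") not in ALLOWED_CLASSIFICATIONS:
        problems.append("missing or unknown data classification")
    if desc.get("exposes_pii") and desc.get("data_classification") != "restricted":
        problems.append("services handling PII must be classified 'restricted'")
    return problems

print(guardrail_violations(service))  # [] -- within the guardrails
```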

Doss: The days are over for “Ivory Tower” EA – the top-down, highly centralized EA. Today, EA is responding to right-to-left and outside-in versus inside-out pressures. It has to be more about responding, as Mike said, to the customer-centric needs using market data, customer data, and continuous feedback.

EA is really different now. It responds to product needs, market needs, and all of the domain-driven design and other things that go along with that. 

Lounsbury: Sometimes we use the term agile, and it’s almost like a religious term. But agile essentially means you’re structured to respond to changes quickly and you learn from your mistakes through repeatedly refining your concepts. That’s actually a key part of what’s in the Open Agile Architecture Standard that Mike referred to.

The reason for this is fundamental to why people need to worry about digital right now. With digital, your customer interface is no longer your fancy storefront. It’s that black mirror on your phone, right? You have exactly the same six-by-two-and-a-half-inch screen that everybody else has to get your message across.

And so, the side effect of that, is that the customer has much more power to select among competitors than they did in the past. There’s been plenty of evidence that customers will pick convenience or safety over brand loyalty in a heartbeat these days.

Internally, that means as a business you have to have your teams structured so they can quickly respond to the marketplace, and not have to go all the way up the management chain for some big decision and then bring it all the way back down again. You’ll be out-competed if you do it that way. There is a hyper-acceleration to “survival of the fittest” in business and IT; this has been called the Red Queen effect.

That’s why it’s essential to have agile not as a religion, but as the organizational agility to respond to outside-in customer pressures as a competitive factor in how you run your business. And, of course, that then pulls along the need to be agile in your business practices and in how you empower your agile teams. How do you give them the guardrails? How do you give them the infrastructure they need to succeed at all of those things?

It’s almost as if the pyramid has been turned on its head. It’s not a pyramid that comes down from the top of some high-level business decisions, but the pyramid grows backward from a point of interaction with the customers.

Gardner: Before we drill down on how to attain that organizational agility, let’s dwell on the challenges. What’s holding up organizations from attaining digital transformation now that they face an existential need for it?

Digital delivers agile advantage

Doss: We see a lot of companies try to bring in digital technologies but not employ the digital practices needed to realize the full intended value, so there’s a cultural lag.

The digital technologies are often used in combination and mashed up in amazing ways to bring out new products and business models. But you need digital practices along with those digital technologies. There’s a growing body of evidence that companies that actually get this are not just outperforming their industry peers by percentages — the difference is almost exponential.

The findings from the “State of DevOps” reports over the last few years give us clear evidence of this. Product teams are really driving a lot of the work and functionality across the silos, and increasingly into operations.

And this is why the standards and bodies of knowledge are so important — because you need these ideas. With The Open Group DPBoK, we’ve woven all of this together in one Emergence Model and kept these digital practices connected. That’s the “P” in DPBoK, the practitioner. It’s those digital practices that bring in the value.

Fulton: Jim makes a great point here. But in my context with Digital Executive Education at Ohio State, when we look at that journey to a digital enterprise we think of it in three parts: The vision, the transformation, and the execution.

The piece that Jim was just talking about speaks to execution. Once you’re in a digital enterprise, how do you have the right capabilities and practices to create new digital products day to day? And that’s absolutely critical.

But you also have to set the vision upfront. You have to be able to envision, as a leadership team of an organization, what a digital enterprise looks like. What is your blueprint for that digital enterprise? And so, you have to be able to figure that out. Then, once you have aligned that blueprint with your leadership team, you have to lead that digital transformation journey.

And that transformation takes you from the vision to the execution. And that’s what I really love about The Open Group and the new direction around an open digital portfolio — the portfolio of digital standards that work together in concert to take you across that entire journey.

These are standards that help you envision the future, standards that help you drive that digital transformation, like the Open Agile Architecture Standard, and standards that help you with digital delivery, such as IT4IT. A critically important part of this journey is rethinking your digital delivery, because the vast majority of products that companies produce today are digital products.

But then, how do you actually deliver the capabilities and practices, and uplift the organization with the new skills to function in this digital enterprise once you get there? And you can’t wait. You have to bring people along that journey from the very start. The entire organization needs to think differently, and it needs to act differently, once you become a digital enterprise.

Lounsbury: Right. And that’s an important point, Mike, and one that’s come out of the digital thinking going on at The Open Group. A part of the digital portfolio is understanding the difference between “what a company is” and “what a company does” — that vision that you talked about – and then how we operate to deliver on that vision.

Dana, you began this with a question about the barriers and what’s slowing progress down. Those things used to be vertically aligned. What the business is and does used to be decomposed through a top-down, reductionist process — decompose and delegate all of the responsibilities. And if everybody does their job at the edge, then the vision will be realized. That’s not true anymore because of the outside-in digital reality.

A big part of the challenge for most organizations is the old idea that, “Well, if we do all of that faster, we’ll somehow be able to compete.” That is gone, right? The fundamental change and challenge for top and middle management is, “How do we make the transition to the structure that matches the new competitive environment of outside-in?”

“What does it mean to empower our team? What is the culture we need in our company to actually have a productive team at the edge?” Things like, “Are you escalating every decision up to a higher level of management?” You just don’t have time for that anymore.

Are people free to choose the tools and interfaces with the customers that they believe will maximize the customer experience? And if it doesn’t work out, how do you move on to the next step without being punished for the failure of your experiment? If it reflects negatively on you, that’s going to inhibit your ability to respond, too.

All of these techniques, all of these digital ways of working, to use Jim’s term, have to be brought into the organization. And, as Mike said, that’s where the power of standards comes in. That’s where the playbooks that The Open Group has created in the DPBoK Standard, the Open Agile Architecture Standard, and the IT4IT Reference Architecture actually give you the guidance on how to do that.

Part of the Emergence Model is knowing when to do what, at the right stage in your organization’s growth or transformation.

Gardner: And leading up to the Emergence Model, we’ve been talking about standards and playbooks. But what is a “playbook” when it comes to standards? And why is The Open Group ahead of the curve in extending the value across multiple open standards and playbooks?

Teams need playbook to win

Lounsbury: I’ll be honest, Dana, The Open Group is at a very exciting time. We’re in a bit of a transition. When there was a clear division between IT and business, there were different standards and different bodies of knowledge for how you adapt to each of those. A big part of the role of the enterprise architect was in bridging those two worlds.

The world has changed, and The Open Group is in the process of adapting to that. We’re looking to build on the robust and proven standards and build those into a much more coherent and unified digital playbook, where there is easy discoverability and navigability between the different standards. 

People today want to have quick access. They want to say, “Oh, what does it mean to have an agile team? What does it mean to have an outside-in mindset?” They want to quickly discover that and then drill in deeper. And that’s what we pioneered with the DPBoK, with the architecture of the document called the Emergence Model, and that’s been picked up by other standards of The Open Group. It’s clearly the direction we need to do more in.

Gardner: Mike, why are multiple standards acting in concert good?

Fulton: For me, when I think about why you need multiple standards, it’s because if you were to try to create a single standard that covered everything, that standard would become incomprehensible.

If you want an industry standard, you need to bring the right subject matter experts together, the best of the best, the right thought leaders — and that’s what The Open Group does. It brings thought leaders from across the world together to talk about specific topics to develop the best information that we have as an industry and to put that into our standards.

But it’s a rare bird, indeed, that can do that across multiple parts of an organization, or multiple capabilities, or multiple practices. And so by building these standards up individually, it allows us to tap into the right subject matter experts, the right passions, and the right areas of expertise.

But then, what The Open Group is now doing with the digital portfolio is intentionally bringing those standards together to make sure they align — that they have the same messaging, that we’re all working from the same definitions, and that we’re all thinking about big, broad concepts in the same way — and then letting us dig down into the details with the right subject matter experts, at the level of granularity needed to provide the appropriate benefits for industry.

Gardner: And how does the Emergence Model help harmonize multiple standards, particularly around the Digital Practitioner Work Group?

Emergence Model scales

Lounsbury: We talked about outside-in, and there are a couple of ways you can approach how you organize such a topic. As Mike just said, there’s a lot of detail that you need to understand to fully grasp it.

But you don’t always have to fully grasp everything at the start. And there are different ways you can look at organizations. You can look at the typical stack, decomposition, and the top-down view. You can look at lifecycles — when you start at the left and go to the right, what are all the steps in between?

And the third dimension, which we’re picking up on inside The Open Group, is the concept of scale through the Emergence Model. And that’s what we’ve tried to do, particularly in the DPBoK Standard. It’s the best example we have right now. And that approach is coming into other parts of our standards. The idea comes out of lean startup thinking, which comes out of lean manufacturing.

When you’re a startup, or starting a new initiative, there are a few critical things you have to know. What is your concept of digital value? What do you need to deliver that value? Things like that.

Then you ideally succeed and grow and then, “Wow, I need more people.” So now you have a team. Well, that brings in the idea of, “What does team management mean? What do I have to do to make a team productive? What infrastructure does it need?”

And then, with that, the success goes on because of the steps you’ve taken from the beginning. As you get into more complexity, you get into multiple teams, which brings in budgeting. You soon have large-scale enterprises, which means you have all sorts of compliance, accounting, and auditing. These things go on and on.

But you don’t know those things at the start. You do have to know them at the end. What you need to know at the start is that you have a map as to how to get there. And that’s the architecture, and the process to that is what we call the Emergence Model.

It is how you map to scale. And I should say, people think of this quite often in terms of, “Oh it’s just for a startup. I’m not a startup, I’m in a big company.” But many big companies — Mike, I think you’ve had some experience with this – have many internal innovation centers. You do entrepreneurial funding for a small group of people and, depending on their success, feed them more resources. 

So you have the need for an Emergence Model even inside of big companies. And, by the way, there are many use cases for using a pattern for success in how to do digital transformation. Don’t start from the top-down; start with some experiments and grow from the inside-out.
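
For readers new to the DPBoK, the Emergence Model Lounsbury describes layers concerns by scale, from an individual or founder up through an enduring enterprise. The sketch below paraphrases that progression as a simple lookup in Python; the level names follow the DPBoK's general structure, but the concern lists are an illustrative summary rather than text quoted from the standard.

```python
# Illustrative paraphrase of the DPBoK Emergence Model: which concerns become
# unavoidable at each stage of organizational scale.
EMERGENCE_MODEL = [
    ("individual/founder",  ["digital value", "digital infrastructure", "application delivery"]),
    ("team",                ["product management", "work management", "operations"]),
    ("team of teams",       ["coordination", "investment and portfolio", "organization and culture"]),
    ("enduring enterprise", ["governance, risk, and compliance", "information management", "architecture"]),
]

def concerns_up_to(stage: str) -> list[str]:
    """Everything learned at earlier stages still applies at later ones."""
    concerns: list[str] = []
    for name, stage_concerns in EMERGENCE_MODEL:
        concerns.extend(stage_concerns)
        if name == stage:
            return concerns
    raise ValueError(f"unknown stage: {stage}")

print(concerns_up_to("team of teams"))
```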

Doss: I refer to that as downscale digital piloting. You may be a massive enterprise, but if you’re going to adapt and adopt new business models, like your competitors and smaller innovators who are in your space, you need to think more like them.

Though I’m in a huge enterprise, I’m going to start some smaller initiatives and fence them off from governance and other things that slow those teams down. I’m going to bring in only lean aspects for those initiatives.

And then you amplify what works and scale that to the enterprise. As Dave said, smaller organizations now have a great guidebook for what’s right around the corner. They’re growing; they don’t have just one product anymore, they have two or three, and so the original product owner can’t be in every product meeting.

So all of those things are happening as a company grows, and the DPBoK and its Emergence Model are great for saying, “Hey, this is what’s around the corner.”

With a lot of other frameworks, you’d have to spend a lot of time extracting the scale-specific guidance on digital practices — and that’s a lot of work, to be honest, and hard to get right. In the DPBoK, we built the guidance so it’s much easier to move in either direction, supporting up-scale as well as down-scale digital piloting.

Gardner: Mike, you’re on the pointy end of this, I think, in one of your jobs. 

Intentional innovation

Fulton: Yes, at Nationwide, in our technology innovation team, we are doing exactly what Dave and Jim have described. We create new digital products for the organization and we leverage a combination of lean startup methodologies, agile methodologies, and the Emergence Model from The Open Group DPBoK to help us think about what we need at different points in time in that lifecycle of a digital product.

And that’s been really effective for us as we have brought new products to market. I shared the full story at The Open Group presentation about six months ago. But it is something that I believe is a really valuable tool for big enterprises trying to innovate. It helps you be very intentional about what you are using. Which capabilities and components are lean versus more robust? Which are implicit versus explicit? And at what point in time do you actually need to start writing things down?

At what point in time do you absolutely need to start leveraging those slightly bigger, more robust enterprise processes to be able to effectively bring a digital product to market versus using processes that might be more appropriate in a startup world? And I found the DPBoK to be incredibly helpful and instructive as we went through that process at Nationwide. 

Gardner: Are there any other examples of what’s working, perhaps even in the public sector? This is not just for private sector corporations. A lot of organizations of all stripes are trying to align, become more agile, more digital, and be more responsive to their end-users through digital channels. Any examples of what is working when it comes to the Emergence Model, rapid digitization, and leveraging of multiple standards appropriately?

Good governance digitally 

Doss: We’re really still in the early days with digital in the US federal government. I do a lot of work in the federal space, and I’ve done a lot of commercial work as well. 

They’re still struggling in the federal space with the project-to-product shift. 

There is still a huge focus on the legacy project management mentality. When you think about the legacy definition of a deliverable, the project is done at the deliverable. So, that supports “throw it over the wall and run the other way.”

Various forms of the plan-build-operate (PBO) IT organization structure still dominate in the federal space. Organizations that are PBO-aligned tend to push work from left to right across the plan, build, and operate silos, and the spaces between these silos are heavily stage-gated. So this inside-out thinking and stage-gating also supports “throw it over the wall and run the other way.” In the federal space, waterfall is baked into nearly everything.

These are two huge digital anti-patterns that the federal space is really struggling with. 

Product management, for example, employs a single persistent team that remains with the work across the lifecycle and ties together those dysfunctional silos. Such “full product lifecycle teams” eliminate a lot of the communication and hand-off problems associated with such legacy structures.

The other problem in the federal space with the PBO IT organization structure is that the real power resides in those silos, and each silo’s management focus is downward into its own silo — not so much across them. So cross-functional initiatives such as EA, service ownership, product ownership, or digital initiatives might get some traction for a while, but such initiatives or functions have no real buying power or “go/no-go” decision authority, so eventually they get squashed by the silo heads, where the real power resides in such organizations.

In the US, I look over time for Congress, via new laws, or the Office of Management and Budget (OMB), via policy, to bring in some needed changes in how IT organizations get structured and governed.

Ironically, these two digital anti-patterns have also led to the creation of lots of over-baked governance over decades to try to assure that the intended value was still captured, which is like throwing more bad money after bad.

This is not just true in the federal space; it’s also true in the commercial world. Such over-baked governance just happens to be really, really bad in the federal space.

For federal IT, you have laws like Clinger-Cohen and the Federal Information Technology Acquisition Reform Act (FITARA), policies and required checks by the OMB, Capital Planning and Investment Control, Acquisition Regulations, the DoD Architecture Framework — and I could go on — all of which require tons of artifacts and evidence of sound decision-making.

The problem is that nobody is rationalizing these together — figuring out what supersedes what when something new comes out. So the governance just gets more and more un-lean and bloated, and what you have at the end is agencies that are either misguided by out-of-date guidance or overburdened by over-bloated governance.

Fulton: I don’t have nearly the level of depth in the government space that Jim does, but I do have a couple of examples I want to point people to if they are trying to look for more government-related examples. I point you to a couple here in Ohio, starting with Doug McCollough and his work with the City of Dublin in Ohio. He’s done a lot of work with digital technologies and digital transformation at the city level.

And then again here in Ohio — and I’m just using Ohio references because I live in Ohio and I know a little more intimately what some of these folks are doing — Ervan Rodgers, CIO of the State of Ohio, has done a really nice job of focusing on building up digital capabilities and practices across state employees.

The third I’ll point to is the work going on in India. There’s been a tremendous amount of really great work in India related to government, architecture, and getting to the digital transformation conversation at the government level. So, if folks are interested in more examples, more stories, I’d recommend you look into those three as places to start.

Lounsbury: The thing I think you’re referring to there, Mike, is the IndEA (India Enterprise Architecture) initiative and the pivot to digital that several of the Indian states are making. We can certainly talk about that more on a different podcast.

I will toss in one ray of light to what Jim said. Transformation is almost always driven by an almost Darwinian force. Something has changed in your environment that causes you to evolve, and we’ve seen that in the federal sector, and the defense sector in particular, where in areas like avionics the cost of software was becoming unaffordable. They turned to modular, decomposable systems based on standards in order to achieve the necessary cost savings just to stay in business.

Similarly, in India, the utter need to deliver to a very diverse, large rural population drove that needed digitization. And certainly, the U.S. federal sector and the defense sector are very aware of the disparity. I think things like defense budget changes or changes in mission will drive some of the changes we’ve talked about — the ones the pandemic is urgently driving in the commercial sector.

So it will happen, but, I’ll agree with Jim, it is probably the most challenging, ultimately top-down environment in which you could possibly imagine doing a transformation.

Gardner: In closing, what’s coming next from The Open Group, particularly around digital practitioner resources? How can organizations best exploit these resources?

Harmony on the horizon

Lounsbury: We’ve talked about the evolution The Open Group is going through, about the digital portfolio and the digital playbooks having all of our standards speak common language and working together.

A first step in that is to develop a set of principles by which we’re going to do that evolution; the document is called Principles for Open Digital Standards. You can get that from The Open Group bookstore, and if you want to find it quickly, go to The Open Group’s Digital-First Enterprise page, which links to all of these standards.

Looking forward, there are activities going on in all of the forums of The Open Group — and the forums are voluntary organizations. But certainly the IT4IT Forum, the Digital Practitioner Work Group, and large swaths of our architecture activity are working on how we can harmonize the language and bring common knowledge to our standards.

And then, to look beyond that, I think we need to address the problems of discoverability and navigability that I mentioned earlier, to give a coherent and easy-to-access picture of where a person can find what they need when they need it.

Fulton: Dave, I think probably one of the most important pieces of work that will be delivered soon by The Open Group is putting a stake in the ground around what it means to be a digital product. And that’s something that I don’t think we’ve seen anywhere else in the industry. I think it will really move the ball forward and be a unifying document for the entire open digital portfolio.

And so, we have some great work that’s already gone on in the DPBoK and the Open Agile Architecture standard, but I think that digital product will be a rallying cry that will make all of the standards even more cohesive going forward.

Doss: And I’ll just add my final two cents here. I think a lot of it, Dana, is just awareness. People need to just understand that there’s a DPBoK Standard out there for digital practitioners. 

If you’re in IT, you’re not just an IT practitioner anymore; you’re using digital technology and digital practices to bring lean, user-centric value to your business or mission. So digital is the new best practice, and there’s a framework and a body of knowledge out there now that supports and helps people transform in their careers. The same goes for Agile Architecture. So it’s just the awareness that these things are out there.

The most powerful thing to me is that both of the works I just mentioned have more than 500 references from the last 10 years of leading digital thinkers. The way these are structured and built — bringing in just the scale-specific guidance and that sort of thing — is hugely powerful. There needs to be increasing awareness that this material is out there.

Lounsbury: And if I can pick up on that awareness point, I do want to mention, as always, that The Open Group publishes its standards as freely available to all. You can go to that Digital-First Enterprise page or The Open Group Library to find them. We also have an active training ecosystem that you can tap into — everybody does that digital training these days.

There are ways of learning the standards in depth and getting certified as proficient in that knowledge. But I should also mention that we have at least two U.S. universities, and more interest in the international sector, doing graduate work in executive-level education. Mike has mentioned his executive teaching at Ohio State, and there are others as well.

Gardner: Right, and many of these resources are available at The Open Group website. There are also many events, many of them now virtual, as well as certification processes and resources. There’s always something new, it’s a very active place.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: The Open Group.

How The Open Group enterprise architecture portfolio enables an agile digital enterprise

The next BriefingsDirect agile business enablement discussion explores how a portfolio approach to standards has emerged as a key way to grapple with digital transformation.

As businesses seek to make agility a key differentiator in a rapidly changing world, applying enterprise architecture (EA) in concert with many other standards has never been more powerful. Stay with us here to explore how to define and corral a comprehensive standards resources approach for making businesses intrinsically agile and competitive. 

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about attaining agility via an embrace of a broad toolkit of standards, we are joined by our panel: Chris Frost, Principal Enterprise Architect and Distinguished Engineer, Application Technology Consulting Division, at Fujitsu; Sonia Gonzalez, The Open Group TOGAF® Product Manager; and Paul Homan, Distinguished Engineer and Chief Technology Officer, Industrial, at IBM Services. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Sonia, why is it critical to modernize businesses in a more comprehensive and structured fashion? How do standards help best compete in this digital-intensive era?

Gonzalez: The question is more important than ever. We need to be very quickly responding to changes in the market.

It’s not only that we have more technology trends and competitors. Organizations are also changing their business models — the way they offer products and services. And there’s much more uncertainty in the business environment.

The current situation with COVID-19 has made for a very unpredictable environment. So we need to be faster in the ways we respond. We need to make better use of our resources and to be able to innovate in how we offer our products and services. And since everybody else is also doing that, we must be agile and respond quickly. 

Gardner: Chris, how are things different now than a year ago? Is speed all that we’re dealing with when it comes to agility? Or is there something more to it?

Frost: Speed is clearly a very important part of it, and market trends are driving that need for speed and agility. But this has been building for a lot more than a year.

We now have, with some of the hyperscale cloud providers, the capability to deploy new systems and new business processes more quickly than ever before. And with some of the new technologies — like artificial intelligence (AI), data analytics, and 5G – there are new technological innovations that enable us to do things that we couldn’t do before.

Faster, better, more agile

A combination of these things has come together in the last few years that has produced a unique need now for speed. That’s what I see in the market, Dana.

Gardner: Paul, when it comes to manufacturing and industrial organizations, how do things change for them in particular? Is there something about the data, the complexity? Why are standards more important than ever in certain verticals?

Homan: The industrial world in particular, focusing on engineering and manufacturing, has brought together the physical and digital worlds. And whilst these industries have not been as quick to embrace the technologies as other sectors have, we can now see how they are connected. That means connected products, connected factories and places of work, and connected ecosystems.

There are still so many more things that need to be integrated, and fundamentally EA comes back to the how – how do you integrate all of these things? A great deal of the connectivity we’re now seeing around the world needs a higher level of integration.

Gardner: Sonia, to follow this point on broader integration, does applying standards across different parts of any organization now make more sense than in the past? Why does one part of the business need to be in concert with the others? And how does The Open Group portfolio help produce a more comprehensive and coordinated approach to integration?

Integrate with standards

Gonzalez: Yes, what Paul mentioned about being able to integrate and interconnect is paramount for us. Our portfolio of standards, which is more than just the TOGAF® Standard (The Open Group Architecture Framework), is like having a toolkit of different open standards that you can use to address different needs, depending upon your particular situation.

For example, there may be cases in which we need to build physical products across an extended industrial environment. In that case, certain kinds of standards will apply. Also critical is how the different standards will be used together and pursue interoperability. Therefore, borderless information flow is one of our trademarks at The Open Group.

Other, more intangible cases, such as digital services, need standards too. For example, the Digital Practitioner Body of Knowledge (DPBoK™) provides a scale model to support the digital enterprise.

Other standards are emerging around agile enterprises and best practices. They support making interconnections and interoperability faster — while at the same time keeping the proper consistency and integration to align with the overall strategy. At the end of the day, it’s not enough to integrate from just a technical point of view. You need to bring new value to your businesses. You need to be aligned with your business model, your business view, and your strategy.

Therefore, the change is not only to integrate technical platforms, even though that is paramount, but also to change your business and operational model and to go deeper to cover your partners and the way your company is put together.

So, therefore, we have different standards that cover all of those different areas. As I said at the beginning, these form a toolkit from which you can choose different standards and make them work together, forming a portfolio of standards.

Gardner: So, whether we look to standards individually or together as a toolkit, it’s important that they have a real-world applicability and benefits. I’m curious, Paul and Chris, what’s holding organizations back from using more standards to help them?

Homan: When we use the term traditional enterprise architecture, it always needs to be adapted to suit the environment and the context. TOGAF, for example, has to be tailored to the organization and for the individual assignment.

But I’ve been around in the industry long enough to be familiar with a number of what I call anti-patterns that have grown up around EA practices and which are not helping with the need for agility. This comes from the idea that EA has heavy governance.

We have all witnessed such practices — and I will confess to having been part of some of them. And these obviously fly in the face of agility and flexibility: being able to push decisions out to the edge, pivot quickly, make mistakes, and be allowed to learn from them. So, kind of an experimental attitude.

And so gaining such adaptation is more than just promoting good architectural decision-making within a set of guide rails — it allows decision-making to happen at the point of need. So that’s the needed adaptation that I see.

Gardner: Chris, what challenges do you see organizations dealing with, and why are standards so important to helping them attain a higher level of agility?

Frost: The standards are important, not so much because they are a standard but because they represent industry best practices. The way standards are developed in The Open Group are not some sort of theoretical exercise. It’s very much member-driven and brought together by the members drawing on their practical experiences.

To me, the point is more about industry best practice, and not so much the standard. There are good things about standard ways of working, being able to share things, and everybody having a common understanding about what things mean. But that aspect of the standard that represents industry best practices — that’s the real value right now.

Coming back to what Paul said, there is a certain historical perspective here that we have to acknowledge. EA projects in the past — and certainly things I have been personally involved in — were often delivered in a very waterfall fashion. That created a certain perception that somehow EA means big-design-upfront-waterfall-style projects — and that absolutely isn’t the case.

That is one of the reasons why a certain adaptation is needed. Guidance about how to adapt is needed. The word adapt is very important because it’s not as if all of the knowledge and fundamental techniques that we have learned over the past few years are being thrown away. It’s a question of how we adapt to agile delivery, and the things we have been doing recently in The Open Group demonstrate exactly how to do that.

Gardner: And does this concept of a minimum viable architecture fit in to that? Does that help people move past the notion of the older waterfall structure to EA?

Reach minimum viable architecture

Frost: Yes, very much it does. In architectural terms, a minimum viable architecture is like reaching first base: it emphasizes rapidly getting to something you can take forward to the next stage. You can get feedback, and also an acknowledgment that you will improve and iterate in the future. Those are fundamentals of agile working. So, yes, that minimum viable architecture concept is a really important one.
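
A minimum viable architecture is often made tangible as a very small set of decisions that must exist before the first increment ships, and nothing more. As a hedged illustration in Python — the required decision names here are invented for the example, not drawn from TOGAF or the Open Agile Architecture standard — it might look like this:

```python
# Hypothetical definition of "first base": the few decisions that must exist
# before iteration starts; everything else is allowed to emerge later.
MVA_REQUIRED_DECISIONS = ["value proposition", "key integration points",
                          "data ownership", "deployment target"]

def mva_gaps(decisions: dict[str, str]) -> list[str]:
    """Return the minimum-viable-architecture decisions still missing."""
    return [d for d in MVA_REQUIRED_DECISIONS if not decisions.get(d)]

draft = {
    "value proposition": "self-service policy renewal",
    "key integration points": "billing API, identity provider",
    "deployment target": "",          # still undecided
}
print(mva_gaps(draft))  # ['data ownership', 'deployment target']
```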

Gardner: Sonia, if we are thinking about a minimum viable architecture we are probably also working toward a maximum value standards portfolio. How do standards like TOGAF work in concert with other open standards, standards not in The Open Group? How do we get to that maximum value when it comes to a portfolio of standards?

Gonzalez: That’s very important. First, it has to do with adapting the practice, not only the standard. In order to face new challenges, especially around agile and digital, the practices need to evolve — and therefore so do the standards, including the whole portfolio of The Open Group standards, which are constantly evolving and improving. Our members are the ones contributing the content that follows the new trends, best practices, and uses for all of those practices.

The standards need to evolve to cover areas like digital and agile. And with the concept of minimum viable architecture, the standards are evolving to provide guidance on how EA as a practice supports agile. Actually, nothing in the standard says it has to be used in a waterfall way, even though some people may say that.

TOGAF is now building guidance for how people can use the standards to support the agile enterprise, delivering architecture in an agile way and also supporting an agile approach — which means having a different view of how the practice is applied following this new shift and adaptation.

Adapt to sector-specific needs

The practice needs to be adapted, and the standards need to evolve to support that and be applied to specific situations. For example, architecting organizations around back-office processes is not the same as architecting those that are more customer facing. For the former, the processes are heavier and don’t need to be that agile. Agile architecture is for customer-facing work that needs to support a faster pace.

So you might have cases in which you need to mix different ways of applying the practices and standards: a less agile approach for the back office and a more agile approach for customer-facing applications such as online banking.

Adaptation also depends on the nature of companies. The healthcare industry is one example. We cannot experiment that much in that area because that’s more risk assessment and less subject to experimentation. For these kinds of organizations a different approach is needed. 

There is work in progress in different sectors. For example, we have a very good guide and case study about how to use the TOGAF standard along with the ArchiMate® modeling notation in the banking industry using the BIAN® Reference Model. That’s a very good use case in The Open Group library. We also have work in progress in the forum around how governments architect. The IndEA Reference Model is another example — a reference model for government that has been put together based on open standards.

We also have work in progress around security, such as with SABSA [a framework for business security architecture]. We have developed guidance on standards and security together with SABSA. We also have a partnership with the Object Management Group (OMG), in which we are pioneers, with a liaison to build products that will go to market and help practitioners use external standards along with our own portfolio.

Gardner: When we look at standards as promoting greater business agility, there might be people who look to the past and say, “Well, yes, but it was associated with a structured waterfall approach for so long.”

But what happens if you don’t have architecture and you try to be agile? What’s the downside if you don’t have enough structure and don’t put in these best practices? What can happen if you try to be agile without the necessary amount of architectural integrity?

Guardrails required

Homan: I’m glad you asked, because a number of organizations I have worked with have experienced the results of diminishing their architectural governance. I won’t name them for obvious reasons, but I know of organizations that have embraced agility. They responded well to being able to do things quickly, find things out, and move fleet of foot, and they combined that with cloud computing capabilities. They had great freedom in choosing where to source commodity cloud services.

And, as an enterprise architect looking in, that freedom created a massive number of mini-silos. As soon as those needed to come together and scale — and scale is the big word — that’s where the problems started. I’ve seen, for example, gaps in the common use of information and standards, and processes and workflows that don’t cross from one cloud vendor to another. And these are end-customer-facing services and deliveries that frankly clash, coming from the same organization, from the same brand.

And those sorts of things came about because they weren’t using common reference architectures. There wasn’t a common understanding of the value propositions being worked toward, and the problems manifested because you could rapidly spin stuff out.

When you have a small, agile model with everybody co-located in a relatively contained space — where they can readily connect and communicate — great. But unfortunately, as soon as you disperse the model, go through rounds of additional development, and distribute to more geographies and markets, with lots of different products, you behave like a large organization. It’s inevitable that people are going to plough their own furrow and go in different directions. And so, you need a way of bringing it back together again.

And that’s typically where people come in and start asking how to reintegrate. They love the freedom and want to keep the freedom, but they need to combine it with some gentle guardrails that let them exercise speed without diverging too much.

Frost: The word guardrails is really important because that is very much the emphasis of how agile architectures need to work. My observation is that, without some amount of architecture and planning, what tends to go wrong is some of the foundational things – such as using common descriptions of data or common underlying platforms. If you don’t get those right, different aspects of an overall solution can diverge and fail to integrate. 

Some of those things include what we generally refer to as non-functional requirements: capacity, performance, and possibly safety or regulatory compliance. These are easily overlooked unless there is some degree of planning and surrounding architecture definition that thinks through how to incorporate those really important features.

A really important judgment call is how much architecture is just enough upfront to set down those important guardrails without going too far, back into the big-design-upfront approach, which we want to avoid so that we still create the most freedom we can.
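
To make the guardrails idea concrete, a minimal sketch along these lines might check each service description against a shared data glossary and an approved-platform list. Everything below — the platforms, field names, and service descriptors — is hypothetical, an illustration of the principle rather than anything drawn from the TOGAF standard or the panelists’ own tooling.

```python
# Illustrative only: a tiny "architecture guardrail" check. The approved
# platforms, shared data fields, and service descriptors are hypothetical.
APPROVED_PLATFORMS = {"aws", "azure"}                          # assumed common underlying platforms
COMMON_DATA_FIELDS = {"customer_id", "order_id", "currency"}   # assumed shared data glossary

services = [
    {"name": "payments", "platform": "azure", "data_fields": {"customer_id", "order_id", "currency"}},
    {"name": "loyalty",  "platform": "on-prem-x", "data_fields": {"custId", "order_id"}},
]

def check_guardrails(service):
    """Return a list of guardrail violations for one service description."""
    issues = []
    if service["platform"] not in APPROVED_PLATFORMS:
        issues.append(f"platform '{service['platform']}' is not an approved common platform")
    unknown = service["data_fields"] - COMMON_DATA_FIELDS
    if unknown:
        issues.append(f"fields {sorted(unknown)} are not in the shared data glossary")
    return issues

for svc in services:
    for issue in check_guardrails(svc):
        print(f"{svc['name']}: {issue}")
```

Run against these sample descriptors, only the “loyalty” service would be flagged, which is the point of a gentle guardrail: divergence is surfaced early without blocking the teams that stay within the agreed foundations.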

Gardner: Sonia, a big part of the COVID-19 response has been rapidly reorganizing or refactoring supply chains. This requires extended enterprise cooperation and ultimately integration. How are standards like TOGAF and the toolkit from The Open Group important to allow organizations to enjoy agility across organizational boundaries, perhaps under dire circumstances?

COVID-19 necessitates holistic view

Gonzalez: That is precisely when more architecture is needed, because you need to be able to put together a landscape, a whole view of your organization, which is now an extended organization. Your partners, customers, alliances, all of your liaisons, are part of your value chain, and you need visibility over them.

You mentioned suppliers and providers. These are changing due to the current situation. In the way they work, everything is going more digital and virtual, with less face-to-face contact. So we need to change processes. We need to change value streams. And we need to be sure that we have the right capabilities. Having standards is spot-on here, because one of the advantages of standards, and open standards especially, is that they facilitate communication with other parties. If you are talking the same language, it is easier to integrate and bring people together.

Now that most people are working virtually, that implies the need for very good management of your whole portfolio of products and their lifecycles. To address all this complexity and gain a holistic view of your capabilities, you need an architecture focus. And there are different standards that fit together across those different areas.

For example, you may need to deliver more digital capabilities to work virtually. You may need to change your whole process view to become more efficient and allow such remote work, and to do that you use standards. In the TOGAF standard we have very good guidance on business architecture, business models, business capabilities, and value streams, all of which provide guidance on how to do that.

Another very good guide under the TOGAF standard umbrella is the Organization Map guide. It’s about much more than having a formal organizational chart for your company. It’s about how you map different resources so you can respond quickly to changes in your landscape. Having a more dynamic, cross-cutting view of your working teams is required to be agile and to have interdisciplinary teams work together. So you need architecture, and you need open standards, to address those challenges.

Gardner: And, of course, The Open Group is not standing still, along with many other organizations, in trying to react to the environment and help organizations become more digital and enhance their customer and end-user experiences. What are some of the latest developments at The Open Group?

Standards evolve steadily

Gonzalez: First, we are evolving our standards constantly. The TOGAF standard is evolving to address more of these agile and digital trends, including how to adopt new technologies in accord with your business model, strategy, and organizational culture. That’s an improvement that is coming. Also, the structure of the standard has evolved to be easier to use and more agile. It has been designed to evolve through new and improved versions more frequently than in the past.

We also have other components coming into the portfolio. One of them is the Agile Architecture Standard, which is going to be released soon. That one goes straight into the agile space. It proposes a holistic view of the organization, and the coupling between agile and digital is addressed in that standard. It is also suitable for use along with the TOGAF standard; the two complement each other. The DPBoK [Digital Practitioner Body of Knowledge] Standard is also evolving to address new trends in the market.

We also have other standards. Microservices Architecture is a very active working group delivering guidance on microservices using the TOGAF standard. Another important one is Zero Trust Architecture in the security space. Now more than ever, as we go virtual and rely on platforms, we need to be sure that we have proper consistency in security and compliance. We have, for example, the General Data Protection Regulation (GDPR) considerations, which are stronger than ever. Those kinds of security concerns are addressed in that specific context.

The IT4IT standard, which is another reference architecture, is evolving toward becoming more oriented to a digital product concept to precisely address all of those changes.

All of these standards, all the pieces, are moving together. There are other things coming, for example, delivering standards to serve specific areas like oil, gas, and electricity, which are more facility-oriented, more physically-oriented. We are also working toward those to be sure that we are addressing all of the different possibilities.

Another very important thing here is that we are aiming for every standard we deliver to the market to have a certification program along with it. We have that for the TOGAF standard, the ArchiMate standard, IT4IT, and DPBoK. So the idea is to keep growing our portfolio of certifications along with the portfolio of standards.

Furthermore, we have more credentials as part of the TOGAF certification to allow people to pursue specializations. For example, I’m TOGAF-certified, but I might also want to go for a Solution Architect Practitioner or a Digital Architect credential. So we are combining the different products and standards we have into building blocks for a learning path around certifications, which is an important part of our offering.

Gardner: I think it’s important to illustrate where these standards are put to work and how organizations find the right balance between a minimum viable architecture and a maximum value portfolio for agility.

So let’s go through our panel for some examples. Are there organizations you are working with that come to mind that have found and struck the right balance? Are they using a portfolio to gain agility and integration across organizational boundaries?

More tools in the toolkit

Homan: The key question for me is: do these resources help people do architecture? In some of the organizations I’ve worked with, the greatest successes have been where they were able to pick and choose — cherry-pick, if you like — bits of different things and create a toolkit. It’s not about following just one thing. It’s about having a kit.

The reason I mention that is because one of the examples I want to reference has to do with the development of ecosystems. In ecosystems, it’s about how organizations work with each other to deliver some kind of customer-centric proposition. I’ve seen this in the construction industry in particular, where lots of organizations historically have had to come together to undertake large construction efforts.

And we’re now seeing what I consider to be an architected approach across those ecosystems. That helps build a digital thread, a digital twin equivalent of what is being designed and constructed. It matters for safety reasons, both for the people building it at the time and for the people who then occupy or use it, and it allows common standards to be shared and processes to interoperate end-to-end, so these things can be done in a more agile way, and specifically in a business-agile way.

So that’s one industry that always had ecosystems, but IT has come in and therefore architects have had to better collaborate and find ways to integrate beyond the boundary of their organization, coming back to the whole mission of boundaryless information flow, if you will.

Gardner: Chris, any examples that put a spotlight on some of the best uses of standards and the best applicability of them particularly for fostering agility?

Frost: Yes, a number of customers in both the private and public sector are going through this transition to using agile processes. Some have been there for quite some time; some are just starting on that journey. We shouldn’t be surprised by this in the public and private sectors because everybody is reacting to the same market fundamentals driving the need for agile delivery.

We’ve certainly worked with a few customers that have been very much at the forefront of developing new agile practices and how that integrates with EA and benefits from all of the good architectural skills and experiences that are in frameworks like the TOGAF standard.

Paul talked about developing ecosystems. We’ve seen things such as organizations embarking on large-scale internal re-engineering where they are adjusting their own internal IT portfolios to react to the changing marketplace that they are confronted by.

I am seeing a lot of common problems around fitting together agile techniques and architecture and needing to work in these iterative styles. But overwhelmingly, these problems are being solved. We are seeing the benefits of this iterative way of working, with rapid feedback and more rapid response to changing market conditions.

I would say even inside The Open Group we’re seeing some of the effects of that. We’ve been talking about the development of some of the agile guidance for the TOGAF standard within The Open Group, and even within the working group itself we’ve seen adoption of more agile styles of working, using some of the tools that are common in agile activities, things like GitLab and Slack. So it really is quite a pervasive trend in the marketplace.

Gardner: Sonia, are there any examples that come to mind that illustrate where organizations will be in the coming few years when it comes to the right intersection of agile, architecture, and the use of open standard? Any early adopters, if you will, or trendsetters that come to mind that illustrate where we should be expecting more organizations to be in the near future?

Steering wheel for agility

Gonzalez: Things are evolving rapidly. To be an agile and digital enterprise, different things need to change across the organization. It’s a business issue; it’s not only about technology platforms or technology adoption. It goes beyond that, to the business models.

For example, we increasingly see the need for an outside-in view of the market and trends. Being efficient and effective is not enough anymore. We need to innovate to figure out what the market is asking for, and sometimes even to generate that demand and create new digital offerings for your market.

That means more experimentation and more innovation, keeping in mind that to really deliver that digital offering you must have the right capabilities. Changes in your business and operational models, your backbone, need to be identified and then, of course, connected and delivered through technical platforms.

Data is another key component. We have several working groups and forums working on data management and data science. If you don’t have the information, you won’t be able to understand your customers. That’s another trend, having a more customer-journey-oriented view. In the end, you need to deliver value to your end users and, of course, internally to your company.

That’s why, even internally at The Open Group, we are looking at bringing our own standards closer to the customer’s view. That is something companies need to address. And for them to do that, practitioners need to develop new skills and evolve rapidly. They will need to study not only the new technology trends, but also how to communicate them to the business, so that means more communication, more marketing, and a bolder approach to innovation.

Sustainability is another area we are considering at The Open Group: being able to deliver standards that help organizations make better use of resources internally and externally, and select the tools to be sustainable within their environments.

Those are some of the things we see for the coming years. As we have all learned this year, we need to be able to shift very quickly. I was recently reading a very good blog that said agile is not only about having a good engine, but also a good steering wheel so you can change direction quickly. That’s a very good metaphor for how organizations should evolve. It’s great to have a good engine, but you need a good direction, and that direction is precisely what organizations need to pay attention to, not being agile only for the sake of being agile.

So, that’s the direction we are taking with our whole portfolio. We are also considering other areas. For example, we are trying to improve our offering in vertical industries. We have other things on the move, like open communities, especially for the ArchiMate standard, which is one of our executable standards and easier to implement using architecture tools.

So, those are the kinds of things in our strategy at The Open Group as we work to serve our customers.

Gardner: And what’s next when it comes to The Open Group events? How are you helping people become the types of architects who reach that balance between agility and structure in the next wave of digital innovation?

New virtual direction for events

Gonzalez: We have many different kinds of customers. We have our members, of course. We have our trainers. We have people who are not members but are using our standards, and they are very important. They might eventually become members. So, we have been identifying those different markets and building customer journeys for all of them in order to serve them properly.

Serving them, for example, means providing better ways for them to find information on our website and to get access to our resources. All of our publications are free to download and use if you are an end-user organization. You only need a commercial license if you apply them to deliver services to others.

In terms of events, we have had a very good experience with virtual events. The good thing about a crisis is that you can use it for learning, and we have learned that virtual events work very well. First, because we can reach a broader audience. For example, if you organize a face-to-face event in Europe, probably people from Europe will attend, but it’s very unlikely that people from Asia or even the U.S. or Latin America will. But virtual events, which are also free, are attracting people from different countries and geographies.

We have very good attendance at those virtual events. This year, all four events, except the one we had in San Antonio, have been virtual. Besides the big ones that we hold every three months, we have also organized smaller ones. We had a very good one in Brazil, we have another one for the Latin American community in Spanish, and we’re organizing more of these events.

For next year, we will probably have some kind of mix of virtual and face-to-face, because, of course, face-to-face is very important. For our members, for example, sharing experiences as a network is a value you can only get if you’re physically there. So, depending on how the situation evolves, next year will probably be a mix of virtual and face-to-face events.

We are trying to get a closer view of what the market is demanding from us, not only in the architecture space but in general.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: The Open Group.

Copyright Interarbor Solutions, LLC and The Open Group, 2005-2020. All rights reserved.


The IT intelligence foundation for digital business transformation rests on HPE InfoSight AIOps

The next BriefingsDirect podcast explores how artificial intelligence (AI) increasingly supports IT operations.

One of the most successful uses of machine learning (ML) and AI for IT efficiency has been the InfoSight technology developed at Nimble Storage, now part of Hewlett Packard Enterprise (HPE).


Initially targeting storage optimization, HPE InfoSight has emerged as a broad and inclusive capability for AIOps across an expanding array of HPE products and services.

Please welcome a Nimble Storage founder, along with a cutting-edge machine learning architect, to examine the expanding role and impact of HPE InfoSight in making IT resiliency better than ever.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about the latest IT operations solutions that help companies deliver agility and edge-to-cloud business continuity, we’re joined by Varun Mehta, Vice President and General Manager for InfoSight at HPE and founder of Nimble Storage, and David Adamson, Machine Learning Architect at HPE InfoSight. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Varun, what was the primary motivation for creating HPE InfoSight? What did you have in mind when you built this technology?

Mehta: Various forms of call home were already in place when we started Nimble, and that’s what we had set up to do. But then we realized that the call home data was only used for very simple actions. It was basically to look at the data one time and try to find problems the machine was having right then. These were very obvious issues, like a crash. If you had any kind of software crash, that’s what call home data would identify.

We found that if, instead of just scanning the data one time, we could store it in a database and actually look for problems over time, in areas wider than just a single use, we could come up with something very interesting. Part of the problem until then was that a database that could store this amount of data cheaply was just not available, which is why people would only do the one-time scan.

The enabler was that a new database became available. We found that rather than just scan once, we could put everyone’s data into one place, look at it, and discover issues across the entire population. That was very powerful. And then we could do other interesting things using data science such as workload planning from all of that data. So the realization was that if the databases became available, we could do a lot more with that data.

Gardner: And by taking advantage of that large data capability and the distribution of analytics through a cloud model, did the scope and relevancy of what HPE InfoSight did exceed your expectations? How far has this now come?

Mehta: It turned out that this model was really successful. They say that, “imitation is the sincerest form of flattery.” And that was proven true, too. Our customers loved it, our competitors found out that our customers loved it, and it basically spawned an entire set of features across all of our competitors.

The reason our customers loved it — followed by our competitors — was that it gave people a much broader idea of the issues they were facing. We then found that people wanted to expand this envelope of understanding that we had created beyond just storage.

Data delivers more than a quick fix

And that led to people wanting to understand how their hypervisor was doing, for example. And so, we expanded the capability to look into that. People loved the solution and wanted us to expand the scope into far more than just storage optimization.

Gardner: David, you hear Varun describing what this was originally intended for. As a machine learning architect, how has HPE InfoSight provided you with a foundation to do increasingly more when it comes to AIOps, dependability, and reliability of platforms and systems?

Adamson: As Varun was describing, the database is full of data that not only tracks everything longitudinally across the installed base, but also over time. The richness of that data set gives us an opportunity to come up with features that we otherwise wouldn’t have conceived of if we hadn’t been looking through the data. Also very powerful from InfoSight’s early days was the proactive nature of the IT support because so many simple issues had now been automated away. 

That allowed us to spend time investigating more interesting and advanced problems, which demanded ML solutions. Once you’ve cleaned up the Pareto curve of all the simple tasks that can be automated with simple rules or SQL statements, you uncover problems that take longer to solve and require a look at time series and telemetry that’s quantitative in nature and multidimensional. That data opens up the requirement to use more sophisticated techniques in order to make actionable recommendations.

Gardner: Speaking of actionable, something that really impressed me when I first learned about HPE InfoSight, Varun, was how quickly you can take the analytics and apply them. Why has that rapid capability to dynamically impact what’s going on from the data proved so successful? 

Support to succeed

Mehta: It turned out to be one of the key points of our success. I really have to compliment the deep partnership that our support organization has had with the HPE InfoSight team.

The support team right from the beginning prided themselves on providing outstanding service. Part of the proof of that was an incredible Net Promoter Score (NPS), an independent measurement of how satisfied customers are with our products. Nimble’s NPS was 86, which is even higher than Apple’s. We prided ourselves on providing a really strong support experience to the customer.

Whenever a problem would surface, we would work with the support team. Our goal was for a customer to see a problem only once. And then we would rapidly fix that problem for every other customer. In fact, we would fix it preemptively so customers would never have to see it. So, we evolved this culture of identifying problems, creating signatures for these problems, and then running everybody’s data through the signatures so that customers would be preemptively inoculated from these problems. That’s why it became very successful.
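
The signature approach Mehta describes can be pictured as a simple rule evaluated against every system’s telemetry across the installed base. The sketch below is purely illustrative; the metric names, firmware version, and threshold are hypothetical stand-ins, not actual HPE InfoSight signatures.

```python
# Illustrative sketch of "inoculating" an installed base with a problem signature.
# Metric names, the firmware version, and the threshold are hypothetical.
def crash_loop_signature(telemetry):
    """Return True if a system's telemetry matches the known problem pattern."""
    return (
        telemetry.get("controller_restarts_24h", 0) >= 3
        and telemetry.get("firmware") == "4.2.1"      # assumed affected version
    )

installed_base = {
    "array-001": {"controller_restarts_24h": 4, "firmware": "4.2.1"},
    "array-002": {"controller_restarts_24h": 0, "firmware": "4.3.0"},
}

# Run everybody's data through the signature and alert preemptively.
affected = [sys_id for sys_id, t in installed_base.items() if crash_loop_signature(t)]
print("Preemptive advisory for:", affected)   # -> ['array-001']
```

The value is that the signature is written once, after the first customer hits the issue, and then applied to everyone else’s data before they ever see it.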

Gardner: It hasn’t been that long since we were dealing with red light-green light types of IT support scenarios, but we’ve come a long way. We’re not all the way to fully automated, lights-out, machines running machines operations.

David, where do you think we are on that automated support spectrum? How has HPE InfoSight helped change the nature of systems’ dependability, getting closer to that point where they are more automated and more intelligent?

Adamson: The challenge with fully automated infrastructure stems from the variety of different components in the environments — and all of the interoperability among those components. If you look at just a simple IT stack, it is typically applications on top of virtual machines (VMs), on top of hosts — which may or may not have independent storage attached — and then the networking of all these components. And that’s discounting all the different applications and various software components required to run them.

There are just so many opportunities for things to break down. In that context, you need a holistic perspective to begin to realize a world in which the management of that entire unit is managed in a comprehensive way. And so we strive for observability models and services that collect all the data from all of those sources. If we can get that data in one place to look at the interoperability issues, we can follow the dependency chains.

But then you need to add intelligence on top of that, and that intelligence needs to not only understand all of the components and their dependencies, but also what kinds of exceptions can arise and what is important to the end users.

With HPE InfoSight, we go so far as to pull all of our subject matter expertise into the models and exception-handling automation. We may not have upfront information about which parts of your environment matter most. Instead, we can stop and let the user provide some judgment. It’s truly about messaging to the user the different alternative approaches they can take. As we see exceptions happening, we can provide those recommendations in a clean and interpretable way, so [the end user] can bring context to bear that we don’t necessarily have ourselves.

Gardner: And the timing for these advanced IT operations services is very auspicious. Just as we’re now able to extend intelligence, we’re also at the point where we have end-to-end requirements – from the edge, to the cloud, and back to the data center.

And under such a hybrid IT approach, we are also facing a great need for general digital transformation in businesses, especially as they seek to be agile and best react to the COVID-19 pandemic. Are we able yet to apply HPE InfoSight across such a horizontal architecture problem? How far can it go?

Seeing the future: End-to-end visibility 

Mehta: Just to continue from where David started, part of our limitation so far has been from where we began. We started out in storage, and then as Nimble became part of HPE, we expanded it to compute resources. We targeted hypervisors; we are expanding it now to applications. To really fix problems, you need to have end-to-end visibility. And so that is our goal, to analyze, identify, and fix problems end-to-end.

That is one axis of development we’re pursuing. The other axis is that things are becoming more and more complex. As businesses require their IT infrastructure to become highly adaptable, they also need scalability, self-healing, and enhanced performance. To achieve this, there is greater and greater complexity. And part of that complexity has been driven by the response to really poor utilization of resources.

Go back 20 years and we had standalone compute and storage machines that were not individually very well-utilized. Then you had virtualization come along, and virtualization gave you much higher utilization — but it added a whole layer of complexity. You had one machine, but now you could have 10 VMs in that one place.

Now, we have containers coming out, and that’s going to further increase complexity by a factor of 10. And right on the horizon, we have serverless computing, which will increase the complexity another order of magnitude.

So, the complexity is increasing, the interconnectedness is increasing, and yet the demands on businesses to stay agile, competitive, and scalable are also increasing. It’s really hard for IT administrators to stay on top of this. That’s why you need end-to-end automation and to collect all of the data to actually figure out what is going on. We have a lot of work cut out for us.

There is another area of research, and David spends a lot of time working on this, which is that you really want to avoid false positives. That is a big problem with lots of tools. They produce so many false positives that people just turn them off. Instead, we need to work through all of your data to be able to say, “Hey, this is a recommendation that you really should pay attention to.” That requires a lot of technology, a lot of ML, and a lot of data science experience to separate the wheat from the chaff.

One of the things that’s happened with the COVID-19 pandemic response is the need for very quick responses. For example, people have had to quickly set up websites for contact tracing, reporting on the disease, and vaccine use. That shows the accelerated pace at which people need digital solutions — and it’s just not possible without serious automation.

Gardner: Varun just laid out the complexity and the demands for both the business and the technology. It sounds like a problem that mere mortals cannot solve. So how are we helping those mere mortals to bring AI to bear in a way that allows them to benefit – but, as Varun also pointed out, allows them to trust that technology and use it to its full potential?

Complexity requires automated assistance

Adamson: The point Varun is making is key. If you are talking about complexity, we’re well beyond the point where people could realistically expect to log in to each machine to find, analyze, or manage exceptions that happen across this ever-growing, complex regime.

Even if you’re at a place where you have the observability solved, and you’re monitoring all of these moving parts together in one place — even then, it easily becomes overwhelming, with pages and pages of dashboards. You couldn’t employ enough people to monitor and act to spot everything that you need to be spotting.

You need to be able to trust automated exception [finding] methods to handle the scope and complexity of what people are dealing with now. So that means doing a few things.

People will often start with naïve thresholds. They create manual thresholds that raise alerts for really critical issues, such as when all the servers go down.

But there are often more subtle issues that show up that you wouldn’t necessarily have anticipated setting a threshold for. Or maybe your threshold isn’t right. It depends on context. Maybe the metrics that you’re looking at are just the raw metrics you’re pulling out of the system and aren’t even the metrics that give a reliable signal.

What we see from the data science side is that a lot of these problems are multi-dimensional. There isn’t just one metric that you could set a threshold on to get a good, reliable alert. So how do you do that right?
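
A rough way to see why a single threshold falls short: in the hedged sketch below, no individual metric crosses its own limit, yet a crude combined score across several correlated metrics does flag the sample. The metric names, baselines, and alert cutoff are assumptions for illustration, not InfoSight internals.

```python
# Illustrative contrast between a naive single-metric threshold and a simple
# multivariate score. Metric names and values are hypothetical; a real model
# would be trained on labeled installed-base data rather than hand-set baselines.
import math

def naive_alert(sample):
    # Single-metric rule: fires late, or not at all, for multidimensional issues.
    return sample["latency_ms"] > 50

def multivariate_score(sample):
    # Crude z-score combination across several correlated metrics.
    baselines = {"latency_ms": (20, 5), "queue_depth": (4, 2), "retries": (1, 1)}
    z = [(sample[m] - mu) / sd for m, (mu, sd) in baselines.items()]
    return math.sqrt(sum(v * v for v in z))

sample = {"latency_ms": 38, "queue_depth": 11, "retries": 6}
print(naive_alert(sample))              # False: no single metric crosses its line
print(multivariate_score(sample) > 4)   # True: jointly, the metrics look anomalous
```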

For the problems that IT support provides to us, we apply automation and we move down the Pareto chart to solve things in priority of importance. We also turn to ML models. In some of these cases, we can train a model from the installed base and use a peer-learning approach, where we understand the correlations between problem states and indicator variables well enough so that we can identify a root cause for different customers and different issues.

Sometimes, though, if the issue is rare enough, scanning the installed base isn’t going to give us a high enough signal-to-noise ratio. Then we can take some of these curated examples from support and do a semi-supervised loop. We basically say, “We have three examples that are known. We’re going to train a model on them.” Maybe that’s a few tens of thousands of data points, but they still come from only three examples, so there is co-correlation we have to worry about.

In that case we say: “Let me go fishing in that installed base with these examples and pull back what else gets flagged.” Then we can turn those back over to our support subject matter experts and say, “Which of these really look right?” And in that way, you can move past the fact that your starting data set of examples is very small and you can use semi-supervised training to develop a more robust model to identify the issues.
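
A minimal sketch of that semi-supervised loop might look like the following, assuming a handful of support-labeled examples and a generic scikit-learn classifier. The features, model choice, and fleet data are illustrative stand-ins, not the actual InfoSight pipeline.

```python
# Hedged sketch of the semi-supervised loop: train on a handful of support-labeled
# examples, "go fishing" across the installed base, and return the top candidates
# for support to confirm. Features and model choice are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# A few known examples curated by support (rows = samples, cols = telemetry features).
X_labeled = np.array([[0.9, 0.8, 0.7], [0.85, 0.9, 0.6], [0.1, 0.2, 0.1], [0.2, 0.1, 0.3]])
y_labeled = np.array([1, 1, 0, 0])          # 1 = matches the rare issue

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_labeled, y_labeled)

# Score the wider installed base and surface the most suspicious systems.
X_fleet = rng.random((1000, 3))
scores = model.predict_proba(X_fleet)[:, 1]
candidates = np.argsort(scores)[::-1][:5]    # top 5 go back to support for review
print(candidates, scores[candidates])

# Labels confirmed by support would then be appended to X_labeled / y_labeled
# and the model retrained, tightening the loop on each pass.
```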

Gardner: As you are refining and improving these models, one of the benefits in being a part of HPE is to access growing data sets across entire industries, regions, and in fact the globe. So, Varun, what is the advantage of being part of HPE and extending those datasets to allow for the budding models to become even more accurate and powerful over time?

Gain a global point of view

Mehta: Being part of HPE has enabled us to leapfrog our competition. As I said, our roots are in storage, but really storage is just the foundation of where things are located in an organization. There is compute, networking, hypervisors, operating systems, and applications. With HPE, we certainly now cover the base infrastructure, which is storage followed by compute. At some point we will bring in networking. We already have hypervisor monitoring, and we are actively working on application monitoring.

HPE has allowed us to radically increase the scope of what we can look at, which also means we can radically improve the quality of the solutions we offer to our customers. And so it’s been a win-win solution, both for HPE where we can offer a lot of different insights into our products, and for our customers where we can offer them faster solutions to more kinds of problems.

Gardner: David, anything more to offer on the depth, breadth, and scope of data as it’s helping you improve the models?

Adamson: I certainly agree with everything that Varun said. The one thing I might add concerns the feedback we’ve received over time. One of the key things in making the notifications possible is getting as close as possible to the customer experience of the applications and services running on the infrastructure.

We’ve done a lot of work to make sure we identify what look like meaningful problems. But we’re fundamentally limited if the scope of what we measure is only at the storage or hypervisor layer. So gaining additional measurements from the applications themselves is going to give us the ability to differentiate ourselves, to find the important exceptions to the end user, what they really want to take action on. That’s critical for us — not sending people alerts they are not interested in but making sure we find the events that are truly business-critical. 

Gardner: And as we think about the extensibility of the solution — extending past storage into compute, ultimately networking, and applications — there is the need to deal with the heterogeneity of architecture. So multicloud, hybrid cloud, edge-to-cloud, and many edges to cloud. Has HPE InfoSight been designed in a way to extend it across different IT topologies?

Across all architecture

Mehta: At heart, we are building a big data warehouse. You know, part of the challenge is that we’ve had this explosion in the amount of data that we can bring home. For the last 10 years, since InfoSight was first developed, the tools have gotten a lot more powerful. What we now want to do is take advantage of those tools so we can bring in more data and provide even better analytics.

The first step is to deal with all of these use cases. Beyond that, there will probably be custom solutions. For example, you talked about edge-to-cloud. There will be locations where you have good bandwidth, such as a colocation center, and you can send back large amounts of data. But if you’re sitting as the only compute in a large retail store like a Home Depot, for example, or a McDonald’s, then the bandwidth back is going to be limited. You have to live within that and still provide effective monitoring. So I’m sure we will have to make some adjustments as we widen our scope, but the key is having a really strong foundation and that’s what we’re working on right now.

Gardner: David, anything more to offer on the extensibility across different types of architecture, of analyzing the different sources of analytics?

Adamson: Yes. Originally, when we were storage-focused and grew to the hypervisor level, we discovered some things about the way we keep our data organized. If we made it more modular, we could make it easier to write simple rules and build complex models while keeping turnaround time fast. We developed some experience there, and we’ve applied it in the most recent release of recommendations in our customer portal.

We’ve modularized our data model even further to help us support more use cases from environments that may or may not have specific components. Historically, we’ve relied on having Nimble Storage there as the hub through which everything is collected. But we can’t rely on that anymore. We want to be able to monitor environments that don’t necessarily have that particular storage device, and we may have to support various combinations of HPE products and non-HPE applications.

Modularizing our data model to truly accommodate that is a path we have started down, and I think we’re making good strides.

The other piece is in terms of the data science. We’re trying to leverage longitudinal data as much as possible, but we want to make sure we have a sufficient set of meaningful ML offerings. So we’re looking at unsupervised learning capabilities that we can apply to environments for which we don’t have a critical mass of data yet, especially as we onboard monitoring for new applications. That’s been quite exciting to work on.

Gardner: We’ve been talking a lot about the HPE InfoSight technology, but there also has to be considerations for culture. A big part of digital transformation is getting silos between people broken down.

Is there a cultural silo between the data scientists and the IT operations people? Are we able to get the IT operations people to better understand what data science can do for them and their jobs? And perhaps, also allow the data scientists to understand the requirements of a modern, complex IT operations organization? How is it going between these two groups, and how well are they melding?

IT support and data science team up

Adamson: One of the things that Nimble did well from the get-go was have tight coupling between the IT support engineers and the data science team. The support engineers were fielding the calls from the IT operations guys. They had their fingers on the pulse of what was most important. That meant not only building features that would help our support engineers solve their escalations more quickly, but also things that we can productize for our customers to get value from directly.

Gardner: One of the great ways for people to better understand a solution approach like HPE InfoSight is through examples.  Do we have any instances that help people understand what it can do, but also the paybacks? Do we have metrics of success when it comes to employing HPE InfoSight in a complex IT operations environment? 

Mehta: One of the examples I like to refer to came fairly early in our history but had a big impact. It was at the University Hospital of Basel in Switzerland. They had installed a new version of VMware, and a few weeks afterward things started going horribly wrong with their implementation, which included a Nimble Storage device. They called VMware and VMware couldn’t figure it out. Eventually they called our support team, and using InfoSight, our support team was able to figure it out really quickly. The problem turned out to be caused by the new version of VMware. If there was a hold-up in the networking, some sort of bottleneck in their networking infrastructure, this VMware version would try really hard to get the data through.

So instead of submitting each write to the storage array once, it would try 64 times. Suddenly, their traffic went up by 64 times. There was a lot of pounding on the network and on the storage system, and we were able to tell with our analytics that this traffic was going up by a huge amount. As we tracked it back, it pointed to the new version of VMware that had been loaded. We then connected with the VMware support team and worked very closely with all of our partners to identify this bug, which VMware very promptly fixed. But, as you know, it takes time for these fixes to roll out to the field.

We were able to preemptively alert other people who had the same combination of VMware on Nimble Storage and say, “Guys, you should either upgrade to this new patch that VMware has made or just be aware that you are susceptible to this problem.”

So that’s a great example of how our analytics was able to find a problem, get it fixed very quickly — quicker than any other means possible — and then prevent others from seeing the same problem.
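
For illustration only, a detector for that kind of write-amplification spike could compare each system’s current write rate with its own recent baseline, as in the hedged sketch below. The numbers, field names, and the 10x alert factor are made up for the example, not taken from InfoSight.

```python
# Illustrative detector for a sudden write-amplification spike (e.g., ~64x),
# comparing today's write rate with each system's own trailing baseline.
# Data values and the 10x alert factor are assumptions for the sketch.
history = {
    "array-A": {"baseline_writes_per_s": 1_200, "current_writes_per_s": 76_800},  # ~64x
    "array-B": {"baseline_writes_per_s": 900,   "current_writes_per_s": 1_050},
}

ALERT_FACTOR = 10  # flag anything an order of magnitude above its own baseline

for system, h in history.items():
    ratio = h["current_writes_per_s"] / max(h["baseline_writes_per_s"], 1)
    if ratio >= ALERT_FACTOR:
        print(f"{system}: write traffic is {ratio:.0f}x its baseline -- investigate the host/driver stack")
```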

Gardner: David, what are some of your favorite examples of demonstrating the power and versatility of HPE InfoSight?

Adamson: One that comes to mind was the first time we turned to an exception-based model that we had to train. We had been building infrastructure designed to learn across our installed base to find common resource bottlenecks and identify and rank those very well. We had that in place, but we came across a problem that support was trying to write a signature for. It was basically a drive bandwidth issue.

But we were having trouble writing a signature that would identify the issue reliably. We had to turn to an ML approach because it was fundamentally a multidimensional problem. Across the board, we tracked probably 10 to 20 different metrics per drive per minute on each system. From those metrics, we needed to come up with a good estimate of the probability that this was the biggest bottleneck on the system. This was not a problem we could solve by just setting a threshold.

So we had to really go in and say, “We’re going to label known examples of these situations. We’re going to build the sort of tooling to allow us to do that, and we’re going to put ourselves in a regime where we can train on these examples and initiate that semi-supervised loop.”

We actually had two to three customers that hit that specific issue. By the time we wanted to put that in place, we were able to find a few more just through modeling. But that set us up to start identifying other exceptions in the same way.

We’ve been able to redeploy that pattern now several times to several different problems and solve those issues in an automated way, so we don’t have to keep diagnosing the same known flavors of problems repeatedly in the future.

Gardner: What comes next? How will AI impact IT operations over time? Varun, why are you optimistic about the future?

Software eats the world 

Mehta: I think having a machine in the loop is going to be required. As I pointed out earlier, complexity is increasing by leaps and bounds. We are going from virtualization to containers to serverless. The number of applications keeps increasing and demand on every industry keeps increasing. 

Marc Andreessen, co-founder of the famous venture capital firm Andreessen Horowitz, once said that “software is eating the world,” and really, it is true. Everything is becoming tied to a piece of software. The complexity of that is just huge. The only way to manage this and make sure everything keeps working is to use machines.

That’s where the challenge and opportunity is. Because there is so much to keep track of, one of the fundamental challenges is to make sure you don’t have too many false positives. You want to make sure you alert only when there is a need to alert. It is an ongoing area of research.

There’s a big future in terms of the need for our solutions. There’s plenty of work to keep us busy to make sure we provide the appropriate solutions. So I’m really looking forward to it.

There’s also another axis to this. So far, people have stayed in the monitoring and analytics loop and it’s like self-driving cars. We’re not yet ready for machines to take over control of our cars. We get plenty of analytics from the machines. We have backup cameras. We have radars in front that alert us if the car in front is braking too quickly, but the cars aren’t yet driving themselves.

It’s all about analytics; we haven’t yet graduated from analytics to control. I think that, too, is something you can expect to see in the future of AIOps, once the analytics get really good and once the false positives go away. You will see things moving from analytics to control. So there is a lot of really cool stuff ahead of us in this space.

Gardner: David, where do you see HPE InfoSight becoming more of a game changer and even transforming the end-to-end customer experience where people will see a dramatic improvement in how they interact with businesses?

Adamson: Our guiding light in terms of exception handling is making sure that not only are we providing ML models that have good precision and recall, but we’re making recommendations and statements in a timely manner that come only when they’re needed — regardless of the complexity.

A lot of hard work is being put into making sure we make those recommendation statements as actionable and standalone as possible. We’re building a differentiator through the fact that we maintain a focus on delivering a clean narrative, a very clear-cut, “human readable text” set of recommendations. 

And that has the potential to save a lot of people a lot of time in terms of hunting, pecking, and worrying about what’s unseen and going on in their environments.

Gardner: Varun, how should enterprise IT organizations prepare now for what’s coming with AIOps and automation? What might they do to be in a better position to leverage and exploit these technologies even as they evolve?

Pick up new tools

Mehta: My advice to organizations is to buy into this. Automation is coming. Too often we see people stuck in the old ways of doing things. They could potentially save themselves a lot of time and effort by moving to more modern tools. I recommend that IT organizations make use of the new tools that are available.

HPE InfoSight is generally available for free when you buy an HPE product, sometimes with only the support contract. So make use of the resources. Look at the HPE InfoSight literature. It is one of those tools that can be fire-and-forget: you turn it on and then you don’t have to worry about it anymore.

It’s the best kind of tool because we will come back to you and tell you if there’s anything you need to be aware of. So that would be the primary advice I would have, which is to get familiar with these automation tools and analytics tools and start using them.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


How Unisys ClearPath mainframe apps now seamlessly transition to Azure Cloud without code changes

When applications are mission-critical, where they are hosted matters far less than keeping them operating smoothly.

As many organizations face a ticking time bomb to modernize mainframe applications, one solution is to find a dependable, repeatable way to transition to a public cloud without degrading these vulnerable and essential systems of record.

The next BriefingsDirect cloud adoption discussion explores the long-awaited means to solve the mainframe to cloud transition for essential but aging applications and data. We’re going to learn how Unisys and Microsoft can deliver ClearPath Forward assets to Microsoft Azure cloud without risky code changes.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To learn more about the latest on-ramps to secure and agile public cloud adoption, we welcome Chuck Lefebvre, Senior Director of Product Management for ClearPath Forward at Unisys, and Bob Ellsworth, Worldwide Director of Mainframe Transformation at Microsoft. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts: 

Gardner: Bob, what’s driving the demand nowadays for more organizations to run more of their legacy apps in the public cloud?

Ellsworth: We see that more and more customers are embracing digital transformation, and they are finding the cloud an integral part of their digital transformation journey. And when we think of digital transformation, at first it might begin with optimizing operations, which is a way of reducing costs by taking on-premises workloads and moving them to the cloud. 

But the journey just starts there. Customers now want to further empower employees to access the applications they need to be more efficient and effective, to engage with their customers in different ways, and to find ways of using cloud technologies to transform products, such as machine learning (ML), artificial intelligence (AI), and business intelligence (BI).

Gardner: And it’s not enough to just have some services or data in the cloud. It seems there’s a whole greater than the sum of the parts for organizations seeking digital transformation — to get more of their IT assets into a cloud or digitally agile environment.

Destination of choice: Cloud

Ellsworth: Yes, that’s absolutely correct. The beauty is that you don’t have to throw away what you have. You can take legacy workloads such as ClearPath workloads and move those into the cloud, but then continue the journey by embracing new digital capabilities, such as the advanced ML, AI, and BI services, so you can extend the usefulness and benefits of those legacy applications. 

Gardner: And, of course, this has been a cloud adoption journey for well over 10 years. Do you sense that something is different now? Are there more means available to get more assets into a cloud environment? Is this a tipping point?

Ellsworth: It is a tipping point. We’ve seen — especially around the mainframe, which is what I focus on — a huge increase in customer interest and selection of the cloud in the last 1.5 years as the preferred destination. And one of the reasons is that Azure has absolutely demonstrated its capability to run these mission- and business-critical workloads.

Gardner: Are these cloud transitions emerging differently across the globe? Is there a regional bias of some sort? Is the public sector lagging or leading? How about vertical industries? Where is this cropping up first and foremost?

Ellsworth: We’re seeing it occur in all industries; in particular, financial services. We find there are more mainframes in financial services, banking capital markets, and insurance than in any other industries.

So we see a propensity there where, again, the cloud has become a destination of choice because of its capability to run mission- and business-critical workloads. But in addition, we’re seeing this in state and local governments, and in the US Federal Government. The challenge in the government sector is the long cycle it takes to get funding for these projects. So, it’s not a lack of desire, it’s more the time it takes to move through the funding process.

Gardner: Chuck, I’m still surprised all these years into the cloud journey that there’s still such a significant portion of data and applications that are not in the cloud environment. What’s holding things back? What’s preventing enterprises from taking advantage of cloud benefits?

Lefebvre: A lot of it is inertia. And in some cases, incorrect assumptions about what would be involved in moving. That’s what’s so attractive about our Unisys ClearPath solution. We can help clients move their ClearPath workloads without change. We take that ClearPath software stack from MCP initially and move it and re-platform it on Microsoft Azure.

Learn How to Transition ClearPath Workloads to the Cloud

The application and data come across with no re-compilation, no refactoring of the data; it’s a straightforward transition. So, I think now that we have that in place, the transition is going to go a lot smoother and really enable that move to occur. 

I also second what Bob said earlier. We see a lot of interest from our financial partners. We have a large number of banking application partners running on our ClearPath MCP environment, and those partners are ready to go and help their clients move their workloads into the Azure public cloud as an option.

Pandemic puts focus on agility

Gardner: Has the disruption from the coronavirus and the COVID-19 disease been influencing this transition? Is it speeding it up? Slowing it down? Maybe some other form of impact?

Lefebvre: I haven’t seen it affecting things much either way, neither acceleration nor deceleration. In our client base, most clients were initially focused primarily on ensuring their people could work from home with the environments they have in place.

I think now that that’s settled in, and they’ve sorted out their virtual private networks (VPNs) and their always-on access processes, perhaps we’ll see some initiatives evolving. Initially, it was just businesses supporting their employees working from home.

My perspective is that that should be enabled equally as well, whether they are running their backend systems of record in a public cloud or on-premises. Either way would work for them.

Gardner: Bob, at Microsoft, are you seeing any impact from the pandemic in terms of how people are adopting cloud services?

Ellsworth: We’re actually seeing an increase in customer interest and adoption of cloud services because of COVID-19. We’re seeing that in particular in some of our solutions such as Teams for doing collaboration and webinars, and connecting with others remotely. We’re seeing a big increase there.

And with Office 365, we’ve seen a huge increase in deployments of customers using that technology. In addition, we’ve seen a big increase in Azure consumption as customers deal with application growth and the requirements of running these applications.

As far as new customers that are considering moving to the cloud, I had thought originally, back in March when this was starting to hit, that our conversations would slow down as people dealt with more immediate needs. In fact, it was only about a two-to-three-week slowdown. Now, we’re seeing a dramatic increase in interest in having conversations about the right solutions and methods to move workloads to the cloud.

So, the adoption is accelerating as customers look for ways to reduce cost, increase agility, and find new ways of running the workloads that they have today.

Gardner: Chuck, another area of impact in the market is around skills. There is often either a lack of programmers for some of these older languages or the skills needed to run your own data centers. Is there a skill factor that’s moving the transition to cloud?

Lefebvre: Oh, there certainly is. One of the attractive elements of a public cloud is the infrastructure layer of the IT environment is managed by that cloud provider. So as we see our clients showing interest in moving to the public cloud — first with things like, as Bob said, Office 365 and maybe file shares with SharePoint – they are now looking at doing that for mainframe applications. And when they do that, they no longer have to be worried about that talent to do the care and feeding of that infrastructure. As we move those clients in that direction, we’re going to take care of that ClearPath infrastructure, the day-to-day management of that environment, and that will be included as part of our offering. 

We expect most clients – rather than managing it themselves in the cloud – will defer to us, and that will free up their staff to do other things. They will still face staff retirements, but with less risk.

Gardner: Bob, another issue that’s been top-of-mind for people is security. One of the things we found is that security can be a tough problem when you are transitioning, when you change a development environment, go from development to production, or move from on-premises to cloud. 

How are we helping people remain secure during a cloud transition, and also perhaps benefit from a better security posture once they make the transition?

Time to transition safely, securely

Ellsworth: We always recommend making security part of the planning process. When you’re thinking of transforming from a datacenter solution to the cloud, part of that planning is for the security elements. We always look to collaborate with our partners, such as Unisys, to help define that security infrastructure and deployment.

What’s great about the Azure solutions is we’ve focused on hybrid as the way of deploying customers’ workloads. Most customers aren’t ready to move everything to the cloud all at the same time. For that reason, and with the fact that we focus on hybrid, we allow a customer to deploy portions of the workload to the cloud and the other portions in their data center. Then, over time, they can transition to the cloud.

But during that process, supporting high levels of security for user access, identity management, and even controls over access to the right applications and data is all done through that planning and through technologies such as Microsoft Active Directory and synchronization with Azure Active Directory. That planning is so important to ensure successful deployments and the high levels of security that customers require.

Gardner: Chuck, anything to offer on the security?

Lefebvre: Yes, we’ll be complementing everything Bob just described with our Unisys Stealth technology. It allows always-on access and isolation capabilities for deployment of any of our applications from Unisys, but in particular the ClearPath environment. And that can be completely deployed in Azure or, as Bob said, in a hybrid environment across an enterprise. So we are excited about that deployment of Stealth to complement the rigor that Microsoft applies to the security planning process.

Gardner: We’ve described what’s driving organizations to the cloud, the fact that it’s accelerating, and that there’s a tipping point in how adoption can be accomplished safely and reliably. We’ve also talked about what’s held people back and their challenges.

Let’s now look at what’s different about the latest solutions for the cloud transition journey. For Unisys, Chuck, how are your customers reconciling the mainframe past with the cloud future?

No change in digital footprint

Lefebvre: We are able to transition ClearPath applications with no change. It’s been roughly 10 years since we’ve been deploying these systems on Intel platforms, and in the case of MCP hosting it on a Microsoft Windows Server kernel. That’s been in place under our Unisys Libra brand for more than 10 years now.

In the last couple of years, we’ve also been allowing clients to deploy that software stack on virtualized servers of their choice: on Microsoft Hyper-V and the VMware virtualization platforms. So it’s a natural transition for us to move that and offer that in the Azure cloud. We can do that because of the layered approach in our technology. It has allowed us to present an approach to our clients that is very low-risk and very straightforward.

Learn How to Transition ClearPath Workloads
To the Cloud

The ClearPath software stack sits on a Windows kernel, which is also at the foundation level offered by the Azure hybrid infrastructure. The applications therefore don’t change a bit, literally. The digital footprint is the same. It’s just running in a different place, initially as platform-as-a-service (PaaS).

The cloud adoption transition is really a very low-risk, safe, and efficient journey to the public cloud for those existing solutions that our clients have on ClearPath.

Gardner: And you described this as an ongoing logical, cascading transition — standing on the shoulders of your accomplishments — and then continuing forward. How is that different from earlier migrations, or a lift-and-shift approach? Why is today’s transition significantly different from past migrations?

Lefebvre: Well, a migration often involves third parties doing a recompilation, a refactoring of the application: taking the COBOL code, recompiling it, refactoring it into Java, breaking it up, and moving the data out of our data formats and into a different data structure. All of those steps have risk and disruption associated with them. I’m sure there are third parties that have proven that approach can work. It just takes a long time and introduces risk.

For Unisys ClearPath clients who have invested years and years in those systems of record, that entire stack can now run in a public cloud using our approach — as I said before — with absolutely not a single bit of change to the application or the data.

Gardner: Bob, does that jibe with what you are seeing? Is the transition approach, as Chuck described it, an advantage over a migration process?

Ellsworth: Yes, Chuck described it very well. We see the very same thing. What I have found — and I’ve been working with Unisys clients since I joined Microsoft in 2001, early on going to the Unisys UNITE conference — is that Unisys clients are very committed and dedicated to their platform. They like the solutions they are using. They are used to those developer tools. They have built up business-critical, mission-critical applications and workloads.

For those customers that continue to be committed to the platform, absolutely, this kind of what I call “re-platforming” could easily be called a “transition.” You are taking what you currently have and simply moving it onto the cloud. It is absolutely the lowest risk, the least cost, and the quickest time-to-deployment approach.

For those customers, just like with every platform, when there is an interest to transform to a different platform, there are other methods available. But I would say the vast majority of committed Unisys customers want to stay on the platform, and this provides the fastest way to get to the cloud — with the least risk and the quickest benefits.

Gardner: Chuck, the process around cloud adoption has been going on for a while. For those of us advocating for cloud 10 or 12 years ago, we were hoping that it would get to the point where it would be a smooth transition. Tell me about the history and the benefits of how ClearPath Forward and Azure have come together specifically. How long have Microsoft and Unisys been at this? Why is now, as we mentioned earlier, a tipping point?

Lefebvre: We’ve been working on this for a little over a year. We did some initial work with two of our financial application partners, North America banking partners, and the initial testing was very positive. Then, as we were finishing our engineering work to do the validation, our internal Unisys IT organization, which operates about 25 production applications to run the business, went ahead in parallel with us and deployed half of those on MCP in Azure, using the very process that I described earlier.

Today, they are running all 25 production applications there. About half of them have been there for nine months and the other half for the last two months. They are supporting things like invoicing our customers, tracking our supply chain status, and a number of other critical functions.

We have taken that journey not just from an engineering point of view, but we’ve proven it to ourselves. We drank our own champagne, so to speak, and that’s given us a lot of confidence. It’s the right way to go, and we expect our clients will see those benefits as well.

Gardner: We haven’t talked about the economics too much. Are you finding, now that you’ve been doing this for a while, that there is a compelling economic story? A lot of people are fearful that a transition or migration would be very costly, that they won’t necessarily save anything by doing this, and so maybe are resistant. What’s the dollars-and-cents impact you have seen now that you’ve been transitioning ClearPath to Azure for a while?

Rapid returns

Lefebvre: Yes, there are tangible financial benefits that our IT organization has measured. For these small, isolated applications, they calculated about half a million dollars in savings across three years in their return on investment (ROI) analysis. And that return was nearly immediate because the transition for them was mostly about planning the outage period to ensure non-stop operation and make sure we always supported the business. There wasn’t actually a lot of labor, just more planning time. So that return was almost immediate.

Gardner: Bob, anything to offer on the economics of making a smooth transition to cloud?

Ellsworth: Yes, absolutely. I have found a couple of catalysts for customers as far as cost savings. If a customer is faced with a potential hardware upgrade — perhaps the server they are running on is near end-of-life — by moving the workload to the cloud and only paying for the consumption of what you use, it allows you to avoid the hardware upgrade costs. So you get some nice and rapid benefits in cost avoidance.

In addition, for workloads such as test and development environments, or user acceptance testing environments, in addition to production uses, the beauty of the cloud pricing is you only pay for what you are consuming.

So for those test and development systems, you don’t need to have hardware sitting in the corner waiting to be used during peak periods. You can spin up an environment in the cloud, do all of your testing, and then spin it back down. You get some very nice cost savings by not having dedicated hardware for those test and development environments.
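As a rough sketch of that pay-for-what-you-use pattern, the snippet below uses the Azure SDK for Python to start a test VM only for a test run and deallocate it afterward. The subscription, resource group, and VM names are placeholders and assumptions, not anything referenced in this discussion.

```python
# Illustrative sketch only: spin a test/dev VM up for a test cycle, then
# deallocate it so it stops accruing compute charges while idle.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "clearpath-test-rg"    # hypothetical resource group
VM_NAME = "mcp-test-vm"                 # hypothetical test VM

compute = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Start the environment only when a test cycle needs it.
compute.virtual_machines.begin_start(RESOURCE_GROUP, VM_NAME).result()

# ... run user-acceptance or regression tests here ...

# Deallocate (not just power off) so the VM no longer bills for compute.
compute.virtual_machines.begin_deallocate(RESOURCE_GROUP, VM_NAME).result()
```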

Gardner: Let’s dig into the technology. What’s under the hood that’s allowing this seamless cloud transition, Chuck?

Secret sauce

Lefebvre: Underneath the hood is the architecture that we have transformed to over the last 10 years where we are already running our ClearPath systems on Intel-based hardware on a Microsoft Windows Server kernel. That allows that environment to be used and re-platformed in the same manner.

To accomplish that, we originally developed some very clever technology that allows the unique instructions generated by the Unisys compilers to be emulated on an Intel-based, Windows-based server.

That’s really the fundamental underpinning that first allowed those clients to run on Intel servers instead of on proprietary Unisys-designed chips. Once that’s been completed, we’re able to be much more flexible on where it’s deployed. The rigor to which Microsoft has ensured that Windows is Windows — no matter if it’s running on a server you buy, whether it’s virtualized on Hyper-V, or virtualized in Azure — really allows us to achieve that seamless operation of running in any of those three different models and environments.
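The Unisys instruction set and emulator are proprietary, so the toy loop below is only a conceptual illustration of what instruction emulation means: a fetch-decode-execute cycle that maps a made-up instruction set onto ordinary operations on a commodity CPU.

```python
# Toy fetch-decode-execute loop. The instruction set here is invented purely
# to illustrate the idea of emulation; it is not Unisys's technology.
def emulate(program):
    registers = {"A": 0, "B": 0}
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "LOAD":          # LOAD reg, value
            registers[args[0]] = args[1]
        elif op == "ADD":         # ADD dest, src
            registers[args[0]] += registers[args[1]]
        elif op == "HALT":
            break
        pc += 1
    return registers

print(emulate([("LOAD", "A", 2), ("LOAD", "B", 3), ("ADD", "A", "B"), ("HALT",)]))
# -> {'A': 5, 'B': 3}
```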

Gardner: Where do you see the secret sauce, Bob? Is the capability to have Windows be pure, if you will, across the hybrid spectrum of deployment models?

Learn How to Transition ClearPath Workloads
To the Cloud

Ellsworth: Absolutely, the secret sauce as Chuck described was that transformation from proprietary instruction sets to standard Intel instruction sets for their systems, and then the beauty of running today on-premises on Hyper-V or VMware as a virtual machine (VM). 

And then the great thing is with the technologies available, it’s very, very easy to take VMs running in the data center and migrate them to infrastructure as a service (IaaS) VMs running in the cloud. So, seamless transformation and making that migration.

You’re taking everything that’s running in your production system, or test and development systems, and simply deploying it in VMs in the cloud instead of on-premises. So, it’s a great process. Definitely, the earlier investment that was made provides the capability to utilize the cloud.

Gardner: Do you have early adopters who have gone through this? How do they benefit? 

Private- and public-sector success stories

Lefebvre: As I indicated earlier, our own Unisys IT operation has many production applications running our business on MCP. Those have all been moved from our own data center on an MCP Libra system to now running in the Microsoft Azure cloud. Our Unisys IT organization has been a long-time partner and user of Microsoft Office 365 and SharePoint in the cloud. Everything has now moved. This, in fact, was one of the last remaining Unisys IT operations that was not in the public cloud. That was part of our driver, and they are achieving the benefits that we had hoped for.

We also have two external clients. A banking client is about to deploy a disaster recovery (DR) instance of their on-premises MCP banking application, which comes from our partner, Fiserv. Fiserv’s premier banking application is now available for running in Azure on our MCP systems. That client is choosing to host a DR instance in Azure to support their on-premises production workload. They like that because, as Bob said, they only have to pay for it when they fire it up, if they need to use that DR environment.

We have another large state government project that we’re just about to sign, where that client will be taking some of their ClearPath MCP workload, transitioning it to, and managing it in, the Azure public cloud.

Once that contract is signed and we get agreement from that organization, we will be using that as one of our early use case studies.

Gardner: The public sector, with all of their mainframe apps, seems like a no-brainer to me for these transitions to the cloud. Any examples from the public sector that illustrate that opportunity?

Ellsworth: We have a number of customers, specifically on the Unisys MCP platform, that are evaluating moving their workloads from their data centers into the cloud. We don’t have a production system as far as I know yet, but they’re in the late stages of making that decision. 

There are so many ways of utilizing the cloud, for things like DR at a very low cost, instead of having to have a separate data center or failover system. Customers can even leave their production on-premises in the short term, stand up their test and development in the cloud, and run the MCP system in that way.

And then, once they’re in the cloud, they gain the capability to set up a high-availability DR system or high-availability production system, either within the same Azure data center, or failover from one system to another if they have an outage, and all at a very low cost. So, there are great benefits.

One other benefit is elasticity. When I talk with customers, they say, “Well, gee, I have this end-of-month process and I need a larger mainframe then because of those occasional higher capacity requirements.” Well, the beauty of the cloud is the capability to grow and shrink those VMs when you need more capacity for such an end-of-month process, for example.

Again, you don’t have to pre-purchase the hardware. You really only pay for the consumption of the capacity when you need it. So, there are great advantages and that’s what we talk to customers about. They can get benefits from considering deploying new systems in the cloud. Those are some great examples of why we’re in late-stage conversations with several customers about deploying the solution.
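As a sketch of the grow-and-shrink pattern Bob describes, resizing a VM for a month-end peak and back down afterward might look like the following with the Azure SDK for Python; the resource group, VM name, and sizes are illustrative assumptions only.

```python
# Illustrative sketch: scale a VM up for month-end processing, then back down.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient
from azure.mgmt.compute.models import HardwareProfile, VirtualMachineUpdate

compute = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

def resize_vm(resource_group: str, vm_name: str, vm_size: str) -> None:
    """Change the VM size; Azure restarts the VM as part of the resize."""
    update = VirtualMachineUpdate(hardware_profile=HardwareProfile(vm_size=vm_size))
    compute.virtual_machines.begin_update(resource_group, vm_name, update).result()

# Before the month-end batch window: scale up (names and sizes are examples).
resize_vm("prod-rg", "mcp-batch-vm", "Standard_E16s_v3")

# After the batch completes: scale back down to the everyday size.
resize_vm("prod-rg", "mcp-batch-vm", "Standard_E4s_v3")
```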

Increased data analytics

Gardner: I suppose it’s a little bit early to address this, but are there higher-order benefits when these customers do make the cloud transition? You mentioned earlier AI, ML, and getting more of your data into an executable environment where you can take advantage of analytics across more and larger data sets.

Is there another shoe to drop when it comes to the ROI? Will they be able to do things with their data that just couldn’t have been done before, once you make a transition to cloud?

Ellsworth: Yes, that’s absolutely correct. When you move the systems up to the cloud, you’re now closer to all the new workloads and the advanced cloud services you can utilize to, for example, analyze all that data. It’s really about turning more data into intelligent action.

Now, if you think back to the 1980s and 1990s, or even the 2000s, when you were building custom applications, you had to pretty much code everything yourself. Today, the way you build an application is to consume services. There’s no reason for a customer to build an ML application from scratch. Instead, you consume ML services from the cloud. So, once you’re in the cloud, it opens up a world of possibilities for continuing that digital business transformation journey.

Lefebvre: And I can confirm that that’s a key element of our product proposition from a ClearPath point of view as well. We have some existing technology, a particular component called Data Exchange, that does an outstanding job of change data capture. We can take the data coming into that backend system of record and, using Kafka, for example, feed that data directly into an AI or ML application that’s already in place.

One of the key areas for future investment — now that we have done the re-platforming to PaaS and IaaS – is extending our ePortal technology and other enabling software to ensure that these ClearPath applications really fit in well and leverage that cloud architecture. That’s the direction we see a lot of benefit in as we bring these applications into the public Azure cloud.
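Unisys has not published the wiring here, but the pattern Chuck describes (change data capture events streamed through Kafka into an AI or ML consumer) can be sketched with a minimal Python consumer. The topic name, record fields, and score() function are hypothetical and not part of the Data Exchange product.

```python
# Conceptual sketch only: read change-data-capture events from a Kafka topic
# and hand each record to an ML scoring function. Topic, fields, and score()
# are hypothetical; this is not the Unisys Data Exchange API.
import json
from kafka import KafkaConsumer  # pip install kafka-python

def score(record: dict) -> float:
    """Placeholder for whatever AI/ML model consumes the system-of-record data."""
    return 0.0

consumer = KafkaConsumer(
    "mcp-cdc-events",                       # hypothetical CDC topic
    bootstrap_servers=["kafka:9092"],
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    change = message.value                  # one captured change record
    print(f"{change.get('table')}: score={score(change):.3f}")
```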

The cloud journey begins

Gardner: Chuck, if you are a ClearPath Forward client, you have these apps and data, what steps should you be taking now in order to put yourself in an advantageous position to make the cloud transition? Are there pre-steps before the journey? Or how should you be thinking in order to take advantage of these newer technologies?

Lefebvre: First of all, they should engage with the Unisys and Microsoft contacts who work with their organization to begin consultation on that journey. Data backup, data replication, and DR: those elements around data and your policy with respect to data are the things that are likely going to change the most as you move to a different platform, whether that’s from a Libra system to an on-premises virtualized infrastructure or to Azure.

What you’ve done for replication with a certain disk subsystem probably won’t be there any longer. It’ll be done in a different way, and likely it’ll be done in a better way. The way you do your backups will be done differently.

Now, we have partnered with Dynamic Solutions International (DSI) and they offer a virtualized virtual tape solution so that you can still use your backup scripts on MCP to do backups in exactly the same way in Azure. But you may choose to alter the way you do backups.

So, your strategy for data and how you handle it, which is so very important to these enterprise-class mainframe applications, is probably the place where you’ll need to do the most thinking and planning.

Gardner: For those familiar with BriefingsDirect, we like to end our discussions with a forward-looking vision, an idea of what’s coming next. So when it comes to migrating, transitioning, getting more into the cloud environments — be that hybrid or pure public cloud — what’s going to come next in terms of helping people make the transition but also giving them the best payoff when they get there?

The cloud journey continues

Ellsworth: It’s a great question, because you should think of the world of opportunity, of possibility. I look back at my 47 years in the industry and it’s been incredible to see the transformations that have occurred, the technology advancements that have occurred, and they are coming fast and furious. There’s nothing slowing it down.

And so, when we see the cloud today, a lot of customers are considering a cloud-first strategy for building any new solutions. You go into the cloud first and have to justify staying on-premises. Then customers move to a cloud-only strategy, where they’re able to not only deploy new solutions but also migrate their existing workloads, such as ClearPath, up to the cloud. They get to a point where they can shut down most of what they run in their data centers, get out of the business of operating IT infrastructure, and have operational support provided for them as a service.

Learn How to Transition ClearPath Workloads
To the Cloud

Next, they move into transforming through cultural change in their own staff. Today the people that are managing, maintaining, and running new systems will have an opportunity to learn new skills and new ways of doing things, such as cloud technology. What I see over the next two to three years is a continuation of that journey, the transformation not just of the solutions the customers use, but also the culture of the people that operate and run those solutions.

Gardner: Chuck, for your installed base across the world, why should they be optimistic about the next two or three years? What’s your vision for how their situation is only going to improve?

Lefebvre: Everything that we’ve talked about today is focused on our ClearPath MCP market and the technology that those clients use. As we go forward into 2021, we’ll be providing similar capabilities for our ClearPath OS 2200 client base, and we’ll be growing the offering.

Today, we’re starting with the low-end of the customer base: development, test, DR, and the smaller images. But as the Microsoft Azure cloud matures, as it scales up to handle our scaling needs for our larger clients, we’ll see that maturing. We’ll be offering the full range of our products in the Azure cloud, right on up to our largest systems.

That builds confidence across the board in our client base, in Microsoft, and in Unisys. We want to crawl, then walk, and then run. That journey, we believe, is the safest way to go. And as I mentioned earlier, this initial workload transformation is occurring through a re-platforming approach. The really exciting work is bringing cloud-native capabilities to do better integration of those systems of record with the better systems of engagement that cloud-native technology is offering.

And we have some really interesting pieces under development now that will make that additional transformation straightforward. Our clients will be able to leverage that – and continue to extend that back-end investment in those systems. So we’re really excited about the future.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsors: Unisys and Microsoft.

A discussion on how many organizations face a reckoning to move mainframe applications to a cloud model without degrading the venerable and essential systems of record. Copyright Interarbor Solutions, LLC, 2005-2020. All rights reserved.


Digital transformation enables an unyielding and renewable value differentiation capability

The next edition of the BriefingsDirect Voice of Innovation podcast series explores architecting businesses for managing ongoing disruption.

As enterprises move past crisis mode in response to the COVID-19 pandemic, they require a systemic capability to better manage shifting market trends.

Stay with us to examine how Hewlett Packard Enterprise (HPE) Pointnext Services advises organizations on using digital transformation to take advantage of new and emerging opportunities.

Listen to the podcast. See the video. Find it on iTunes. Read a full transcript or download a copy. 

To share the Pointnext view on transforming businesses to effectively innovate in the new era of pervasive digital business, BriefingsDirect welcomes Craig Partridge, Senior Director Worldwide, Digital Advisory and Transformation Practice Lead, at HPE Pointnext Services. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Craig, how has the response to the pandemic accelerated the need for comprehensive digital transformation?

Partridge: We speak to a lot of customers around the world. And the one thing that we are picking up very commonly is a little bit counter-intuitive.

At the beginning of the pandemic — in fact, at the beginning of any major disruption — there is a sense that companies will put the brakes on and slow everything down. And that happened as we went through this initial period. Preserving cash and liquidity kicked in and a minimum viable operating model emerged. People were reluctant to invest.

But as they now begin to see the shifting landscape in industries, we are beginning to see a recognition that those pivoting out of these disruptive moments the quickest — with sustained, long-term viability built behind how they accelerate — those organizations are the ones driving new experiences and new insights. They are pushing hard on the digital agenda. In other words, digitally active companies seem to be the ones pivoting quicker out of these disruptions — and coming out stronger as well.

So although there was an initial pause as people pivoted to the new normal, we are seeing now acceleration of initiatives or projects, underpinned by technology, that are fundamentally about reshaping the customer experience. If you can do that through digital engagement models, you can continue to drive revenue and customer loyalty because you are executing those valued transactions through digital platforms.

Gardner: Has the pandemic and response made digital transformation more attractive? If you have to do more business digitally, if your consumers and your supply chain have become more digital, is this a larger opportunity?

Partridge: Yes, it’s not only more attractive – it’s more essential. That’s what we are learning.

A good example here in the UK, where I am based, is that big retailers have traditionally been deeply invested in the bricks-and-mortar experience of walking into a retail store or supermarket, those kinds of big, physical spaces. They figured out during this period of disruption that the only way to continue to drive revenue and take orders was on digital platforms. Well, guess what? Those digital platforms were only scaled and sized for a certain kind of demand, and that demand was based on a pre-pandemic normal.


Now, they have to double or treble the capacity of their transactions across those digital platforms. They are having to increase massively their capability to not only buy online, but to get deliveries out to those customers as well.

So this transformation is not just an attractive thing to do. For many organizations pivoting hard to digital engagement and digital revenue streams is their new normal. That’s what they have to focus on — and not just to survive but for beyond that. It’s the direction to their new normal as well.

Gardner: It certainly seems that the behavior patterns of consumers, as well as employees, have changed for the longer term when it comes to things like working at home, using virtual collaboration, bypassing movie theaters for online releases, virtual museums, and so forth.

For those organizations that now have to cater to those online issues and factor in the support of their employees online, it seems to me that this shift in user behavior has accelerated what was already under way. Do companies therefore need to pick up the pace of what they are doing for their own internal digital transformation, recognizing that the behaviors in the market have shifted so dramatically?

Safety first 

Partridge: Yes, in the past digital transformation focused on the customer experience, the digital engagement channel, and building out that experience. You can relate that in large part to the shift toward e-commerce. But increasingly people are aware of the need to integrate information about the physical space as well. And if this pandemic taught us anything, it’s that they need to not only create great experiences – they must create safe, great experiences.

What does that mean? I need to understand my physical space so I can augment my service offerings in a way that’s safe. We are looking at scenarios where using video recognition and artificial intelligence (AI) will begin to work out whether that space is being used safely. Are there measurements we can put in place to protect people better? Are people keeping to certain social distancing rules?

All of that is triggering the next wave of customer experience, which isn’t just the online digital platform and digital interactions, but — as we get back out into the world and as we start to occupy those spaces again — how do I use the insight about the physical space to augment that experience and make sure that we can emerge safer, better, and enjoy those digital experiences in a way that’s also physically safe.

Beyond just the digital transactions side, now it’s much more about addressing the movement that was already long under way — the digitization of the physical world and how that plays into making these experiences more beneficial.

Gardner: So if the move to digitally transform your organization is an imperative, if those who did it earlier have an advantage, if those who haven’t done it want to do it more rapidly — what holds organizations back? What is it about legacy IT architectures that are perhaps a handicap?

Pivoting from the cloud 

Partridge: It’s a great question because when I talk to customers about moving into the digital era, that triggers the question, “Well, what was there before this digital era?” And we might argue it was the cloud era that preceded it.

Now, don’t get me wrong. These aren’t sequential. I’m not saying that the cloud era is over and the digital era has replaced it. As you know, these are waves, and they rise on top of each other. But the organizations that are able to go fast and accelerate on the digital agenda are often the same organizations that did the work of the cloud era.

The biggest constraint we see as organizations try to stress-test their digital age adoption is to see if they actually have agility in the back end. Are the systems set up to be able to scale on-demand as they start to pivot toward digital channels to engage their customers? Does a recalibration of the supply chain mean applications and data are placed in the right part of on- or off-premises cloud architecture supply chains?


If you haven’t gone through a modernization agenda, if you haven’t tackled that core innovation issue, if you haven’t embraced cloud architectures, cloud-scale, and software-defined — and, increasingly, by the way, the shift to things like containerization, microservices, and decomposing big monolithic applications into manageable chunks that are application programming interface (API)-connected — if you haven’t gone through that cloud-enabled exploration prior to the digital era, well, it looks like you still have some work to do before you can get the gains that some of those other modern organizations are now able to express.
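To make the idea of decomposing a monolith into API-connected chunks concrete, here is a minimal, hypothetical example of one such service exposed over HTTP; the framework choice, endpoint, and data are illustrative only, not HPE guidance.

```python
# Minimal illustration of one decomposed service: a single business capability
# exposed behind an HTTP API instead of living inside a monolith.
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical in-memory data; in practice this service would own its own store.
ORDERS = {"1001": {"status": "shipped"}, "1002": {"status": "processing"}}

@app.route("/orders/<order_id>/status")
def order_status(order_id: str):
    order = ORDERS.get(order_id)
    if order is None:
        return jsonify(error="not found"), 404
    return jsonify(order_id=order_id, status=order["status"])

if __name__ == "__main__":
    app.run(port=8080)
```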

There’s another constraint, which is really key. For most of the customers we speak to, it tends to be in and around the operating model. In a lot of conversations that I have with customers, they over-invested in technology. They are on every cloud platform available. They are using every kind of digital technology to gain a level of competitive advantage.

Yet, at the heart of any organization are the people. It’s the culture of the people and the innovation of your people that really makes the difference. So, not least of all supply chain agility, right at the heart of this conversation is the fundamental operating model — not just of IT, but the operating model of the entire organization.

So have they unpicked their value chain? Have they looked at the key activities? Have they thought about what happens when they implement new technology, and how that might replace or augment activities? And what does that mean for the staff? Can you bring them with you, and have you empowered them? Have you re-skilled them along the way? Have you driven those cultural change programs to force that digital-first mindset, which is really the key to success in all of this?

Gardner: So many interdependencies, so much complexity, frankly, when we’re thinking about transacting across the external edge to cloud, to consumer, and to data center. And we’re talking about business processes that need to extend into new supply chains or new markets.

Given that complexity, tell us how to progress beyond understanding how difficult this all can be and to adopt proven ways that actually work.

Universal model has the edge 

Partridge: For everything that we’ve talked about, we have figured out that there is a universal model that organizations can use to methodologically go off into this space.

We found out that organizations are very quickly pivoting to exploring their digital edge. I think the digital agenda is an edge-in conversation. Again, I think that marks it out from the preceding cloud era, which was much more about core-out. That was about getting scale, efficiency, and cost optimization out of service delivery models already in place. But that was a very core-out conversation. When you think digital, you have to begin to think about the use case of where value is created or exchanged. And that’s an edge-in conversation.

And we managed to find that there are two journeys behind that discussion. The first one is about deciding to whom you are looking to deliver that digital experience. So when you think about digital engagement, really caring passionately about who the beneficiary persona is behind that experience. You need to describe that person in terms of what’s their day-in-the-life. What pains do they face today? What gains could you develop that could deliver better outcomes for them? How can you walk in their shoes, and how do you describe that?

We found that is a key journey, typically led by a chief digital officer-type character who is responsible for driving new digital engagement with customers. If the persona is external, if it’s a revenue-generating persona, we might think of revenue as the essential key performance indicator (KPI). But you can apply similar techniques to drive internal personas’ productivity, so productivity becomes the KPI.

That journey is inspired by initiatives that are trying to use digital to connect to people in new, innovative, and differentiated ways. And you’ll find different stakeholders behind that journey.

And we found another journey, which is reshaping the edge. That’s much more about using technology to digitize the physical world. So it’s less about the experience and more about business efficiency and effectiveness at the edge — using the insights from instrumenting and digitizing the physical world to give you a sense of how that space is being used. How is my manufacturing floor performing? In the manufacturing space the KPI is overall equipment effectiveness (OEE), and it becomes key. Behind this journey you’ll see big Industry 4.0-type and Internet of Things (IoT)-type initiatives under way.
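For reference, OEE is conventionally calculated as availability times performance times quality; a small worked example with made-up shop-floor numbers follows.

```python
# Standard OEE calculation: availability x performance x quality.
# All input numbers are made up for illustration.
planned_time_min = 480          # planned production time for the shift
downtime_min = 45               # unplanned stops
ideal_cycle_time_min = 1.0      # ideal minutes per unit produced
total_units = 400
good_units = 380

run_time_min = planned_time_min - downtime_min
availability = run_time_min / planned_time_min
performance = (ideal_cycle_time_min * total_units) / run_time_min
quality = good_units / total_units

oee = availability * performance * quality
print(f"OEE = {oee:.1%}")       # about 79% with these example numbers
```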

If organizations are able to stitch these two journeys together — rather than treat them as siloed sandpits for innovation – and if they can connect them together, they tend to get compound benefits. 

You asked about where the constraint comes in. As we said, it is about getting agility into the supply chain. And again, we’ve actually found that there are two connected journeys, but with very different stakeholders behind them, which drive that agenda. 

We have a journey, too, that describes a core renovation agenda that will occupy 70 to 80 percent of every IT budget every year. It’s the constant need to challenge the price performance of legacy environments and constantly optimize and move the workloads and data into the right part of the supply chain for strategic advantage. 

That is coupled with yet another journey, the cloud-enabled one, and that’s very much developer-led more than it is led by IT. IT typically holds the legacy footprint, the technical debt footprint, but the developer is out there looking to exploit cloud-native architectures to write the next wave of applications and experiences. And they are just as impactful when it comes to equipping the organization with the cloud scale that’s necessary to mine those opportunities on the edge.

So, there is a balance in this equation, to your point. There is innovation at the edge, very much line of business-driven, very much about business efficiency and effectiveness, or revenue and productivity, the real tangible dollar value outcomes. And on the other side, it’s more about agility and the supply chain. It’s getting that balance right so that I have my agility and that allows me to go and explore the world digitally at the edge. 

So they sort of overlap. And the implication there is that there are three core enablers and they are true no matter which of the big four agenda items customers are trying to drive through their initiative programs. 

In digital, data is everything 

Two of those enablers very much relate to data. Again, Dana, I know in the digital era data is everything. It is the glue that holds this new digital engagement model together. In there we found two key enablers that constantly come up, no matter which agenda you are driving. 

The first one is surely you need intelligence from that data; data for its own sake is of no use, it’s about getting intelligence from that dataset. And that’s not just to make better decisions, but actually to innovate, to create differentiated value propositions in your market. That’s really the key agenda behind that intelligence enabler. 

And the second thing, because we are dealing with data, is a huge impact and emphasis on being trusted with that data. And that doesn’t just mean being compliant to regulatory standards or having the right kind of resiliency and cybersecurity approach, it means going beyond that. 


In this digitally enabled world, we want to trust brands with our data because often that data is now extremely personal. So beyond just General Data Protection Regulation (GDPR) compliance, trust here means, "Am I being ethical? Am I being transparent about how I use that data?" We all saw the Cambridge Analytica-type of impact and what happens when you are not transparent and you are not ethical about how you use data.

Now, there is one thing we haven’t touched on, and I will just throw it out as a bit of context, Dana. There is a consideration, a kind of global consideration, behind all of this agenda, and that’s the shift toward everything-as-a-service (EaaS).

A couple of key attributes sit behind that consideration. The most obvious one is financial flexibility. For sure, as you reassemble your supply chain — as you continue to press on that cloud-enabled side of the map — paying only for what you consume, and doing that in a strategic way, helps you get the right mix in that supply chain.

But I think the more important thing to understand is that our customers are being equally innovative at the edge. So they are using that everything-as-a-service momentum to change their industry, their market, and the relationship they have with their customers. It helps especially as they pivot into a digital customer experience. Can that experience be constructed around a different business model?

We found that that’s a really useful way of deconstructing and simplifying what is actually quite a complex landscape. And if you can abstract — if you can use a model to abstract away the chaos and create some simplicity — that’s a really powerful thing. We all know that good models that abstract away complexity and create simplicity are hugely valuable in helping organizations reconstruct themselves. 

Gardner: Clearly, before the pandemic, some organizations dragged their feet on digital transformation as you’ve described it. They had a bit of inertia. But the pandemic has spurred a lot of organizations, both public and private, on. 

Hopefully, in a matter of some months or even a few years, the pandemic will be in the rearview mirror. But we will be left with the legacy of it, which is an emerging business paradigm of being flexible, agile, and more productive. 

Are we going to get a new mode of business agility where the payoff is commensurate with all the work?

Agility augurs well post-pandemic 

Partridge: That’s the $6 million question, Dana. I would love to crystal ball gaze with you on that one because agility is key to any organization. We all know that there are constraints in traditional customer experiences — making widgets, selling products, transactional relationships, relationships that don’t lend themselves to having digital value added to them. I wonder how long that model goes on for as we are experiencing this shift toward digital value. And that means not just selling the widget or the product, but augmenting that with digital capabilities, with digital insights, and with new ways of adding value to the customer’s experience beyond just the capital asset. 

I think that was being fast-tracked before this global pandemic. And it’s the organizations now that are in the midst of doubling down on that — getting the digital experience right, ahead of product and price — that have the key differentiator when they go to market.

And, for me, that customer experience increasingly now is the digital customer experience. I think that move was well under way before we hit this big crisis. And I can see customers now doubling down, so that if they didn’t get it right pre-pandemic, they are getting it right as they accelerate out of the pandemic. They recognize that that platform is the only way forward. 

You will hear a lot of commentators talk about the digital agenda as being driven by what they call the platform-driven economy. Can you create a platform in which your customers are willing to participate, maybe even your ecosystem of partners who are willing to participate and create that kind of shared experience and shared value? Again, that’s something that HPE is very much invested in. As we pivot our business model, to EaaS outcomes, we are having to double down on our customer experience and increasingly that means digitizing that experience through that digital platform agenda. 

Gardner: I would like to explore some examples of how this is manifesting itself. How are organizations adjusting to the new normal and leveraging that to a higher level of business capability?

Also, why is a third-party organization like HPE Pointnext Services working within an ecosystem model with many years of experience behind it? How are you specifically gearing up to help organizations manage the process we have been describing? 

HPE digital partnerships 

Partridge: This whole revolution requires different engagement models. The relationship HPE shares with its customers is becoming a technologically enabled partnership. Whenever you partner with a customer to help advance their business outcomes, you need a different way to engage with them.

We can continue to have our product-led engagement with customers, because many of them enjoy that relationship. But as we continue to move up the value stack we are going to need to swing to more of an advisory-led engagement model, Dana, where we are as co-invested in the customers’ outcomes as they are. 

We understand what they are trying to drive from a business perspective. We understand how technology is opening up and enabling those kinds of outcomes to be materialized, for the value to be realized. 

A year ago, we set out to reshape the way we engage with customers around this conversation. Driving that kind of digital partnership means sitting down with a customer to co-innovate, going through workshops where we as technologists bring our expertise to the customer as the expert in their industry. Those two minds can meld to create more than one plus one equals two. By using design thinking and co-design techniques, we can analyze the customer’s business problem and shape solutions that deliver really, really big outcomes for our customers.

For 15 years I have been a consultant inside of HP and HPE and we have always had that strong consulting engine. But now with HPE Pointnext Services we are gearing it around making sure that we are able to address the customers’ business outcomes, enabled through technology. 


And the timing is right-on. Never has there been a time when technology has been so welded into a customer’s underlying value proposition. I have been 25 years in IT. In the past, we could have gotten away with being a good partner to IT inside of our customer accounts. We could have gotten away with constantly challenging that price and performance ratio and renovating that agenda so that it delivers better productivity to the organization. 

But as technology makes its way into the underlying business model — as it becomes the differentiating business model — it’s no longer just a productivity question. Now it’s about how partners work to unlock new digital revenue streams. Well, that needs a new engagement model. 

And so that’s the work that we have been doing in my team, the Digital Advisory and Transformation Practice, to engage customers in that value-based discussion. Technology has made its way into that value proposition. There has never been a more open-door policy from our partners and customers who want to engage in that dialogue. They genuinely want to get the benefit of a large tech company applying itself to the customers’ underlying business challenges. That’s the partnership that they want, and there is no excuse for us not to walk through that door very confidently. 

Gardner: Craig, specifically at HPE Pointnext Services, what’s the secret sauce that allows you to take on this large undertaking of digital transformation? 

Mapping businesses’ DX ambition

Partridge: The development of this model has led to a series of unique pieces of intellectual property (IP) we use to help advance the customer ambition. I don’t think there has ever been a moment in time quite like this with the digital conversation. 

Customers recognize that technology is the fundamental weapon to transform and differentiate themselves in the market. They are reaching out to technology partners to say, “Come and participate with me using technology to fundamentally change my value proposition.” So we are being invited in now as a tech company to help organizations move that value proposition forward in a way that we never were before. 

In the past, HPE’s pedigree has been constantly challenging the optimization of assets and the price-performance, making sure that platform services are delivered in a very efficient and effective way. But now customers are looking to HPE to uniquely go underneath the covers of their business model — not just their operating model, but their business model. 

Now, we are not writing the board-level strategy for digital ambition. There is a great sweet spot for us, rather, where customers have a digital North Star, some digital ambition, but are struggling to realize it. They are struggling to land those initiatives that are, by definition, technology-enabled. That’s where tech companies like HPE are at the forefront of driving digital ambition.

So we have this unique IP, this model we developed inside of HPE Pointnext Services, and the methodology of how to apply it. We can use it as a visualization tool and as a storytelling tool to better communicate, and to help you further communicate, your business’s digital ambitions.

We can use it to map out the initiatives and look at where those overlap and duplications occur inside organizations. We are truly looking at this from edge to cloud and as-a-service — that holistic side of the map helps us unpick the risks, dependencies, and prerequisites. We can use the map to inspire new ideas and advance a customer’s new thinking about how technology might be enabled. 

We can also deploy the map with our building blocks behind each of the journeys, knowing what digital capabilities need to be brought on-stream and in what sequence. Then we can de-risk a customer’s path to value. That’s a great moment in time for us and it’s uniquely ours. Certainly, the model is uniquely ours and the way we apply it is uniquely ours.

But it’s also a timing thing, Dana. There has never been a better time in the industry where customers are seeking advice from a technology giant like HPE. So it’s a mixture of having the right IP, having the right opportunity, and the right moment as well. 

Gardner: So how should such organizations approach this? We talked about the methodology but initiating something like this map and digital ambition narrative can be daunting. How do we start the process?

How to pose the right questions 

Partridge: It begins by understanding a description of this complex landscape, as we have explored in this discussion. Begin to visualize your own digital ambition. See if you can take two or three top initiatives that you are driving and explore them across the map. So what’s the overriding KPI? Where does it start? 

Then ask yourself the questions in the middle of the map. What are the key enablers? Am I addressing a shared intelligence backbone? How am I handling trust, security, and resiliency? What am I doing to look at the operating model and the people? How is the culture central to all of this? How am I going to provide it as-a-service? Am I going to consume component parts of the service? How does that stress over into the supply chain? How is it addressing the experience?

HPE Pointnext Services’ map is a beautiful tool to help any customer today start to plot their own initiatives and ask, “Am I thinking of this initiative in a fully 360° way?”

If you are stuck, come and ask HPE. A lot of my advisors around the world map their customers’ initiatives onto this framework. And we start to ask questions. We start to unveil some of the risks, dependencies, and prerequisites. As you put in more and more initiatives and programs, you can begin to see duplication in the middle of the model play out. That enables customers to de-risk and get to value more quickly, because they can deduplicate what they can now see as a common shared digital backbone. Often customers are running those in isolation, but seeing it through this lens helps them deduplicate that effort. That’s a quicker path to value.


We do a lot around ideation and design thinking. If customers have yet to figure out a digital initiative, what’s their North Star, where should they start? We engage customers around one- to two-day ideation workshops. Those are very structured ways of having creative, outside-of-the-box-type thinking and putting in enough of a value proposition behind the idea to excite people.

We had a customer in Italy come to us and say, “Well, we think we need to do something with AI, but we are not quite sure where the value is.”

Then we have a way of engaging to help you accelerate, and that’s really about identifying what the critical digital capabilities are. Think of it at the functional level first. What digital functions do I need to be able to achieve some level of outcome? And then get that into some kind of backlog so you know how to sequence it. And again, we work with customers to help do that as well. 

There are lots of ways to slice this, but, ultimately, dive in, get an initiative on the map, and begin to look at the risks and dependencies as you map it through the framework. Are you asking the right questions? Is there a connection to another part of the map that you haven’t examined yet that you should be examining? Is there a part of the initiative that you have missed? That is the immediate get-go start point. 

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. See the video. Sponsor: Hewlett Packard Enterprise.

How an SAP ecosystem partnership reduces risk and increases cost-efficiency around tax management

The next BriefingsDirect data-driven tax optimization discussion focuses on reducing risk and increasing cost efficiency as businesses grapple with complex and often global spend management challenges.

We’ll now explore how end-to-end visibility of almost any business tax, compliance, and audit functions allows for rapid adherence to changing requirements — thanks to powerful new tools. And we’ll learn directly from businesses how they are pursuing and benefiting from advances in intelligent spend and procurement management.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To uncover how such solutions work best, we welcome Sean Thompson, Executive Vice-President of Network and Ecosystem at SAP Procurement Solutions; Chris Carlstead, Head of Strategic Accounts and Partnerships and Alliances at Thomson Reuters; and Poornima Sadanandan, P2P IT Business Systems Lead at Stanley Black and Decker. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts: 

Gardner: Sean, what’s driving the need for end-to-end visibility when it comes to the nitty-gritty details around managing taxes? How can businesses reduce risk and increase cost efficiency — particularly in difficult, unprecedented times like these — when it comes to taxation?

Thompson: It’s a near-and-dear topic for me because I started off my career in the early ‘90s as a tax auditor, so I was doing tax accounting before I went into installing SAP ERP systems. And now here I am at SAP at the confluence of accounting systems and tax.

We used to talk about managing risk as making sure you're compliant with the various regulatory agencies in terms of tax. But now, in the age of COVID-19, compliance is also about helping governments. Governments more than ever need companies to be compliant. They need solutions that drive compliance, because taxes these days are needed not only to fund governments in the future, but also to support the dynamic changes under way in reacting to COVID-19 and in providing economic incentives.

There's also a dynamic nature to changes in tax laws. The cost-efficiency now being driven by data-driven systems helps ensure compliance across accounting systems with all of the tax authorities. It's a fascinating time because digitization brings business processes together, thanks to the systems and data that feed the continuing efficiency.

It’s a great time to be talking about tax, not only from a compliance perspective but also from a cost perspective. Now that we are in the cloud era — driving data and business process efficiency through software and cloud solutions — we’re able to drive efficiencies unlike ever before because of artificial intelligence (AI) and the advancements we’ve made in open systems and the cloud.

Gardner: Chris, tax requirements have always been with us, but what’s added stress to the equation nowadays?

Carlstead: Sean hit on a really important note with respect to balance. Oftentimes people think of taxation as a burden. It's often overlooked that, on the other side, governments use that money to fund programs, conduct social welfare, and help economies run. You need both sides to operate effectively. And in moments like COVID-19, Dana used the word "unprecedented"; I might say that's an understatement.

I don't know if, in the history of our time, we have ever had an event that affected the world so quickly, so instantly, and so uniformly as we have in the past few months. When you have impacts like that, they generally drive government reaction, whether it was 9/11, the dot-com bubble, or the 2008 financial crisis. And, of course, there are other instances all over the globe when governments need to react.

But, again, this latest crisis is unprecedented because almost every government in the world is acting at the same time and has moved to change the way we interact in our economies to help support the economy itself. And so, while the pace of change has been increasing, we have never seen a moment like the last few months.

Think of all the folks working at home, and the empathy we have for them dealing with this crisis. And while the cause was uniform, the impact from country to country — or region to region — is not equal. To that end, anything we can do to help make things easier in the transition, we’re looking to do.

While taxes may not be the most important thing in people's lives, it's one less thing they have to worry about when they are able to take advantage of a system such as the one SAP Ariba and Thomson Reuters have to help them deal with that part of their businesses.

Gardner: Poornima, what was driving the need for Stanley Black and Decker to gain better visibility into their tax issues even before the pandemic?

Visibility improves taxation compliance

Sadanandan: At Stanley Black and Decker, SAP Ariba procurement applications are primarily used for all indirect purchases. The user base spans across buyers who do procurement activities based on organizational requirements and on up to the C-level executives who look into the applications to validate and approve transactions based on specific thresholds.

So providing them with accurate data is of utmost importance for us. We were already facing a lot of challenges with our legacy applications, including numerous purchasing categories, federated process-controlled versions of the application integrated with multiple SAP instances, and a combination of solutions involving tax rate files, invoice parking, and manual processing of invoices.

There were a lot of points where manual touch was necessary before an invoice could even get posted to the backend ERP application, and those situations led to issues with payback on returns, tax penalties, supplier frustration, and so on.

So we needed to have end-to-end visibility with accuracy and precision to the granular accounting and tax details for these indirect procurement transactions without causing any delay due to the manual involvement in this whole procurement transaction process.

Gardner: Poornima, when you do this right, when you get that visibility and you can be detail-oriented, what does that get for you? How does that improve your situation?

Sadanandan: There are many benefits out of these automated transactions and due to the visibility of data, but I’d like to highlight a few.

Basically, it helps us ensure we can validate the tax that suppliers charge, that suppliers are adhering to their local tax jurisdiction rules, and that any tax exemptions are, in fact, applicable under Stanley Black and Decker's tax policies.

Secondly, there has been a big reduction in manual processes. That happened because of automation and the web services that are part of the integration framework we adopted. So tax calculation and determination became automated, and the backend ERP application, which is SAP at our company, receives accurate posting information. That then helps the accounting team capture accounting details in real time. They gain good visibility on financial reconciliations as well.

We also achieved better exception handling. Basically, any exceptions that happen due to tax mismatches are now handled promptly, based on thresholds set up in the system. Exception reports are also available to provide better visibility, not just to the end users but also to the technical team validating any issues, which helps them in the whole analysis process.

Finally, the tax calls now happen twice in the application, whereas in our legacy application that only happened at the invoicing stage. Now it also happens during the requisition phase of the procurement transaction process, so it provides more visibility to the requisitioners. They don't have to wait until the invoice phase to see what's being passed from the source system. Essentially, requesters as well as the accounts payable team get good visibility into the accuracy and precision of the data.
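To make the exception-handling idea concrete, here is a minimal sketch, in Python, of a threshold-based tax mismatch check of the kind Sadanandan describes. The data classes, field names, and tolerance value are hypothetical illustrations, not the actual SAP Ariba or ONESOURCE interfaces.

```python
from dataclasses import dataclass

@dataclass
class InvoiceLine:
    line_id: str
    taxable_amount: float
    supplier_charged_tax: float   # tax as billed by the supplier
    calculated_tax: float         # tax returned by the external tax engine

def find_tax_exceptions(lines, tolerance=0.50):
    """Flag lines where supplier-charged tax deviates from the calculated
    tax by more than the configured tolerance (in currency units)."""
    exceptions = []
    for line in lines:
        variance = abs(line.supplier_charged_tax - line.calculated_tax)
        if variance > tolerance:
            exceptions.append({
                "line_id": line.line_id,
                "variance": round(variance, 2),
                "supplier_charged_tax": line.supplier_charged_tax,
                "calculated_tax": line.calculated_tax,
            })
    return exceptions

if __name__ == "__main__":
    lines = [
        InvoiceLine("10", 1000.00, 82.50, 82.50),   # within tolerance
        InvoiceLine("20", 500.00, 45.00, 41.25),    # exceeds tolerance; flag it
    ]
    for exc in find_tax_exceptions(lines):
        print("Tax exception on line", exc["line_id"], exc)
```

In practice, the threshold, and what happens when it is exceeded (hold the invoice, route it to a reviewer, or post it with a flag), is configured in the procurement application rather than hard-coded.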

Gardner: Sean, as Poornima pointed out, there are many visibility benefits to using the latest tools. But around the world, are there other incentives or benefits?

Thompson: One of the things the pandemic has shown is that whether you are a small, medium-size, or large company, your supply chains are global. That’s the way we went into the pandemic, with the complexity of having to manage all of that compliance and drive efficiency so you can make accounting easy and remain compliant.

The regional nature of it is both a cost statement and a statement regarding regional incentives. Being able to manage that complexity is what software and data make possible.

Gardner: And does managing that complexity scale down as well as up, based on the size of the companies?

Thompson: Absolutely. Small- to medium-sized businesses (SMBs) need to save money. And oftentimes SMBs don’t have dedicated departments that can handle all the complexity.

And so, from a people perspective, where there are fewer people you have to think about the end-to-end nature of compliance, accounting, and efficiency. When you think about SMBs, if you make it easy there, you can make it easy all the way up to the largest enterprises. So the benefits are really size-agnostic, if you will.

Gardner: Chris, as we unpack the tax visibility solution, what are the global challenges for tax calculation and compliance? What biggest pain points are people grappling with?

Challenges span globe, businesses

Carlstead: If I may just take a second and compliment Poornima. I always love it when I hear customers speak about our applications better than we can speak about them ourselves; so, Poornima, thank you for that.

And to your question, because the impact is the same for SMBs and large companies, the pain centers around the volume of change and the pace of that change. This affects domestic companies, large and small, as well as multinationals. And so I thought I’d share a couple of data points we pulled together at Thomson Reuters.

There are more than 15,000 jurisdictions that impact just this area of tax alone. Within those 15,000 jurisdictions, in 2019 we had more than 50,000 changes to the tax rules needed to comply within those jurisdictions. Now extrapolate that internationally to about 190 countries. Within the 190 countries that we cover, we had more than two million changes to tax laws and regulations.

At that scale, it's just impossible to maintain manual processes. Many companies try to do that, either in a decentralized way or otherwise, and it's just impossible to keep pace.

And now you introduce the COVID-19 pandemic, for which we haven't yet seen the full impact. But the impact, along the lines where Sean was heading, is that we also expect that supply chains are going to get reevaluated. And when you start to reevaluate your supply chains, you don't need government regulation to change; you are the one changing. You're moving into new jurisdictions and into new supply routes. And that has huge tax implications.

And not just in the area of indirect tax, which is what we’re talking about here today on the purchase and sale of goods. But when you start moving those goods across borders in a different route than you have historically done, you bring in global trade, imports, duties, and tariffs. The problem just magnifies and extrapolates around the globe.

Gardner: How does the Thomson Reuters and SAP Ariba relationship come together to help people tackle this?

Thompson: Well, it’s been a team sport all along. One of the things we believe in is the power of the ecosystem and the power of partnerships. When it comes down to it, we at SAP are not tax data-centric in the way we operate. We need that data to power our software. We’re about procurement, and in those procurement, procure-to-pay, and sales processes we need tax data to help our customers manage the complexity. It’s like Chris said, an amazing 50,000 changes in that dynamic within just one country.

And so, at SAP Ariba, we have the largest business network of suppliers driving about $3 trillion of commerce on a global basis, and that is a statement regarding just the complexity that you can imagine in terms of a global company operating on a global basis in that trade footprint.

Now, when the power of the ecosystem and Thomson Reuters come together, they bring the tax-centric authority. They do tax solutions and help companies manage their tax data complexity. When you combine that with our software, that's a beautiful interaction, because it's the best of both worlds.

It's a win, win, win. It's not only a win for our two companies, Thomson Reuters and SAP, but also for the end customer, because they get the power of the ecosystem. We like to think you choose SAP Ariba for its ecosystem, and Thomson Reuters is one of our most successful extensions, if not the most successful.

Gardner: Chris, if we have two plus two equaling five, tell us about your two. What does Thomson Reuters bring in terms of open APIs, for example? Why is this tag team so powerful?

Partner to prioritize the customer

Carlstead: A partnership doesn't always work. It requires two different parties that complement each other, and it only works when they have similar goals, such as the way they look at the world and the way they look at their customers. I can, without a doubt, say that when Thomson Reuters and SAP Ariba came together, the first and most important focus was the customer. That relentless focus on the customer really helped keep things on track and drive forward to where we are today.

And that doesn’t mean that we are perfect by any means. I’m sure we have made mistakes along the way, but it’s that focus that allowed us to keep the patience and drive to ultimately bring forth a solution that helps solve a customer’s challenges. That seems simple in its concept, but when you bring two large organizations together to help try to solve a large organization’s problems, it’s a very complex relationship and takes a lot of hard work.

And I’m really proud of the work that the two organizations have done. SAP Ariba has been amazing along the way to help us solve problems for customers like Stanley Black and Decker.

Gardner: Poornima, you are the beneficiary here, the client. What’s been powerful and effective for you in this combination of elements that both SAP Ariba and Thomson Reuters bring to the table?

Sadanandan: With our history of around 175 years, Stanley Black and Decker has always moved forward with pioneering projects and a strong vision of delivering intelligent solutions for society. As part of this, adopting advanced technologies that help us fulfill the company's objectives has always been at the forefront.

As part of that tradition, we have been leveraging the integration framework consisting of the SAP Ariba tax API communicating with the Thomson Reuters ONESOURCE tax solution in real time using web services. The SAP Ariba tax API is designed to make a web service call to the external tax service provider for tax calculations, and in turn it receives a response to update the transactional documents.

During a procurement transaction, the API makes an external tax calculation call. Once the tax is determined, the response is converted back into the SAP Ariba message and XML format, and the ONESOURCE integration passes it over to the SAP application.

The SAP Ariba tax API receives the response and updates the transactional documents in real time, providing a seamless integration between the SAP Ariba procurement solution and the global tax solution. That's exactly what helps us automate our procurement transactions.
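As a rough illustration of that request/response pattern, here is a simplified Python sketch: the procurement application sends line-item details to an external tax engine over a web service, then copies the returned tax amounts back onto the transactional document before it is posted to the ERP. The endpoint URL, payload fields, and helper functions are invented for the example; the real SAP Ariba tax API and ONESOURCE integration define their own message schemas.

```python
import json
import urllib.request

TAX_SERVICE_URL = "https://tax-engine.example.com/calculate"  # hypothetical endpoint

def calculate_tax(document):
    """Send requisition or invoice lines to an external tax engine and return
    the parsed response. The payload shape is illustrative, not a real schema."""
    payload = json.dumps({
        "documentId": document["id"],
        "lines": [
            {
                "lineId": line["id"],
                "amount": line["amount"],
                "shipTo": line["ship_to"],            # jurisdiction derives from addresses
                "commodityCode": line["commodity_code"],
            }
            for line in document["lines"]
        ],
    }).encode("utf-8")

    request = urllib.request.Request(
        TAX_SERVICE_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request, timeout=10) as response:
        return json.loads(response.read())

def apply_tax(document, tax_response):
    """Copy the calculated tax back onto the transactional document so the
    downstream ERP posting carries accurate amounts."""
    tax_by_line = {t["lineId"]: t["taxAmount"] for t in tax_response["lines"]}
    for line in document["lines"]:
        line["tax_amount"] = tax_by_line.get(line["id"], 0.0)
    return document
```

The same round trip happening at both requisition time and invoice time is what gives requesters the early visibility described above.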

Gardner: Sean, this is such a great use case of what you can do when you have cloud services and the right data available through open APIs to do real-time calculations. It takes such a burden off of the end user and the consumer. How is technology a fundamental underpinning of what ONESOURCE is capable of?

Cloud power boosts business outcomes

Thompson: It's wonderful to hear Poornima as a customer. It's music to my ears to hear the real-life use case of what we have been able to do in the cloud. And when you look at the architecture, and how we are able to deliver not only a software solution in the cloud but power it with real-time data to drive efficiencies, it's what we used to dream of back in the days of on-premises systems and even, God bless us, paper reconciliations and calculations.

It’s an amazing time to be alive because of where we are and the efficiencies that we can drive on a global basis, to handle the kind of complexity that a global company like Stanley Black and Decker has to deal with. It’s an amazing time. 

And it's still the early days of what we will be doing in the future around predictive analytics: helping companies understand where there is more risk or where compliance issues lie ahead.

That's what's really cool. We are going into an era of data-driven intelligence and machine learning (ML), applying those to business processes that combine data and software in the cloud, and automating the things we used to have to do manually in the past.

And so it’s a really amazing time for us.

Gardner: Chris, anything more to offer on making the technology invisible while giving advanced business outcomes a boost?

Carlstead: What's amazing about where we are right now is captured by a term I often use (I certainly don't believe I coined it): the best-of-breed suite. In the past, you had to choose. You could go best-of-breed or you could go with the suite, and there were pros and cons to both approaches.

Now, with the proliferation of APIs, cloud, and the adoption of API technology across software vendors, there’s more free flow of information between systems, applications, and platforms. You have the ability as a customer to be greedy — and I think that’s a great thing.

As a consumer, you are used to downloading an app and having it just work. We are a little bit behind on the business side of the house, but we are moving there very quickly. Now customers like Stanley Black and Decker can go with the number-one spend management system in the world, and they can also go with the number-one tax content player in the world, and they can expect those two applications to work seamlessly together without spending a lot of time and effort on their end to force the companies together, which is what we would have done in an on-premises environment over the last several decades.

From an outcome standpoint, and as I think about customers like Stanley Black and Decker, getting tax right, in and of itself, is not a value-add. But getting it wrong can be very material to the bottom line of your business. So for us, and within the partnership with SAP Ariba, our goal is to make sure that customers like Stanley Black and Decker get it right the first time so they can focus on what they do best.

Gardner: Poornima, back to you for the proof. Do you have any anecdotes, qualitative or quantitative measurements, of how you have achieved more of what you have wanted to do around tax processing, accounts payable, and procurement?

Accuracy spells no delayed payments

Sadanandan: Yes, the challenges we had with our earlier processes and legacy applications, such as incorrect VAT returns, wrong payments, and delayed payments, have diminished. It also strengthened the relationship between our business and our suppliers. Above all, troubleshooting any issues became so much easier for us because of the profound transparency into what's being passed from the source system.

And, as I mentioned, this improves the supplier relationship in that payments are not getting delayed and there is improvement in the tax calculation. If there are any mismatches, we are able to understand easily how they happened, as the integration layer provides us with the logs for accurate analysis. And the business itself can answer supplier queries in a timely manner because they have profound visibility into the data as well.

From a project perspective, we believe the objective has been fulfilled. Since we completed the initial project in 2018, Stanley Black and Decker has been moving ahead with transforming the source-to-pay process by establishing a core model, leveraging the leading practices in the new SAP Ariba realm, and integrating with the central finance core model utilizing SAP S/4HANA.

The source-to-pay core model includes the leading practices of the tax solution, leveraging ONESOURCE Determination integrated with the SAP Ariba source-to-pay cloud application. With the completion of the project, we were able to achieve that core model, and future roadmaps are now being laid out to adopt this model across the rest of the Stanley Black and Decker entities.

Gardner: Poornima, has the capability to do integrated tax functionality had a higher-level benefit? Have you been able to see automation in your business processes or more strategic flexibility and agility?

Sadanandan: It has particularly helped us in these uncertain times. Having an automated tax solution was the primary objective of the project, but in these uncertain times this automated solution is also helping us ensure business continuity.

Having real-time calls that facilitate the tax calculation with accuracy and precision without manual intervention helped the year-end accounts payable transactions to occur without any interruptions.

And above all, as I was mentioning, even during the pandemic we are able to go ahead with the future projects already on the roadmap. They are not at a standstill, because we are able to leverage the standard functionality provided by ONESOURCE, which is easier to adopt in our environment.

Gardner: Chris, when you hear how Stanley Black and Decker has been able to get these higher-order risk-reduction benefits, do you see that more generally? What are some of the higher-order business benefits you see across your clientele?

Risk-reduction for both humans and IT

Carlstead: There are two broad categories. I will touch on the one that Poornima just referenced, which is more the human capital, and then also the IT side of the house. 

The experience that Stanley Black and Decker is having is fairly uniform across our customer base. In almost every single tax department, procurement department, and all the associated departments, nobody has extra capacity walking around. We are all constrained. So, when you can bring in applications that work together like SAP Ariba and Thomson Reuters, it helps to free up capacity. You can then shift those resources into higher-value-add activities such as the ones Poornima referenced. We see it across the board.

We also see that we are able to help consolidate resourcing from a hardware and a technology standpoint, so that’s a benefit. 

And the third benefit on the resource side is that as you better track your taxation, you not only get it right the first time; in areas of taxation like VAT recovery, you have to show very stringent documentation in order to receive your money back from governments, so there is a cash benefit as well.

And then on the other side, more on the business side of the relationship, there is a benefit we have just started to better understand in the last couple of years. Historically, folks either chose not to move forward with an application like this because they felt they could handle it manually, or, even worse, they would say, "We will just have it audited, and we will pay the fine, because the cost to fix the problem is greater than the penalties or fines I might pay."

But they didn't take into consideration the impact on the business relationships they have with their vendors and suppliers. Think about every time you have had a tax issue between you. In many European countries, and around the world, VAT recovery rules may mean a supplier cannot recover their taxation because of a challenge they had with their buyer, and that hurts your relationship. That ultimately hurts your ability to do commerce with that partner, and, in general, with any partner around the world.

So, the top-line impact is something we have really started to focus on as a value and it’s something that really drives business for companies.

Gardner: Poornima, what would you like to see next? Is there a level of more intelligence, more automation? 

Post-pandemic possibilities and progress

Sadanandan: Stanley Black and Decker is a global company spanning more than 60 countries. We have a wide range of products, including tools, hardware, security, and so on. Even in these challenging times, the safety of our employees and their families, keeping the momentum of business continuity, and responding to the needs of the community all remain the top considerations.

We feel that we are already equipped technology-wise to keep the business up and running. What we are looking forward to, as the world tries to come back to normal life, is continuing to provide pioneering products with intelligent solutions.

Gardner: Chris, where do you see technology and the use of data going next in helping people reach a new normal or create entirely new markets?

Carlstead: From a Thomson Reuters standpoint, we largely focus on helping businesses work with governments at the intersection of regulation and commerce. As a result, we have, for decades, amassed an extensive amount of content in categories around risk, legal, tax, and several other functional areas as well. We are relentlessly focused on how to best open up that content and free it, if you will, from even our own applications.

What we are finding is that when we can leverage ecosystems such as SAP Ariba, we can leverage APIs and provide a more free-flowing path for our content to reach our customers; and when they are able to use it in the way they would like, the number of use cases and possibilities is infinite. 

We now see, all the time, our content being used in ways we would never have imagined. Our customers are benefiting from that, and that's a direct result of corporations coming together, and of suppliers and software companies freeing up their platforms and making things more open. The customer is benefiting, and I think it's great.

Gardner: Sean, when you hear your partner and your customer describing what they want to come next, how can we project a new vision of differentiation when you combine network and ecosystem and data?

Thompson: Well, let me pick up on what Chris said about "free and open." Now that we are in the cloud and able to digitize on a global basis, the power for us is in knowing that we can't do it all ourselves.

We also know that we have an amazing opportunity, because we have grown our network across the globe to 192 countries and four million registered buyers and suppliers, all conducting a tremendous amount of commerce and data flow. Being able to open up and operate as an ecosystem, with a platform way of thinking, is the power.

Like Chris said, it’s amazing the number of things that you never realized were possible. But once you open up and once you unleash a great developer experience, to be able to extend our solutions, to provide more data — the use cases are immense. It’s an incredible thing to see.

That’s what it’s really about — unleashing the power of the ecosystem, not only to help drive innovation but ultimately to help drive growth, and for the end customer a better end-to-end process and end-to-end solution. So, it’s an amazing time.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: SAP Ariba.

AWS and Unisys join forces to accelerate and secure the burgeoning move to cloud

A powerful and unique set of circumstances is combining in mid-2020 to make safe and rapid cloud adoption more urgent and easier than ever.

Dealing with the novel coronavirus pandemic has pushed businesses to not only seek flexible IT hosting models, but to accommodate flexible work, hasten applications’ transformation, and improve overall security while doing so.

This next BriefingsDirect cloud adoption best practices discussion examines how businesses plan to further use cloud models to cut costs, manage operations remotely, and gain added capability to scale their operations up and down.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To learn more about the latest on-ramps to secure an agile cloud adoption, please welcome Anupam Sahai, Vice President and Cloud Chief Technology Officer at Unisys, and Ryan Vanderwerf, Partner Solutions Architect at Amazon Web Services (AWS). The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Anupam, why is going to the public cloud an attractive option more now than ever? 

Sahai: There are multiple driving factors leading to these tectonic shifts. One is that the whole IT infrastructure is moving to the cloud for a variety of business and technology reasons. And then, as a result, the entire application infrastructure — along with the underlying application services infrastructure — is also moving to the cloud.

The reason is simple: it's what the cloud brings to the table. It brings a lot of capabilities, such as providing scalability in a cost-effective manner. It makes IT and applications behave as a utility and obviates the need for every company to host local infrastructure, which otherwise becomes a huge operations and management challenge.

So, a number of business and technological factors are at work, along with the COVID-19 pandemic, which essentially makes us work remotely. Having cloud-based services and applications available as a utility makes them easy to consume and use.

Public cloud on everyone’s horizon 

Gardner: Ryan, have you seen in your practice over the past several months more willingness to bring more apps into the public cloud? Are we seeing more migration to the cloud?

Vanderwerf: We've definitely had a huge uptick in migration. As people can't be in an office, things like workspaces and remote desktops have also seen a huge increase. People are trying to find ways to be elastic and cost-efficient, and to make sure they're not spending too much money.

Following up on what Anupam said, the reasons people are moving to the cloud haven't changed. They have just been accelerated, because people need agility and faster access to the resources they need. They need the cost savings of not having to maintain data centers themselves.

By being more elastic, they can provision only what they're using and not have things running and costing money when they don't need them. They can also deploy globally in minutes, which is a big deal across many regions and allows people to innovate faster.

And right now, there's a need to innovate faster, get more revenue, and cut costs, especially in times when demand fluctuates up and down. You have to be ready for it.

Gardner: Yes, I recently spoke with a CIO who said that when the pandemic hit, they had to adjust workloads and move many from a certain set of apps that they weren’t going to be using as much to a whole other set that they were going to be using a lot more. And if it weren’t for the cloud, they just never would have been able to do that. So agility saved them a tremendous amount of hurt.

Anupam, why when we seek such cloud agility do we also have to think about lower risk and better security?

Sahai: Risk and security are critical because you're talking about commercial, mission-critical workloads with potentially sensitive data. As we move to the cloud, you should think about three different trajectories. And some of this, of course, is being accelerated because of the COVID-19 pandemic.

One of the cloud-migration trajectories, as Ryan said earlier, is the need for elastic computing, cost savings, performance, and efficiencies when building, deploying, and managing applications. But as we move applications and infrastructure to the cloud, there is a need to ensure that the infrastructure falls under what is called the shared responsibility model, where the cloud service provider protects and secures the infrastructure up to a certain level, and then the customers have a shared responsibility to ensure that they're protecting their workloads, applications, and critical data. They also have to comply with the regulations that apply to them.

In such a shared responsibility model, customers need to work very closely with the service providers, such as AWS, to ensure they are taking care of all security and compliance-related issues.

You know, security breaches in the cloud — while fewer than in on-premises deployments — are still pretty rampant. That's because some of the basic cloud security hygiene issues are still not being taken care of. That's why solutions have to manage security and compliance for both the infrastructure and the apps as they move from on-premises to the cloud.

Gardner: Ryan, shared responsibility in practice can be complex when it’s hard to know where one party’s responsibility begins and ends. It cuts across people, process, and even culture.

When doing cloud migrations, how should we make sure there are no cracks for things to fall through? How do we make sure that we segue from on-premises to cloud in a way that the security issues are maintained throughout?

Stay safe with best-practices

Vanderwerf: Anupam is exactly right about the shared responsibility model. AWS manages and controls the components from the host operating system and virtualization layer down to physically securing the facilities. But it is up to AWS customers to build secure applications and manage their hygiene.

We have programs to help customers make sure they’re using those best practices. We have a well-architected program. It’s available on the AWS Management Console, and we have several lenses if you’re doing specific things like serverless, Internet of things (IoT), or analytics, for example.

Things like that have to be focused toward the business, but solutions architects can help the customer review all of their best practices and do a deep-dive examination with their teams to raise any flags that people might not be aware of and help them find solutions to remedy them.

We also have an AWS Technical Baseline Review that we do for partners. In it we make sure that partners are also following best practices around security and make sure that the correct things are in place for a good experience for their customers as well. 

Gardner: Anupam, how do we ensure security-as-a-culture from the beginning and throughout the lifecycle of an application, regardless of where it’s hosted or resides? DevSecOps has become part of what people are grappling with. Does the security posture need to be continuous?

Sahai: That’s a very critical point. But first I want to double-click on what Ryan mentioned about the shared responsibility model. If you look at the overall challenges that customers face in migrating or moving to the cloud, there is certainly the security and compliance part of it that we mentioned.

There is also the cost governance issue, and making sure you have a well-architected framework behind your architecture. The AWS Well-Architected Framework, for example, is supported by Unisys.

Additionally, there are a number of ongoing issues around cost governance, security and compliance governance, and optimization of workloads that are critical for our customers. Unisys does a Cloud Success Barometer study every year, and what we find is very interesting.

One thing is clear: about 90 percent of organizations have transitioned to the cloud. So, no surprise there. But in the journey to the cloud, what we also found is that 60 percent of organizations are unable to move fully to the cloud, or stall in their cloud migrations, because of some of these unexpected roadblocks. That's where partners like Unisys and AWS come together to offer visibility and solutions to address those challenges.

Coming back to the DevSecOps question, let's take a step back and understand why DevOps came into being. It was basically because of the migration to the cloud that we needed to break down the silos between development and operations and deploy infrastructure-as-code. That's why DevOps essentially brings about faster, shorter development cycles, faster deployment, and faster innovation.

Studies have shown that DevOps leads to at least 60 percent faster innovation and turnaround time compared to traditional approaches, not to mention the cost savings and the IT headcount savings when you merge the dev and ops organizations.

But as DevOps goes mainstream, and as cloud-centric applications become mainstream, there is a need to inject security into the DevOps cycle. So, having DevSecOps is key. You want to enable developers, operations, and security professionals to work together rather than in yet another silo, breaking those silos down and merging security with the DevOps team.

But we also need to provide tools that are amenable to DevOps processes, continuous integration/continuous delivery (CI/CD) tools that enable the speed and agility needed for DevOps, while also injecting security without slowing teams down. It is a challenge, and that's why the new field of DevSecOps, which injects security and compliance into the DevOps cycle, is very, very critical.
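One simple way to picture injecting security into a CI/CD pipeline is as a gate step that runs automated scans and fails the build on findings. The sketch below is a generic, hypothetical example in Python; the scanner commands are placeholders for whatever dependency-audit, static-analysis, or policy-check tools a given pipeline actually uses.

```python
import subprocess
import sys

# Placeholder commands; swap in the scanners your pipeline actually uses
# (dependency audit, SAST, infrastructure-as-code policy checks, and so on).
SECURITY_CHECKS = [
    ["dependency-scanner", "--fail-on", "high"],
    ["static-analyzer", "src/"],
]

def run_security_gate(checks):
    """Run each check; a missing tool or a nonzero exit code fails the gate."""
    for command in checks:
        print("Running security check:", " ".join(command))
        try:
            result = subprocess.run(command)
        except FileNotFoundError:
            print("Security tool not found:", command[0])
            return False
        if result.returncode != 0:
            print("Security gate failed:", command[0])
            return False
    return True

if __name__ == "__main__":
    # Exit nonzero so the CI job that invokes this script is marked as failed.
    sys.exit(0 if run_security_gate(SECURITY_CHECKS) else 1)
```

Because the gate runs automatically on every build, security checks keep pace with the delivery cadence instead of becoming a separate, slower review stage.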

Gardner: Right, you want to have security but without giving up agility and speed. How have Unisys and AWS come together to ease and reduce the risk of cloud adoption while greasing the skids to the on-ramps to cloud adoption?

Smart support on the cloud journey

Sahai: Unisys in December 2019 announced CloudForte capabilities with the AWS cloud. A number of capabilities were announced that help customers adopt cloud without worrying about security and compliance.

CloudForte today provides a comprehensive solution to help customers manage their cloud journeys, whether greenfield or brownfield; and there is hybrid cloud support, of course, for the AWS cloud, along with multi-cloud support from a deployment perspective.

The solution combines products and services that enable three primary use cases. The first is cloud migration, as we talked about; the second is apps migration using DevSecOps. We've codified that in terms of best practices, reference architectures, and well-architected principles, and we have wrapped that in advisory and deployment services as well.

The third use case is around cloud posture management, which is understanding and optimizing existing deployments, including hybrid cloud deployments, to ensure you’re managing costs, managing security and compliance, and also taking care of any other IT-related issues around governance of resources to make sure that you migrate to the cloud in a smart and secure manner.

Gardner: Ryan, why did AWS get on-board with CloudForte? What was it about it that was attractive to you in helping your customers?

Vanderwerf: We are all about finding solutions that help our customers and enabling our partners to help their customers. With the shared responsibility model, part of the responsibility is on the customer, and CloudForte has really good risk management and a portfolio of applications and services to help people get hold of that responsibility themselves.

Instead of customers trying to go it alone, or just following general best practices, Unisys has the tooling in place to help them. That's pretty important, because when it comes to DevSecOps, people can suffer from a lack of business agility and security agility, and they face the risks around change to their businesses. People fear that.

These tools have really helped customers manage that journey. We have a good feeling about being secure and being compliant, and the dashboards they have inside of it are very informative, as a matter of fact.

Gardner: Of course, Unisys has been around for quite a while. They have had a very large and consistent installed base over the years. Are the tooling, services, and value in CloudForte bringing in a different class of organization, or different parts of organizations, into AWS?

Vanderwerf: I think so, especially in the enterprise area, where they have a lot of things to wrangle on the journey to the cloud, and it's not easy. When you're migrating as much as you can to a cloud setting, seeking to keep control over assets and making sure there are no rogue things running, it's a lot for an enterprise IT manager to handle. And so, the more tools they have in their tool belt to manage that, the better, compared with trying to cook up their own stuff.

Gardner: Anupam, did you have a certain type of organization, or part of an organization, in mind when you crafted CloudForte for AWS?

Sahai: Let's take a step back and understand the kind of services we offer. Our services are tailored for, and applicable to, both enterprises and the public sector. We offer advisory services to begin with, which are backed by products. There is the CloudForte Navigator product, which allows us to assess the customer's current posture and understand the application capabilities the customer has and whether they need transformation; all of this, of course, is driven by the business outcomes the customer desires.

Second, through CloudForte we bring best practices, reference architectures, and blueprints for the various customer journeys that I mentioned earlier. Greenfield or brownfield opportunities, whatever the stage of adoption, we have created a template to help with the specific migration and customer journey.

Once customers are able and ready to get on-boarded, we enable DevSecOps using CI/CD tools, best practices, and tools to ensure the customers use a well-architected framework. We also have a set of accelerators provided by Unisys that enable customers to get on-boarded with guardrails provided. So, in short, the security policies, compliance policies, organizational framework, and the organizational architectures are all reflected in the deployment. 

Then, once it's up and running, we manage and operate the hybrid cloud security and compliance posture so that any deviations, any drifts, are monitored and remediated, and the customer continuously maintains an acceptable posture.
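To give a sense of what monitoring for drift can look like, here is a deliberately simplified Python sketch: compare a desired guardrail policy against the observed state of a resource, report any deviation, and optionally push the resource back to the desired state. It is illustrative only, not CloudForte code; the policy keys and resource shape are invented.

```python
# Desired guardrail policy for storage resources (illustrative keys only).
DESIRED_POLICY = {
    "encryption_enabled": True,
    "public_access_blocked": True,
    "logging_enabled": True,
}

def detect_drift(observed_state):
    """Return the settings that deviate from the desired policy."""
    return {
        key: observed_state.get(key)
        for key, expected in DESIRED_POLICY.items()
        if observed_state.get(key) != expected
    }

def remediate(resource_id, drift, apply_fix):
    """Report drift and, optionally, push the resource back to the desired state."""
    for key, actual in drift.items():
        print(f"{resource_id}: {key} is {actual}, expected {DESIRED_POLICY[key]}")
        if apply_fix:
            # In a real system this step would call the cloud provider's API.
            print(f"{resource_id}: remediating {key} -> {DESIRED_POLICY[key]}")

if __name__ == "__main__":
    observed = {"encryption_enabled": True, "public_access_blocked": False}
    drift = detect_drift(observed)
    if drift:
        remediate("bucket-1234", drift, apply_fix=False)
```

A real posture-management service would pull the observed state from the cloud provider's APIs and apply remediation through them, typically with approval workflows around any automatic fixes.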

Finally, we also have AIOps capabilities, which deliver the AI-enabled outcomes the customer is looking for. We use artificial intelligence and machine learning (AI/ML) technologies to optimize resources. We drive cost savings through resource optimization. We also have an instance management capability that brings down costs dramatically using some of those analytics and AIOps capabilities.

So our objective is to drive digital transformation for customers using the combination of products and services that CloudForte has, working in close conjunction with what AWS offers, so that we create offerings that complement each other and are very compelling from a business outcomes perspective.

Gardner: The way you describe them, it sounds like these services would be applicable to almost any organization, regardless of where they are on their journey to the cloud. Tell us about some of the secret sauce under the hood. The Unisys Stealth technology, in particular, is unique in how it maintains cloud security.

Stealth solutions for hybrid security 

Sahai: The Unisys Stealth technology is very compelling, especially in the hybrid cloud security sense. As we discussed earlier, the shared responsibility model requires customers to take care of and share the responsibility to make sure that workloads in the cloud infrastructure are compliant and secure. 

And we have a number of tools in that regard. One is the CloudForte Cloud Compliance Director solution, which allows you to assess and manage your security and compliance posture for the cloud infrastructure. So it’s a cloud security posture management solution. 

Then we also have the Stealth solution, essentially a zero trust micro-segmentation capability that leverages identity, or the user roles in an organization, to establish a community that's trusted and capable of doing certain actions. It creates communities of interest, and it allows and secures access through a combination of micro-segmentation and identity management.

Think of that as a policy management and enforcement solution that essentially manipulates the OS native stacks to enforce policies and rules that otherwise are very hard to manage. 
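A rough mental model for identity-driven micro-segmentation is a default-deny policy check on every connection attempt: traffic is allowed only when both endpoints belong to the same community of interest. The Python sketch below illustrates that idea in the simplest possible terms; it is not how Stealth is implemented, and the community data is invented.

```python
# Communities of interest keyed by role; membership is illustrative data only.
COMMUNITIES = {
    "payments": {"app-server-01", "db-payments", "svc-payments-batch"},
    "hr": {"hr-portal", "db-hr"},
}

def is_allowed(source_identity, destination_identity):
    """Deny by default; allow only when both identities share a community."""
    for members in COMMUNITIES.values():
        if source_identity in members and destination_identity in members:
            return True
    return False

if __name__ == "__main__":
    print(is_allowed("app-server-01", "db-payments"))  # True: same community
    print(is_allowed("hr-portal", "db-payments"))      # False: cross-community, denied
```

In a real product the communities are derived from directory identities and roles, and enforcement happens in the network and OS stack rather than in application code.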

If you take Stealth and marry that with CloudForte compliance, some of the accelerators, and Navigator, you have a comprehensive Unisys solution for hybrid cloud security, both on-premises and in the AWS cloud infrastructure and workloads environment. 

Gardner: Ryan, it sounds like zero trust and micro-segmentation augment the many services that AWS already provides around identity and policy management. Do you agree that the zero trust and micro-segmentation aspects of something like Stealth dovetail very well with AWS services? 

Vanderwerf: Oh, yes, absolutely. And in addition to that, we have a lot of other security tools like AWS WAF, AWS Shield, Security Hub, Macie, IAM Access Analyzer, and Inspector. And I am sure under the hood they are using some of these services directly.

The more power you have, the better. And it's tough to manage. Some people are just getting into the cloud, and they have challenges. They're not always technical; sometimes it's communications issues at a company, a lack of sponsorship, resource allocation, or undefined key performance indicators (KPIs). All of these things, even just timing, are important for a security situation.

Gardner: All those spinning parts, those services, that’s where the professional services come in so that organizations don’t have to feel like they are doing it alone. How does the professional services and technical support fit into helping organizations go about these cloud journeys? 

Sahai: Unisys is trusted by our customers to get things right. We say that we do cloud correctly, that we do cloud right, and that includes a combination of trusted advisory services. That means everything from identifying legacy assets, to billing, to governance, and then using a combination of products and services to help customers transform as they move to the cloud.

Our cloud-trained people and expertise speed up migrations, give visibility, and provide operational improvements. Thereby we are able to do cloud right, and in a secure fashion, by establishing security practices and establishing trust through a combination of micro-segmentation, security and compliance operations, and AIOps. That is the combination of products and services we offer today.

And our customers rate us very highly, 95 percent-plus in terms of customer satisfaction. It's a testament to the fact that our professional services, along with our products, complement the AWS services and products that customers need to deliver their business outcomes.

Gardner: Anupam, do you have any examples of organizations that leveraged both AWS and Unisys CloudForte? What have they been doing and what did they get from it? 

Student success supported 

Sahai: I have a number of examples where a combination of CloudForte and AWS deployments are happening. One is right here where I live in the San Francisco Bay Area. The business challenge they faced was to enhance the student learning experience and deliver technology services critical to student success and graduation initiatives. And given the COVID-19 scenario, you can understand why cloud becomes an important factor in that. 

Unisys cloud and infrastructure services, using CloudForte, helped them deploy a hybrid cloud model with AWS. We used Ansible for automation and ServiceNow for IT service management (ITSM), along with AIOps, and we deployed LogRhythm and a portfolio of tools and services.

They were then able to accelerate their capability to offer critical administrative services, such as student scheduling and registration, to about half a million students and 52,000 faculty and staff members across 23 campuses. It delivered 30 percent better performance while realizing about 33 percent cost savings and 40 percent growth in usage of these services. So, great outcomes, great cost savings: you are talking about a reduction of about $4.5 million in compute and storage costs and about $3 million in cost avoidance.

So this is an example of a customer who leveraged the power of the AWS Cloud and the CloudForte products and services to deliver these business outcomes, which is a win-win situation for us. So that’s one example.

Gardner: Ryan, what do you expect for the next level of cloud adoption benefits? Is the AIOps something that we are going to be doubling-down on? Or are there other services? How do you see the future of cloud adoption improving?

The future is integrated 

Vanderwerf: It's making sure everything is able to integrate. For example, with a hybrid cloud situation we now have AWS Outposts, so people can run a rack of servers in their data center and be connected directly to the cloud.

Some things don't always make sense to move to the cloud. Machinery running analytics, for example, may have very low latency requirements. You could still write native applications that work with the cloud on AWS and run those apps locally.

Also, AIOps is huge because so many people are doing AI/ML in their workloads, from assessing security posture threats to finding out whether machines are breaking down. There are so many options in data analytics, and then you're wrangling all of these things together with data lakes. Definitely, the future is about better integrating all of these things.

AI/MLOps is really popular now because there are so many data scientists and people integrating ML into things. They need some sort of organizational structure to keep that organized, just as CI/CD did for DevOps. And all of those areas continue to grow. At AWS, we have 175-plus services, and new ones are coming out every day. I don't see that slowing down anytime soon.

Gardner: Anupam, for your future outlook, to this point that Ryan raised about integration, how do you see organizations like Unisys helping to manage the still growing complexity around the adoption and operations in the cloud and hybrid cloud environments?

Sahai: Yes, that is a huge challenge. As Ryan mentioned, hybrid cloud is here to stay. Not everything will move to the cloud. And while cloud migration trends will continue, there will be a core set of apps that stays on-premises. So leveraging AWS Outposts, as he said, to help with hybrid cloud journeys will be important. And Unisys has hybrid cloud and multi-cloud offerings that we are certainly committed to.

The other thing is that security and compliance issues are not going away, unfortunately. Cloud breaches are out there. And so there is a need to actively manage and be proactive about managing your security and compliance posture. That's another area where I think our customers are going to be working together with AWS and Unisys to help them fortify not just their defenses, but also their offense, being proactive in dealing with these threats and breaches and preventing them.

The third area is around AIOps and this whole notion of AI-enabled CloudForte. We see AI and ML permeating every part of the customer journey: not just in AIOps, which is the operations and management piece and a critical part of what we do, but also AI enabling the customer journey in terms of prediction.

So, let's say a customer is trying to move to the cloud. We want to be able to predict what their journey will look like if they move to the cloud, and to be proactive about anticipating and remediating issues that might come up.

And, of course, AI is fueled by the data revolution: the data lakes and data buses we have today to transport data seamlessly across applications and across hybrid cloud infrastructures, tying all of this together. You have the app migration, CI/CD, and DevSecOps capabilities that are part of the CloudForte advisory and product services.

We are enabling customers to move to the cloud without compromising speed, agility, security, or compliance, whether they are moving infrastructure to the cloud using infrastructure as code, or moving applications to the cloud using applications as code, by leveraging the microservices and cloud-native infrastructure that AWS provides, Kubernetes included.

We support a lot of these capabilities today, and we will continue to evolve them to make sure that no matter where the customer is in their journey to the cloud, whatever the stage of evolution, we have a compelling set of products and services that customers can use to get to the cloud and stay there with the help of Unisys and AWS.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Unisys and Amazon Web Services.


How REI used automation to cloudify infrastructure and rapidly adjust its digital pandemic response

Like many retailers, Recreational Equipment, Inc. (REI) was faced with drastic and rapid change when the COVID-19 pandemic struck. REI’s marketing leaders wanted to make sure that their online e-commerce capabilities would rise to the challenge. They expected a nearly overnight 150 percent jump in REI’s purely digital business.

Fortunately REI’s IT leadership had already advanced their systems to heightened automation, which allowed the Seattle-based merchandiser to turn on a dime and devote much more of its private cloud to the new e-commerce workload demands.

The next BriefingsDirect Voice of Innovation interview uncovers how REI kept its digital customers and business leadership happy, even as the world around them was suddenly shifting.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To explore what works for making IT agile and responsive enough to re-factor a private cloud at breakneck speed, we’re joined by Bryan Sullins, Senior Cloud Systems Engineer at REI in Seattle. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: When the pandemic required you to hop-to, how did REI manage to have the IT infrastructure to actually move at the true pace of business? What put you in a position to be able to act as you did?

Digital retail demands rise 

Sullins: In addition to the pandemic stay-at-home orders a couple months ago, we also had a large sale previously scheduled for the middle of May. It’s the largest sale of the year, our anniversary sale.

And ramping up to that, our marketing and sales department realized that we would have a huge uptick in online sales. People really wanted to get outside, because they could do so without breaking any of the social distancing rules.

For example, bicycle sales were up 310 percent compared to the same time last year. So in ramping up for that, we anticipated our online presence at rei.com was going to go up by 150 percent, but we wanted to scale up by 200 percent to be sure. In order to do that, we had to reallocate a bunch of ESXi hosts in VMware vSphere. We either had to stand up new ones or reallocate from other clusters and put them into what we call our digital retail presence.

As a result of our fully automated process, using Hewlett Packard Enterprise (HPE) OneView, Synergy, and Image Streamer, we were able to reallocate 6 of the 17 total hosts needed. We were able to do that in 18 minutes, all at once — and that’s single touch: launching the automation, pulling the hosts from one cluster, decommissioning them, and placing them all the way into the digital retail clusters.

We also had to move some hosts from our legacy platform, which isn’t on HPE Synergy yet, and those took an additional three days. But those are in transition; we are moving to that fully automated platform across the board.

Gardner: That’s amazing because just a few years ago that sort of rapid and automated transition would have been unheard of. Even at a slow pace you weren’t guaranteed to have the performance and operations you wanted.

If you were not able to do this using automation – if the pandemic had hit, heaven forbid, five or seven years ago – what would have been the outcome?

We needed to make sure we had the infrastructure capacity so that nothing failed under a heavy load. We were able to do it in the time-frame, and be able to get some sleep.

Sullins: There were actually two possible outcomes from this. The first is the fairly obvious issue of not being able to handle the online traffic on our rei.com retail presence. It could have been that people weren’t able to put items into a shopping cart, or inventory wouldn’t decrement, and so on. It could have been a very broad range of failures. We needed to make sure we had the infrastructure capacity so that none of that failed under a heavy load. That was the first part.

Gardner: Right, and when you have people in the heat of a purchasing moment, if you’re not there and it’s not working, they have other options. Not only would you lose that sale, you might lose that customer, and your brand suffers as well.

Sullins: Oh, without a doubt, without a doubt.

The other issue, of course, would have been if we did not meet our deadline. We had just under a week to get this accomplished. And if we had to do this without a fully automated approach, we would have had to return to our managers and say, “Yeah, so like we can’t do it that quickly.” But with our approach, we were able to do it all in the time frame — and be able to get some sleep in the interim. So it was a win-win.

Gardner: So digital transformation pays off after all?

Sullins: Without a doubt.

Gardner: Before we learn more about your journey to IT infrastructure automation, tell us about REI, your investments in advanced automation, and why you consider yourself a data-driven digital business?

Automation all the way 

Sullins: Well, a lot of that precedes me by quite a bit. Going back to the early 2000s, based on what my managers tell me, there was a huge push for REI to become an IT organization that just happens to do retail. The priority is on IT being a driving force behind everything we do, and that is something that, at the time, REI really needed to do. There are other competitors, which we won’t name, but you probably know who they are. REI needed to stay ahead of that curve.

So since then there have been constant sweeping and cyclical changes for that digital transformation. The most recent one is the push for automating all things. So that’s the priority we have. It’s our marching orders.

Gardner: In addition to your company, culture, and technology, tell us about yourself, Bryan. What is it about your background and personal development that led you to be in a position to act so forthrightly and swiftly?

Sullins: I got my start in IT back in 1999. I was a public school teacher before that, and then I made the transition to doing IT training. I did IT training from 1999 to about 2012. During those years, I got a lot of technology certifications, because in the IT training world you have to.

I began with what was, at the time, called the Microsoft Certified Solutions Expert (MCSE) certification. Then I also did the Linux Professional Institute. I really glommed on to Linux. I wanted to set myself apart from the rest of the field back then, so I went all-in on Linux.

And then, 2008-2009-ish, I jumped on the VMware train and went all-in on VMware and did the official VMware curriculum. I taught that for about three years. Then, in 2012, I made the transition from IT training into actually doing this for real as an engineer working at Dell. At the time, Dell had an infrastructure-as-a-service (IaaS) healthcare cloud that was fairly large – 1,200-plus ESXi hosts. We were also responsible for the storage and for the 90-plus storage area network (SAN) arrays as well.

In a large environment, you really have to automate. It’s been the focus of my career. I typically jump right into new technology. 

In an environment that large, you really have to automate. I cut my teeth on automating through PowerCLI and Ansible. Since then, about 2015, it’s been the focus of my career. I’m not saying I’m a guru, by any means, but it’s been a focus of my career.

Then, in 2018, REI came calling. I jumped on that opportunity because they are a super-awesome company, and right off the bat I was given free rein: if you want to automate it, you automate it. And I have been doing that ever since August of 2018.

Gardner: What helped you make the transition from training to cloud engineer? 

Sullins: I typically jump right into new technology. I don’t know if that comes from the training or if that’s just me as a person. But one of the positives I’ve gotten from the training world is that you learn 100 percent of the feature set available in a given technology. I was able to take what I learned and knew from VMware and then say, “Okay, well, now I am going to get the real-world experience to back that up as well.” So it was a good transition.

Gardner: Let’s look at how other organizations can anticipate the shift to automation. What are some of the challenges that organizations typically face when it comes to being agile with their infrastructure?

Manage resistance to cloud 

Sullins: The challenges that I have seen aren’t usually technical. The technologies people use to automate things are usually ready at hand, and many are free: Ansible, for example, is free. PowerCLI is free. Jenkins is free.

So, people can start doing that tomorrow. But the real challenge is in changing people’s mindset about a more automated approach. I think that it’s tough to overcome. It’s what I call provisioning by council. More traditional on-premises approaches have application owners who want to roll out x number of virtual machines (VMs), with all their particular specs and whatnot. And then a council of people typically looks at that and kind of scratches their chin and says, “Okay, we approve.” But if you need to scale up, that council approach becomes a sort of gate-keeping process.

With a more automated approach, like we have at REI, we use a cloud management platform to automate the processes. We use that to enable self-service VMs instead of having a roll out by council, where some of the VMs can take days or weeks to roll out because you have a lot of human beings touching it along the way. We have a lot of that process pre-approved, so everybody has already said, “Okay, we are okay with the roll out. We are okay with the way it’s done.” And then we can roll that out in 7 to 10 minutes rather than having a ticket-based model where somebody gets to it when they can. Self-service models are able to do that much better.

But that all takes a pretty big shift in psychology. A lot of people are used to being the gatekeeper. It can make them uncomfortable to change. Fortunately for me, a lot of the people at REI are on-board with this sort of approach. But I think that resistance can be something a lot of people run into.

Gardner: You can’t just buy automation in a box off of a shelf. You have to deal with an accumulation of manual processes and habits. Why is moving beyond the manual processes culture so important?

Sullins: I call it a private cloud because that means there is a healthy level of competition between what’s going on in the public cloud and what we do in that data center.

The public cloud team has the capability of “selling” their solution side-by-side with ours. When you have application owners who are technically adept — and pretty much all of them are at REI — they can be tempted to say, “Well, I don’t want to wait a week or two to get a VM. I want to create one right now out on the public cloud.”

There is a healthy level of competition between what’s going on in the public cloud and what we do in the data center. We offer our customers a spectrum of services. And now they can do that in an automated way. That’s a big win.

That’s a big challenge for us. So what we are trying to accomplish — and we have had success so far through the transition — is to offer our customers a spectrum of services. So that’s great.

The stakeholders consuming that now gain flexibility. They can say, “Okay, yeah, I have this application. I want to run it in the public cloud, but I can’t based on the needs for that application. We have to run it on-premises.” And now they can do that in an automated way. That’s a big win, and that’s what people expect now, quite honestly.

Gardner: They want the look and feel of a public cloud but with all the benefits of the private cloud. It’s up to you to provide that. Let’s find out how you did.

How did you overcome the challenges that we talked about and what are the investments that you made in tools, platforms, and an ecosystem of players that accomplished it?

Sullins: As I mentioned previously, a lot of our utilities are “free”: the Ansibles of the world, PowerCLI, and whatnot. We also use Morpheus for self-service, which has implications for automating things on what I call the front end, the customer-facing side. The issue there is that you don’t get control of scaling up before you provision the VM. You have to monitor usage and then scale up on the backend, seamlessly. The end users aren’t supposed to know that you are scaling up. I don’t want them to know. It’s not their job to know. I want to stay out of their way.

In order to do that, we’ve used a combination of technologies. HPE actually has a GitHub link for a lot of Ansible playbooks that plug right in. And the underlying hardware and management ecosystem is HPE OneView with HPE Synergy and Image Streamer. With that combination of technologies we were able to accomplish the 18-minute roll-out of those hosts.

Gardner: Even though you have an integrated platform and solutions approach, it sounds like you have also made the leap from ushering pets through the process to herding cattle. If you understand my metaphor, what has allowed you to stop treating each instance as a pet and start herding this stuff through on an automated basis?

From brittle pets to agile cattle 

Sullins: There is a psychological challenge with that. In the more traditional approach – and the VMware shop listeners are going to be very well aware of this — I may need to have a four-node cluster with a number of CPUs, a certain amount of RAM, and so on. And that four-node cluster is static. Yes, if I need to add a fifth down the line I can do that, but for that four-node cluster, that’s its home, sometimes for the entire lifecycle of that particular host.

With our approach, we treat our ESXi hosts as cattle. The HPE OneView-Synergy-Image Streamer technology allows us to do that in conjunction with those tools we mentioned previously, for the end point in particular.

So rather than have a cluster, and it’s static and it stays that way — it might have a naming convention that indicates what cluster it’s in and where — in reality we have cattle-based DNS names for ESXi hosts. At any time, the understanding throughout the organization, or at least for the people who need to know, is that any host can be pulled from one cluster automatically and placed into another, particularly when it comes to resource usage on that cluster. My dream is that the robots will do this automatically.

So if a cluster goes into the yellow, based on a capacity-usage threshold, the robot would interpret that and say, “Oh, well, I have another cluster over here with a host that is underutilized. I’m going to pull it into the cluster that’s in the yellow and then bring it back into the green again.” This would happen all while we sleep. When we wake up in the morning, we’d say, “Oh, hey, look at that. The robots moved that over.”
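To make that decision loop concrete, here is a minimal sketch in Python of the kind of threshold-based rebalancing Sullins describes. The cluster data, thresholds, and the rebalance function are purely illustrative; in a real environment the moves would be carried out through the vSphere, OneView, and Ansible tooling discussed above, not plain dictionaries.

```python
# Hypothetical cluster records; in practice these would come from the
# virtualization inventory (vSphere, OneView, and so on).
clusters = {
    "digital-retail": {"hosts": 11, "cpu_used": 0.88},
    "internal-apps":  {"hosts": 9,  "cpu_used": 0.42},
    "batch":          {"hosts": 6,  "cpu_used": 0.35},
}

YELLOW = 0.80   # usage that puts a cluster "in the yellow"
DONOR  = 0.50   # clusters below this can give up a host

def rebalance(clusters):
    """Return a list of (donor_cluster, needy_cluster) moves; actually
    performing each move would call out to the automation tooling."""
    moves = []
    donors = sorted((n for n, c in clusters.items() if c["cpu_used"] <= DONOR),
                    key=lambda n: clusters[n]["cpu_used"])
    for name, c in clusters.items():
        if c["cpu_used"] >= YELLOW and donors:
            donor = donors.pop(0)            # least-utilized cluster donates first
            moves.append((donor, name))
            clusters[donor]["hosts"] -= 1
            clusters[name]["hosts"] += 1
    return moves

print(rebalance(clusters))   # e.g. [('batch', 'digital-retail')]
```

The point is only the shape of the logic: find clusters in the yellow, find underutilized donors, and queue host moves that the automation can execute unattended.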

Gardner: Algorithmic operations. It sounds very exciting.

Automation begets more automation 

Sullins: Yes, we have the push-button automation in place for that. It’s the next level of what that engine is that’s going to make those decisions and do all of those things.

Gardner: And that raises another issue. When you take the plunge into IT automation, making your way down the Chisholm Trail with your cattle, all of a sudden it becomes easier along the way. The automation begets more automation. As you learn and grow, does it become more automated along the way?

Sullins: Yes. Just to put an exclamation point on this topic, imagine the situation we opened the podcast with, which is, “Okay, we have to reallocate a bunch of hosts for rei.com.” If it’s fully automated, and we have robots making those decisions, the response is instantaneous. “Oh, hey, we want to scale up by 200 percent on rei.com.” We can say, “Okay, go ahead, roll out your VM. The system will react accordingly. It will add physical hosts as you see fit, and we don’t have to do anything, we have already done the work with the automation.” Right?

But on the automation begetting automation, which is a great way of putting it, by the way, there are always opportunities for more automation. And on a career side note, I want to dispel the myth that you automate your way out of a job. That is a complete and total myth. I’m not saying people never get laid off as a result of automation, but it’s relatively rare, because when you automate something, that automation needs to be maintained as things change over time.

The other piece of that is that organizations are often at various states of automation. Once you get your head above water, to where a process has become trivial because it’s been automated, you can concentrate on automating more things — or new things that need to be automated. Whether that’s the process for rolling out VMs, a new feature base, monitoring, or auto-scaling — whatever it is — you have the capability from day one to further automate those processes.

Gardner: What was it specifically about the HPE OneView and Synergy that allowed you to move past the manual processes, firefighting, and culture of gatekeeping into more herding of cattle and being progressively automated?

Sullins: It was two things. The Image Streamer was number one. To date, we don’t run PXE boot infrastructure; not that we can’t, it’s just not something that we have traditionally done. We needed a more standard process for doing that, and Image Streamer fit and solved that problem.

The second piece is the provided Ansible playbooks that HPE has to kick off the entire process. If you are somewhat versed in how HPE does things through OneView, you have a server profile that you can impose on a blade, and that can be fully automated through Ansible.

Image Streamer allows us to say, “Okay, we build a gold image. We can apply that gold image to any frame in the cluster.” We needed a more standard process, and Image Streamer solved that problem.

And, by the way, you don’t have to use Image Streamer to use Ansible automation. This is really more of an HPE OneView approach, whereby you can actually use it to do automated profiles and whatnot. But the Image Streamer is really what allows us to say, “Okay, we build a gold image. We can apply that gold image to any frame in the cluster.” That’s the first part of it, and the rest is configuring the other side.

Gardner: Bryan, it sounds like the HPE Composable Infrastructure approach works well with others. You are able to have it your way because you like Ansible, and you have a history of certain products and skills in your organization. Does the HPE Composable Infrastructure fit well into an ecosystem? Is it flexible enough to integrate with a variety of different approaches and partners?

Sullins: It has been so far, yes. We have anticipated leveraging HPE for our bare metal Linux infrastructure. One of the additional driving forces and big initiatives right now is Kubernetes. We are going all-in on Kubernetes in our private cloud, as well as in some of our worker nodes. We eventually plan on running those as bare metal. And HPE OneView, along with Image Streamer, is something that we can leverage for that as well. So there is flexibility, absolutely, yes.

Coordinating containers 

Gardner: It’s interesting, you have seen the transition from having VMware and other hypervisor sprawl to finding a way to manage and automate all of that. Do you see the same thing playing out for containers, with the powerful endgame of being able to automate containers, too?

Sullins: Right. We have been utilizing Rancher as the coordination tool for our Kubernetes infrastructure, and we are running that on vSphere.

As far as the containerization approach, REI was doing containers before containers were a big thing. Our containerization platform has been around since at least 2015. So REI has been pretty cutting edge as far as that is concerned.

And now that Kubernetes has won the orchestration wars, as it were, we are looking to standardize that for people who want to do things online, which is to say, going back to the digital transformation journey.

Basically, the industry has caught up with what our super-awesome developers have done with containerization. But we are looking to transition the heavy lifting of maintaining a platform away from the developers. Now that we have a standard approach with Kubernetes, they don’t have to worry so much about it. They can just develop what they need to develop. It will be a big win for us.

Gardner: As you look back at your automation journey, have you developed a philosophy about automation? How should this best work in the future?

Trust as foundation of automation 

Sullins: Right. Have you read Gene Kim’s The Unicorn Project? Well, there is also his The Phoenix Project. My take from those is the whole idea of trust, of trusting other people. And I think that is big.

I see that quite a bit in multiple organizations. For REI, we are going to work as a team and we trust each other. So we have a pretty good culture. But I would imagine that in some places that is still a big challenge.

And if you take a look at The Unicorn Project, a lot of the issues have to do with trusting other human beings. Something happened, somebody made a mistake, and it caused an outage. So they lock it up and lock it away and say only certain people can do that. And if you multiply that happening multiple times, with different individuals locking things down, it leads to not being able to automate processes without somebody approving them, right?

Gardner: I can’t imagine you would have been capable, when you had to transition your private cloud for more online activity, if you didn’t have that trust built into your culture.

Sullins: Yes, and the big challenge that might still come up is the idea of trusting your end users, too. Once you go into the realm of self-service, you come up against the typical what-ifs. What if somebody adds a zero and they meant to roll out only 4 VMs but they roll out 40? That’s possible. How do you create guardrails that are seamless? If you can, then you can trust your users. You decrease the risk and can take that leap of faith that bad things won’t happen.
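As a rough illustration of such a guardrail, the sketch below validates a self-service request before it ever reaches the provisioning pipeline. The limits and the request fields are hypothetical, not REI’s actual policy.

```python
# Hypothetical per-request limits; a real platform would load these from policy.
MAX_VMS_PER_REQUEST = 10
MAX_VCPUS_PER_VM = 16

def validate(request):
    """Return (ok, reason). Run before provisioning, so a typo like 40
    instead of 4 is caught early and never reaches the automation."""
    if request["count"] > MAX_VMS_PER_REQUEST:
        return False, f"{request['count']} VMs exceeds the per-request limit"
    if request["vcpus"] > MAX_VCPUS_PER_VM:
        return False, f"{request['vcpus']} vCPUs exceeds the per-VM limit"
    return True, "approved automatically"

print(validate({"count": 40, "vcpus": 4}))   # (False, '40 VMs exceeds ...')
print(validate({"count": 4,  "vcpus": 4}))   # (True, 'approved automatically')
```

A check like this keeps the self-service path pre-approved while still catching the obvious fat-finger cases Sullins mentions.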

Gardner: Tell us about your wish list for what comes next. What would you like HPE to be doing?

Small steps and teamwork rewards 

Sullins: My approach is to first automate one thing and then work out from there. You don’t have to boil the ocean. Start with something small and work your way up.

As far as next steps, we want auto-scaling at the physical layer, with the robots doing all of that. The robots will scale our resources up and down while we sleep.

We will continue to do application programming interface (API)-capable automation with anything that has a REST API. If we can connect to that and manipulate it, we can do pretty much whatever automation we want. 

We are also containerizing all things: if an application can be containerized properly, we containerize it.

As far as what decision-making engine we have to do the auto-scaling on the physical layer, we haven’t really decided upon what that is. We have some ideas but we are still looking for that.
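On Sullins’ point above about REST-API-capable automation, the pattern is straightforward to picture. The sketch below uses Python’s requests library against a hypothetical automation endpoint; the URL, token, and payload are invented for illustration and stand in for whatever API the target system actually exposes.

```python
import requests

# Hypothetical endpoint and payload; the point is only that anything exposing
# a REST API can be driven the same way from an automation pipeline.
API = "https://automation.example.com/api/v1/clusters/digital-retail/hosts"
TOKEN = "REDACTED"

resp = requests.post(
    API,
    json={"action": "add", "count": 2},
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()   # fail loudly if the endpoint rejects the request
print(resp.json())        # the new state reported by the endpoint
```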

Gardner: How about more predictive analytics using artificial intelligence (AI) with the data that you have emanating from your data center? Maybe AIOps?

Sullins: Well, without a doubt. I, for one, haven’t done any sort of deep dive into that, but I know it’s all the rage right now. I would be open to pretty much anything that will encompass what I just talked about. If that’s HPE InfoSight, then that’s what it is. I don’t have a lot of experience quite honestly with InfoSight as of yet. We do have it installed in a proof of concept (POC) form, although a lot of the priorities for that have been shifted due to COVID-19. We hope to revisit that pretty soon, so absolutely.

Gardner: To close out, you were ahead of the curve on digital transformation. That allowed you to be very agile when it came time to react to the COVID-19 pandemic.  What did that get you? Do you have any results?

Sullins: Yes, as a matter of fact, our boss’s boss, his boss — so three bosses up from me — he actually sits in on our load testing. It was an all-hands-on-deck situation during that May online sale. He said that it was the most seamless one that he had ever seen. There were almost no issues with this one.

We had done what we needed on the infrastructure side to make sure that we met dynamic demands. It was very successful. We went past our goals, so it was a win-win all the way around.

What I attribute that to is, yes, we had done what we needed on the infrastructure side to make sure that we met dynamic demands. Also, everybody worked as a team. Everybody, all the way up the stacks, from our infrastructure contribution, to the hypervisor and hardware layer, all the way on up to the application layer and the containers, and all of our DevOps stuff. It was very successful. We went past our goals of what we had thought for the sale, so it was a win-win all the way around.

Gardner: Even though you were going through this terrible period of adjustment, that’s very impressive.

Sullins: Yes.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


How the right data and AI deliver insights and reassurance on the path to a new normal

The next BriefingsDirect Voice of AI Innovation podcast explores how businesses and IT strategists are planning their path to a new normal throughout the COVID-19 pandemic and recovery.

By leveraging the latest tools and gaining data-driven inferences, architects and analysts are effectively managing the pandemic response — and giving more people better ways to improve their path to the new normal. Artificial intelligence (AI) and data science are proving increasingly impactful and indispensable.

Stay with us as we examine how AI forms the indispensable pandemic response team member for helping businesses reduce risk of failure and innovate with confidence. To learn more about the analytics, solutions, and methods that support advantageous reactivity — amid unprecedented change — we are joined by two experts.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

Please welcome Arti Garg, Head of Advanced AI Solutions and Technologies, at Hewlett Packard Enterprise (HPE), and Glyn Bowden, Chief Technologist for AI and Data, at HPE Pointnext Services. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: We’re in uncharted waters in dealing with the complexities of the novel coronavirus pandemic. Arti, why should we look to data science and AI to help when there’s not much of a historical record to rely on?  

Garg: Because we don’t have a historical record, I think data science and AI are proving to be particularly useful right now in understanding this new disease and how we might potentially better treat it, manage it, and find a vaccine for it. And that’s because at this moment in time, raw data that are being collected from medical offices and through research labs are the foundation of what we know about the pandemic.

This is an interesting time because, when you know a disease, medical studies and medical research are often conducted in a very controlled way. You try to control the environment in which you gather data, but unfortunately, right now, we can’t do that. We don’t have the time to wait.

And so instead, AI — particularly some of the more advanced AI techniques — can be helpful in dealing with unstructured data or data of multiple different formats. It’s therefore becoming very important in the medical research community to use AI to better understand the disease. It’s enabling some unexpected and very fruitful collaborations, from what I’ve seen.

Gardner: Glyn, do you also see AI delivering more, even though we’re in uncharted waters?

Bowden: The benefit of something like machine learning (ML), for example, which is a subset of AI, is that it is very good at handling many, many features. A human being approaching these projects can only keep so many variables in their head at once when building a model to understand something.

But when you apply ML, you are able to cope with millions or billions of features simultaneously — and then simulate models using that information. So it really does add the power of a million scientists to the same problem we were trying to face alone before.
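As a toy illustration of that scale argument, the following Python snippet fits a model over thousands of features, far more than a person could weigh by hand. The data here is random noise and stands in for the curated clinical or epidemiological data a real study would use.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative only: 500 samples with 5,000 features each.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5000))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # synthetic labels

model = LogisticRegression(max_iter=1000).fit(X, y)
print(model.score(X, y))   # training accuracy on the toy data
```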

Gardner: And is this AI benefit something that we can apply in many different avenues? Are we also modeling better planning around operations, or is this more research and development? Is it both?

Data scientists are collaborating directly with medical science researchers and learning how to incorporate subject matter expertise into data science models. 

Garg: There are two ways to answer the question of what’s happening with the use of AI in response to the pandemic. One concerns the practice of data science itself.

Right now, data scientists are collaborating directly with medical science researchers and learning how to incorporate subject matter expertise into data science models. This has been one of the challenges preventing businesses from adopting AI in more complex applications. But now we’re developing some of the best practices that will help us use AI in a lot of domains.

In addition, businesses are considering the use of AI to help them manage their businesses and operations going forward. That includes things such as using computer vision (CV) to ensure that social distancing happens with their workforce, or other types of compliance we might be asked to do in the future.

Gardner: Are the pressures of the current environment allowing AI and data science benefits to impact more people? We’ve been talking about the democratization of AI for some time. Is this happening more now?

More data, opinions, options 

Bowden: Absolutely, and that’s both a positive and a negative. The data around the pandemic has been made available to the general public. Anyone looking at news sites or newspapers and consuming information from public channels — accessing the disease incidence reports from Johns Hopkins University, for example — has a steady stream of it. But those data sources are all over the place and are being thrown at a public that is only just now becoming data-savvy and data-literate.

As they consume this information, add their context, and get a personal point of view, that is then pushed back into the community again — because as you get data-centric you want to share it.

So we have a wide public feed — not only from universities and scholars, but from the general public, who are now acting as public data scientists. I think that’s creating a huge movement. 

Garg: I agree. Making such data available exposes pretty much anyone to these amazing data portals, like Johns Hopkins University has made available. This is great because it allows a lot of people to participate.

It can also be a challenge because, as I mentioned, when you’re dealing with complex problems you need to be able to incorporate subject matter expertise into the models you’re building and in how you interpret the data you are analyzing.

And so, unfortunately, we’ve already seen some cases — blog posts or other types of analysis — that get a lot of attention in social media but are later found to be not taking into account things that people who had spent their careers studying epidemiology, for example, might know and understand.

Gardner: Recently, I’ve seen articles where people now are calling this a misinformation pandemic. Yet businesses and governments need good, hard inference information and data to operate responsibly, to make the best decisions, and to reduce risk.

What obstacles should people overcome to make data science and AI useful and integral in a crisis situation?

Garg: One of the things that’s underappreciated is that a foundation, a data platform, makes data managed and accessible so you can contextualize and make stronger decisions based on it. That’s going to be critical. It’s always critical in leveraging data to make better decisions. And it can mean a larger investment than people might expect, but it really pays off if you want to be a data-driven organization.

Know where data comes from 

Bowden: There are a plethora of obstacles. The one Arti is referring to, which is being made more obvious by the pandemic, is that we don’t focus on the provenance of the data. So, where does the data come from? That doesn’t always get examined, and as we were talking about a second ago, the context might not be there.

All of that can be gleaned from knowing the source of the data. The source of the data tends to come from the metadata that surrounds it. So the metadata is the data that describes the data. It could be about when the data was generated, who generated it, what it was generated for, and who the intended consumer is. All of that could be part of the metadata.

Organizations need to look at these data sources because that’s ultimately how you determine the trustworthiness and value of that data.

We don’t focus on the provenance of the data. Where does the data come from? That doesn’t always get examined, and the context might not be there.

Now it could be that you are taking data from external sources to aggregate with internal sources. And so the data platform piece that Arti was referring to applies to properly bringing those data pieces together. It shouldn’t just be you running data silos and treating them as you always treated them. It’s about aggregation of those data pieces. But you need to be able to trust those sources in order to be able to bring them together in a meaningful way.

So understanding the provenance of the data, understanding where it came from or where it was produced — that’s key to knowing how to bring it together in that data platform.
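One lightweight way to picture the provenance metadata Bowden describes is a simple record of who generated the data, when, for what purpose, and for whom. The Python sketch below is illustrative only; real data platforms track this in catalogs and metadata stores rather than ad hoc objects, and the example record is hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Provenance:
    source: str                 # who or what generated the data
    generated_at: datetime      # when it was generated
    purpose: str                # what it was generated for
    intended_consumer: str      # who was meant to use it
    notes: str = ""             # anything else needed to judge trustworthiness

record = Provenance(
    source="Johns Hopkins University case dashboard",
    generated_at=datetime(2020, 6, 1, tzinfo=timezone.utc),
    purpose="public reporting of confirmed COVID-19 cases",
    intended_consumer="general public",
)
print(record)
```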

Gardner: Along the lines of necessity being the mother of invention, it seems to me that a crisis is also an opportunity to change culture in ways that are difficult otherwise. Are we seeing accelerants given the current environment to the use of AI and data?

AI adoption on the rise 

Garg: I will answer that question from two different perspectives. One is certainly the research community. Many medical researchers, for example, are doing a lot of work that is becoming more prominent in people’s eyes right now.

I can tell you from working with researchers in this community and knowing many of them, that the medical research community has been interested and excited to adopt advanced AI techniques, big data techniques, into their research. 

It’s not that they are doing it for the first time, but definitely I see an acceleration of the desire and necessity to make use of non-traditional techniques for analyzing their data. I think it’s unlikely that they are going to go back to not using those for other types of studies as well.

In addition, you are definitely going to see AI utilized and become part of our new normal in the future, if you will. We are already hearing from customers and vendors about wanting to use things such as CV to monitor social distancing in places like airports where thermal scanning might already be used. We’re also seeing more interest in using that in retail.

So some AI solutions will become a common part of our day-to-day lives.

Gardner: Glyn, a more receptive environment to AI now?

Bowden: I think so, yes. The general public are particularly becoming used to AI playing a huge role. The mystery around it is beginning to fade and it is becoming far more accepted that AI is something that can be trusted.

It does have its limitations. It’s not going to turn into Terminator and take over the world.

The fact that we are seeing AI more in our day-to-day lives means people are beginning to depend on the results of AI, at least in understanding the pandemic, and that drives acceptance.

The general public are particularly becoming used to AI playing a huge role. The mystery around it is beginning to fade and it is becoming far more accepted that AI is something that can be trusted.

When you start looking at how it will enable people to get back to somewhat of a normal existence — to go to the store more often, to be able to start traveling again, and to be able to return to the office — there is that dependency that Arti mentioned around video analytics to ensure social distancing or temperatures of people using thermal detection. All of that will allow people to move on with their lives and so AI will become more accepted.

I think AI softens the blow of what some people might see as an erosion of civil liberties. It frames the trade-off: “This is the benefit, and this is as far as it goes.” So it at least informs those discussions in a way it didn’t before.

Garg: One of the really valuable things happening right now is how major news publications have been publishing amazing, very informative infographics, both in terms of the analysis of the data they provide and very specific things like how restaurants are recovering in areas that have stay-in-place orders.

In addition to providing nice visualizations of the data, some of the major news publications have been very responsible by providing captions and context. It’s very heartening in some cases to look at the comments sections associated with some of these infographics as the general public really starts to grapple with the benefits and limitations of AI, how to contextualize it and use it to make informed decisions while also recognizing that you can go too far and over-interpret the information.

Gardner: Speaking of informed decisions, to what degree are you seeing the C-suite — the top executives in many businesses — look to their dashboards and query datasets in new ways? Are we seeing data-driven innovation at the top of decision-making as well?

Data inspires C-suite innovation 

Bowden: The C-suite is definitely taking a lot of notice of what’s happening in the sense that they are seeing how valuable the aggregation of data is and how it’s forwarding responses to things like this.

So they are beginning to look internally at what data sources are available within their own organizations, and at how to bring those together to get a better view of not only the tactical decisions they have to make but also, using the macro environmental data, the strategic decisions. The value is being demonstrated for them in plain sight.

So rather than having to experiment to see if there is going to be value, there is a full expectation that value will be delivered, and now the experiment is how much they can draw from this data.

Garg: It’s a little early to see how much this is going to change their decision-making, especially because frankly we are in a moment when a lot of the C-suite was already exploring AI and opening up to its possibilities in a way they hadn’t even a year ago.

And so there is an issue of timing here. It’s hard to know which is the cause and which is just a coincidence. But, for sure, to Glyn’s point, they are dealing with more change.

Gardner: For IT organizations, many of them are going to be facing some decisions about where to put their resources. They are going to be facing budget pressures. For IT to rise and provide the foundation needed to enable what we have been talking about in terms of AI in different sectors and in different ways, what should they be thinking about?

How can IT make sure they are accelerating the benefits of data science at a time when they need to be even more choosy about how they spend their dollars?

IT wields the sword to deliver DX 

Bowden: IT particularly has never had so much focus as right now, and budgets are probably responding in a similar way. This is because everyone now has to look at their digital strategy and digital presence — and move as much as they can online to be resilient to pandemics and similar at-risk situations.

So IT has to have the sword, if you like, in that battle. They have to fix the digital strategy. They have to deliver on that digital promise. And there is an immediate expectation of customers that things just will be available online.

With the pandemic, there is now an AI movement that will get driven purely from the fact that so much more commerce and business are going to be digitized. We need to enable that digital strategy. 

If you look at students in universities, for example, they assume that it will be a very quick fix to start joining Zoom calls and to be able to meet that issue right away. Well, actually there is a much bigger infrastructure that has to sit behind those things in order to be able to enable that digital strategy.

So, there is now an AI movement that will get driven purely from the fact that so much more commerce and business is going to be digitized.

Gardner: Let’s look to some more examples and associated metrics. Where do you see AI and data science really shining? Are there some poster children, if you will, of how organizations — either named or unnamed — are putting AI and data science to use in the pandemic to mitigate the crisis or foster a new normal?

Garg: It’s hard to say how the different types of video analytics and CV techniques are going to facilitate reopening in a safe manner. But that’s what I have heard about the most at this time in terms of customers adopting AI.

In general, we are at very early stages of how an organization is going to decide to adopt AI. And so, for sure, the research community is scrambling to take advantage of this, but for organizations it’s going to take time to further adopt AI into any organization. If you do it right, it can be transformational. Yet transformational usually means that a lot of things need to change — not just the solution that you have deployed.

Bowden: There’s a plethora of examples from the medical side, such as how we have been able to do gene analysis, and those sorts of things, to understand the virus very quickly. That’s well-known and well-covered.

The bit that’s less well covered is AI supporting decision-making by governments, councils, and civil bodies. They are taking not only the data on how many people are getting sick and how many people are in hospital, which is very important for understanding where the disease is, but augmenting that with socioeconomic data. That means you can understand, for example, where an aging population might live, or where a poorer population might live because there is less employment in that area.

The impact of what will happen to their jobs, what will happen if they lose transport links, and the impact if they lose access to healthcare — all of that is being better understood by the AI models.

As we focus on not just the health data but also the economic data and social data, we have a much better understanding of how society will react, which has been guiding the principles that the governments have been using to respond.

So when people look at the government and say, “Well, they have come out with one thing and now they are changing their minds,” that’s normally a data-driven decision and people aren’t necessarily seeing it that way.

So AI is playing a massive role in getting society to understand the impact of the virus — not just from a medical perspective, but from everything else and to help the people.

Gardner: Glyn, this might be more apparent to the Pointnext organization, but how is AI benefiting the operational services side? Service and support providers have been put under tremendous additional strain and demand, and enterprises are looking for efficiency and adaptability.

Are they pointing the AI focus at their IT systems? How does the data they use for running their own operations come to their aid? Is there an AIOps part to this story? 

AI needs people, processes 

Bowden: Absolutely, and there has definitely become a drive toward AIOps.

When you look at an operational organization within an IT group today, it’s surprising how much of it is still human-based. It’s a personal eyeball looking at a graph and then determining a trend from that graph. Or it’s the gut feeling a storage administrator has when they know their system is getting full and remember, in the back of their head, that something happened seasonally last year. Organizations are still making decisions that way.

We are therefore seeing systems such as HPE’s InfoSight become more prominent in the way people make those decisions. That allows plugging into an ecosystem whereby you can see the trend of your systems over a long time, where you can use AI modeling as well as advanced analytics to understand the behavior of a system over time, and what the impact of things like everybody suddenly starting to work remotely does to the systems from a data perspective.

So the models-to-be need to catch up in that sense as well. But absolutely, AIOps is desirable. If it’s not there today, it’s certainly something that people are pursuing a lot more aggressively than they were before the pandemic. 

Gardner: As we look to the future, for those organizations that want to be more data-driven and do it quickly, any words of wisdom with 20/20 hindsight? How do you encourage enterprises — and small businesses as well — to better prepare themselves to use AI and data science?

Garg: Whenever I think about an organization adopting AI, it’s not just the AI solution itself but all of the organizational processes — and most importantly the people in an organization and preparing them for the adoption of AI. 

I advise organizations that want to use AI and corporate data-driven decision-making to, first of all, make sure you are solving a really important problem for your organization. Sometimes the goal of adopting AI becomes more important than the goal of solving some kind of problem. So I always encourage any AI initiative to be focused on really high-value efforts. 

Use your AI initiative to do something really valuable to your organization and spend a lot of time thinking about how to make it fit into the way your organization currently works. Make it enhance the day-to-day experience of your employees because, at the end of the day, your people are your most valuable assets. 


Those are important non-technical things that are non-specific to the AI solution itself that organizations should think about if they want the shift to being AI-driven and data-driven to be successful. 

For the AI itself, I suggest using the simplest-possible model, solution, and method of analyzing your data that you can. I cannot tell you the number of times where I have heard an organization come in saying that they want to use a very complex AI technique to solve a problem that if you look at it sideways you realize could be solved with a checklist or a simple spreadsheet. So the other rule of thumb with AI is to keep it as simple as possible. That will prevent you from incurring a lot of overhead. 

Gardner: Glyn, how should organizations prepare to integrate data science and AI into more parts of their overall planning, management, and operations? 

Bowden: You have to have a use case with an outcome in mind. It’s very important that you have a metric to determine whether it’s successful or not, and for the amount of value you add by bringing in AI. Because, as Arti said, a lot of these problems can be solved in multiple ways; AI isn’t the only way and often isn’t the best way. Just because it exists in that domain doesn’t necessarily mean it should be used.

AI isn’t an on/off switch; it’s an iteration. You can start with something small and then build into bigger and bigger components that bring more data to bear on the problem, and then add new features that lead to new functions and outcomes.

The second part is AI isn’t an on/off switch; it’s an iteration. You can start with something small and then build into bigger and bigger components that bring more and more data to bear on the problem, as well as then adding new features that lead to new functions and outcomes.

The other part of it is: AI is part of an ecosystem; it never exists in isolation. You don’t just drop in an AI system on its own and it solves a problem. You have to plug it into other existing systems around the business. It has data sources that feed it so that it can come to some decision.

Unless you think about what happens beyond that — whether it’s visualizing something for a human being who will make a decision or automating a decision — it could really just be like hiring the smartest person you can find and locking them in a room.

Pandemic’s positive impact

Gardner: I would like to close out our discussion with a riff on the adage of, “You can bring a horse to water but you can’t make them drink.” And that means trust in the data outcomes and people who are thirsty for more analytics and who want to use it.

How can we look with reassurance at the pandemic as having a positive impact on AI in that people want more data-driven analytics and will trust it? How do we encourage the perception to use AI? How is this current environment impacting that? 

Garg: So many people are checking the trackers of how the pandemic is spreading, and a lot of major news publications are doing a great job of explaining this. People are learning through the tracking to see how stay-in-place orders affect the spread of the disease in their community. You are seeing that already.

We are seeing growth and trust in how analyzing data can help make better decisions. As I mentioned earlier, this leads to a better understanding of the limitations of data and a willingness to engage with that data output as not just black or white types of things. 

As Glyn mentioned, it’s an iterative process, understanding how to make sense of data and how to build models to interpret the information that’s locked in the data. And I think we are seeing that.

We are seeing a growing desire to not only view this as some kind of black box that sits in some data center — and I don’t even know where it is — that someone is going to program, and it’s going to give me a result that will affect me. For some people that might be a positive thing, but for other people it might be a scary thing.

People are now much more willing to engage with the complexities of data science. I think that’s generally a positive thing for people wanting to incorporate it in their lives more because it becomes familiar and less other, if you will. 

Gardner: Glyn, perceptions of trust as an accelerant to the use of yet more analytics and more AI?

Bowden: The trust comes from the fact that so many different data sources are out there. So many different organizations have made the data available that there is a consistent view of where the data works and where it doesn’t. And that’s built up the capability of people to accept that not all models work the first time, that experimentation does happen, and it is an iterative approach that gets to the end goal. 

I have worked with customers who, when they saw a first experiment fall flat because it didn’t quite hit the accuracy or targets they were looking for, they ended the experiment. Whereas now I think we are seeing in real time on a massive scale that it’s all about iteration. It doesn’t necessarily work the first time. You need to recalibrate, move on, and do refinement. You bring in new data sources to get the extra value.

What we are seeing throughout this pandemic is that the more expertise and data science you throw at a problem, the better the outcome at the end. It’s not about that first result. It’s about the direction of the results, and the upward trend of success.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


Data science helps hospitals improve patient payments and experiences while boosting revenue

The next BriefingsDirect healthcare finance insights discussion explores new ways of analyzing healthcare revenue trends to improve both patient billing and services.

Stay with us as we explore new approaches to healthcare revenue cycle management and outcomes that give patients more options and providers more revenue clarity.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about the next generation of data-driven patient payments process improvements, we’re joined by Jake Intrator, Managing Consultant for Data and Services at Mastercard, and Julie Gerdeman, CEO of HealthPay24. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Julie, what’s driving healthcare providers to seek new and better ways of analyzing data to better manage patient billing? What’s wrong with the status quo?

Gerdeman: Dana, we are in such an interesting time, particularly in the US, with this being an election time. There is such a high level of visibility — really a spotlight on healthcare. There is a lot of change happening, such as in regulations, that highlights interoperability of data and price transparency for patients.

And there’s ongoing change on the insurance reimbursement side, with payer plans that seem to change and evolve every year. There are also trends changing provider compensation, including value-based care and pay-for-performance.


On the consumer-patient side, there is significant pressure in the market. Statistics show that 62 percent of patients say knowing their out-of-pocket costs in advance will impact their likelihood of pursuing care. So the visibility and transparency of costs — that price expectation — is very, very important and is driving consumerism into healthcare like we have never seen before due to rising costs to patients.

Finally, there is more competition. Where I live in Pennsylvania, I can drive a five-mile radius and access a multitude of different health providers in different systems. That level of competition is unlike anything we have seen before.

Healthcare’s sea change

Gardner: Jake, why is healthcare revenue management difficult? Is it different from other industries? Do they lag in their use of technology? Why is the healthcare industry in the spotlight, as Julie pointed out?

Intrator: The word that Julie used that was really meaningful to me was consumerism. There is a shift across healthcare where patients are responsible for a much larger proportion of their bills than they ever used to be.

And so, as things shift away from hospitals working with payers to receive dollars in an efficient, easy process — now the revenue is coming from patients. That means there needs to be new processes and new solutions to make it a more pleasant experience for patients to be able to pay. We need to enable people to pay when they want to pay, in the ways that they want to pay.


That’s something we have keyed on to, as a payments organization. That’s also what led us to work with HealthPay24. 

Gardner: It’s fascinating. If we are going to a consumer-type model for healthcare, why not take advantage of what consumers have been doing with their other financing, such as getting reports every month on their bills? It seems like there is a great lesson to be learned from what we all do with our credit cards. Julie, is that what’s going to happen?

Consumer in driver’s seat

Gerdeman: Yes, definitely. It’s interesting that healthcare has been sitting in a time warp. Many manual processes and functions remain in the health revenue cycle. That’s attributable to a piecemeal approach — different segments of the revenue cycle were tackled at different times, or were shaped by acquisitions. I read recently that there are still eight billion faxes happening in healthcare.

So that consumer-level experience, as Jake indicated, is where it’s going — and where we need to go even faster.

Technology provides the transparency and interoperability of data. Investment in IT is happening, but it needs to happen even more.

Gardner: Wherever there is waste, inefficiency, and a lack of clarity, there is an opportunity to fix that for all involved. But what are the stakes? How much waste or mismanagement are we talking about?

Intrator: The one statistic that sticks out to me is that care providers aren’t collecting as much as 80 percent of balances from older bills. So that’s a pretty substantial amount — and a large opportunity. Julie, do you have more? 

Gerdeman: I actually have a statistic that’s staggering. There is $265 billion of waste attributed to administrative complexity, and another $230 billion to $240 billion attributed to what’s termed pricing failure, which means price increases that aren’t in line with the current market. The stakes are very high and the opportunity is very large.

We have data that shows more than 50 percent of chief financial officers (CFOs) want better access to data and better dashboards to understand the scope of the problem. As we were talking about consumerism, Mastercard is just phenomenal in understanding consumer behavior. Think about the personalized experiences that organizations like Mastercard provide — or Google, Amazon, Disney, and Netflix. Everything is becoming so personalized in our consumer lives.

But healthcare? We are not there yet. It’s not a personalized experience where providers know in advance what a consumer or patient wants. HealthPay24 and Mastercard are coming together to get us much closer to that. But, truly, it’s a big opportunity.

Intrator: I agree. Payers and providers haven’t figured out how they enable personalized experiences. It’s something that patients are starting to expect from the way they interact with companies like Netflix, Disney, and Mastercard. It’s becoming table-stakes. It’s really exciting that we are partnering to figure out how to bring that to healthcare payers and providers alike.

Gardner: Julie, you mentioned that patients want upfront information about what their procedures are going to cost. They want to know their obligation before they go through a medical event. But oftentimes the providers don’t know in advance what those costs are going to be.

So we have ambiguity. And one of the things that’s always worked great for ambiguity in other industries is to look at the data, extrapolate, and get analytics involved. So, how are data-driven analytics coming to the rescue? How will that help?

Data to the rescue 

Gerdeman: Historical data allows for a forward-looking view. HealthPay24, for example, has been involved in patient payments for 20 years. That makes us a pioneer in the space. It gives us 20 years of data, information, and trends that we can look at. To me, data is absolutely critical.

Having come out of the spend management technology industry, I know that in the categories of direct and indirect materials there have long been well-defined goods and services that are priced and purchased accordingly.

But the ambiguity of patient healthcare payments and patient responsibility presents a new challenge. What artificial intelligence (AI) and algorithms provide is the capability to help anticipate and predict. That offers something much more applicable to a patient at a consumer level.

Gardner: Jake, when you have the data you can use it. Are we still at the point of putting the data together? Or are we now already able to deliver those AI- and machine learning (ML)-driven outcomes?

Intrator: Hospitals still don’t feel like they are making the best use of data. They tie that both to not having access to the data and not yet having the talent, resources, and tools to leverage it effectively. This is top of mind for many people in healthcare.

In seeking to help them, there are two places where I divide the use of analytics. The first is ahead of time. By using patient estimator tools, can you understand what somebody might owe? That’s a really tricky question. We are grappling with it at Mastercard.


By working with HealthPay24, we have developed a solution that is ready and working today on the other half of the process. For example, somebody comes to the hospital. They know that they have some amount of patient payment responsibility. What’s the right way for a hospital to interact with that person? What are the payment options that should be available to them? Are they paying upfront? Are they paying over a period of time? What channels are you using to communicate? What options are you giving to them? Answering those questions gets a lot smarter when you incorporate data and analytics. And that’s exactly what we are doing today.
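
To make that front-end estimation concrete, here is a minimal sketch, in Python, of the arithmetic a patient-estimator tool can perform. It is not the HealthPay24 or Mastercard implementation; the function name and benefit parameters are illustrative assumptions, and real estimates also depend on payer contracts and claim coding.

# Illustrative sketch only; not the HealthPay24/Mastercard estimator.
# Benefit parameters below are hypothetical examples.

def estimate_patient_responsibility(allowed_amount, deductible_remaining,
                                     coinsurance_rate, oop_remaining):
    """Estimate the patient's share of a service with a given allowed amount."""
    # The patient pays toward the deductible first.
    deductible_part = min(allowed_amount, deductible_remaining)
    # Coinsurance applies to whatever the deductible did not cover.
    coinsurance_part = (allowed_amount - deductible_part) * coinsurance_rate
    # The annual out-of-pocket maximum caps the total patient share.
    return round(min(deductible_part + coinsurance_part, oop_remaining), 2)

# Example: a $3,000 procedure, $500 of deductible left, 20 percent coinsurance,
# and $4,000 of room under the out-of-pocket maximum.
print(estimate_patient_responsibility(3000, 500, 0.20, 4000))  # -> 1000.0

In practice the hard part is not this arithmetic but knowing the allowed amount and the patient’s benefit position in advance, which is exactly the data problem the conversation keeps returning to.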

Gardner: Well, we have been dancing around and alluding to the joint-solution. Let’s learn more about what’s going on between HealthPay24 and Mastercard. Tell us about your approach. Are we in a proof of concept (POC) or is this generally available?

Win-win for patients and providers 

Gerdeman: We are currently in a POC phase, working with initial customers on the predictive analytic capability that marries the Mastercard Test and Learn platform with HealthPay24’s platform and executing what’s recommended through the analytics in our platform.

Jake, go ahead and give an overview of Test and Learn, and then we can talk about how we have come together to do some great work for our customers.

Intrator: Sure. Test and Learn is a platform that Mastercard uses with a large number of partner clients to measure the impact of business decisions. We approach that through in-market experiments. You can do it in a retail context where you are changing prices or you can do it in the healthcare context where you are trying different initiatives to focus on patient payments. 

That’s how we brought it to bear within the HealthPay24 context. We are working together along with their provider partners to understand the tactics that they are using to drive payments. What’s working, what’s working for the right patient, and what’s working at the right time for the right patients? 
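
As a rough illustration of the test-versus-control measurement Jake describes, the following Python sketch compares collection rates between a test group that received an initiative and a control group that did not. The data, column names, and the simple comparison are invented for illustration; the actual Test and Learn platform controls for far more factors than this.

# Illustrative test-vs-control comparison; not the Test and Learn platform.
# The data and column names are made up for this example.
import pandas as pd

payments = pd.DataFrame({
    "group":   ["test", "test", "test", "test",
                "control", "control", "control", "control"],
    "balance": [800, 1200, 400, 950, 700, 1100, 500, 900],
    "paid":    [640, 1200, 400, 760, 280, 550, 250, 450],
})

# Share of billed balances actually collected, by group.
sums = payments.groupby("group")[["paid", "balance"]].sum()
collection_rate = sums["paid"] / sums["balance"]

lift = collection_rate["test"] - collection_rate["control"]
print(collection_rate)
print(f"Observed lift from the initiative: {lift:.1%}")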

Gerdeman: It’s important for the audience to understand that the end-goal is revenue collection and the big opportunity providers have to collect more. The marriage of Test and Learn with HealthPay24 provides the intelligence to allow providers to collect more, but it also offers more options to patients based on that intelligence and creates a better patient experience in the end.


If a particular patient will always take a payment plan and make those payments consistently (versus being presented with a big amount and not paying it off), the intelligence through the platform will say, “This patient should be offered a payment plan consistently,” and the provider ends up collecting all of the revenue.

That’s what we are super-excited about. The POC is showing greater revenue collection by offering flexibility in the options that patients truly want and need.

Gardner: Let’s unpack this a little bit. So we have HealthPay24 as chocolate and Mastercard’s Test and Learn platform as peanut butter, and we are putting them together to make a whole greater than the sum of the parts. What’s the chocolate? What’s the peanut butter? And what’s the greater whole?

Like peanut butter and chocolate 

Intrator: One of the things that’s made working with HealthPay24 so exciting for us is that they sit in the center of all of the data and the payment flows. They have the capability to directly guide the patient to the best possible experience.

They are hands-on with the patients. They can implement all of these great learnings through our analytics. We can’t do that on our own. We can do the analytics, but we are not the infrastructure that enables what’s happening in the real world.

That’s HealthPay24. They are in the real world. When you have the data flowing back and forth, we can help measure what’s working and come up with new ideas and hypotheses about how to try different payment programs. 

It’s been a really important chocolate and peanut butter combination where you have HealthPay24 interacting with patients and us providing the analytics in the background to inform how that’s happening.

Gerdeman: Jake said it really well. It is a beautiful combination, because years ago the hot thing was propensity to pay. And, yes, providers still talk about that. It was best practice many years ago to pull a soft or even hard credit check on a patient to determine their propensity to pay and potentially offer financial assistance, even charity, given the needs of the patient.

But this takes it to a whole other level. That’s why the combination is magical. What makes it so different is there doesn’t need to be that old way of thinking. It’s truly proactive through the data we have in working with providers and the unique capabilities of Mastercard Test and Learn. We bring those together and offer proactively the right option for that specific patient-consumer.

It’s super exciting because payment plans are just one example. The platform is phenomenal and the capabilities are broad. The next financial application is discounts.

Through HealthPay24, providers could configure discounts based on their own policies and thresholds. But, if you know that a particular patient will pay the amount when offered the discount through the platform, that discount should be offered every time. The intelligence gives us the capability to know that, to offer it, and for the provider to collect that discounted amount, which is better than the full amount going to bad debt and never being collected.

Intrator: If you are able to drive behavior with those discounts, is it 10 percent or 20 percent? If you give away an additional 10 percent, how does that change the number of people reacting to it? If you give away more, you had better hope that you are getting more people to pay more quickly.

Those are exactly the sorts of analytical questions we can answer with Test and Learn and with HealthPay24 leading the charge on implementing those solutions. I am really excited to see how this continues to solve more problems going forward.
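
The discount question Jake raises reduces to expected-value arithmetic. The short Python sketch below compares expected collections at different discount levels. The payment rates are invented assumptions; in practice Test and Learn would estimate them from in-market experiments rather than guesswork.

# Illustrative breakeven check for prompt-pay discounts.
# The payment rates below are hypothetical, not measured results.

def expected_collection(balance, discount, pay_rate):
    """Expected cash collected when a given discount is offered."""
    return balance * (1 - discount) * pay_rate

balance = 1000  # open patient balance, in dollars

scenarios = {
    "no discount":  (0.00, 0.40),  # assume 40 percent pay the full amount
    "10% discount": (0.10, 0.55),  # assumed uplift in payment rate
    "20% discount": (0.20, 0.62),
}

for label, (discount, pay_rate) in scenarios.items():
    value = expected_collection(balance, discount, pay_rate)
    print(f"{label:>12}: expected ${value:,.0f} per account")

# A deeper discount only wins if enough additional patients pay, and pay
# sooner, to offset the revenue given away.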

Gardner: It’s interesting because in the state of healthcare now, more and more people, at least in the United States, have larger bills regardless of their coverage. There are more co-pays, more often there are large deductibles, with different deductibles for each member of a family, for example, and varying deductibles depending on the type of procedures. So, it seems like many more people will be facing more out-of-pocket items when it comes to healthcare. This impacts literally tens of millions of people. 

So we have created this new chocolate confection, which is wonderful, but the proof is in the eating. When are patient-consumers going to get more options, not only for discounts, but perhaps for financing? If you would like to spread the payments out, does it work in both ways, both discounts as well as in payment plans with interest over time? 

Flexibility plus privacy

Gerdeman: In HealthPay24, we currently have all of the above — depending on what the provider wants to offer, their patient base, and the needs and demographics. Yes, they can offer payment plans, discounts, and lines of credit. That’s already embedded in the platform. It creates an opportunity for all the different options and the flexibility we talked about. 

Earlier I mentioned personalization, and this gets us much closer to personalization of the financial experience in healthcare. There is so much happening on the clinical side, with great advances around clinical care and how to personalize it. This combination gets us to the personalization of offers and options for patients and payments like we have never seen in the past.

Gardner: Jake, for those listening and reading who may be starting to feel a little concerned that all this information — not just about their healthcare, but now about their finances — is being bandied about among payers, providers, and insurers: Are we going to protect that financial information? How should people feel about this in terms of privacy or comfort level?


Intrator: That is a question and a problem near and dear to Mastercard. We aspire and really do put a lot of work and effort into being a leader in data privacy and allowing people to have ownership of their data and to feel comfortable. I think that’s something that we deeply believe in. It’s been a focus throughout our conversations with HealthPay24 to make sure that we are doing it right on both sides.

Gardner: Now that you have this POC in progress, what have been some of the outcomes? It seems to me that over time, the more data you deal with, the more benefits, and then the more people adopt it, and so on. Where are we now, and do we have some insight into how powerful this is?

Gerdeman: We do. In fact, one example is a 400-bed hospital in the Northeast US that, through the combination of Mastercard Test and Learn and HealthPay24, was able to look at and identify 25,000 unpaid accounts. Just by targeting 5,000 of the 25,000, they were able to identify an incremental $1 million in collections for the hospital.

That is very significant in that they are only targeting the top 5,000, in a conservative approach. They now know that, through this intelligence and by offering the right plans to the right people, they have the capability to collect $1 million more for their bottom line.

Intrator: That certainly captures the big picture and the big story. I can also zoom in on a couple of specific numbers that we saw in the POC. As we tackled that, we wanted to understand a couple of different metrics, such as increases in payments. We saw substantial increases from payment plans. As a result, people are paying more than 60 percent more on their bills compared to similar patients that haven’t received the payment plans. 

Then we zoomed in a step farther. We wanted to understand the specific types of patients who benefited more from receiving a payment plan and how that could potentially guide us going forward. We were able to dig in and build a predictive model, and that’s exactly what Julie was talking about: for those top 25,000 accounts, how much we think they are going to pay and their relative prioritization. Hospitals have limited resources. So how do you make sure that you are focusing most appropriately?
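
The prioritization step Jake outlines can be pictured as scoring each open account by expected recovery and working the list from the top. The Python sketch below uses synthetic data and a deliberately simple model; it is not the predictive model built in the POC, and the features are illustrative assumptions.

# Illustrative account prioritization; synthetic data, not the POC model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Historical accounts: balance, days overdue, prior on-time payment rate.
n_hist = 5000
X_hist = np.column_stack([
    rng.uniform(50, 5000, n_hist),    # balance in dollars
    rng.integers(0, 365, n_hist),     # days overdue
    rng.uniform(0, 1, n_hist),        # prior payment rate
])
# Synthetic label: smaller, fresher bills from reliable payers get paid.
logit = 1.5 * X_hist[:, 2] - 0.004 * X_hist[:, 1] - 0.0002 * X_hist[:, 0]
paid = rng.random(n_hist) < 1 / (1 + np.exp(-logit))

model = LogisticRegression(max_iter=2000).fit(X_hist, paid)

# Score 25,000 open accounts and rank by payment probability times balance.
n_open = 25000
X_open = np.column_stack([
    rng.uniform(50, 5000, n_open),
    rng.integers(0, 365, n_open),
    rng.uniform(0, 1, n_open),
])
expected_recovery = model.predict_proba(X_open)[:, 1] * X_open[:, 0]
top_5000 = np.argsort(expected_recovery)[::-1][:5000]
print(f"Projected recovery from the top 5,000 accounts: "
      f"${expected_recovery[top_5000].sum():,.0f}")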

Gardner: Now that we have gotten through this trial period, does this scale? Is this something you can apply to almost any provider organization? If I am a provider organization, how might I start to take advantage of this? How does this go to market?

Personalized patient experiences 

Gerdeman: It absolutely does scale. It applies across all providers; actually, it applies across many industries as well. For any provider who wants to collect more and wants additional intelligence around patient behavior, patient payments, and collection behavior — it really is a terrific solution. And it scales as we integrate the technologies. I am a huge believer in best-of-breed ecosystems. This technology integrates into the HealthPay24 solution. The recommendations are intelligent and already in the platform for providers.

Gardner: And how about that grassroots demand? Should people start going into their clinics and emergency departments and say, “Hey, I want the plan that I heard about. I want to have financing. I want you to give me all my options.” Should people be advocating for that level of consumerism now when they go into a healthcare environment?

Gerdeman: You know, Dana, they already are. We are at a tipping point in the disruption of healthcare. This kind of grassroots demand of consumerism and a consumer personalized experience — it’s only a matter of time. You mentioned data privacy earlier. There is a very interesting debate happening in healthcare around the balance between sharing data, which is so important for care, billing, and payment, with the protection of privacy. We take all of that very seriously. 

Nonetheless, I feel the demand from providers as well as patients will only get greater.

Gardner: Before we close out let’s extrapolate on the data we have. How will things be different in two or three years from now when more organizations embrace these processes and platforms?

Intrator: The industry is going to be a lot smarter in a couple of years. The more we learn from these analytics, the more we incorporate it into the decisions that are happening every day, then it’s going to feel like it fits you as a patient better. It’s going to improve the patient experience substantially.


Personally, I am really excited to see where it goes. There are going to be new solutions that we haven’t heard about yet. I am closely following everything that goes on.

Gerdeman: This is heading toward an experience for patients where, from the moment they seek and research care, they are known. They are presented with a curated, personalized experience from the clinical aspect of their encounter all the way through the billing and payment. They will be presented with recommendations based on who they are, what they need, and what their expectations are.

That’s the excitement around AI and ML and how it’s going to be leveraged in the future. I am with Jake. It’s going to look very different in healthcare experiences for consumers over the next few years.

Gardner: And for those interested in learning more about this pilot program, about the Mastercard Test and Learn platform and HealthPay24’s platform, where might they go? Are there any press releases, white papers? What sort of information is available?

Gerdeman: We have a great case study from the POC that we are currently running. We are happy to work with anyone who is interested, just contact us via our website at HealthPay24 or through Mastercard.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HealthPay24.


How IT modern operational services enables self-managing, self-healing, and self-optimizing

General digital business transformation and managing the new normal around the COVID-19 pandemic have hugely impacted how businesses and IT operate. Faced with mounting complexity, rapid change, and shrinking budgets, IT operational services must be smarter and more efficient than ever.

The next BriefingsDirect Voice of Innovation podcast examines how Hewlett Packard Enterprise (HPE) Pointnext Services is reinventing the experience of IT support to increasingly rely on automation and analytics to help enable continued customer success as IT enters a new era. 

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to share the HPE Pointnext Services vision for the future of IT operational services are Gerry Nolan, Director of Portfolio Product Management, Operational Service Portfolio, at HPE Pointnext Services, and Ronaldo Pinto, Director of Portfolio Product Management, Operational Service Portfolio, at HPE Pointnext Services. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Gerry, is it fair to say that IT has never played a more integral part in nearly all businesses, and that therefore the intelligent support of IT has never been more critical?

Nolan: We’ve never seen a time like this, Dana. Pretty much every aspect of our life has now moved to digital. It was already moving that way. Everyone is spending more hours per day in various collaboration platforms, going through various digital interactions, and we’re seeing that in our business as well.

That applies to whether you are ordering a pizza, booking time at your gym, getting your morning coffee — pretty much your life has changed forever. We see that dramatically impacting the IT space and the customers we deal with.


So, yes, it’s a unique time, we have never seen it before, and we believe things will never be the same again.

Gardner: So, we are reliant on technology for commerce, healthcare, finance, and many of the scientific activities to combat the pandemic, not to mention more remote education and more remote work — basically every facet of our modern life.

Consequently, how enterprise IT uses services and support has entered a new phase, a new era. Please explain why a digital environment requires more tools and opportunities for the people delivering the new operational services.

Nolan: The IT landscape is very dynamic. There is an expanding array of technology choices, which brings more complexity. Of course, the move to cloud and edge computing introduces new requirements from an IT operations point of view.


For example, a retail customer that I just met with — they don’t even have a four-walls data center anymore, most of their IT is distributed throughout their retail stores — and another customer, a large telco, is installing edge-related servers on their electricity pylons on the sides of mountains in very remote areas. These types of use-cases need very different operational processes, approaches, and skills. 

Then we got hit with COVID-19, and that brings a whole new set of challenges, with locking down of IT environments, huge increases in remote workforces, all creating problems with network capacity, performance, and security challenges.

As a result, we are seeing customers needing more help than ever while they try and maintain their businesses. At the same time, they need to plan and evolve for the medium- to long-term. They need solutions both for today — to help in this unique lockdown mode — but also to accelerate transformation efforts to move to a digitally enabled customer experience.

Gardner: Ronaldo, this obviously requires more than a traditional helpdesk and telephone support. Where does the operational experience, of even changing the culture around support, kick in? How do we get to a new experience?

Pinto: Dana, many people associate traditional support with telephone support, but today it needs to be much more. As Gerry described, we are moving toward a very distributed, remote, low-touch to no-touch world, and the COVID-19 pandemic has just accelerated that.

To operate in such an environment, companies depend on an increasing number of tools and technologies. You have more variables today, just to control and maintain your performance. So it’s extremely important to arm the people that provide technical support with the latest artificial intelligence (AI) tools and digital infrastructure so they continue to be effective in the work they do.


Gardner: Gerry, how has the pandemic and emphasis on remote services accelerated people’s willingness to delve into the newer technologies around automation, AI, predictive analytics and AIOps? Are people more open now to all of that?

Nolan: No question, Dana. Consider any great customer experience that you have today — from dealing with your mobile phone provider to, in my case recently, my utility company. The great experiences offer a variety of ways to access the information and the help you may need on a 24-7 basis. Typically, this has involved a whole range of different elements — from a portal or an app, to some central hub — for how you engage. That can include getting a more personalized dashboard of information and help. Those experiences also often have different engagement options, including access to live people who can answer questions and provide guidance to solve issues. That central hub also provides a wealth of helpful, useful information and can be AI-enabled to provide predictive alerts via dashboards.

There are companies that still provide only a single channel, such as, for example, the utility company I had to call yesterday, which kept me on hold for 45 minutes until I hung up. I tried the website, and they had multiple websites. I sent an e-mail; I am still waiting for a response!

The great customer experiences have multiple elements and dimensions to them. They have great people you can talk to. You have multiple ways of getting to those people. They have a great app or website with all sorts of information and help available, personalized to your needs.

That’s the way of the future. Those companies that are successful and have already started on that path are seeing great success. Those that have not are struggling — especially in this climate. Now, not only is there more need to go digital, the pressure on revenue limits the investment dollars available to move in that direction if you haven’t already done so.

So, yes, there’s a multitude of different challenges here we are dealing with.

Gardner: It’s amazing nowadays when you deal, as a customer, with companies, how you can recognize almost instantly the ones that have invested in digital business transformation and are able to do a lot of different things under duress — and those who didn’t. It’s rather stark.

Ronaldo, dealing with these complexities isn’t just a technology issue. Oftentimes it includes a multi-vendor aspect, a large ecosystem of suppliers. Pointing fingers isn’t going to help if you’re in a time-constrained situation, a crisis situation.

How do the new operational experiences include the capability to bring in many vendors and even so provide a seamless experience back to that customer?

Seamless collaborations succeed 

Pinto: HPE has historically collaborated. If you look at our customers today, they have best-of-breed environments, and there are many emerging tools that make those environments more efficient. We also work with several startups.

So, it’s extremely important for us to serve our customers by being able to collaborate seamlessly with all of those companies. We have done that in the past and we are expanding the operational capabilities, including tools we have today, to better understand performance, integration between our products, and with third-party products. We can streamline all of that collaboration.

Gardner: And, of course, the complexity extends across hybrid environments, from edge to cloud — multi-cloud, private cloud, hybrid cloud. Is that multi-vendor and multi-architecture mix something that you’re encountering a lot?

Nolan: Today, every customer has a multi-vendor IT landscape. There are various phases of maturity in terms of dealing with legacy environments. But they are dealing with new IT on-premises technologies, they are trying to deploy cloud, or they may be moving to public cloud. There’s a plethora of use cases we see globally with our customers.


And the classic issue, as you point out, is when there’s a problem, the finger-pointing or the blame-game starts. Even triaging and isolating problems in these types of environments can be a challenge, let alone having the expertise to fix the issue. Whether it’s in the hardware, software layer, or on somebody else’s platform, it’s difficult. Most vendors, of course, have different service level agreements (SLAs), different role names, different processes, and different contractual and pricing structures.

So, the whole engagement model, even the vocabulary they use, can be quite different; ourselves included, by the way. So, the more vendors you have to work with, the more dimensions you have to manage.

And then, of course, COVID-19 hits and our customers working with multiple vendors have to rely on how all those vendors are reacting to the current climate. And they’re not all reacting in a consistent fashion. The more vendors you have, the more work and time it’s going to take — and the more cost involved.

We call it the power of one. Our customers see huge value in working with a partner who provides a single point of contact, that single throat to choke or hand to shake, and a single focal point for dealing with issues. You can have a single contract, a single invoice, and a single team to work with. It saves a lot of time and it saves a lot of money.

Organizations already in that position are seeing significant benefits. Our multi-vendor business is growing very, very well. And we see that moving into the future as companies try to focus on their core business, whatever that might be, and let IT take care of itself.

Edge to cloud to data center 

Pinto: To your question, Dana, on hybrid environments, it’s not only hybrid, it’s edge to cloud and to the data center. I can give you two examples.

We have a large department store customer with the technology in each of the many stores. We support not only the edge environments in those stores but all the way through to their data center. There are also hybrid environments for data management where you typically have primary storage, secondary storage, and your archiving strategy. All of that is managed by a multitude of backup and data-movement software.

The customer should not have to worry about it component by component, but should see a single, end-to-end solution. We help customers abstract that by supporting the end-to-end data environment and collaborating with the third-party software vendors or platform vendors that will inevitably be a part of the solution.

Gardner: Gerry, earlier you mentioned your own experience with a utility company. You were expecting a multi-channel opportunity to engage with them. How does the IT operational services as an experience become inclusive of such things? Why does that need to be all-inclusive across the solutions and support approaches?

Have it your way

Nolan: An alternative example that I can give is my bank. I have a couple of different banks that I work with, but one in particular invested early in a digital platform. They didn’t replace their brick and mortar models. They still have lots of branches, lots of high-tech ATMs that allow for all types of self-serve.

But they also have a really cool app and website, which they’ve had for a number of years. They didn’t introduce digital as a way of closing down their branches, they keep all of those options available because different people like to integrate and work with their service providers in different ways, and we see that in IT, too.

The key elements to delivering a successful experience in the IT space, an AI-enabled experience, include having lots of expertise and knowledge available across the IT environment, not just on a single set of products.

Of course, a digital platform provides that personalized view. It includes things like dashboards of what’s in my environment, ongoing alerts and predictions — maybe capacity is running out or maybe costs are beyond what was forecast. Or maybe I have ways of optimizing my costs, some insights around updates to my software, licenses or some systems might be reaching the end of their support life. There is all sorts of information that should be available to me in a personalized way.

And then in terms of accessing experts, the old model is to get on the phone, like I was yesterday for 45 minutes talking to somebody, and in my case, I wasn’t successful. But customers, in some cases, they like to deal with the experts through a chat window or maybe live on the phone. Others like to watch expert technical tips and technique videos. So, we have developed an extensive video library of experts wherein you can pick and choose and listen to some tips and techniques about how to deal with certain key topics we see that customers are interested in.

Then there are moderated forums: Customers actually like sharing their experiences with each other. Our experts get involved, you mix and match with partners and end-customers, and you get a very rich dialogue around particular topics, best practices, ideas, or problems that somebody else has solved.


AI is at the heart of all of this because it’s constantly learning. It’s like a self-propelling mechanism that just gets better over time. The more people come on board, the more knowledge it gains, the more questions they ask, the more answers are provided.

The whole thing just gets better and better over time. It’s key, of course, to have that wide portfolio of help for customers. If they have a strategy, make it work better; if they don’t have a strategy and need help building one, we can help them do that all the way through to designing and implementing those solutions.

And then they can get the ongoing support, which is where Ronaldo and I spend most of our life. But it’s important as a vendor or as a partner to be able to offer customers help across the value chain or across the lifecycle, depending on where they need that help.

Gardner: Ronaldo, let’s dig more deeply into the specifics of the new HPE Pointnext Services’ operational services’ approach, modernizing operations for the future of IT. What does it include?

Meet customers’ modernization terms 

Pinto: We are doing all of this modernization with the customer in mind. What is really important for us is not only accomplishing something, but how you accomplish it. At the end of any interaction the customer needs to feel that their time was used effectively. HPE shows a legitimate concern for the customer’s success and for leaving them feeling positive at the end of the interaction.

Gerry mentioned the AI tools and alerts. We are integrating all of the sensor telemetry we get from products in the field, all the way up to our operational processes in the back end, so that customers can accomplish whatever they need with us on their own terms.

For example, if there’s an alert or a performance degradation in a product, we provide tools to dig deeper and understand why. “Hey, maybe it’s a component in the infrastructure that needs to be updated or replaced?” We are integrating all of that into our back-end operational processes so that we can even detect issues before the customer does. Then we just notify the customer that an action needs to be performed and, if needed, we dispatch the part replacement.

If the customer needs someone at the site to do the replacement, no problem. The customer can schedule that visit easily in a web interface and we will show up in the window that the customer chooses.

It’s offering the customer, as Gerry mentioned, multiple channels and multiple ways to interact. For customers, it means they may prefer a remote automated web interface or the personal touch of a support engineer, but it should be on the customers’ own terms. 

Gardner: I have seen in the release information you provide to analysts like myself the concept of a digital customer platform. What do you mean by a digital customer platform when it comes to operational services?

A focused digital platform 

Nolan: It’s all of the things that Ronaldo just mentioned coming together in a single place. Going back to my bank example, with a credit card you typically have a single place that you go from a digital point of view. It’s either an app and/or a website, and that provides you all of this personalized information that’s honed to your specific needs and your specific use case.

For us, from a digital point of view and for the customer platform, we want to provide a single place regardless of your use case. So, whether you are a warranty-level customer or a consumption customer buying your IT on a pay-as-you-go basis, all of the help you need, all of the information, dashboards, and all of the ways of engaging with us as a partner are available through a single portal. That’s what we mean when we say the digital platform: that central place that brings it all to life for you as a customer.

Gardner: Why is the consumption-based approach important? How has that changed the game?

Pinto: It’s the same idea, to provide customers the option to consume IT and to use IT on their own terms. HPE pioneered the hybrid IT consumption model. Behind that is Pointnext, through all the services we provide — whether the customer chooses to consume on an as-a-service basis, consuming an outcome, or in the traditional way, where the customer takes ownership of the underlying infrastructure. We automate those more transactional, repeatable tasks and help the customer focus on innovation and meeting their business objectives through IT. So that is going to be consistent across all the consumption models.

Nolan: What’s important to recognize here is that, as a customer, you want choice, and choice is good. If the only option you have is, for example, a public cloud solution, then guess what? For every problem you as a customer have, that public cloud provider has one toolbox: a public cloud solution.

I have just been speaking with a large insurance company and they are moving toward a cloud-first strategy, which their board decided they needed. So, everything in their mind needs to move to the cloud. And it’s interesting because they decided the way they are going to partner to get that done is directly with a public cloud vendor. And guess what? Every problem, every workload in that organization is now directed toward moving to public cloud, even where that may not be the best outcome for the customer. To Ronaldo’s point, you want to be assessing all of your workloads and deciding where is the best placement of that workload.

You might want to do that work inside your firewall and on your network because certain work will get done better, more cost-effectively, and for all sorts of security, network latency, and data regulatory reasons. Having multiple different choices — on-premises, you can do CAPEX, you can do as-a-service — is important. Your partner should be able to offer all those choices. We at HPE, as Ronaldo said, pioneered the IT as-a-service model. We already have that in place.

Our HPE GreenLake offering allows you to buy and consume all of your IT on a pay-as-you-go basis. We just send you a monthly bill for whatever IT you have used. Everything is included in that bill — your hardware, software, and all of the services, including support. You don’t really need to worry about it.

You care instead about the outcomes. You just want the IT to take care of itself, and you get your bill. Then you can easily align that cost with the revenue coming in. As the revenue goes up, you are using more IT, you pay more; revenue goes down, you are using less IT, you pay less. Fairly simple, but very powerful and very popular.
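
A consumption bill of the kind Gerry describes is, at its simplest, a metering calculation. The Python sketch below is a generic illustration with made-up rates and a hypothetical minimum-commitment rule; it is not HPE GreenLake’s actual pricing model.

# Generic pay-as-you-go billing sketch; not HPE GreenLake's actual pricing.
# The rate and minimum-commitment rule are illustrative assumptions.

def monthly_bill(units_used, unit_rate, committed_units=0):
    """Bill metered usage, with an optional minimum monthly commitment."""
    billable_units = max(units_used, committed_units)
    return billable_units * unit_rate

# Example: storage billed per TiB-month, with a 100 TiB commitment.
usage_by_month = {"April": 90, "May": 130, "June": 155}
for month, tib_used in usage_by_month.items():
    bill = monthly_bill(tib_used, unit_rate=25.0, committed_units=100)
    print(f"{month}: {tib_used} TiB used -> ${bill:,.2f}")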

Gardner: Yes, in the past we have heard so many complaints about unexpected bills, maintenance add-ons, and complex licensing. So, this is something that’s been an ongoing issue for decades. 

Now with COVID-19 and so many people working remotely, can you provide an example of bringing the best minds on the solutions side to wherever a problem is?

Room with a data center view 

Nolan: One that comes to mind sounds like a simplistic use case, but it’s valuable in today’s climate, with the IT lockdown. Inside of HPE, we use multiple collaboration environments. But we own our own collaboration platform, HPE MyRoom.

We launched a feature in that collaboration platform called Visual Remote Guidance, which allows us to collaborate like we are in the customer’s data center virtually. We can turn on the smart device on the customer side, and they can be enabled, through the camera, to actually see the IT situation they are dealing with. 

It might be an installation of some hardware. It could be resolving some technical problem. There are a variety of different use cases we are seeing. Of course, when a system causes a problem and the company has locked-down their entire IT department, they don’t want to see engineers coming in from either HPE or one of our partners.


This solution immediately became very useful in helping customers because we now have thousands of remote experts available in various locations around the world. Now, they can instantly connect with the customer. They can be the hands and eyes working with the customer. Then the customer can perform the action, guided all the way through the process by their remote HPE expert. And that’s using a well-proven digital collaboration platform that we have had for years. By just introducing that one new additional feature, it has helped tremendously. 

Many customers were struggling with installing complex solutions. Because they needed to get it done and yet didn’t want to bring anybody onto their site, we can take advantage of our remote experts and get the work done. Our experts guide them through, step by step, and test the whole thing. It’s proving to be very effective. It’s used extensively now around the world. All of our agents have this on their desktop and they can initiate with any customer, in any conversation. So, it’s pretty powerful.

Gardner: Yes, so you have socialized isolation, but you have intense technology collaboration at the same time. 

Ronaldo, HPE InfoSight and automation have gone a long way to helping organizations get in front of maintenance issues, to be proactive and prescriptive. Can you flesh out any examples of where the combination of automation, AI, AIOps, and HPE InfoSight have come together in a way that helps people get past a problem before it gets out of hand?

Stop problems before they start

Pinto: Yes, absolutely. We are integrating all our telemetry from the sensors in our technology with our back-end operational processes. That is InfoSight, a very powerful AI and machine learning (ML) tool. By collecting from sensors — more than 100 data points from our products every few seconds — and processing all of that data on the back end, we can be informed by trends we see in our installed base and gather knowledge from our product experts and data scientists.

That allows us to get in front of situations that could result in outages in the environment. For example, a virtual storage volume could be running out of capacity. That could lead to storage devices going offline, bringing down the whole environment. So, we can get ahead of that and fix the problem for the customer before it gets to a business-degradation situation. 
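
For readers curious how that kind of early warning can work, here is a minimal Python sketch that extrapolates a storage volume’s growth trend to estimate the days remaining before it fills. The data is synthetic and the straight-line fit is an illustrative simplification; InfoSight’s actual models are considerably more sophisticated.

# Illustrative capacity-runway estimate from usage telemetry.
# Synthetic data; a real predictive model would do far more than a line fit.
import numpy as np

capacity_tib = 100.0
days = np.arange(0, 30)                 # last 30 daily samples
rng = np.random.default_rng(1)
used_tib = 60 + 0.9 * days + rng.normal(0, 0.5, days.size)

# Fit a straight-line growth trend and project it forward.
growth_per_day, _ = np.polyfit(days, used_tib, 1)
days_until_full = (capacity_tib - used_tib[-1]) / growth_per_day

print(f"Observed growth: {growth_per_day:.2f} TiB/day")
print(f"Estimated days until the volume is full: {days_until_full:.0f}")
if days_until_full < 21:
    print("Raise a proactive alert and open a case before capacity runs out.")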

We are expanding the InfoSight capabilities on a daily basis throughout the HPE portfolio. We should also be able to identify, based on the telemetry of the products, what workloads the customer is running and help the customer better utilize all those underlying resources in the context of a specific workload. We could even identify an improvement opportunity in the underlying infrastructure to improve the performance of that workload.

Gardner: So, it is problem solving as well as a drive for continual IT improvement, refinement, and optimization, which is a lot different than a break-fix mentality. How will the whole concept of operational services shift in your opinion from break-fix to more of optimization and continuous improvement? 

Pinto: I think you just touched on probably the most important point, Dana. Data centers today and technology are increasingly redundant and resilient. So really break-fix is becoming table stakes very quickly. 

The metaphor that I use many times is airlines. In the past, security or safety of the airline was something very important. Today it’s basically table stakes. You assume that the airline operates at the highest standards of safety. So, with break-fix it’s the same. HPE is automating all of the break-fix operations to allow customers to focus on what adds the most value to their businesses, which is delivering the business outcomes based on the technology — and further innovating. 

The pace of innovation in this business is unprecedented, both in terms of tools and technologies available to operate your environment as well as time-to-market of the digital applications that are the revenue generators for our customers. 

Gardner: Gerry, anything additional to offer in terms of the vision of an optimization drive rather than a maintenance drive? 

Innovate to your ideal state 

Nolan: Totally. It’s all about trying to stay ahead of the business requirements.

For example, last night Ronaldo and I were speaking with a customer with a global footprint. They happen to be in a pretty good state, but it was interesting talking to them about what a new desired state would look like. We work closely with customers as we innovate and build better service capabilities. We are trying to find out from our customers what their ideal state is, because it’s all about delivering the customer experience that maps to each customer’s use case — and every customer is different.

I also just met with a casino operator, which at the moment is in a bit of a tough space, but they have been acquiring other casinos and opening new casinos in different parts of the world. Their challenge is completely different than my friend in the insurance industry who was going to cloud-first.


The casino business is all about security, and a lot of regulation. In his case, they were buying companies, so they are also buying all of this IT. They need help controlling it. They are in the casino business, they are not really in the business of IT, but IT is still critical to their success. And now they are in a pandemic-driven shutdown, so they are trying to figure out how to manage and understand all of the IT they have.

For others in this social isolation climate, they need to keep the business running. Now as they are starting to open up, they need help with all sorts of issues around how to monitor customers coming into their facilities. How do they keep staff safe in terms of making sure they stay six feet apart? And HPE has a wealth of new offerings in that space. We can help customers deal with opening up and getting back to work. 

Whether you are operating an old environment, a new environment, or are in a post COVID-19 journey — trying to get through this pandemic situation, which is going to take years — there are all sorts of different aspects you need to consider as an organization. Trying to paint your new vision for what an ideal IT experience feels like — and then finding partners like HPE who can help deliver that — is really the way forward. 

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


Work in a COVID-19 world: Back to the office won’t mean back to normal

Businesses around the globe now face uncharted waters when it comes to planning the new normal for where and how their employees return to work.

The complex maze of risk factors, precautions, constantly changing pandemic impacts — and the need for boosting employee satisfaction and productivity — are proving a daunting challenge.

The next BriefingsDirect new normal of work discussion explores how companies can make better decisions and develop adept policies on where and how to work for safety, peace of mind, and economic recovery.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To share their recent findings and chart new ways to think about working through a pandemic, we’re joined by Donna Kimmel, Executive Vice President and Chief People Officer at Citrix, and Tony Gomes, Executive Vice President and Chief Legal Officer at Citrix. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Donna, the back-to-work plans are some of the most difficult decisions businesses and workers have faced. Workers are not only concerned for themselves; they are worried about the impacts on their families and communities. Businesses, of course, are facing unprecedented change in how they manage their people and processes.

So even though there are few precedents — and we’re really only in the first few months of the COVID-19 pandemic — how has Citrix begun to develop guidelines for you and your customers for an acceptable return to work?

Move forward with values

Kimmel: It really starts with a foundation that’s incredibly important to Tony, me, and our leadership team. It starts with our values and culture. Who are we? How do we operate? What’s important to us? Because that enables us to frame the responses to everything we do. As you indicated, this is a daunting task — and it’s a humbling task.

When we focus on our culture and our values — putting our people, their health, and their safety first — that enables us to focus on business continuity and ultimately our customers and communities. Without that focus on values, we wouldn’t be able to make the decisions we’re making as easily. We also realized as a part of this that there are no easy answers and no one-size-fits-all solutions.

We recognized that the framework we utilize around the world has to be adapted based on the various sites, locations, and business needs across our organization. And, as we’ve acknowledged in the past, we also realized that this really means it “takes a village.”

The number of employees partnering with us across multiple disciplines in the organization is tremendous. We have partnership not only from legal and human resources (HR), but also our finance organization, IT, real estate and facilities, our global risk team, procurement, communications, our travel organization, and all of our functional leaders and site leaders. It’s a tremendous effort to put this together, to create what we believe is the right thing to do — in terms of managing the health and safety of our employees — as we bring them back into the workforce.

Gomes: Yes, we’ve tapped the entire Citrix global organization for talent. And we have found two things. One, when it comes to going through this, be open to innovation and answers from all parts of the organization.

For example, our sites in the Asia-Pacific region that have been dealing the longest with COVID-19 and are in the process of returning to the office, they have been innovative. They are teaching the rest of our site leaders the best ways to go about reopening their sites. So even as the corporate leaders are here in the US, we’re learning an awful lot from our colleagues on the ground in the Asia-Pacific region.


Two, be aware that this process is going to call upon you to have skills on your team that you may not have had before. So that means experts on business continuity, for example, but also medical experts.

One of the decisions Donna and I made early on is that we needed to bring medical expertise to our team. Donna, through her relationships and her team, along with a top-notch benefits consultant, found great medical resources and expertise for us to rely on. That’s an example of calling upon new talents, and it’s causing us to look for innovation in every corner of the organization.

Gardner: Citrix has conducted some recent studies that help us understand where the employees are coming from. Tell us about the state of their thinking right now.

Get comfortable to get to work

Kimmel: Citrix did a study with one poll of about 2,000 US workers. We found that at least 67 percent of the respondents did not feel comfortable returning to the office for at least one month.

And in examining the sentiment of what it would take for them to feel comfortable coming back into the office, some 51 percent indicated that there has to be testing and screening. Another 46 percent prefer to wait until a [novel coronavirus] vaccine is ready. And 82 percent were looking for some kind of contact-tracing to make sure that we could at least connect with other individuals if there was an issue.

This was an external study, but as we talk with our own employees — through our own surveys, roundtable discussions, group dialogues, and feedback we get from our managers — we are finding similar results. Many of our employees, though they would like to be able to come back to the office, recognize that coming back immediately, post-COVID-19, will not be to the same office that they left. We recognize that we need to make sure we’re creating a safe environment, one conducive to them being productive in the office.

Gardner: Tony, what jumped out to you as interesting and telling in these recent findings?

Gomes: Donna hit on it, which is how aligned the results of this external study are coming in with our own experiences; what we’re listening for and hearing from our global workforce and what our own internal surveys are telling us.

We’ve been taking that feedback and building that into the way we’re approaching the reopening decision-making process.

For example, we know that employees are concerned about whether the cities, states, and countries they live and work in have adequate testing. Is there adequate contact-tracing? Are the medical facilities capable of supporting COVID-19 patients in a non-crisis mode?

So we built all of that into our decision-making. Every time we analyze whether an office or campus is ready for a phased reopening approach, we first look for those factors, along with the lifting of governmental lockdown orders.

We’re trying to be clear, communicating with employees that, “Hey, we are looking at all of this.” In that way it becomes a feedback loop. We hear the concern. We build the concern into our processes. We communicate back to the employees that our decisions are being made on the basis of what they express to us and are concerned about.

But it’s really amazing to see the alignment of the external study and what we’re hearing internally.

Kimmel: Tony is right on about understanding the concerns of our employees. They want to have a sense of confidence that the setup of the office will be appropriate for them.

We’re also trying to provide choice to our employees. Even as we’ll be looking at the critical roles that need to come back, we want to make sure that employees have the opportunity to self-select in terms of understanding what it will be like to work in the office in that environment.

Back to Office Won’t Mean Back to Normal. 

Poll Shows Workers Demand Strict Precautions 

We also know that employees have specific concerns: Maybe they have their own health concerns, or family members who live with them have health issues that put them at greater risk, or society is not yet back to normal functioning, so at-home caregiving is still an issue. Parents just came through homeschooling, but they may still need to arrange summer day camps or provide other support, such as elder care.

We also recognize that some people are just nervous and don't feel comfortable. So we're trying to put our employees' minds at ease by providing them a good look at what it will be like — and feel like — to come back to the office. They should know about the safety and security measures we're putting into place on their behalf, while still feeling comfortable enough to decide what is right based on their own circumstances.

Gardner: It strikes me that organizations, while planning, need to also stay agile, to be flexible, and perhaps recognize that being able to react is more important than coming up with the final answer quickly. Is that your understanding as well, that organizations need to come up with new ways of being able to adapt rapidly and do whatever the changing circumstances require?

Cross-train your functionality 

Gomes: Absolutely, Dana. What Donna and I have tried to do is build a strong cross-functional team that has a lot of capacity across all of the functional areas. Then we try to create decision-making frameworks from the top down.

We then set some basic planning assumptions, or answer some of the big questions, especially in terms of the level of care that we're going to provide to each employee across the globe. Those include areas such as social distancing and personal protective equipment (PPE), things we're going to make sure that every employee has.

But then it’s a different decision based on how that gets implemented at each and every site, when, where, and who leads. Who has a bigger or smaller team, and how do they influence or control the process? How much support from corporate headquarters versus local initiatives are taken?

Those are very different from site-to-site, along with the conditions they are responding to. The public health conditions are dynamic and different in every location — and they are constantly changing. And that’s where you need to give your teams the ability to respond and foster that active-response capacity.

Kimmel: We’ve worked really hard to make sure that we’re making faster, timely decisions, but we also recognize that we may not have all the information. We’ve done a lot of digging, a lot of research, and have great information. We’re very transparent with our employees in terms of where we are, what information we have at the time that we’re making the decisions, and we recognize that because it’s moving so quickly we may have to adapt those decisions.

As Tony indicated, that can be based on a site, a region, a country, or medical circumstances and new medical information. So, again, it goes back to our ability to live our values and what’s important to us. That includes transparency of decisions, of bringing employees along on the journey so that they understand how and why we’ve arrived at those decisions. And then when we need to shift them, they will understand why we’ve made a shift.

One of the positive byproducts or outcomes of this situation is being able to pivot to make good and fast decisions and being transparent about where and why we need to make them so that we can continue to pivot if necessary.

Gardner: Of course, some of those big decisions initially meant having more people than ever working remotely and from their homes. A lot of business executives weren’t always on board with that. Now that we’ve gone through it, what have we learned?

Are people able to get their work done? They seem to be cautious about wanting to come back without the proper precautions in place. But even if we continue to work remotely, the work seems to be getting done.

Donna, what’s your impression about letting people continue to work at home? Has that been okay at Citrix?

Work from home, the office, or hybrid 

Kimmel: Tony and I and the rest of the leadership team certainly recognized as we were all thrust into this that we would be 100 percent work-from-home (WFH). We all realized and learned very quickly that there were very few, if any, roles that were so critical that they had to be done in the office.

Because remote work is part of the Citrix brand, we were able to enable employees to work securely and access their information from anywhere, anytime. We realized, all of a sudden, that we were capable of doing that in more areas than we had recognized.

Help All Employees Feel Safe,

It Matters More Than Ever 

We’re now able to say, “Okay, what might be the new normal beyond this?” We recognize that there will be re-integration back into our worksites done in the current COVID-19 environment.

But beyond COVID, post-vaccines, as we think about our business continuity going forward, I do think that we will be moving, very purposefully, into a more hybrid work arrangement. That means new, innovative, in-office opportunities, because we still want people to be working face-to-face and have those in-person collisions, as we call them. Those are hard, if not impossible, to replicate on videoconferencing.

But there can be a new balance between in-office and remote work, and a fine-tuning of our own practices, that will enable us to be as effective as possible in both environments.

So, no doubt, we have already started to undertake that as a post-COVID approach. We are asking what it will look like for us, and then how we make sure, from a philosophical and strategy perspective, that the right practices are put into place to enable it.

This has been a big, forced experiment. We looked at it and said, “Wow, we did it. We’ve done really well. We’ve been very fortunate.”

Home is where the productivity is

Gomes: Donna’s team has designed some great surveys with great participation across the global workforce. It’s revealed that a very high percentage of our employees feel as productive — if not even more productive — working from home rather than working from the office.

And the thing is, when you peel back the onion and you look at specific teams and specific locations, and what they can accomplish through this, it’s just really amazing.

For example, Donna and I, earlier this morning, were on a videoconference with our site leadership team in Bangalore, India where we have our second-largest office, which has quite a few functions. That campus represents all of the Citrix functions, spread across a number of buildings. We were looking at detailed information about the productivity of our product engineering teams over their last agile planning interval, their continuous integration interval, and how they are planning for their next interval.

We looked at information coming from our order processing team in Bangalore and also from our support team. And what we saw is increased productivity across those teams. We're looking at not just anecdotal information, but objective data showing that more code check-ins occurred, fewer bugs were introduced, and more new functionality was delivered on time within the interval that we had just completed.

We are just tremendously proud of what our teams are accomplishing during this time of global, personal, family, and societal stress. But there is something more here. Donna has put her finger on it, which is there is a way to drive increased productivity by creating these environments where more people can work from home.

There are challenges, and Donna’s team is especially focused on the challenges of remote management. How do you replace the casual interactions that can lead to innovation and creative thinking? How do you deal with team members or teams that rely on more in-person interaction for their team dynamic?

So there are challenges we need to address. But we have also uncovered something I think here that’s pretty powerful — and we are seeing it, not just anecdotally, but through actual data. 

Gardner: As we query more companies about their productivity over the past few months, we will probably see more instances where working at home has actually been a benefit. 

I know from the employee perspective that many people feel they save money and time by not commuting. They are not paying for transportation. They have more work-life balance. They have more control, in a sense, over their lives.

The technology has been there for some time, Donna, to allow this. It was really a cultural hurdle we had to overcome, and the pandemic has pushed us over it. Not that a pandemic is a good thing, but the results allow us to test models that now show how technology and processes can allow for higher productivity when working from home.

Will what you are experiencing at Citrix follow through to other companies? 

Kimmel: Oh, yes, definitely. I have been on a number of calls with my peers at other companies. Everyone is talking about what’s next and how they design this into their organizations.

We recognize all of the benefits, Dana, that you just indicated. We recognize that those benefits are things that we want to be able to capture. New employees coming into the workforce, the Gen Zs and the Millennials, are looking for flexibility to be able to balance that work and life and integrate it in a more productive way for themselves. Part of that is a bit of a push in terms of what we are hearing from employees. 

It also enables us to tap into new talent pools: folks who may not live near a particular office but have tremendous skills to offer, and those with varying disabilities who may not be able to commute. There are a number of ways for us to tap into more workers who have the skills we are looking for but don't actually live near our offices. So again, all of that I think is quite helpful to us.

Legal lessons for employers

Gardner: Tony, what are some of the legal implications if we have a voluntary return to work? What does that mean for companies? Are there issues about not being able to force people, or not being able to fire them, or flexibly manage them? 

Gomes: One of the things that we have seen, Dana, during this pandemic is significant change in employee relations laws around the globe. This is not just in the United States, but everywhere. Governments are trying to protect employees, preserve jobs, and provide guidance to employers to clarify how existing legal requirements apply in this pandemic.

For example, here in the United States both the Occupational Safety and Health Administration (OSHA) and the Equal Employment Opportunity Commission (EEOC) have put out guidelines that address things such as PPE. What criteria do employers need to meet when they are providing PPE to employees? How do you work within the Americans with Disabilities Act (ADA) requirements when offering employees the ability to come back to the office? How do you permit them to opt out without calling them out, without highlighting that they may have an underlying medical condition that you as an employer are obligated to maintain as confidential and allow the employee to keep confidential?

Another big area that impacts the employer-employee relationship, that is changing in this environment, is privacy laws – especially laws and regulatory requirements that impact the way that employers request, manage, and store personal health information about employees. 

Just recently a new bill was introduced in the US Congress to try to address these issues, provide employees greater protection, and provide employers more certainty, especially in areas such as the digital processing and storage of personal health information through things such as contact-tracing apps.

Gardner: Donna, we have only been at this for a few months, adjusting to this new world, this new normal. What have we learned about what works and what doesn’t work?

Is there anything that jumps out to you that says this is a definite thing you want to do – or something you should probably avoid — when it comes to managing the work-balance new normal?

Place trust in the new normal 

Kimmel: One, we learned that this can be done. That shifts the mental model some of us came in with, that for any employee engagement you would prefer to have face-to-face-only interactions. And so this taught us something.

It also helped us build trust in each other, and trust in leadership, because we continue to make decisions based on our values. We have been very transparent with employees, with phenomenal amounts of communication we put out there — two-way, with high empathy, and building better relationships. That also means better collaboration and relationship-building, not only between team members, but between managers and employees. It has been a really strong outcome.

And again, that’s part of the empathy, the opportunity for empathy, as you learn more about each other’s families. You are meeting them as they run by on the video. You are hearing about the struggles that people face. And so managers, employees, and team members are working with each other to help mitigate those as much as possible. 

Those are some big aspects of what we have learned. And, as I mentioned earlier, we have benefitted from our ability to make decisions faster, acknowledging various risks, and using the detailed information such as what Tony’s team brings to the table to help us make good decisions at any given time. Those are some of the benefits and positive outcomes we have seen. 

As for challenges, when we go into the post-COVID-19 phase, we recognize that children may be back in school. Caregiving resources may be in place, so we may not be dealing with as many of those challenges. But we recognize there is still sometimes isolation and loneliness that can arise from working remotely.

People are human. We are creatures who want to be near each other and with each other. So we still need to find that balance to make sure everyone feels like they are included, involved, and contributing to the success of the organization. We must increase and improve our managers’ ability to lead productively in this environment. I think that is also really important.

And we must look for ways to drive collaboration, not only when people come back into the office — because we know how to do that well — but also by having the right technology tools to collaborate well while we are away, from white-boarding techniques to other things that enable us to collaborate even more from a WFH and remote perspective.

So it will be about the fine-tuning of enabling success, stronger success, more impactful success in that environment.

Gardner: Tony, what do you see as things that are working and maybe some things that are not that people should be thinking about? 

Level-up by listening to people 

Gomes: One of the things that's really working is the high level of productivity that we are seeing — unexpectedly high — even though about 98 percent of our company has been working from home for eight-plus weeks. So that's one.

The other thing that is really working is our approach to investing in our employees and listening to them. I mean this very tangibly, whether it's the stipend we provide our employees to go out and buy the equipment they need to work from home more comfortably and productively, or the support for charities, organizations, and small businesses. It is truly tangible investment in employees, and it means listening to employee feedback in integrated, multichannel ways, processing it, putting it into your processes, and feeding it back to them. That's really worked.

And again, the proof is in the high level of productivity and the very high level of satisfaction despite the very challenging environment. Donna mentioned some of the challenges. One of the bigger ones we see right now is obviously employees who have childcare and other family care responsibilities in the middle of this pandemic while trying to work, and many times being even more productive than they ever have been for us when working in the office.

So again, it’s nice to say we invest in our employees and we expect our employees to reciprocate, but we are actually seeing this in action. We have made very tangible investments and we see it coming back to us. 

Mind and body together win the race 

On the other hand, we have to be really careful about a couple of things. One, this is a long-term game, an ultramarathon, where we are only in the first quarter, if you will. It feels like we should be down at the two-minute warning, but we are really in the first quarter of this game. We have a long way to go before we get to viable therapeutics and viable, widely available effective vaccines that will allow us to truly come back to the work and social life we had before this crisis. So we have got to be prepared mentally to run this ultramarathon, and we have to help and coach our teams to have that mindset.

As Donna alluded to, this is also going to be a challenge in mental health. This is going to be very difficult because of its length, severity, and multifaceted impact — not just on employees but across society. So being supportive and empathetic to the mental health challenges many of us are going to face is going to be very important.

View this as a long-term challenge and pay attention to the mental health of your employees and teams as much as you are paying attention to their physical health. 

Kimmel: It’s been incredibly important for us to focus on mental health for our employees. We have tried to pull together as many resources as possible, not only for our employees but for our managers who tend to be in the squeeze point, because they themselves may be experiencing some of these same issues and pressures. 

And then they also carry that caring sense of responsibility for their employees, which adds to the pressure. So, for us, paying attention to that and making sure we have the right resources is really important to our strategy. I can’t agree more, this is absolutely a marathon, not a sprint. 

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Citrix.

How HPE Pointnext Services ushers businesses to the new normal via an inclusive nine-step plan

The next edition of the BriefingsDirect Voice of Innovation podcast series explores a new model of nine steps that IT organizations can take amid the COVID-19 pandemic to attain a new business normal.

As enterprises develop an IT response to the novel coronavirus challenge, they face both immediate and longer-term crisis management challenges. There are many benefits to simultaneously steadying the business amid unprecedented disruption — and readying the company to succeed in a changed world.

Join us as we examine a Hewlett Packard Enterprise (HPE) Pointnext Services nine-step plan designed to navigate the immediate crisis and — in parallel — plan for your organization’s optimum future.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

Here to share the Pointnext model and its positive impact on your business’ ongoing health are Rohit Dixit, Senior Vice President and General Manager, Worldwide Advisory and Professional Services at HPE Pointnext Services, and Craig Partridge, Senior Director, Worldwide Advisory and Transformation Practice, HPE Pointnext Services. The timely discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Rohit, as you were crafting your nine-step model, what was the inspiration? How did this come about?

Dixit: We had been working, obviously, on engaging with our customers as the new situation was evolving, with conversations about how they should react. We saw a lot of different customers and clients engaging in very different ways. Some showed best practices, but others did not.

We heard these conversations and observed how people were reacting. We compared that to our experience managing large IT organizations and working with many customers in the past. We then put all of those learnings together and collated them into this nine-step model.

It comes a bit out of our past experience, but with a lot of input and conversations with customers now, and then structuring all of that into a set of best practices.

Gardner: Of course, at Pointnext Services you are used to managing risk, thinking a lot about security incident management, for example. How is reacting to the pandemic different? Is this a different type of risk?

Dixit: Oh, it’s a very different kind of risk, for sure, Dana. It’s hitting businesses from so many different directions. Usually the risk is either a cyber threat, for example, or a discontinuity, or some kind of disruption you are dealing with. This one is coming at us from many, many different directions at the same time.

Then, on top of that, customers are seeing cybersecurity issues pop up. Cyber-attacks have actually increased. So yeah, it’s affecting everybody — from end-users all the way to the core of the business and to the supply chain. It’s definitely multi-dimensional.

Gardner: You are in a unique position, working with so many different clients. You can observe what’s working and what’s not working and then apply that back rather quickly. How is that going? Are you able to turn around rapidly from what you are learning in the field and apply that to these steps?

Dixit: Dana, by using the nine steps as a guide, we have focused immediately on what we call the triage step. We can understand what the most important things are that we should be doing right now for the safety of employees, how we can contribute back to the community, and how we can keep the most essential business operations running.

Nine Steps to the New Normal

For IT to Follow in Two Phases 

That’s been the primary area of focus. But now as that triage step stabilizes a little bit, what we are seeing is the customers trying to think, if not long-term, at least medium-term. What does this lead to? What are the next steps? Those are the two conversations we are having with our customers — and within ourselves as well, because obviously we are as impacted as everybody else is. Working through that in a step-by-step manner is the basis of the nine steps for the new normal model.

Gardner: Craig, I imagine that as these enterprises and IT departments are grappling with the momentary crisis, they might tend to lose that long-term view. How do you help them look at both the big picture in the long term as well as focus on today’s issues?

Partridge: I want to pick up on something that Rohit alluded to. We have never seen this kind of disruption before. And you asked why this is different. Although HPE has learned a lot of responses from helping customers manage things like their security posture and cyber threats, you have to understand that for most customers that's an issue for their organization alone. It's about their ability to maintain a security posture, what's vulnerable in that conversation, and the risks they are mitigating for the impact that is directly associated with their organization.

What we have never seen before is the global economy being put on pause. So it’s not just the effect on how an individual organization continues to be able to transact and protect revenue, protect core services, and continue to be able to be viable. It’s all of their ecosystem, it’s their entire supply chain, and it’s the global economy that’s being put on hold here.

When Rohit talks to these different dimensions, this is absolutely different. So we might have learned methods, have pragmatic ways to get through the forest fire now, and have ways to think about the future. But this is on a completely different scale. That’s the challenge customers are having right now and that’s why we are trying to help them out.

Gardner: Rohit, you have taken your nine steps and you have put them into two buckets, a two-mode approach. Why was that required and the right way to go?

One step at a time, now to the future 

Dixit: The model consists of the nine steps, and it has two modes. The first is immediate crisis management, and the second is bridging to the new normal.

In the first mode, immediate crisis management, you do the triage that we were talking about. You adjust your operations to the most critical, life-sustaining kinds of activities. When you are in that mode, you stabilize and then finally you sustain on an ongoing basis.

And then the second mode is the bridge to the new normal. Here you adjust, in parallel, to what you are observing in the world around you. But you also start to align to a point of view with the business. Within IT, it means using that observation and that alignment to design a new point of view about the future, about the business, and where it's going. You ask, how should IT be supporting the new direction of the business?

Next comes a transformation to that new end-state and then optimizing that end-state. Honestly, in many ways, that means preparing for whatever the next shock is going to be because at some point there will be another disruption on the horizon.

So that’s how we divided up the model. The two modes are critical for a couple of reasons. First, you can’t take a long-term approach while a crisis unfolds. You need to keep your employees safe, keep the most critical functions going, and that’s priority number one.

The governance you put around the crisis management processes, and the teams you put there, have to be very different. They are focused on the here and the now.

In parallel, though, you can’t live in crisis-mode forever. You have to start thinking about getting to the new normal. If you wait for the crisis to completely pass before you do that, you will miss the learnings that come out of all of this, and the speed and expediency you need to get to the new normal — and to adapt to a world that has changed.

That’s why we talk about the two-mode approach, which deals with the here and the now — but at the same time prepares you for the mid- to long term as well.

Gardner: Craig, when you are in the heat of firefighting, you can lose track of governance, management, planning, architecture, and methodologies. How are your clients keeping this managed even though they are in an intense moment? How does that relate to what we refer to as minimum viable operations? How do we keep at minimum-viable and govern at the same time?

Security and speed needed 

Partridge: That’s a really key point, isn’t it? We are trained for a technology-driven operating model, to be very secure, safe, and predictable. And we manage change very carefully — even when we are doing things at an extreme pace, we are doing it in a very predictable way.

What this nine-step model introduces is that when you start running to the fire of immediate crisis management, you have to loosen the governance model because you need extreme speed in your response. So you need small teams that can act autonomously — with a light governance model — to go to those particular fires and make very quick decisions.

And so, you are going to make some wrong decisions — and that's okay, because speed trumps perfection in this mode. But that doesn't take away from the second team coming onstream and looking at the longer term. That's the more traditional cadence of what we do as technologists and strategists. It's just that now, looking forward, the future landscape is a radically different one.

And so ideas that might have been on hold or may not have been core to the value proposition before suddenly spring up as ideas that you can start to imagine your future being based around.

Those things are key in the model: the idea of two modes and two speeds. Don't worry about getting everything right; it's more about protecting critical systems and being able to continue to transact. But for the future, start looking at the opportunities that may not have been available to you in the past.

Gardner: How about being able to maintain a culture of innovation and creativity? We have seen in past crises some of the great inventions of technology and science. People when placed in a moment of need actually dig down deep in their minds and come up with some very creative and new thinking. How do we foster that level of innovation while also maintaining governance and the capability to react quickly?

Creativity on the rise 

Partridge: I couldn't agree more. As an industry and as individuals, we are typically very creative. Certainly technologists are very creative people in applying technologies to different use cases and business outcomes. That creativity doesn't go away. I love the phrase, "Necessity is the mother of invention," the idea that a crisis is when you are most innovative and most creative, and when people come to the fore.

For many of our customers, the ideas on how to respond — not just tactically, but strategically, to these big disruptive moments — might already be on the table. People are already in the organization with ideas about how to create value in the new normal.

These moments bring those people to the surface, don’t they? They make champions out of innovators. Maybe they didn’t have the right moment in time or the right space to be that creative in the past.

Or maybe it's a permission thing for many customers. They just didn't have the permission. What's key in these big, disruptive events is to create an environment where innovation is fostered and there are no sacred cows, where the people who may have had ideas in the past but said, "Well, that will never work; it's not core to the business model, it's not core to driving innovation and productivity," now have the space to come to the fore with those ideas. Create those kinds of new governance models.

Dixit: I would actually say that this is a great opportunity, right? Discontinuities in how we work create great cracks through which big innovations can be driven.

The phrase that I like to use is, “Never waste a crisis,” because a crisis creates discontinuities and opportunities. It’s a mindset thing. If we go through this crisis playing defense – and just trying to maintain what we already have, tweak it a little bit – that will be very unfortunate.

This goes back to Craig’s point about a sacred cow. We had a conversation with a customer who was talking about their hybrid IT mix, what apps and what workloads should run where. They had reached an uneasy alliance between risk and innovation. Their mix settled at a certain point of public, private, on-premises, and consumption-based sources.

But now they are finding that, because the environment has changed so much, they can revisit that mix from scratch. They have learned new things and want to bring more things on-premises. Or they have learned something new and decided to place some data in the cloud or use new Internet of Things (IoT) and artificial intelligence (AI) models.

The point is we shouldn’t approach this in just a defensive mode. We should approach it in an innovative mode, in a great-opportunity-being-presented-to-us-mode, because that’s exactly what it is.

Nine steps, two modes, one plan 

Gardner: And getting back to how this came about, the nine steps plan, Rohit, were you thinking of a specific industry or segment? Were you thinking public sector, private sector? Do these nine steps apply equally to everyone?

Dixit: That’s a good question, Dana. When we drew up the nine steps model, we drew from multiple industries. I think the model is applicable across all industries and across all segments — large enterprise and small- to medium-sized businesses (SMBs) as well.

The way it gets applied might be slightly different because for an enterprise their focus is more on the transaction, the monetary, and keeping revenue streams going in addition to, of course, the safety of their employees and communities.

But the public sector, they approach it very differently. They have national priorities, and citizen welfare is much more important. By the way, availability of cash, for example, might be different based on an SMB versus enterprise versus public sector.

But the applicability is across all, it’s just the way you apply the steps and how you bridge to the new normal. For example, what you would prioritize in the triage mode might be different for an industry or segment, but the applicability is very broad.

Partridge: I completely agree about the universal applicability of the nine steps model. For many industries, cash is going to be a big constraint right now. Just surviving through the next few months — to continue to transact and exchange value — is going to be the hard yards.

There are some industries where, at the moment, they are probably going to get some significant investment. Think about areas like the public sector — education, healthcare, and areas where critical national infrastructure is being stressed, like the telecommunications companies providing communication services, because everybody is relying on that much more.

And for some industries, it's not just that the nine steps model is applicable. Some industries are absolutely going to have the capability to invest because suddenly what they do is priority number one: not just citizen welfare and health services, but allowing us to communicate and collaborate across the great distances we now work with.

So, I think it’s universally applicable and I think there is a story in each of the sectors which is probably a little bit different than others that we should consider.

Stay on track, prioritize safety 

Gardner: Craig, you mentioned earlier that mistakes will be made and that it’s okay. It’s part of the process when you are dealing in a crisis management environment. But are there key priorities that should influence and drive the decision-making — what keeps people on track?

Partridge: That's a really good question, Dana. How do we prioritize some of the triage and adjust steps during the early phases of the crisis management mode of the model? A number of things have emerged that are universally applicable in those moments. It starts, of course, with the safety of your people. And by your people, I mean not just your employees and, of course, your customers, but also the people you interact with. In the government sector, it's the citizens you look after, and their welfare.

From inside of HPE, everything has been geared around the safety and welfare of the people and how we must protect that. That has to be number one in how you prioritize.

The second area you talked about before, the minimum viable operating model. So it’s about aligning the decisions you make in order to sustain the capability to continue to be productive in whichever way you can.

You’re focusing on things that create immediate revenue or immediate revenue-generating operations, anything that goes into continuing to get cash into the organization. Continuing to drive revenue is going to be really key. Keep that high on the priority list.

A third area would be around contractual commitments. Despite the global pandemic pausing movement in many economies around the world, there are still contractual commitments in play. So you want to make sure that your minimum viable operating model allows you to make good on the commitments you have with your customers.

Also, in the triage stage, think about your organization's security posture. That's clearly going to feature heavily in how you make priority decisions. You have a distributed workforce now. You have a completely different remote connectivity model, and that's going to open you up to all sorts of vulnerabilities that you need to consider.

Anything around critical customer support is key. So anything that enables you to continue to support your customers in a way that you would like to be supported yourself. Reach out to that customer, make sure they are well, safe, and are coping. What can you do to step in to help them through that process? I think that’s the key.

I will just conclude on prioritization with preserving the core transactional services that enable organizations to breathe; what we might describe as the oxygen apps, such as the enterprise resource planning (ERP) systems of the world, the finance systems, and the things that allow cash to flow in and out of the transactions and orders that need to be fulfilled. Those kinds of core systems need protection in these moments. So that would be my list of priorities.

Gardner: Rohit, critical customer support services is near the top of requirements for many. I know from my personal experience that it’s frustrating when I go to a supplier and find that they are no longer taking phone calls or that there is a long waiting line. How are you helping your organizations factor in customer support? And I imagine, you have to do it yourself, for your own organization, at HPE Pointnext Services.

Communicate clearly, remotely 

Dixit: Yes, absolutely. The first one is the one that you alluded to, the communications channels. How do we make sure that people can communicate and collaborate even though they are remote? How can we help with those kinds of things? Remote desktops, for example, have become extremely critical, as well as things like shared secure storage, which is critical so that people can exchange information and share data. And then, wrapping around all of that safe remote connectivity, collaboration, and storage, is a security angle to make sure that you do all of it in a protected, secure manner.

Those are the kinds of things we are very much focused on — not just for ourselves, but also for our customers. We're finding different levels of maturity in terms of their current adoption of any of these services across different industries and segments. So we are intersecting the customers at different points of their maturity and then moving them up that maturity stack for fully remote communication, collaboration, and then becoming much more secure in that.

Gardner: Rohit, how should teams organize themselves around these nine steps? We’ve talked about process and technology, but there is also the people side of the equation. What are you advising around team organization in order to follow these nine steps and preparing for the new normal?

Dixit: This is for me one of the most fascinating aspects of the model. In our triage step we borrowed a lot of our thinking from the way hospitals do triage. And we learned in that triage model that quick, immediate reaction means you need small teams that can work with autonomous decision-making. And you don’t want to overlay on that initially a restrictive governance model. The quick reaction through the “fog of war,” or whatever you want to call it, is extremely critical in that stage.

By setting up small, autonomous teams that function and make decisions independently, while keeping a light-touch governance model, you can feed in broader direction, share information, and capture learnings so that you remain very flexible.

Now, the fascinating aspect of this is that — as you bridge to the new normal, as you start to think about the mid- to long-term — the mode of operation becomes very different. You need somebody to collect all the information. You need somebody who is able to coordinate across the business, across IT, and the different functions, partners, and the customers. Then you can create a point of view about what the future holds.

What do we think the future mode of operations is going to look like from a business perspective? Translate that into IT needs and create a transformation plan, then start to execute on that plan, which is not the skirmish approach you take in immediate crisis management. You're taking a much more evolved transformation approach.

And what we find is that these modes of operation are very different. In fact, we advocate that you put two different teams on them. You can't have the crisis management team also involved in long-term planning, and vice versa. It's too much to handle, and the two are approached in very conflicting ways. So we suggest that you have two different approaches, two different governance models, and two different teams that at some point in the future will come together.

Gardner: Craig, while you’re putting these small teams to work, are you able to see leadership qualities in people that maybe you didn’t have an opportunity to see before? Is this an opportunity for individuals to step up — and for managers to start looking for the type of leadership qualities — in this cauldron of crisis that will be of great value later?

Tech leaders emerge 

Partridge: I think that's a fantastic observation, because you never see leadership qualities on display more than when people are in such pressurized situations. These are moments when decisions need to be made rapidly, and where people have to have the confidence to acknowledge that sometimes those decisions may be wrong. The kind of leadership qualities you're going to see exhibited through this nine-step model are exactly the kind that will give you a short list of potential next leaders of the organization.

With any of these moments of crisis management and long-term planning, those that step forward and take on that burden and start to lead the organization through the thinking, process, strategy, and the vision are going to be that pool of the next talent. So nurture them through this process because they could lead you well into the future.

Gardner: And I suppose this is also a time when we can look for technologies that are innovative and work in a pinch to be elevated in priority. I think we’re accelerating adoption patterns in this crisis mode.

So what about the information technology, Craig? Are we starting to see more use of cloud-first, software as a service (SaaS) models, multi-cloud, and hybrid IT? How are the various models of IT now available manifesting themselves in terms of being applicable now in the crisis?

Partridge: This global pandemic is maybe the first one that's going to showcase why technology has become such an integral part of how customers build, deliver, and create their value propositions. First, the most immediate area where technology has come into play is that massively distributed workforce now working from home. How was that possible even 10 years ago? How is it possible for an organization of 50,000 employees to suddenly have 70 percent to 80 percent of that workforce now communicating and collaborating online using virtual sessions?

The technology that underpins all of that remote experience has absolutely come to the fore. Then there are some technologies, which you may not see, but which are absolutely critical to how, as a society, we will respond to this.

Think about all of the data modeling and the number crunching that’s going on in these high-performance compute (HPC) platforms out there actively searching for the cure and the remedy to the novel coronavirus crisis. And the scientific field and HPC have become absolutely key to that.

You mentioned as-a-service models, and absolutely the capability to instantly consume and to match that with what you pay has two benefits. Not only does it keep costs aligned, which is something people are really going to focus on, but it might ease some of that economic pressure because, as we know, in those kinds of models technology is not consumed as an upfront capital asset. The cost is deferred over its useful life, easing the economic stresses that customers are going to have.

If we hadn’t been through the cloud era, through pivoting technology to it being consumed as a service, then I don’t think we’d be in a position where we could respond as well in this particular time.

Dixit: What’s also very important is the mode of consumption of the technology. More and more customers are going to look for flexible models, especially in how they think about their hybrid IT model. What is the right mix of that hybrid IT? I think in these as-a-service models, or consumption-based models — where you pay for what you consume, no more, no less, and it allows you to flex up or down — that flexibility is going to drive a lot of the technology choices.

Gardner: As we transition to the new normal and we recognize we have to be thinking strategically as well as tactically at all times, do you have any reassurance that you can provide, Rohit, to people as they endeavor to get to that new normal?

Crisis management and strategic planning going hand-in-hand sounds like a great challenge. Are you seeing success? Are you seeing early signs that people are getting this and that it will be something that will put them in a stronger position having gone through this crisis?

In difficulty lies opportunity 

Dixit: Dana, for me, one of the best things I have seen in my interactions with customers, even internally at HPE, is the level of care and support that the companies are giving to their employees. I think it’s amazing. As a society and as a community, I’m really heartened by how positive the reactions have been and how well the companies are supporting them. That’s just one point, and I think technology does play a part in that, in enabling that.

The point I go back to is to never waste a crisis. The discontinuities we talked about create great opportunities if we approach this with the right mindset — and I see a lot of companies actually doing that, approaching this from an opportunity perspective instead of just playing defense. I think that's really good to see.

If somebody is looking to design for the future, there are now more technologies, more consumption methods, and more ways of approaching a problem than ever existed before. You have private cloud and public cloud, and you have consumption models on-premises, off-premises, and via colocation options. You have IoT, AI, and containerization. There is so much innovation out there and so many ways of doing things differently. Take that opportunity-based approach; it is going to be very disruptive and could be the making of a lot of great innovation.

Gardner: Craig, what light at the end of the tunnel do you see based on the work you’re doing with clients right now? What’s encouraging you that this is going to be a path to new levels of innovation and creativity?

Partridge: Over the last few years, I’ve been spending most of my time working with customers through their digital transformation agendas. A big focus has been the pivot toward better experiences: better customer engagement, better citizen engagement.  And a lot of that is enabled through digital engagement models underpinned by technology and driven by software.

What we are seeing now is the proof-positive that those investments made over the last few years were exactly the right investments to make. Those companies now have the capability to reach out very quickly, very rapidly. They can enable new features, new functions, new services, and new capabilities through those software-delivered experiences.

For me, what's heartwarming is to see how we have embraced technology in our daily lives. It's those customers who went in early with a customer experience-focused, technology-enabled, edge-to-cloud approach who are now able to dance very quickly around this axis that we described in the HPE Pointnext Services nine-step model. So it's a great proof-point.

Gardner: A lot of the detail on the nine-step program, along with some great visual graphics, is available at Enterprise.nxt. There is an article there about the nine-step process, about dealing with the current crisis, and about setting yourself up for a new future.

Where else can people go to learn more about how to approach this as a partnership? Where else can people learn about how to deal with the current situation and try to come out in the best shape they can?

Dixit: There are a lot of great resources that customers and partners can turn to at HPE, specifically, of course, hpe.com, which has a specific page around COVID-19 responses and great resources available to our customers and partners.

A lot of the capabilities that underpin some of the technology conversations we have been having are enabled through our Pointnext Services organization. So again, visit hpe.com/services to be able to get access to some of the resources.

And just pick up the phone and speak to HPE counterparts because they are there to help you. Nothing is more important to HPE at the moment than being there for our partners and customers.

Gardner: We are going to be doing more podcast discussions on dealing with the nine-step program as well as crisis management and long-term digital transformation here at BriefingsDirect, so look for more content there.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

International Data Center Day highlights how sustainability and diversity will shape the evolving modern IT landscape

The next BriefingsDirect panel discussion explores how March 25's International Data Center Day provides an opportunity to look both at where things have been in the evolution of the modern data center and, more importantly, at where they are going.

Those trends involve a lot more than just technology. Data center challenges and advancements alike will hinge on the next generation of talent operating those data centers and on how diversity and equal opportunity best support that talent.

Our gathered experts also forecast that sustainability improvements — rather than just optimizing the speeds and feeds — will help determine the true long-term efficiency of IT facilities and systems.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To observe International Data Center Day with a look at ways to make the data centers of the future the best-operated and the greenest ever, we are joined by Jaime Leverton, Senior Vice President and Chief Commercial Officer at eStruxture Data Centers in Montreal; Angie McMillin, Vice President and General Manager of IT Systems at Vertiv; and Erin Dowd, Vice President of Global Human Resources at Vertiv. The International Data Center Day observance panel is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Erin, why — based on where we have come from — is there now a need to think differently about the next generation of data center talent?

Dowd: What's important to us is that we have a diverse population of employees. Traditionally, diversity is thought of in terms of ethnicity and gender. But when we consider diversity, we also think about diversity of thought, diversity of behavior, and diverse backgrounds.

That all makes us a much stronger company and a much stronger industry. It's representative of our customer base, frankly, and it's representative of the globe. We are ensuring that we have people working within our company from around the world, contributing all of those diverse thoughts and perspectives.

Gardner: We have often seen that creative and innovative thought comes when you have a group of individuals that come from a variety of backgrounds, and so it’s often a big benefit. Why has it been slow-going? What’s been holding back the diversity of the support talent for data centers?

Diversity for future data centers 

Dowd: It’s a competitive environment, so it’s a struggle to find diverse candidates. It goes beyond our tech type of roles and into sales and marketing. We look at our talent early in their careers, and we are working on growing talent, in terms of nurturing them, helping them to develop, and helping them to grow into leadership roles. It takes a proactive approach, and it’s more than just letting the talent pool evolve naturally. It is about taking proactive and definitive actions around attracting people and growing people.

Gardner: I don’t think I am going out on a limb by observing that over the past 30 years, it’s been a fairly male-dominated category of worker. Tell us why women in science, technology, engineering, and math, or the so-called STEM occupations, are going to be a big part of making that diversity a strength.

Dowd: That is a huge pipeline for us, as we benefit from all the initiatives to increase STEM education for women and men. The results help expand the pool, frankly, and allow candidates across the board who are interested at an early age to best prepare for this type of industry. We know historically that girls have been less likely to pursue STEM interests at early ages.

So ensuring that we have people across the continuum, that we have women in these roles, to model and mentor — that’s really important in expanding the pool. There are a lot of things that we can be doing around STEM, and we are looking at all those opportunities.

Gardner: Statistically there are more women in universities than men, so that should translate into a larger share in the IT business. We will be talking about that more.

But we would also like to focus on International Data Center Day issues around sustainability. Jaime, why is sustainability the gift that keeps giving when it comes to improving our modern data centers?

Leverton: International Data Center Day is about the next generation of data center professionals. And we know that for the next generation, they are committed to preserving the environment, which is good news for all of us as citizens. And as one of the world’s biggest consumers of energy, I believe the data center industry has a fundamental duty to elevate its environmental stewardship with energy efficient infrastructure and renewable power resources. I think the conversation really does go well together with diversity.

Gardner: Alright, let’s dive in a little bit more to the issues around talent and finding the best future pool. First, Erin please tell us about your role at Vertiv.

Dowd: I am the Global Business HR Partner at Vertiv. So my focus is to help us design, build, and deliver the right people strategy for our teams that have a global presence. We focus on having super-engaged and productive people in the right places with the right skills, and in developing career opportunities across the continuum — from early level to senior level of contributors.

Gardner: We have heard a lot about the skills shortage in IT in general terms, but in your experience at Vertiv, what are your observations about the skills shortage? What challenges do you face?

Dowd: We have challenges in terms of a shortage of diverse candidates across the board. This is present in all positions. Increasing the diversity of candidates that we can attract and grow will help us address the shortage first-hand.

Gardner: And in addition to doing this on a purely pragmatic basis, there are other larger benefits. Tell us why diversity is so important to Vertiv over the long term?

Dowd: Diversity is the right thing to do. Just hands down, it has business benefits, and it has cultural benefits. As I mentioned earlier, it reflects not only on our global presence but also on our customer base. And research shows that companies that have more diverse workforces outperform and out-innovate those that don’t.

For example, companies in the top quartile for workforce diversity are 33 percent more likely to financially outperform their less diverse counterparts, according to a 2018 McKinsey study. We have been embracing diversity, which aligns with our core values. It’s the right competitive strategy. It’s going to allow us to compete in the marketplace and relate to our customers best.

Gardner: Is Vertiv an outlier in this? Or is this the way the whole industry is going?

Dive into competitive talent pool 

Dowd: This is the way the whole industry is going. I come from a line of IT companies prior to my tenure with Vertiv. Even the biggest, most established companies are still wrestling with the competitiveness of attracting candidates that have diversity of thought, diverse backgrounds, diverse behaviors, and diversity of ethnicity and gender as well.

The demand is greatest in engineering and services, and everywhere we are experiencing turnover because it is such a competitive environment. We are competing with brother and sister companies for the same types of talent.

As I mentioned previously, if we attract people who are diverse in terms of thought, ethnicity, and gender, we can expand our candidate pool and enhance our competitiveness. When our talent acquisition team looks at talent, they are expanding and enhancing diversity in our university relations and in our recruiting efforts. They are targeting diverse candidates as we hire interns, and then folks who are later in their careers as well.

Gardner: We have been looking at this through the demand side, but on the supply-side, what are the incentives? Why should people from a variety of backgrounds consider and pursue these IT careers? What are the benefits to them?

Dowd: The career opportunities are amazing. This is a field that’s growing and that is not going to go away. We depend on IT infrastructure and data centers across our world, and we’re doing that more and more over time. There’s opportunity in the workplace and there are a lot of things that we are specifically doing at Vertiv to keep people engaged and excited. We think a lot about attracting talent.

But there is another piece, which is about retaining talent. Some of the things we are doing at Vertiv are specifically launching programs aligned with diversity.

Recently, and Angie has been involved in this, we launched a women’s resource group called Women at Vertiv Excel (WAVE). That group nurtures women and encourages more women to pursue leadership positions within Vertiv. It focuses on diversity in leadership positions, but it also provides important training that women can apply in their current roles.

Together we are building one Vertiv culture, which is a really important framework for our company. We are creating solutions and resources that make us more competitive and reflect the global market. We find that diversity breeds new and different ideas, more innovation, and a deeper understanding of our customers, partners, employees, and our stakeholders all around the globe. We are a global company, so this is very important to us. It’s going to make us more successful as we grow into the future.

Another thing that we are doing is creating end-to-end management of Vertiv programs. This is new, and we continue to improve it. It integrates behavioral skills and training designed to look at the work that we do through the eyes of others. We utilize experiences and talent effectively to grow stronger and stronger teams. Part of this is about recruiting and hiring, with an emphasis on finding potential employees who possess diverse experiences, thought, and perspectives. Diversity of thought comes from field experiences and from different backgrounds, and all of this contributes to the value each employee brings to our organization.

We also are launching the Vertiv Operating System. It is being created, launched, and built with an emphasis on better understanding our differences, bridging gaps where they exist, and bringing out the best in everybody. It’s designed to encourage thought leadership and to help all of us work through change management together.

Finally, another program that we’ve been implementing across the globe is called Intrinsic. And Intrinsic supplies a foundational assessment designed to improve our understanding of ourselves and also of our colleagues. It’s a formal experiential program that’s going to help us all learn more about ourselves, what makes our individual values and styles unique, but then also it allows us to think about the people that we are working with. We can learn more about our colleagues, potentially our customers, and it allows us to grow in terms of our team dynamics and the techniques that we are using to manage conflict, stress, and change.

Collectively, as we look at the full continuum of how we behave at Vertiv in the future we are building for ourselves, all of these efforts work together toward changing the way we think as individuals, how we behave in groups, and ultimately evolving our organizational culture to be more diverse, more inclusive, and more innovative.

Gardner: Jaime at eStruxture, when we look at sustainability, it aligns quite well with these issues around talent and diversity because all the polling shows that the younger generation is much more focused on energy efficiency and consciousness around their impact on the natural world — so sustainability. Tell us why the need for sustainability is key and aligns so well with talent and retaining the best people to work for your organization.

Sustainability inspires next generation 

Leverton: What we know to be true about the next generation is when they look to choose a career path, or take on an assignment, they want to make sure that it aligns with their values. They want to do work that they believe in. So, our industry offers them that opportunity to be value-aligned and to make an impact where it counts.

As you can see all around us, people are working and learning remotely now more than ever, and data centers are what make all of that possible. They are crucial to our society and to our everyday lives. The data center industry is only going to continue to grow, and with our dependence on energy we have to have a focus on sustainability.

It represents a substantial opportunity to make a difference. It’s a fast-paced environment where we truly believe there is a career path for the next generation that will matter to them.

Gardner: Jaime, tell us about eStruxture Data Centers and your role there.

Leverton: eStruxture is a relatively new data center company. It was established just over three years ago, and we have grown rapidly from the acquisition of our first data center in Montreal. We now have three data centers in Montreal, two in Vancouver, and one in Calgary. We are a Canadian pure-play — Canadian-owned, -operated, and -financed. We really believe in the Canadian landscape, the Canadian story, and we are going to continue to focus on growth in this nation.

Gardner: When it comes to efficiency and sustainability, we often look at power usage effectiveness (PUE). Where are we in terms of getting to complete sustainability? Is that so farfetched?

Leverton: I don’t think it is. Huge strides have been made in reducing PUE, especially by us in our most recent construction, which has a PUE below 1.2. Organizations in our industry continue to innovate every day, trying to get as close to that 1.0 as humanly possible.

We are very lucky that we partner with Vertiv. Vertiv solutions are key in driving the efficiency of our data centers, and we know that progress can be made continually by addressing IT load efficiency, which is a savings that is incremental to PUE as well. PUE is specifically about the ratio between the total power the facility uses and the power used by the IT equipment itself. But we look at our data center and our business holistically to drive sustainability even beyond what PUE covers.
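To make the ratio concrete, here is a minimal sketch, using hypothetical meter readings rather than any real facility's data, of how PUE is typically calculated and why IT-load efficiency gains show up as savings beyond what PUE alone measures.

```python
# Minimal sketch: computing power usage effectiveness (PUE).
# The meter readings below are hypothetical, for illustration only.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """PUE = total facility power / IT equipment power; 1.0 is the theoretical ideal."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

it_load_kw = 1000.0   # power drawn by servers, storage, and network gear
overhead_kw = 180.0   # cooling, power distribution losses, lighting
total_kw = it_load_kw + overhead_kw

print(f"PUE: {pue(total_kw, it_load_kw):.2f}")  # about 1.18, i.e. "sub 1.2"

# Improving IT load efficiency (virtualization, newer servers) lowers it_load_kw
# for the same useful work. That saving is incremental to PUE, which only
# measures facility overhead relative to whatever the IT load happens to be.
```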

Gardner: It sounds like sustainability is essentially your middle name. Tell me more about that. How did you focus the construction and placement of your data centers so much on sustainability?


Leverton: All of our facilities have been designed with a focus on sustainability. When we have purchased facilities, we have immediately gone in to upgrade them and make them more efficient. We take advantage of free cooling wherever possible. As I mentioned, three of our data centers are in Montreal, so we get about eight months of the year of free cooling, and the majority of our data centers run on 99.5 percent hydroelectric energy, which is the cleanest possible energy that we can use.

We virtualize our environments as much as possible. We carefully select eco-responsible technologies and suppliers, and we are committed to continuing to improve our power usage effectiveness without ever sacrificing the performance, scalability, or uptime of our data centers, of course.

Gardner: And more specifically, when you look at that holistic approach to sustainability, how does working with a supplier like Vertiv augment and support that? How does that become a tag-team when it comes to the power source and the underlying infrastructure?

Leverton: Vertiv has just been such a great partner. They were there with us from the very beginning. We work together as a team, trying to make sure that we’re designing the best possible environment for our customers and for our community. One of our favorite solutions from Vertiv is around their thermal management, which is a water-free solution.

That is absolutely ideal in keeping with our commitment to operate as sustainably as possible. In addition to being water-free, it’s 75 percent more efficient because it has advanced controls and economization. Being able to partner with Vertiv and build their solutions into our design right from the beginning has made a huge, huge impact.

Gardner: And, like I mentioned, sustainability is the gift that keeps giving. This is not just a nice-to-have. This is a bottom-line benefit. Tell us about the costs and how that reinforces sustainability initiatives.

Leverton: Yes, while there is occasionally a higher cost in the short term, we firmly believe that the long-term total cost of ownership is lower — and the benefits far outweigh any initial incremental costs.

Obviously, it’s about our values. It’s critical that we do the right thing for the environment, for the community, for our staff, and for our customers. But, as I say, over the long term we believe the total cost is less. So, above and beyond the cost question, sustainability is the right thing to do.

Gardner: Jaime, when it comes to that sustainability formula, what really works? It’s not just benefiting the organization that’s supplying, it’s also benefiting the consumer. Tell us how sustainability is also a big plus when it comes to those people receiving the fruits of what the data centers produce.

Leverton: Sustainability is huge for our customers, and it’s increasingly a key component of their decision-making criteria. In fact, many hyperscale cloud providers and corporations — large corporate enterprises — have declared very ambitious environmental responsibility objectives and are shifting to green energy.

Microsoft, as an example, is targeting over 70 percent renewable energy for its data centers by 2023. Amazon reached a 50 percent renewable energy target in 2018 and is now aiming for 100 percent.

Women and STEM step IT up 

Gardner: Let’s look at the sustainability issue again through the lens of talent and the people who are going to be supporting these great initiatives. Angie, when it comes to bringing more women into the STEM professions, how does the IT industry present itself as an attractive career path, say for someone just graduating from high school?

McMillin: When I look at children today, they’re growing up with IT as part of their lives. That’s a huge advantage for them. They see firsthand the value and impact it has on everything they do. I look at my nieces and nephews, and even grandkids, and they can flip through phones and tablets, they are using Xboxes, you name it, all faster than adults.

They’re the next generation of IT. And now, with the COVID-19 situation, children are learning how to do schooling collaboratively — but also remotely. I believe we can engage children early with the devices they already know and use. The tools they’re now learning for schoolwork are a bridge to learning about what makes it all work, and that is the data center industry. All of our data centers can be a part of that as students complete their schooling and go into higher education. They will remember this experience that we’re all living through right now forever — and so why not build upon that?

Gardner: Jaime, does that align with your personal experience in terms of technology being part of the very fabric of life?

Leverton: Oh, absolutely. I’m really proud of what I’ve seen happening in Canada. I have two young daughters and they have been able to take part in STEM camps, coding clubs, and technology is part of their regular curriculum in elementary school. The best thing we can do for our children is to teach them about technology, teach them how to be responsible with tech, and to keep them engaged with it so that over time they can be comfortable looking toward STEM careers later on.

Gardner: Angie, to get people focused on being part of the next generation of data centers, are there certain degrees, paths, or educational strategies that they should be pursuing?

Education paths lead to STEM careers 

McMillin: Yes. It’s a really interesting time in education. There are countless degrees specifically geared toward the IT industry. Those are good bets, but specifically in networking and computers there’s coding, there is cybersecurity, which is becoming even more important, and the list goes on.

We currently see a very large skill set gap specifically around the science and technology functions, so these offer huge opportunities for a young person’s future. But I also want to highlight that the industry still needs the traditional engineering skills, such as power management, thermal management, and services. Equally important are the trade skills in this industry. There’s a current gap in that workforce, and while the training for it may be different, it still has a really vital role to play.

And then finally, we’d be remiss if we didn’t recognize the support functions: finance, HR, and marketing. People often think that you must be in the science or engineering part of the business to work in a particular market, and that really isn’t true. We need skill sets across a broad range to really help make us successful.

Leverton: I am an IT leader and have been in this business for 20 years, and my undergraduate degrees are in political science and psychology. So I really think that it’s all about how you think, and the other skills that you can bring to bear. More and more, we see emotional intelligence (EQ) and communication skills as the difference-maker in somebody’s career success or career trajectory. We just need to make sure that people aren’t afraid of coming out of more generalized degrees.

Gardner: We have heard a lot about the T-shaped skills structure, where we need the deep, vertical technology background but also want those with cultural leadership, liberal arts, and collaboration skills.

Angie, you are involved with mentoring young women specifically. What’s your take on the potential? What do you see now as the diversity is welling up and the available pool of talent is shifting?

McMillin: I am, and I absolutely love it. One of the things I do is support a women’s engineering summer camp, probably much like the ones Jaime’s daughters attend, and other events around my alma mater, the University of Dayton. I support mentoring interns and other early-career individuals, be they male or female. There is just so much potential in young people. They are absolutely eager to learn and play their part. They want to have relevance in the growing data center market, and in the IT and sustainability areas we talked about earlier. It’s really fun and enjoyable to help them along that journey.

I get asked for advice, and there are two key themes that I repeat. One is that success doesn’t happen overnight. So enjoy those small steps on the journey that we take to much greater things, and the important part of that, is really just keep taking the steps, learn as much as you can, and don’t give up. The second thing is to keep an open mind in your career, being willing to try new things and opportunities and sometimes doors are going to open that you didn’t even imagine, which is absolutely okay.

As a prime example, I started my education in the aerospace industry. When that industry was hurting, I switched to mechanical engineering, which is a broader field of study, and I spent a large part of my career in automotive. I then moved to consumer products, and now I am in data centers and IT. I am essentially a space geek and car junkie engineer with experience in engineering, strategy, sales, portfolio transformation, and operations. And now I am a general manager for an IT management portfolio.

If I hadn’t been open to new opportunities and doors along my career path, I wouldn’t be here today. So it’s an example for the younger generation. There are broad possibilities. You don’t have to have it all figured out now, but keep taking those steps and keep trying and keep learning — and the world awaits you, essentially.

Gardner: Angie, what sort of challenges have you faced over the years in your career? And how is that changing?

Women rise, challenges continue 

McMillin: It’s a great question. My experience at Vertiv has been wonderful, with a support structure of diversity for women and leadership. We talked about the new WAVE program that Erin mentioned earlier. You can feel that support across the organization, and it starts at the top. I also had the benefit, as I think many of us on this podcast have had, of good sponsors along the way in our career journeys to help us get to where we are.

But that doesn’t mean we haven’t faced challenges throughout our careers. And there are challenges that still arise for many in the industry. In all the industries I have worked in, which have all been male-dominated, there is this necessity to prove yourself as a woman — like 10 times over — for your right to be at the table with a voice, regardless of the credentials you have coming in. It gets exhausting, and it’s not something male counterparts face to the same degree. It’s a “show me first” and then “I might believe” dynamic, and it’s also BS. That’s something that a lot of women in this industry, as well as in other industries, continue to have to overcome.

The other common challenge is that you need to over-prove yourself so that people know the position was earned. I always want people to know I got my position because I earned it and have something to offer, not because of a diversity quota. That’s a lot better today than it’s been in years past. But I can tell you, I can still hear those kinds of accusations being made about female colleagues I have known throughout my career. When one woman gets elevated into a position and fails, it makes it a lot harder for other women to get the chance at an opportunity or promotion.

Now, again, it’s getting better. But to give you a real-world example, think about the industries where there are women CEOs. If one doesn’t succeed, boards get very nervous about putting another woman in a CEO position. If a male CEO doesn’t succeed, he is often seen as just not the right fit. So we still have a long way to go.

Gardner: Jaime at eStruxture, what’s been your experience as a woman in the technology field?

Leverton: Well, eStruxture has been an incredible experience for me. We have diversity throughout the organization. Actually, almost 50 percent of our population identifies as something other than white, heterosexual, and male, which is quite different from what I’ve experienced over the rest of my career in technology. From a female perspective, our senior leadership team is 35 percent women; our director population is almost 50 percent women.


So it’s been a real breath of fresh air for me. In fact, I would say it really speaks to the values of our founder when he started this company three years ago and did it with the intention of having a diverse organization. Not only does it better mirror our customers but it absolutely reflects the values of our organization, the culture we wanted to create, and ultimately to drive better returns.

Gardner: Angie, why is the data center industry a particularly attractive career choice right now? What will the future look like in say five years? Why should people be thinking about this as a no-brainer when it comes to their futures?

Wanted: Skilled data center pros 

McMillin: We are in a fascinating time for data center trends. The future is very, very strong. We know now — and the kids of today certainly know — that data isn’t going away. It’s part of our everyday lives and it’s only going to expand — it’s going to get faster with more compute power and capability. Let’s face it, nobody has patience for slow anymore. There are trends in artificial intelligence (AI), 5G, and others that haven’t even been thought of yet that are going to offer enormous potential for careers for those looking to get into the IT space.

And when we think about that new trend — with the increase of working or schooling remotely as many of us are doing currently — that may permanently alter how people work and learn going forward. There will be a need for different tools, capabilities, and data management. And how this all remains secure and efficient is also very important.

Likewise, more data centers will need to operate independently and be managed remotely. They will need to be more efficient. Sustainability is going to remain very prevalent, especially for edge-of-the-network data centers that enable connectivity and productivity wherever they are.

Gardner: Now that we are observing International Data Center Day 2020, where do you see the state of the data center in just the next few years? Angie, what’s going to be changing that makes this even more important to almost every aspect of our lives and businesses?

McMillin: We know now that the data center, as an ecosystem, is changing dramatically. The hybrid model is enabling a diversification of data workloads, where customers get the best of all the options available: cloud, data center, and edge. Our regional and global surveys of data center professionals show those areas experiencing phenomenal growth. And we also see a lot more remote management to operate and maintain these disparate locations securely.

We need more people with all the skill sets capable of supporting these advancements on the horizon like 5G, the industrial internet of things (IIoT), and AI.

Gardner: Erin, where do you see the trends of technology and human resources going that will together shape the future of the data center?

Dowd: I will piggyback on the technology trends that Angie just referenced and say the future requires more skilled professionals. It will be more competitive in the industry to hire those professionals, and so it’s really a great situation for candidates.

It makes it important for companies like Vertiv to continue creating environments that favor diversity. Diversity should manifest in many different ways, in an environment where we welcome and nurture a broad variety of people. That’s the direction of the future and, naturally, the secret to success.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Vertiv.


Business readiness provides an agile key to surviving and thriving in these uncertain times

Just as the nature of risk has been a whirling dervish of late, the counter-forces of business continuity measures have had to turn on a dime as well. What used to mean better batteries for servers and mirrored, distributed datacenters has recently evolved into anywhere, any-circumstance solutions that keep workers working — no matter what.

Out-of-the-blue workplace disruptions — whether natural disasters, political unrest, or the current coronavirus pandemic — have shown how true business continuity means enabling all employees to continue to work in a safe and secure manner.

The next BriefingsDirect business agility panel discussion explores how companies and communities alike are adjusting to a variety of workplace threats using new ways of enabling enterprise-class access and distribution of vital data resources and applications.

And in doing so, these public and private sector innovators are setting themselves up to be more agile, intelligent, and responsive to their workers, customers, and citizens once the disaster inevitably passes.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to share stories on making IT systems and people evolve together to overcome workplace disruptions are Chris McMasters, Chief Information Officer (CIO) at the City of Corona, California; Jordan Catling, Associate Director of Client Technology at The University of Sydney in Australia; and Tim Minahan, Executive Vice President of Strategy and Chief Marketing Officer at Citrix. The panel is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tim, how has business readiness changed over the past few years? It seems to be a moving target.

Minahan: The very nature of business readiness is not about preparing for what’s happening today — or responding to a specific incident. It’s about having a plan to ensure that your work environment is ready for any situation.

That certainly means having in place the right policies and contingency plans, but it also — with today’s knowledge workforce — goes to enabling a very flexible and dynamic workspace infrastructure that allows you to scale up, scale down, and move your entire workforce on a moment’s notice.

You need to ensure that your employees can continue to work safely and remotely while giving your company the confidence that they’re doing that all in a very secure way, so the company’s information and infrastructure remains secure.

Gardner: Chris McMasters, as a CIO, you surely remember the days when IT systems were brittle, not easily adjusted, and hard to change. Has the nature of work and these business continuity challenges forced IT to be more agile?

McMasters: Yes, absolutely. There’s no better example than in government. Government IT is known for being on-premises and very resistant to change. In the current environment everything has been flipped on its head. We’re having to be flexible, more dynamic in how we deploy services, and in how users get those services.

Gardner: Jordan, higher education hasn’t necessarily been the place where we’d expect business continuity challenges to be overcome. But you’ve been dealing with an aggressive outbreak of the coronavirus in China.

Catling: It’s been a very interesting six months for us, particularly in higher education, with the Australian fires, floods, and now the coronavirus. But generally, as an institution that operates across more than 22 locations, with teaching hospitals and campuses — our largest campus has its own zip code — this is part of our day, enabling people to work from wherever they are.

The really interesting thing about this situation is we’re having to enable teaching from places that we wouldn’t ordinarily. We’re having to make better use of the tools that we have available to come up with innovative solutions to keep delivering a distinctive education that The University of Sydney is known for.

Gardner: And when you’re trying to anticipate challenges, something like COVID-19, the disease that emanates from the coronavirus, did you ever think that you’d have to virtually overnight provide students stuck in one location with the opportunity to continue to learn from a distance?

Catling: We need to always be preparing for a number of scenarios. We need to be able to rapidly deploy solutions to enable people to work from wherever they are. The flexibility and dynamic toolsets are really important for us to be able to scale up safely and securely.

Gardner: Tim, the idea of business continuity including workers not only working at home but perhaps in far-flung countries where they’ve been stuck because of a quarantine, for example — these haven’t always been what we consider IT business continuity. Why is worker continuity more important than ever?

Minahan: Globally we’re recognizing the importance of the overall employee experience and how it’s becoming a key differentiator for companies and organizations. We have a global shortage of medium- to high-skilled talent. We’re short about 85 million workers.

So companies are battling for the high ground on providing preferred ways to work. One way they do that is ensuring that they can provide flexible work environments, ones that rely on effective workplace technologies that enable employees to do their very best work wherever that might be. That might be in an office building. It might be in a remote location, or in certain situations they may need to turn on a dime and move from their office to the home force to keep operations going. Companies are planning to be flexible not just for business readiness but also for competitive advantage.

Gardner: Making this happen with enterprise-caliber, mission-critical reliability isn’t just a matter of renting some new end-devices and throwing up a few hotspots. Why is this about an end-to-end solution, and not just point solutions?

Be proactive not reactive

Minahan: One of the most important things to recognize is that companies often first react to a crisis environment. Currently, you’re hearing a lot of, “Hey, we just purchased 250,000 laptops to distribute to students and teachers to maintain their education,” like the school system in Miami, for example.

However, while that may enable and empower students and employees, it may not be paired with proper security measures, putting the company’s, workers’, and customers’ personal information at risk.

You need to plan from the get-go for having a very flexible, remote workplace infrastructure — one that embeds security. That way — no matter where the work needs to get done, no matter on what device, or even on whatever unfamiliar network — you can be assured that the appropriate security policies are in place to protect the private information of your employees, the critical information of your business, and certainly any kind of customer or constituent information that is at stake.

Gardner: Let’s hear what you get when you do this right. Jordan at The University of Sydney, you had more than 14,000 students unexpectedly quarantined in China, yet they still needed to somehow do their coursework. Tell us how this came about, and what you’ve done to accommodate them.

Quality IT during quarantine

Catling: Exactly right. As this situation began to develop in late January, we quite quickly began to scenario plan around the possible eventualities. A significant part of our role, as the technologists within the university, is making sure that we’re providing a toolset that can adapt to the needs of the community.

So we looked at various platforms that we were already using — and some that we hadn’t — to work out what to do. Within the academic community, we needed the best set of tools for our staff to use in different and innovative ways. We quickly had to develop solutions and had to lean on our partners to help us with developing those.

Gardner: Did you know where your students were going to be housed? Was this a case where you knew that they were going to be in a certain type of facility with certain types of resources or are they scattered around? How did you deal with that last mile issue, so to speak?

Catling: The last mile issue is a real tricky one. We knew that people were going to be in various locations throughout mainland China, and elsewhere. We needed to quickly build a solution capable of supporting our students — no matter where they were, no matter what device that they were using, and no matter what their local Internet connection was like.

We have had variability in the quality of our connections even within Australia. But now we needed a solution that would cater to as many people as possible and be considerate of quite a number of different scenarios that our students and staff would be facing.

Gardner: How were you able to provide that quality of service across so many applications given that level of variability?

Catling: The biggest focus for us, of course, is the safety and security of our staff and students. It’s paramount. We very quickly tried to work out where our people would be connecting from and tried to make sure that the resources we were providing, the connection to the resources, would be as close to them as possible to minimize the impact of that last mile.

We worked with Citrix to put together a set of application delivery controllers into Hong Kong to make sure that the access to the solution was nice and fast. Then we worked to optimize the connection back from Hong Kong to Sydney to maximize the user experience for our staff and students.
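As an illustration of the placement decision Catling describes, here is a minimal sketch, with hypothetical gateway hostnames and no connection to the university's actual tooling, of how one might measure round-trip latency from a user's vantage point to candidate regions so that access points such as application delivery controllers land closest to the users they serve.

```python
# Minimal sketch: choosing where to place regional access points by measuring
# round-trip latency. Endpoints and thresholds are hypothetical examples.
import socket
import time

CANDIDATE_REGIONS = {
    "hong-kong": "hk.example-gateway.net",
    "singapore": "sg.example-gateway.net",
    "sydney": "syd.example-gateway.net",
}

def tcp_rtt_ms(host: str, port: int = 443, timeout: float = 2.0) -> float:
    """Time a TCP handshake to approximate round-trip latency, in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000.0

def pick_closest_region(regions):
    """Return the region name with the lowest measured latency from this vantage point."""
    latencies = {}
    for name, host in regions.items():
        try:
            latencies[name] = tcp_rtt_ms(host)
        except OSError:
            latencies[name] = float("inf")  # unreachable from here
    return min(latencies, key=latencies.get)

if __name__ == "__main__":
    print("Closest region:", pick_closest_region(CANDIDATE_REGIONS))
```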

Gardner: So this has very much been a cloud-enabled solution. You couldn’t have really done this eight or 10 years ago.

Catling: Certainly not this quickly. Literally from putting a call into Citrix, we went from design to a production environment within seven days. For me, that’s unheard of, really. Regardless of whether it’s 10 years ago or 10 weeks ago, it was quite a monumental effort. It has highlighted the importance of having partners that seek to understand the business problems you’re facing, come up with innovative solutions rapidly, and are able to deploy those at scale. And cloud is obviously a really important part of that.

We are still delivering on this solution. We have capabilities now that we didn’t have a couple of months ago. We’re able to provide applications to students no matter where they are. They’re able to continue their studies.

Obviously, the solution needs to remain flexible to the evolving needs. The situation is changing frequently and we are discovering new needs and new requirements. As our academics start to use the technology in different ways, we’re evolving the solution based on their feedback to try and maximize the experience for both our staff and students.

Gardner: Tim, when you hear Jordan describe this solution, does it strike you as a harbinger of more business continuity things to come? How has the coronavirus issue — and not just China but in Europe and in North America — reinforced your idea of what a workplace-enhanced business continuity solution should be?

Business continuity in crisis

Minahan: We continue to field a rising number of inquiries from customers and other companies. They are trying to assess the best ways to ensure continuity of their business operations and switch to a remote workforce in a very short period of time.

Situations like this remind us that we need to be planning today for any kind of business-ready situation. Using these technologies ensures that you can quickly adapt your work models, moving entire employee groups from an office to a remote environment, if needed, whether it’s because of virus, flood, or any other unplanned event.

What’s exciting for me is being able to use such agile work models and digital workspace technology to arm companies with new sources for growth and competitive advantage.

One good example is we recently partnered with the Center for Economics and Business Research to examine the impact remote work models and technologies have on business and economic growth. We found that 69 percent of people who are currently unemployed or economically inactive would be willing to start working if given the opportunity to work flexibly by having the right technology.

They further estimate that activating these, if you will, untapped pools of talent through these flexible work-from-home models — especially for parents, workers in rural areas, retirees, and part-time and gig workers, folks who are normally outside of the traditional work pool — and reactivating them through digital workspace technologies could drive upward of an initial $2 trillion in economic gains across the US economy. So, the investment in readiness that folks are making is now being applied to drive ongoing business results even in non-crisis times.

Gardner: The coronavirus has certainly been leading the headlines recently, but it wasn’t that long ago that we had other striking headlines.

Chris McMasters, in California last fall the wildfires proved to be a recurring problem. Tell us about Corona and how adjusting to a very dangerous environment — while requiring your key employees to continue to work — allowed you to meet a major business continuity challenge.

Fighting fire with cloud

McMasters: Corona is like a lot of local governments within the United States. I came from the private sector and have been in city IT for about four years now. When I first got there, everything was on-premises. Our backup was literally three miles away on the other side of the freeway.

If there was a disaster and something totaled the city, literally all of our technology assets would be down, which concerned me. I used to work for a national company, where we had offices all over and backed up across the country. So this was a much different environment. Yet we were dealing with public safety, with police and fire services and 911 service, and those can never go down. Citizens depend on all of that.

That was a wake-up call for me. At that time, we didn’t really have any virtual desktop infrastructure (VDI) going on. We did have server virtualization, but nothing in the cloud. In the government sector, we have a lot of regulation that revolves around the cloud and its security, especially when we are dealing with police and fire types of information. We have to be very careful. There are requirements both from the State of California and the federal government that we have to comply with.

At first, we used a government cloud, which was a little bit slower in terms of innovation because of all the regulations. But that was a first step to understanding what was ahead for us. We started this process about two years ago. At the time, we felt like we needed to push more of our assets to the cloud to give us more continuity.

At the end of the day, we realized we also needed to get the desktops up there, too: Using VDI and the cloud. And at the time, no one was doing that. But we went and talked to Citrix. We flew out to their headquarters, sat with their people, and discussed our initiative, what we are trying to accomplish, and how that would extend out to support our environment for public safety. And that means all of the people out at the edge who actually touch citizens and provide emergency support services.

That was the beginning of the journey and Citrix has been there since day-one. They develop the products around that particular idea for us right up to today.

In the last two years, we’ve had quite a few fires in the State of California. Corona butts right up against the forest line and so we have had a lot of damage done by fires, both in our city and in the surrounding county. And there have been the impacts that occur after fires, too, which include mudslides. We get the whole gamut of that stuff.

But now we find that those first responders have the data to take action. We get the data into their hands quickly, make sure it’s secure on the way there, and we make that continuative so that it never fails. Those are the last people that we want to have fail.

We’ve been able to utilize this type of a platform where our data currently resides in two different datacenters in two different states. It’s on encrypted arrays at rest.

We are operating on a software-defined network so we can look at security from a completely different perspective. The old way was, “Let’s build a moat around it and a big wall, and hopefully no one gets in.” Now, instead we look at it quite differently. Our assets are protected outside of our facilities.

Those personnel riding in fire engines, in police cars, right up at the edge — they have to be secure right up to that edge. We have to maintain and understand the identity of that person. We need to know what applications they are accessing, or should not be accessing, and be secure all along that path.
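As a sketch of the kind of identity-and-application check being described, here is a generic example with hypothetical roles, app names, and device checks; it is not the city's actual policy engine, just an illustration of deciding access at the edge.

```python
# Minimal sketch of an identity-aware access check for edge workers.
# Roles, applications, and device states below are hypothetical examples.
from dataclasses import dataclass

ROLE_ALLOWED_APPS = {
    "firefighter": {"dispatch", "incident-maps", "hydrant-data"},
    "police-officer": {"dispatch", "records", "incident-maps"},
    "permit-clerk": {"permits", "billing"},
}

@dataclass
class AccessRequest:
    user: str
    role: str
    app: str
    device_encrypted: bool   # e.g., data encrypted at rest on the endpoint
    mfa_passed: bool         # identity verified with a second factor

def is_allowed(req: AccessRequest) -> bool:
    """Allow access only when identity, device posture, and app entitlement all check out."""
    if not (req.mfa_passed and req.device_encrypted):
        return False
    return req.app in ROLE_ALLOWED_APPS.get(req.role, set())

# Example: a firefighter in the field requesting incident maps.
request = AccessRequest("j.doe", "firefighter", "incident-maps",
                        device_encrypted=True, mfa_passed=True)
print(is_allowed(request))   # True; the same user asking for "records" would be denied
```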

This has all changed our outlook on how we deal with things and what a modern-day work environment looks like. The money we use comes from the taxes people pay, and we provide services for our citizens. The interesting thing about that is we’re now driving toward the idea of government on-demand.

Before, when you came home right after a hard day’s work, city hall would be closed. Government was open 8 to 5, when people are normally working. So, when you wanted to conduct business at city hall, you had to take some time off work. You tried to find one day of the week, or a time when you might sneak in there, to get your permits for something and proceed with your business.

But our new idea is different. Most of our services can be provided online for people. If we can do that, that’s fantastic, right? So, you can come home and say, “Hey, you know what? I was thinking about building an addition to my house.” So you go online, file your permits, and submit all of your documents electronically to us.

The difference that VDI provides for our employees is that I can now tap into a workforce of, let’s say, a single mother who has a special-needs child and can’t work normal hours, but she can work at night. That person can take a permit, look at it at 6 or 7 pm, process it, and by 5 am the next day that process is done. You wake up in the morning, and your permit has been processed by the city and completed. That type of flexibility is integral for us to make government more effective for people.

It’s not only the public safety support we are concerned about. It’s also about generally providing flexible services for people and making sure government continues to operate.

Gardner:  Tim, it’s interesting that by addressing business continuity issues and disasters we are able to move very rapidly to a government on-demand or higher education on-demand. So, what are some of the larger goals when it comes to workforce agility?

Flexibility extends the business

Minahan: The examples that Chris and Jordan just gave are what excites me about flexible work models, empowered by digital workplace technologies, and the ability to embrace entirely new business models.

I used the example from the Center for Economics and Business Research and how to tap into untapped talent pools. Another example of a company using similar technology is eBay. Like many of its competitors, eBay would build a big call center, hire a bunch of people, and train them up, and then one of the competitors would build a call center down the street and steal them away. They had rapid turnover. They finally said, “Enough is enough, we have to think of a different model.”

Well, they used the same approach of providing a secure digital workspace to reach into new talent pools outside of big cities. They could now hire gig workers, stay-at-home parents, etc., and re-engage them in the workforce by using the workplace platform to arm them at the edge and provide a service that was formally only provided in a big work hub, a big call center.

They went from having zero home force workers to 600 by the end of last year, and they are on a path to 4,000 by the end of this year. eBay solved a big problem, which is providing support for customers. How do I have a call center in a very competitive market? Well, I turn the tables and create new pools of talent, using technology in an entirely different way.

Gardner: Jordan, now that you’ve had help from organizations like Citrix to deal with your tough issue of students stuck in China, or other areas where there’s a quarantine, are you going to take that innovation and use it in other ways? Is this a gift that keeps giving?

Catling: It’s a really interesting question. What it has demonstrated to me is that, as technologists, we need to be working with all of our people across the organization to understand their needs and to provide the right tools, but not necessarily to be prescriptive in how they are used. This current coronavirus situation has demonstrated to us that a combination of just a few tools — for example, the Citrix platform, Zoom, Echo, and Canvas — means a very different thing to one person than to another person.

There’s such large variability in the way that education is delivered across the university, across so many disciplines, that it becomes about providing a flexible set of tools that all of our people can use in different and exciting ways. That extends not only to the current situation but to more normal times.

If we can provide the right toolset that’s flexible and meets the users where they are, and also make sure that the solutions provide a natural experience, that’s when you are really geared up well for success. The technology kind of fades into the background and becomes a true enabler of the bright minds across the institution.

Gardner: Chris, now that you’re able to do more with virtual desktops and delivering data regardless of the circumstances to your critical workers as well as to your citizens, what’s the next step?

Can you add a layer of intelligence rather than just being about better feeds and speeds? What comes next, and how would Citrix help you with that?

Intelligence improves government

McMasters: We’re neck deep in data analytics and in trying to understand how we can make impacts correctly by analyzing data. So adding artificial intelligence (AI) on top of those layers, understanding utilization of our resources, is the amazing part of where we’re going.

There’s so much unused hardware and processing power tied up in our normal desktop machines. Being able to disrupt that and flip it up on its end is a fundamental change in how government operates. This is literally turning it on end. AI can have an impact all the way down to how we do helpdesk, how we minimize our response and turnaround times, how we increase productivity, and how we serve the 160,000 people in my city. All of that changes.

Already I’m saving hundreds of thousands of dollars by using the cloud and VDI models and at the same time increasing all my service levels across the board. And now we can add this layer of business continuity to it, and that’s before we start benefitting from predictive AI and using data to determine asset utilization.

Moving from a CAPEX model to this OPEX model for government is something very new. It’s something the private sector has definitely capitalized on, and I think the public sector is ripe for doing the same. So for us, it’s changing everything, including our budget, how we deliver services, how we do helpdesk support, and the ways that we’re assessing our assets and leveraging citizens’ tax dollars correctly.
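To show the shape of the CAPEX-versus-OPEX comparison being described, here is a minimal sketch in which every figure is a hypothetical placeholder rather than the city's actual budget; the point is only that the two models are compared over the full planning horizon.

```python
# Minimal sketch: comparing an up-front (CAPEX) hardware refresh with a
# pay-as-you-go (OPEX) cloud/VDI model. All figures are hypothetical placeholders.

YEARS = 5

def capex_total(hardware_purchase: float, annual_maintenance: float, years: int = YEARS) -> float:
    """Buy the hardware up front, then pay maintenance each year."""
    return hardware_purchase + annual_maintenance * years

def opex_total(monthly_subscription: float, years: int = YEARS) -> float:
    """Pay a recurring subscription instead of owning the hardware."""
    return monthly_subscription * 12 * years

print(f"CAPEX model: ${capex_total(900_000, 120_000):,.0f} over {YEARS} years")
print(f"OPEX model:  ${opex_total(20_000):,.0f} over {YEARS} years")
# The comparison only makes sense over the whole horizon, and the OPEX figure
# can also scale down in quiet periods, which a sunk capital purchase cannot.
```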

Gardner: Tim, organizations, both public and private sector, get involved with these intelligent workspaces in a variety of ways. Sometimes it might be a critical issue such as business continuity or a pandemic.

But ultimately, as Chris just mentioned, this is about digital business transformation. How are you able to take whatever on-ramp organizations are getting into an intelligent workspace and then give them more reasons to see ongoing productivity? How is this something that has a snowball effect on productivity?

AI, ML works with you

Minahan: Chris hit the nail on the head. Certainly, the initial on-ramp to a digital workspace provides employees with unified and secure access to everything they need to be productive, in one experience. That means all of their apps and all of their content, regardless of where it is stored, what device they’re accessing it from, and where they’re accessing it from.

However, it gets really exciting when you go beyond that foundation of a unified experience in a secure environment and infuse things like machine learning (ML), digital assistants, and bots to change the way that people work. These can extract the key insights and tasks employees need to act on and offer them up in real time in a very personalized way. Employees can then quickly take care of those tasks, remove that noise from their day, and even be guided toward the right next steps to be more productive, more engaged, and able to do much more innovative and creative work.

So, absolutely, AI and ML and the rise of bots are the next phase of all of this, where it’s not just a place you go to launch apps and work securely, but a place where you go to get your very best work done.
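As a rough sketch of how such a personalized feed might prioritize work, the example below scores tasks by relevance, urgency, and effort; the tasks, weights, and scoring rule are hypothetical illustrations, not any particular vendor's algorithm.

```python
# Minimal sketch: ranking tasks for a personalized workspace feed.
# Tasks, weights, and the scoring rule are hypothetical illustrations.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Task:
    title: str
    due: datetime
    relevance: float     # 0..1, e.g. from a model of the user's role and history
    effort_minutes: int

def score(task: Task, now: datetime) -> float:
    """Favor tasks that are relevant, due soon, and quick to clear."""
    hours_left = max((task.due - now).total_seconds() / 3600.0, 0.5)
    urgency = 1.0 / hours_left
    quickness = 1.0 / max(task.effort_minutes, 1)
    return 0.6 * task.relevance + 0.3 * urgency + 0.1 * quickness

now = datetime.now()
tasks = [
    Task("Approve expense report", now + timedelta(hours=4), 0.7, 2),
    Task("Complete security training", now + timedelta(days=7), 0.4, 30),
    Task("Sign off on permit batch", now + timedelta(hours=1), 0.9, 10),
]

# Highest-scoring tasks surface first in the feed.
for t in sorted(tasks, key=lambda t: score(t, now), reverse=True):
    print(f"{score(t, now):.2f}  {t.title}")
```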

Gardner: Jordan, you were very impressively able to get more than 14,000 students to continue their education regardless of what Mother Nature threw at them. And you were able to do it in seven days. For those organizations that don’t want to be caught unprepared under such circumstances, and that want to become proactive, what lessons have you learned in your most recent journey that you can share? How can they be better positioned to combat any unfortunate circumstances they might face?

Prioritize when and how you work

Catling: It’s almost becoming cliché to say, but work is something that you do — it’s not a place anymore. So when we’re looking at and assessing tools for how we support the university, we’re focusing on taking a cloud-first approach where it doesn’t matter where a student or staff member is. They have access to all the resources they need on-demand. That’s one of the real guiding principles we should be using in our decision-making process.

Scalability is also a very important thing to us. The nature of the way that education is delivered today with an on-campus model is that demand is very peaky. We need to be mindful of how scalable and rapidly scalable a solution can be. That’s important to consider, particularly in the higher education context. How quickly can you scale up and down your environments to meet varying demands?
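To illustrate that scale-up and scale-down calculation in the simplest possible terms, here is a minimal sketch with hypothetical capacity figures and thresholds, not the university's actual sizing policy, for deciding how many virtual desktop hosts to keep running as demand peaks and subsides.

```python
# Minimal sketch: sizing a virtual desktop host pool as demand varies.
# Capacity figures and thresholds are hypothetical examples.

SESSIONS_PER_HOST = 25        # concurrent user sessions one host can serve
MIN_HOSTS = 2                 # always keep a small warm pool
TARGET_UTILIZATION = 0.75     # leave headroom so logins stay responsive

def hosts_needed(active_sessions: int) -> int:
    """Scale the pool so utilization stays near the target, never below the floor."""
    effective_capacity = int(SESSIONS_PER_HOST * TARGET_UTILIZATION)
    needed = -(-active_sessions // effective_capacity)   # ceiling division
    return max(needed, MIN_HOSTS)

# Exam week versus a quiet weekend:
print(hosts_needed(1400))   # peak demand -> large pool
print(hosts_needed(60))     # quiet period -> scale back toward the floor
```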

Also, it’s important to consider the number of flexible ways that each of the technology products you choose can be used. For example, with the Citrix platform we can use it in many different ways. It’s not only for us to provide applications out to students to complete their coursework. It can also be used for providing secure access to data and to workspaces. There are so many different ways it can be extended, and that’s a real important thing when deciding which platform to use.

The final really important takeaway for us has been the establishment of true partnerships. We’ve had extremely good support from our partners, such as Citrix and Zoom, where they very rapidly sought to understand and work with us to solve the unique business problems that we’re facing. The real, true partnership is not one of just providing products, but of really sitting down shoulder-to-shoulder, trying to understand, and also suggesting ways to use a technology we may not be thinking of — or that maybe has never been done before.

As Chris mentioned earlier, virtual desktops in the cloud weren’t a big thing all that many years ago. About a decade ago, we began working with Citrix to stream desktops to physical devices across campus.

At the time, that was a very unusual use of the technology. So I think the partnership is very important and something that organizations should develop and be ready to use. It goes in both directions at all times.

Gardner: Chris, now that you have, unfortunately, dealt with these last few harsh wildfire seasons in Southern California, what lessons have you learned? How do you make yourselves more like local government on demand?

Public-private partnerships

McMasters: That’s a big question. For us, we looked at breaking some of the paradigms that exist in government. Governments don’t have the same impetus to change as the private sector, and they are less willing to take risks. However, there are ways to work with vendors and partners to mitigate a lot of that risk, and ways to pilot and test cutting-edge technologies that don’t put you at risk as you push these things out.

There are very few vendors that I would consider such partners. I probably can count them on one hand in total, and the interesting thing is that when we were selecting a vendor for this particular project, we were looking for a true partner. In our case, it was Citrix and Microsoft who came to the table. And when I look back at what’s happened in our relationship with those two in particular, I couldn’t ask for anything better.

We have literally had technicians, engineers, everyone on-site, on the phone every step of the way as we have been developing this. They took a lot of the risk out for us, because we are dealing with public dollars and we need to make sure these projects work. To have that level of comfort and stability in the background and knowing that I can rely on these people was huge. It’s what allowed us to develop to where we are today, which is far advanced in the government world.

That’s where things have to change. This kind of public-private partnership is what the public sector needs to start maturing. It’s bidirectional; it goes both ways. There is a lot of information that we offer to them, and there are a lot of things they do for us. And so it goes back and forth as we develop this through the product cycle. It’s advantageous for both of us to be in it.

That’s where sometimes, especially in the public sector, we lose focus. We don’t understand what the private sector wants and where it is moving. It’s about being aligned on both sides of that equation — and it benefits both parties.

Technology is going to change, and it just keeps driving faster. There’s always another thing around the corner, but building these types of partnerships with vendors and understanding what they want helps them understand what you want, and then be able to deliver.

Gardner: Tim, how should businesses better work with vendor organizations to prepare themselves and their workers for a flexible future?

Minahan: First off, I would echo Chris’s comments. We all want government on-demand. You need a solution like that. But how should they work together? There are two great examples here in The University of Sydney and the City of Corona.

It really starts by listening. What are the problems we are trying to solve in planning for the future? How do we create a digitally agile organization and infrastructure that allows us to pursue new business opportunities, and just as easily ensure business continuity? So start by listening, map out a joint roadmap together and innovate toward that.

We are collectively as an industry constantly looking to innovate, constantly looking to leverage new technologies to drive business outcomes — whether those are for our citizens, students, or clientele. Start by listening, doing joint and co-development work, and constantly sharing that innovation with the rest of the market. It raises all boats.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Citrix.

As containers go mainstream, IT culture should pivot to end-to-end DevSecOps

Container-based deployment models have rapidly gained popularity everywhere from cloud platforms to corporate data centers. IT operators are now looking to extend the benefits of containers to more use cases, including the computing edge.

Yet in order to push containers further into the mainstream, security concerns need to be addressed across this new end-to-end container deployment spectrum — and that means addressing security during development under the rubric of DevSecOps best practices.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Stay with us now as the next BriefingsDirect Voice of Innovation discussion examines the escalating benefits that come from secure and robust container use with Simon Leech, Worldwide Security and Risk Management Practice at Hewlett Packard Enterprise (HPE) Pointnext Services. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Simon, are we at an inflection point where we’re going to see containers take off in the mainstream? Why is this the next level of virtualization?

Leech: We are certainly seeing a lot of interest from our customers when we speak to them about the best practices they want to follow in terms of rapid application development.

One of the things that always held people back a little bit with virtualization was that you were always reliant on an operating system (OS) to manage the applications that sit on top of it, and on managing the application code that you deploy to that environment.

But what we have seen with containers is that as everything starts to follow a cloud-native approach, we start to deal with our applications as lots of individual microservices that all communicate with one another to provide the application experience to the user. It makes a lot more sense from a development perspective to be able to address development in these small, microservice-based or module-based approaches.

So, while we are not seeing a massive influx of container-based projects going into mainstream production at the moment, there are certainly a lot of customers dipping their toes in the water to identify the best possibilities to adopt and address container use within their own application development environments.

Gardner: And because we saw developers grok the benefits of containers early and often, we have also seen them operate within a closed environment — not necessarily thinking about deployment. Is now the time to get developers thinking differently about containers — as not just perhaps a proof of concept (POC) or test environment, but as ready for the production mainstream?

Leech: Yes. One of the challenges I have seen with what you just described is that a lot of container projects start as a developer’s project on his laptop. So the developer is going out there, identifying a container-based technology as something interesting to play around with, and, as time goes by, realizes he can actually make a lot of progress by developing his applications using a container-based architecture.

What that means from an organizational perspective is that this is often done under the radar of management. One of the things we are discussing with our customers as we go and talk about addressing DevSecOps and DevOps initiatives is to make sure that you get that buy-in from the executive team, so that you can start to enable some top-down integration.

Don’t just see containers as a developer’s laptop project, but look at them broadly and understand how you can integrate them into the overall IT processes that your organization operates with. And that does require a good level of buy-in from the top.

Gardner: I imagine this requires a lifecycle approach to containers thinking — not just about the development, but in how they are going to be used over time and in different places.

Now, 451 Research recently predicted that the market for containers will hit $2.7 billion this year. Why do you think that the IT operators — the people who will be inheriting these applications and microservices — will also take advantage of containers? What does it bring to their needs and requirements beyond what the developers get out of it?

Quick-change code artists

Leech: One of the biggest advantages from an operational perspective is the ability to make fast changes to the code you are using. Whereas in a traditional application development environment a developer would need to make a change to some code and it would involve requesting downtime to update the complete application, with a container-based architecture you only have to update parts of the architecture.

So, it allows you to make many more changes than you previously would have been able to deliver to the organization — and it allows you to address those changes very rapidly.

Gardner: How does this allow for a more common environment to extend across hybrid IT — from on-premises to cloud to hybrid cloud and then ultimately to the edge?

Leech: Well, applications developed in containers and developed with a cloud-native approach are typically very portable. So you aren’t restricted to a particular OS version, for example. The container itself runs on top of any OS of the same family. Obviously, you can’t run a Windows container on top of a Linux OS, or vice versa.

But within the general Linux space there is pretty broad compatibility. So it is very easy for containers to be developed in one environment and then released into different environments.

Gardner: And that portability extends to the hyperscale cloud environments, the public cloud, so is there a multi-cloud extensibility benefit?

Leech: Yes, definitely. You see a lot of developers developing their applications in an on-premises environment with the intention that they are going to be provisioned into a cloud. If they are done properly, it shouldn’t matter if that’s a Google Cloud Platform instance, a Microsoft Azure instance, or Amazon Web Services (AWS).

Gardner: We have quite an opportunity in front of us with containers across the spectrum of continuous development and deployment and for multiple deployment scenarios. What challenges do we need to think about to embrace this as a lifecycle approach?

What are the challenges to providing security specifically, making sure that the containers are not going to add risk – and, in fact, improve the deployment productivity of organizations?

Make security a business priority 

Leech: When I address the security challenges with customers, I always focus on two areas. The first is the business challenge of adopting containers, and the security concerns and constraints that come along with that. And the second one is much more around the technology or technical challenges.

If you begin by looking at the business challenges of how to adopt containers securely, this requires a cultural shift, as I already mentioned. If we are going to adopt containers, we need to make sure we get the appropriate executive support and move past the concept that the developer is doing everything on his laptop. We also need to train our coders on secure coding.

A lot of developers have as their main goal producing high-quality software fast, and they are not trained as security specialists. It makes a lot of sense to put an education program in place that allows you to train those internal coders so that they understand the need to think a little bit more about security — especially in a container environment, where you have fast release cycles and sometimes the security checks get missed or don’t get properly implemented. It’s good to start with a very secure baseline.

And once you have addressed the cultural shift, the next thing is to think about the role of the security team in your container development and DevOps teams. And I always like to try and discuss with my customers the value of getting a security guy into the product development team from day one.

Often, we see in a traditional IT space that the application gets built, the infrastructure gets designed, and then the day before it’s all going to go into production someone calls security. Security comes along and says, “Hey, have you done risk assessments on this?” And that ends up delaying the project.

If you introduce the security person into the small, agile team as you build it to deliver your container development strategy, then they can think together with the developers. They can start doing risk assessments and threat modeling right from the very beginning of the project. It allows us to reduce delays that you might have with security testing.

At the same time, it also allows us to shift our testing left. In a traditional waterfall model, testing happens right before the product goes live; in a DevOps or DevSecOps model, it’s much better to embed the security best practices and proper tooling right into the continuous integration/continuous delivery (CI/CD) pipeline.

The last point around the business view is that, going back to the comment I made earlier, developers often are not aware of secure coding and how to make things secure. Providing a secure-by-default approach — or even a security self-service approach — gives developers, for example, a secure registry that provides known good container images, or infrastructure and compliance code, so that they can follow a much more template-based approach to security. That also pays a lot of dividends in the quality of the software as it goes out the door.
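
To make that concrete, here is a minimal sketch of such a secure-by-default guardrail, written as a simple Python pipeline check; the approved registry name and the Dockerfile path are hypothetical, not a prescribed toolchain:

    import sys

    # Hypothetical policy: base images must come from the organization's
    # curated registry of known-good images.
    APPROVED_REGISTRY = "registry.internal.example.com/base/"

    def check_dockerfile(path):
        """Return (line number, image) pairs that violate the policy."""
        violations = []
        with open(path) as dockerfile:
            for lineno, line in enumerate(dockerfile, start=1):
                stripped = line.strip()
                if stripped.upper().startswith("FROM "):
                    image = stripped.split()[1]
                    if not image.startswith(APPROVED_REGISTRY):
                        violations.append((lineno, image))
        return violations

    if __name__ == "__main__":
        bad = check_dockerfile(sys.argv[1])
        for lineno, image in bad:
            print(f"Dockerfile line {lineno}: {image} is not from the approved registry")
        sys.exit(1 if bad else 0)

A check like this can run automatically in the pipeline, so the secure default costs the developer nothing.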

Gardner: Are we talking about the same security precautions that traditional IT people might be accustomed to but now extending to containers? Or is there something different about how containers need to be secured?

Updates, the container way 

Leech: A lot of the principles are the same. So, there’s obviously still a need for network security tools. There’s still a need to do vulnerability assessments. There is still a need for encryption capabilities. But the difference with the way you would go about using technical controls to protect a container environment is all around this concept of the shared kernel.

An interesting white paper has been released by the National Institute of Standards and Technology (NIST) in the US, SP 800-190, which is their Application Container Security Guide. And this paper identifies five container security challenges around risks with the images, registry, orchestrator, the containers themselves, and the host OS.

So, when we’re looking at defining a security architecture for our customers, we always look at the risks within those five areas and try to define a security model that protects those best of all.

One of the important things to understand when we’re talking about securing containers is that we take a different approach to the way we do updates. In a traditional environment, we take a gold image for a virtual machine (VM) and deploy it to the hypervisor. If we then find a missing patch or a required update, we roll that update out using whatever patch management tools we use.

In a container environment, we take a completely different approach. We never update running containers. The source of your known good image is your registry. We update container images in the registry, keep the updated versions there, and use the container orchestration platform to make sure that the next time somebody launches a new container, it’s launched from the new container image.

It’s important to remember we don’t update things in the running environment. We always use the container lifecycle and involve the orchestration platform to make those updates. And that’s really a change in the mindset for a lot of security professionals, because they think, “Okay, I need to do a vulnerability assessment or risk assessment. Let me get out my Qualys and my Rapid7,” or whatever, and, “I’m going to scan the environment. I’m going to find out what’s missing, and then I’m going to deploy patches to plug in the risk.”
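
As a rough illustration of that lifecycle, here is a minimal sketch, assuming a Kubernetes-style orchestrator and its official Python client; the deployment, namespace, and image names are hypothetical:

    from kubernetes import client, config

    # Point the orchestrator at the updated image in the registry. It rolls
    # out new containers from that image and retires the old ones; nothing
    # in the running environment is patched in place.
    config.load_kube_config()
    apps = client.AppsV1Api()

    patch = {
        "spec": {
            "template": {
                "spec": {
                    "containers": [
                        {"name": "web",
                         "image": "registry.internal.example.com/web:1.4.2"}
                    ]
                }
            }
        }
    }
    apps.patch_namespaced_deployment(name="web", namespace="prod", body=patch)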

So we need to make sure that our vulnerability assessment process gets built right into the CI/CD pipeline and into the container orchestration tools we use to address that needed change in behavior.
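
One way that can look in practice, sketched here with the open-source Trivy scanner called from a Python build step; the image name and severity policy are assumptions rather than a recommended standard:

    import subprocess
    import sys

    IMAGE = "registry.internal.example.com/web:1.4.2"  # image just built by CI

    # Fail the pipeline if the scanner reports high or critical findings,
    # so a vulnerable image never reaches the registry of known-good images.
    result = subprocess.run(
        ["trivy", "image", "--severity", "HIGH,CRITICAL", "--exit-code", "1", IMAGE]
    )
    if result.returncode != 0:
        print("Vulnerabilities found; image will not be pushed to the registry")
        sys.exit(1)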

Gardner: It certainly sounds like the orchestration tools are playing a larger role in container security management. Do those in charge of the container orchestration need to be thinking more about security and risk?

Simplify app separation 

Leech: Yes and no. I think the orchestration platform definitely plays a role and the individuals that use it will need to be controlled in terms of making sure there is good privileged account management and integration into the enterprise authentication services. But there are a lot of capabilities built into the orchestration platforms today that make the job easier.

One of the challenges we’ve seen for a long time in software development, for example, is that developers take shortcuts by hard coding clear-text passwords into the code, because it’s easier. And, yeah, that’s understandable. You don’t need to worry about managing or remembering passwords.

But what you see a lot of orchestration platforms offering is the capability to deliver secrets management. So rather than storing the password within the code, you can request the secret from the secrets management service that the orchestration platform offers to you.
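
The pattern looks roughly like this, assuming a Kubernetes-style secrets store and the official Python client; the secret and key names are hypothetical:

    import base64
    from kubernetes import client, config

    # Instead of hard coding a password, ask the orchestrator's secrets
    # store for it at runtime.
    config.load_incluster_config()  # the code is running inside the cluster
    core = client.CoreV1Api()
    secret = core.read_namespaced_secret(name="db-credentials", namespace="prod")
    db_password = base64.b64decode(secret.data["password"]).decode("utf-8")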

These orchestration tools also give you the capability to separate container workloads for differing sensitivity levels within your organization. For example, you would not want to run containers that operate your web applications on the same physical host as containers that operate your financial applications. Why? Because although you have the capability with the container environment using separate namespaces to separate the individual container architectures from one another, it’s still a good security best practice to run those on completely different physical hosts or in a virtualized container environment on top of different VMs. This provides physical separation between the applications. Very often the orchestrators will allow you to provide that functionality within the environment without having to think too much about it.
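
In Kubernetes-style terms, that separation can be expressed by labeling hosts with a sensitivity tier and constraining where a workload may be scheduled. A minimal sketch, with hypothetical labels and names:

    from kubernetes import client, config

    config.load_kube_config()
    core = client.CoreV1Api()

    pod = {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "ledger-api", "namespace": "finance"},
        "spec": {
            # Only schedule onto hosts (or VMs) reserved for the financial
            # tier, keeping these workloads off the hosts running web apps.
            "nodeSelector": {"sensitivity-tier": "finance"},
            "containers": [
                {"name": "ledger-api",
                 "image": "registry.internal.example.com/ledger-api:2.1.0"}
            ],
        },
    }
    core.create_namespaced_pod(namespace="finance", body=pod)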

Gardner: There is another burgeoning new area where containers are being used. Not just in applications and runtime environments, but also for data and persistent data. HPE has been leading the charge on making containers appropriate for use with data in addition to applications.

How should the all-important security around data caches and different data sources enter into our thinking?

Save a slice for security 

Leech: Because containers are temporary instances, it’s important that you’re not actually storing any data within the container itself. At the same time, just as importantly, you’re not storing any of that data on the host OS either.

It’s important to provide persistent storage on an external storage array. Looking at storage arrays from HPE, for example, we have Nimble Storage and Primera. They have the capability, through plug-ins, to interact with the container environment and provide you with persistent storage that remains even as the containers are being provisioned and de-provisioned.

So your container itself, as I said, doesn’t store any of the data, but a well-architected application infrastructure will allow you to store that on a third-party storage array.
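
A sketch of that pattern in a Kubernetes-style environment: the pod stays stateless while its data lands on a volume claim backed by the external array through its plug-in. The names and mount path here are hypothetical:

    from kubernetes import client, config

    config.load_kube_config()
    core = client.CoreV1Api()

    pod = {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "reporting-db", "namespace": "prod"},
        "spec": {
            "containers": [{
                "name": "reporting-db",
                "image": "registry.internal.example.com/reporting-db:5.7",
                # Data is written to the mounted volume, never to the
                # container's own writable layer or the host OS.
                "volumeMounts": [{"name": "data", "mountPath": "/var/lib/db"}],
            }],
            "volumes": [{
                "name": "data",
                # The claim is satisfied by the external storage array via
                # its volume plug-in, so it survives container churn.
                "persistentVolumeClaim": {"claimName": "reporting-db-data"},
            }],
        },
    }
    core.create_namespaced_pod(namespace="prod", body=pod)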

Gardner: Simon, I’ve had an opportunity to read some of your blogs and one of your statements jumped out … “The organizational culture still lags behind when it comes to security.” What did you mean by that? And how does that organizational culture need to be examined, particularly with an increased use of containers?

Leech: It’s about getting the security guys involved in the DevSecOps projects early on in the lifecycle of that project. Don’t bring them to the table toward the end of the project. Make them a valuable member of that team. There was a comment made about the idea of a two-pizza team.

A two-pizza team means a meeting should never have more people in it than can be fed by two pizzas, and I think that applies equally to development teams when you’re working on container projects. They don’t need to be big; they don’t need to be massive.

It’s important to make sure there’s enough pizza saved for the security guy! You need to have that security guy in the room from the beginning to understand what the risks are. That’s a lot of where this cultural shift needs to change. And as I said, executive support plays a strong role in making sure that that happens.

Gardner: We’ve talked about people and process. There is also, of course, that third leg of the stool — the technology. Are the people building container platforms like HPE thinking along these lines as well? What does the technology, and the way it’s being designed, bring to the table to help organizations be DevSecOps-oriented?

Select specific, secure solutions 

Leech: There are a couple of ways that technology solutions are going to help. The first is pre-production solutions. These are the things that tend to get integrated into the orchestration platform itself, like image scanning, secure registry services, and secrets management.

A lot of those are going to be built into any container orchestration platform that you choose to adopt. There are also commercial solutions that support similar functions. It’s always up to an organization to do a thorough assessment of whether their needs can be met by the standard functions in the orchestration platform or if they need to look at some of the third-party vendors in that space, like Aqua Security or Twistlock, which was recently acquired by Palo Alto Networks, I believe.

And then there are the solutions that I would gather up as post-production commercial solutions. These are for things such as runtime protection of the container environment, container forensic capabilities, and network overlay products that allow you to separate your workloads at the network level and provide container-based firewalls and that sort of stuff.

Very few of these capabilities are actually built into the orchestration platforms. They tend to be third parties such as Sysdig, Guardicore, and NeuVector. And then there’s another bucket of solutions, which are more open-source solutions. These typically focus on a single function in a very cost-effective way and are typically open source community-led. And these are solutions such as SonarQube, Platform as a Service (PaaS), and Falco, which is the open source project that Sysdig runs. You also have Docker Bench and Calico, a networking security tool.

But no single solution covers all of an enterprise customer’s requirements. It remains a bit of a task to assess where you have security shortcomings, what products you need, and who’s going to be the best partner to deliver those products and technology solutions for you.

Gardner: And how are you designing Pointnext Services to fill that need to provide guidance across this still dynamic ecosystem of different solutions? How does the services part of the equation shake out?

Leech: We obviously have the technology solutions that we have built. For example, the HPE Container Platform, which is based around technology that we acquired as part of the BlueData acquisition. But at the end of the day, these are products. Companies need to understand how they can best use those products within their own specific enterprise environments.

I’m part of Pointnext Services, within the advisory and professional services team. A lot of the work that we do is around advising customers on the best approaches they can take. On one hand, we’d like them to purchase our HPE technology solutions, but on the other hand, a container-based engagement needs to be a services-led engagement, especially in the early phases where a lot of customers aren’t necessarily aware of all of the changes they’re going to have to make to their IT model.

At Pointnext, we deliver a number of container-oriented services, both in the general container implementation area as well as more specifically around container security. For example, I have developed and delivered transformation workshops around DevSecOps.

We also have container security planning workshops where we can help customers to understand the security requirements of containers in the context of their specific environments. A lot of this work is based around some discovery we’ve done to build our own container security solution reference architecture.

Gardner: Do you have any examples of organizations that have worked toward a DevSecOps perspective on continuous delivery and cloud native development? How are people putting this to work on the ground?

Edge elevates container benefits 

Leech: A lot of the customers we deal with today are still in the early phases of adopting containers. We see a lot of POC engagement where a particular customer may be wanting to understand how they could take traditional applications and modernize or architect those into cloud-native or container-based applications.

There’s a lot of experimentation going on. A lot of the implementations we see start off small, so the customer may buy a single technology stack for the purpose of testing and playing around with containers in their environment. But they have intentions within 12 to 18 months of being able to take that into a production setting and reaping the benefits of container-based deployments.

Gardner: And over the past few years, we’ve heard an awful lot of the benefits for moving closer to the computing edge, bringing more compute and even data and analytics processing to the edge. This could be in a number of vertical industries, from autonomous vehicles to manufacturing and healthcare.

But one of the concerns, if we move more compute to the edge, is will security risks go up? Is there something about doing container security properly that will make that edge more robust and more secure?

Leech: Yes, a container project done properly can actually be more secure than a traditional VM environment. This begins from the way you manage the code in the environment. And when you’re talking about edge deployments, that rings very true.

From a resource perspective, when you’re talking about something like autonomous driving, it’s going to be a lot lighter to have a shared kernel rather than lots of VM instances running, for example.

From a strictly security perspective, if you deal with container lifecycle management effectively, involve the security guys early, have a process around releasing, updating, and retiring container images into your registry, and have a process around introducing security controls and code scanning in your software development lifecycle — making sure that every container that gets released is signed with an appropriate enterprise signing key — then you have something that is very repeatable, compared with a traditional virtualized approach to application and delivery.
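
The signing step can be as small as this, sketched with the open-source Sigstore cosign tool driven from a Python release script; the key paths and image name are assumptions, not a prescribed toolchain:

    import subprocess

    IMAGE = "registry.internal.example.com/web:1.4.2"

    # Sign the image with the enterprise key as it is released to the registry...
    subprocess.run(["cosign", "sign", "--key", "enterprise-cosign.key", IMAGE],
                   check=True)

    # ...and verify that signature before the orchestrator is allowed to run it.
    subprocess.run(["cosign", "verify", "--key", "enterprise-cosign.pub", IMAGE],
                   check=True)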

That’s one of the big benefits of containers. It’s very much a declarative environment. It’s something that you prescribe … This is how it’s going to look. And it’s going to be repeatable every time you deploy that. Whereas with a VM environment, you have a lot of VM sprawl. And there are a lot of changes across the different platforms as different people have connected and changed things along the way for their own purposes.

There are many benefits with the tighter control in a container environment. That can give you some very good security benefits.

Gardner: What comes next? How do organizations get started? How should they set themselves up to take advantage of containers in the right way, a secure way?

Begin with risk evaluation 

Leech: The first step is to do the appropriate due diligence. Containers are not going to be for every application. There are going to be certain things that you just can’t modernize, and they’re going to remain in your traditional data center for a number of years.

I suggest looking for the projects that are going to give you the quickest wins and use those POCs to demonstrate the value that containers can deliver for your organization. Make sure that you do the appropriate risk assessments, and work with the services organizations that can help you. The advantage of a services organization is they’ve probably been there with another customer previously, so they can use the best practices and experiences that they have already gained to help your organization adopt containers.

Just make sure that you approach it using a DevSecOps model. There is a lot of discussion in the market at the moment about it. Should we be calling it DevOps, or should we call it SecDevOps or DevOpsSec? My personal opinion is call it DevSecOps, because security in a DevSecOps model sits right in the middle of development and operations — and that’s really where it belongs.

In terms of assets, there is plenty of information out there; a Google search finds you a lot. But as I mentioned earlier, the NIST white paper SP 800-190 is a great starting point to understand not only container security challenges but also to get a good understanding of what containers can deliver for you.

At the same time, at HPE we are also committed to delivering relevant information to our customers. If you look on our website and also our enterprise.nxt blog site, you will see a lot of articles about best practices on container deployments, case studies, and architectures for running container orchestration platforms on our hardware. All of this is available for people to download and to consume.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

AI-first approach to infrastructure design extends analytics to more high-value use cases

The next BriefingsDirect Voice of artificial intelligence (AI) Innovation discussion explores the latest strategies and use cases that simplify the use of analytics to solve more tough problems.

Access to advanced algorithms, more cloud options, high-performance compute (HPC) resources, and an unprecedented data asset collection have all come together to make AI more attainable — and more powerful — than ever.

Major trends in AI and advanced analytics are now coalescing into top competitive differentiators for most businesses. Stay with us as we examine how AI is indispensable for digital transformation through deep-dive interviews on prominent AI use cases and their escalating benefits.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about analytic infrastructure approaches that support real-life solutions, we’re joined by two experts, Andy Longworth, Senior Solution Architect in the AI and Data Practice at Hewlett Packard Enterprise (HPE) Pointnext Services, and Iveta Lohovska, Data Scientist in the Pointnext Global Practice for AI and Data at HPE. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Andy, what are the top drivers for making AI more prominent in business use cases?

Longworth: We have three main things driving AI at the moment for businesses. First of all, we know about the data explosion. These AI algorithms require huge amounts of data. So we’re generating that, especially in the industrial setting with machine data.

Also, the relative price of computing is coming down, giving the capability to process all of that data at accelerating speeds as well. You know, the graphics processing units (GPUs) and tensor processing units (TPUs) are becoming more available, enabling us to get through that vast volume of data.

And thirdly, the algorithms. If we look to organizations like Facebook, Google, and academic institutions, they’re making algorithms available as open source. So organizations don’t have to go and employ somebody to build an algorithm from the ground up. They can begin to use these pre-trained, pre-created models to give them a kick-start in AI and quickly understand whether there’s value in it for them or not.
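
That kick-start can be as small as pulling a published, pre-trained model instead of building one from scratch. A minimal sketch using the open-source torchvision library; the two-class task at the end is a hypothetical example:

    import torch
    from torchvision import models

    # Start from a model pre-trained on a public dataset rather than
    # designing and training an architecture from the ground up.
    model = models.resnet50(pretrained=True)

    # Re-purpose the final layer for the organization's own task, for
    # example a two-class "defect / no defect" check on product images.
    model.fc = torch.nn.Linear(model.fc.in_features, 2)
    model.eval()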

Gardner: And how do those come together to impact what’s referred to as digital transformation? Why are these actually business benefits?

Longworth: They allow organizations to become what we call data driven. They can use the massive data that they’ve previously generated but never tapped into to improve business decisions, impacting the way they drive the business through AI. It’s transforming the way they work.

AI data boost to business 

Across several types of industry, data is now driving the decisions. Industrial organizations, for example, improve the way they manufacture. Without the processing of that data, these things wouldn’t be possible.

Gardner: Iveta, how do the trends Andy has described make AI different now from a data science perspective? What’s different now than, say, two or three years ago?

Lohovska: Most of the previous AI algorithms were 30, 40, and even 50 years old in terms of the linear algebra and their mathematical foundations. The higher levels of computing power enable newer computations and larger amounts of data to train those algorithms.

Those two components are fundamentally changing the picture, along with the improved taxonomies and the way people now think of AI as differentiated between classical statistics and deep learning algorithms. Now, it’s not just technical people who can interact with these technologies and analytic models. Semi-technical people can, with simple drag-and-drop interactions based on the new products in the market, adopt and fail fast — or succeed faster — in the AI space. The models are also getting better and better in their performance, based on the amount of data they get trained on and their digital footprint.

Gardner: Andy, it sounds like AI has evolved to the point where it is mimicking human-like skills. How is that different and how does such machine learning (ML) and deep learning change the very nature of work?

Let simple tasks go to machines 

Longworth: It allows organizations to move some of the jobs that were previously very tedious for people over to machines, and to repurpose people’s skills for more complex jobs. For example, in computer vision, applying that to quality control. If you’re creating the same product again and again and paying somebody to look at that product to say whether there’s a defect on it, it’s probably not the best use of their skills. And, they become fatigued.

If you look at the same thing again and again, you start to miss features of that and miss the things that have gone wrong. A computer doesn’t get that same fatigue. You can train a model to perform that quality-control step and it won’t become tired over time. It can keep going for longer than, for example, an eight-hour shift that a typical person might work. So, you’re seeing these practical applications, which then allows the workforce to concentrate on other things.
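
A rough sketch of that quality-control loop, assuming a classifier like the one above has already been trained on labeled pass/fail images of the product; the file paths and threshold are hypothetical:

    import torch
    from PIL import Image
    from torchvision import transforms

    # A model trained earlier on labeled pass/fail images and saved whole.
    model = torch.load("defect_classifier.pt")
    model.eval()

    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])

    def inspect(image_path, threshold=0.5):
        """Return True if the part should be diverted for review."""
        image = preprocess(Image.open(image_path)).unsqueeze(0)
        with torch.no_grad():
            defect_probability = torch.softmax(model(image), dim=1)[0, 1].item()
        return defect_probability > threshold

    # The model can inspect every unit, shift after shift, without fatigue.
    if inspect("line3/unit_045812.jpg"):
        print("Defect suspected; divert for manual review")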

Gardner: Iveta, it wasn’t that long ago that big data was captured and analyzed mostly for the sake of compliance and business continuity. But data has become so much more strategic. How are businesses changing the way they view their data?

Lohovska: They are paying more attention to the quality of the data and the variety of the data collection that they are focused on. From a data science perspective, even though I want to say that the performance of models is extremely important, and that my data science skills are a critical component of the AI space and ecosystem, it’s ultimately about the quality of the data and the way it’s pipelined and handled.

This process of data manipulation, getting to the so-called last mile of the data science contribution, is extremely important. I believe it’s the critical step and foundation. Organizations will realize that being more selective and paying more attention to the foundations of how they handle big data — or small data – will get them to the data science part of the process.

You can already see the maturity as many customers, partners, and organizations pay more attention to the fundamental layers of AI. Then they can get better performance at the last mile of the process.

Gardner: Why are the traditional IT approaches not enough? How do cloud models help?

Cloud control and compliance 

Longworth: The cloud brings opportunities for organizations insomuch as they can try before they buy. So if you go back to the idea of processing all of that data, before an organization spends real money on purchasing GPUs, they can try them in the cloud to understand whether they work and deliver value. Then they can look at the delivery model. Does it make sense with my use case to make a capital investment, or do I go for a pay-per-use model using the cloud?

You also have the data management piece, which is understanding where your data is. From that sense, cloud doesn’t necessarily make life any less complicated. You still need to know where the data resides, control that data, and put in the necessary protections in line with the value of the data type. That becomes particularly important with legislation like the General Data Protection Regulation (GDPR) and the use of personally identifiable information (PII).

If you don’t have your data management under control and understand where all those copies of that data are, then you can’t be compliant with GDPR, which says you may need to delete all of that data.

So, you need to be aware of what you’re putting in the cloud versus what you have on-premises and where the data resides across your entire ecosystem.

Gardner: Another element of the past IT approaches has to do with particulars vs. standards. We talk about the difference between managing a cow and managing a herd.

How do we attain a better IT infrastructure model to attain digital business transformation and fully take advantage of AI? How do we balance between a standardized approach, but also something that’s appropriate for specific use cases? And why is the architecture of today very much involved with that sort of a balance, Andy?

Longworth: The first thing to understand is the specific use case and how quickly you need insights. We can process, for example, data in near real-time or we can use batch processing like we did in days of old. That use case defines the kind of processing.

If, for example, you think about an autonomous vehicle, you can’t batch-process the sensor data coming from that car as it’s driving on the road. You need to be able to do that in near real-time — and that comes at a cost. You not only need to manage the flow of data; you need the compute power to process all of that data in near real-time.

So, understand the criticality of the data and how quickly you need to process it. Then we can build solutions to process the data within that framework and within the right time that it needs to be processed. Otherwise, you’re putting additional cost into a use case that doesn’t necessarily need to be there.

When we build those use cases we typically use cloud-like technologies — be that containers or scalar technologies. That allows us portability of the use case, even if we’re not necessarily going to deploy it in the cloud. It allows us to move the use case as close to the data as possible.

For example, if we’re talking about a computer vision use case on a production line, we don’t want to be sending images to the cloud and have the high latency and processing of the data. We need a very quick answer to control the production process. So you would want to move the inference engine as close to the production line as possible. And, if we use things like HPE Edgeline computing and containers, we can place those systems right there on the production line to get the answers as quickly as we need.

So being able to move the use case where it needs to reside is probably one of the biggest things that we need to consider.

Gardner: Iveta, why is the so-called explore, experiment, and evolve approach using such a holistic ecosystem of support the right way to go?

Scientific methods and solutions

Lohovska: Because AI is not easy. If it were easy, then everyone would be doing it and we would not be having this conversation. It’s not a simple statistical use case or a program or business intelligence app where you already have the answer or even an idea of the questions you are asking.

The whole process is in the data science title. You have the word “science,” so there is a moment of research and uncertainty. It’s about the way you explore the data, the way you understand the use cases, starting from the fact that you have to define your business case, and you have to define the scope.

My advice is to start small and not exhaust your resources or the trust of the different stakeholders. Also define the correct use case and the desired return on investment (ROI). HPE is even working on the definitions and the business case when approaching an AI use case, trying to understand the level of complexity and the required level of prediction needed to achieve the use case’s success.

Such an exploration phase is extremely important so that everyone is aligned and finds a right path to minimize failure and get to the success of monetizing data and AI. Once you have the fundamentals, once you have experimented with some use cases, and you see them up and running in your production environment, then it is the moment to scale them.

I think we are doing a great job bringing all of those complicated environments together, with their data complexity, model complexity, and networking and security regulations into one environment that’s in production and can quickly bring value to many use cases.

This flow of experimenting, rather than approaching things as if you have a fixed answer or a fixed approach, is extremely important, and it is the way we at HPE are approaching AI.

Gardner: It sounds as if we are approaching some sort of a unified reference architecture that’s inclusive of systems, cloud models, data management, and AI services. Is that what’s going to be required? Andy, do we need a grand unifying theory of AI and data management to make this happen?

Longworth: I don’t think we do. Maybe one day we will get to that point, but what we are reaching now is a clear understanding of what architectures work for which use cases and business requirements. We are then able to apply them without having to experiment every time, which complements what Iveta said.

When we start to look at these use cases, when we engage with customers, what’s key is making sure there is business value for the organization. We know AI can work, but the question is, does it work in the customer’s business context?

If we can take out a good deal of that experimentation and come in with a fairly good answer to the use case in a specific industry, then we have a good jump start on that.

As time goes on and AI develops, we will see more generic AI solutions that can be used for many different things. But at the moment, it’s really still about point solutions.

Gardner: Let’s find out where AI is making an impact. Let’s look first, Andy, at digital prescriptive maintenance and quality control. You mentioned manufacturing a little earlier. What’s the problem, context, and how are we getting better business outcomes?

Monitor maintenance with AI

Longworth: The problem is the way we do maintenance schedules today. If you look back in history, we had reactive maintenance that was basically … something breaks and then we fix it.

Now, most organizations are in a preventative mode so a manufacturer gives a service window and says, “Okay, you need to service this machinery every 1,000 hours of running.” And that happens whether it’s needed or not.

Read the White Paper on Digital Prescriptive Maintenance and Quality Control

When we get into prescriptive and predictive maintenance, we only service those assets as they actually need it, which means having the data, understanding the trends, recognizing if problems are forthcoming, and then fixing them before they impact the business.

The sensors on that machinery measure things like temperature, vibration, and speed, giving a condition-based monitoring view and a real-time understanding of what’s happening with the machinery. You can then also use past history to predict what is going to happen in the future with that machine.

We can get to a point where we know in real time what’s happening with the machinery and have the capability to predict the failures before they happen.

The prescriptive piece comes in when we understand the business criticality or the business impact of an asset. If you have a production line and you have two pieces of machinery on that production line, both may have the identical probability of failure. But one is on your critical manufacturing path, and the other is some production buffer.

The prescriptive piece goes beyond the prediction to understand the business context of that machinery and applying that to how you are behaving, and then how you react when something happens with that machine.

As a business, the way that you are going to deal with those two pieces of machinery is different. You will treat the one on the critical path differently than the one where you have a product buffer. And so the prescriptive piece goes beyond the prediction to understanding the business context of that machinery and applying that to how you are behaving, and then how you react when something happens with that machine.

That’s the idea of the solution when we build digital prescriptive maintenance. The side benefit that we see is the quality control piece. If you have a large piece of machinery that you can confirm is running perfectly during a production run, for example, then you can say with some certainty what the quality of the product coming out of that machine will be.
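
As a simplified illustration of that predictive-plus-prescriptive split: a classifier estimates failure probability from condition data, and a business-criticality weight turns that into a maintenance priority. The features, files, and weights here are all hypothetical:

    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier

    # Historical condition data: temperature, vibration, speed, and whether
    # the asset failed within the following week.
    history = pd.read_csv("asset_condition_history.csv")
    features = ["temperature", "vibration", "speed"]

    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(history[features], history["failed_within_week"])

    # Predictive: probability of failure from the latest sensor readings.
    latest = pd.read_csv("asset_condition_latest.csv", index_col="asset_id")
    latest["failure_probability"] = model.predict_proba(latest[features])[:, 1]

    # Prescriptive: weight the prediction by business criticality, so an
    # asset on the critical manufacturing path outranks one behind a
    # production buffer even at the same failure probability.
    criticality = {"press_01": 1.0, "conveyor_07": 0.3}  # hypothetical weights
    latest["priority"] = latest["failure_probability"] * [
        criticality.get(asset, 0.5) for asset in latest.index
    ]

    print(latest.sort_values("priority", ascending=False)[
        ["failure_probability", "priority"]])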

Gardner: So we have AI overseeing manufacturing and processing. It’s probably something that would make you sleep a little bit better at night, knowing that you have such a powerful tool constantly observing and reporting.

Let’s move on to our next use case. Iveta, video analytics and surveillance. What’s the problem we need to solve? Why is AI important to solving it?

Scrutinize surveillance with AI 

Lohovska: For video surveillance and video analytics in general, the overarching field is computer vision. This is the most mature and currently the trendiest AI field, simply because the amount of data is there, the diversity is there, and the algorithms are getting better and better. It’s no longer state-of-the-art, where it’s difficult to grasp, adopt, and bring into production. So, now the main goal is moving into production and monetizing these types of data sources.

Read the White Paper on Video Analytics and Surveillance

When you talk about video analytics or surveillance, or any kind of quality assurance, the main problem is improving on or detecting human errors, behaviors, and environments. Telemetry plays a huge role here, and there are many components and constraints to consider in this environment.

That makes it hardware-dependent and also requires AI at the edge, where most of the algorithms and decisions need to happen. If you want to detect fire, detect fraud or prevent certain types of failure, such as quality failure or human failure — time is extremely important.

As HPE Pointnext Services, we have been working on our own solutions and reference architectures to approach those problems because of the complexity of the environment, the different cameras, and the hardware handling the data acquisition process. Even at the beginning, the complexity is enormous and very diverse. There is no one-size-fits-all. There is no one provider or one solution that can handle surveillance use cases or broad analytical use cases at the manufacturing plant or oil and gas rig where you are trying to detect fire or oil and gas spills in the different environments. So being able to approach it holistically, to choose the right solution for the right component, and to design the architecture is key.

Also, it’s essential to have the right hardware and edge devices to acquire the data and handle the telemetry. Let’s say when you are positioning cameras in an outside environment and you have different temperatures, vibrations, and heat. This will reflect on the quality of the acquired information going through the pipeline.

Some of the benefits of computer vision and video surveillance use cases include real-time information coming from manufacturing plants, knowing that all the safety and security standards are being met, and knowing that the people operating there are following the instructions and have the safeguards required for a specific manufacturing plant.

When you have a quality assurance use case, video analytics is one source of information to tackle the problem; improving the quality of your products or batches is just one application in the computer vision field. Having the right architecture, being agile and flexible, and finding the right solution for the problem and the right models deployed at the right edge device — or at the right camera — is something we are doing right now. We have several partners working to solve the challenges of video analytics use cases.

Gardner: When you have a high-scaling, high-speed AI to analyze video, it’s no longer a gating factor that you need to have humans reviewing the processes. It allows video to be used in so many more applications, even augmented reality, so that you are using video on both ends of the equation, as it were. Are we seeing an explosion of applications and use cases for video analytics and AI, Iveta?

Lohovska: Yes, absolutely. The impact of algorithms in this space is enormous. Also, the open source datasets, such as ImageNet, and pre-trained networks, such as ResNet, provide a huge amount of data and models to train any kind of algorithm on. You can adjust them and pre-train them for your own use cases, whether it’s healthcare, manufacturing, or video surveillance. It’s very enabling.

You can see the diversity of the solutions people are developing and the different programs they are tackling using computer vision capabilities, not only from the algorithms, but also from the hardware side, because the cameras are getting more and more powerful.

Currently, we are working on several projects in the spectrum not visible to humans. This is enabled by the further development of the hardware that acquires those images we can’t see.

Gardner: If we can view and analyze machines and processes, perhaps we can also listen and talk to them. Tell us about speech and natural language processing (NLP), Iveta. How is AI enabling those businesses and how they transform themselves?

Speech-to-text to protect

Lohovska: This is another strong field where AI is used and still improving. It’s not as mature as computer vision, simply because of the complexity of human language and speech, and the way speech gets recorded and transferred. It’s a bit more complex, so it’s not only a problem of technology and people writing algorithms, but also of linguists being able to frame the grammar problems and write the right equations to solve them.

Read the White Paper on Speech and Natural Language Processing

But one very interesting field in the speech and NLP area is speech-to-text, so basically being able to transcribe speech into text. It’s very helpful for emergency organizations handling emergency calls, or for fraud detection, where you need to detect fraud or danger in real time, such as whether someone is in danger. It’s a very common use case for law enforcement and security organizations, and for simply improving the quality of service in call centers.

This example is industry- or vertical-independent. You can have finance, manufacturing, retail — but all of them have some kind of customer support. This is the most common use case, being able to record and improve the quality of your services, based on the analysis you can apply. Similar to the video analytics use case, the problem here, too, is handling the complexity of different algorithms, different languages, and the varying quality of the recordings.
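
A minimal speech-to-text sketch of that call-center case, using the open-source SpeechRecognition package for Python; the audio file, recognizer choice, and the simple keyword check are assumptions, and a production system would add streaming, diarization, and proper language models:

    import speech_recognition as sr

    recognizer = sr.Recognizer()

    # Transcribe a recorded support call so it can be searched and analyzed.
    with sr.AudioFile("support_call_0421.wav") as source:
        audio = recognizer.record(source)

    transcript = recognizer.recognize_google(audio)  # one of several pluggable engines
    print(transcript)

    # Downstream, the text can feed simple quality or risk checks.
    if "cancel my account" in transcript.lower():
        print("Flag this call for a retention follow-up")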

A reference architecture, where you have the different components designed with exactly this holistic approach, allows the user to explore, evolve, and experiment in this space. We choose the right component for the right problem and decide how to approach it.

And in this case, if we combine the right data science tool with the right processing tool and the right algorithms on top of it, then you can simply design the solution and solve the specific problem.

Gardner: Our next and last use case for AI is one people are probably very familiar with, and that’s the autonomous driving technology (ADT).

Andy, how are we developing highly automated-driving infrastructures that leverage AI and help us get to that potential nirvana of truly self-driving and autonomous vehicles?

Data processing drives vehicles 

Longworth: There are several problems around highly autonomous driving as we have seen. It’s taking years to get to the point where we have fully autonomous cars and there are clear advantages to it.

If you look at, for example, what the World Health Organization (WHO) says, there are more than 1 million deaths per year in road traffic accidents. One of the primary drivers for ADT is that we can reduce the human error in cars on the road — and reduce the number of fatalities and accidents. But to get to that point we need to train these immensely complex AI algorithms that take massive amounts of data from the car.

Just purely from the sensor point of view, we have high-definition cameras giving 360-degree views around the car. You have radar, GPS, audio, and vision systems. Some manufacturers use light detection and ranging (LIDAR), some not. But you have all of these sensors giving massive amounts of data. And to develop those autonomous cars, you need to be able to process all of that raw data.

Read the White Paper on

Development of Self-Driving Infrastructure 

Typically, in an eight-hour shift, an ADT car generates somewhere between 70 and 100 terabytes of data. If you have an entire fleet of cars, then you need to be able to very quickly get that data off of each car so that you can get them back out on the road as quickly as possible. Then you need to get that data from where you offload it into the data center so that the developers, data scientists, analysts, and engineers can build the next iteration of the autonomous driving strategy.

When you have built that, tested it, and done all the good things that you need to do, you need to next be able to get those models and that strategy from the developers back into the cars again. It’s like the other AI problems that we have been talking about, but on steroids because of the sheer volume of data and because of the impact of what happens if something should go wrong.
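
As a rough back-of-the-envelope check on those volumes (the eight-hour shift and 70 to 100 terabyte figures come from the discussion above; the 100 Gbit/s offload link is an assumed value used only for illustration), the sustained data rate and per-car offload time work out roughly as follows:

    # Back-of-the-envelope: sensor data rate and offload time for one ADT test car.
    # The 8-hour shift and 70-100 TB volumes are taken from the discussion above;
    # the 100 Gbit/s offload link is an assumed value used only for illustration.
    shift_seconds = 8 * 3600
    link_bits_per_second = 100e9

    for terabytes in (70, 100):
        volume_bytes = terabytes * 1e12
        rate_gb_per_s = volume_bytes / shift_seconds / 1e9      # average GB/s generated
        offload_hours = volume_bytes * 8 / link_bits_per_second / 3600
        print(f"{terabytes} TB/shift -> ~{rate_gb_per_s:.1f} GB/s sustained, "
              f"~{offload_hours:.1f} h to offload at 100 Gbit/s")

Numbers like these are why the in-car pre-processing and prioritized offload described next matter so much.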

At HPE Pointnext Services, we have developed a set of solutions that address several of the pain points in the ADT development process. First is the ingest: how can we use HPE Edgeline processing in the car to pre-process data and reduce the amount that has to be sent back to the data center? You also want to send back the most important data first after the eight-hour drive, and then send the run-of-the-mill, backup data later.

The second piece is the data platform itself, building a massive data platform that is extensible to store all the data coming from the autonomous driving test fleet. That needs to also expand as the fleet grows as well as to support different use cases.

The data platform and the development platform are massive not only in terms of the amount of data they need to hold and process, but also in terms of the required tooling. We have been developing reference architectures to enable automotive manufacturers, along with the suppliers of those automotive systems, to build their data platforms and provide all the processing they need, so their data scientists can continuously develop autonomous driving strategies, test them in a highly automated way, and also give the suppliers access to the data.

For example, the sensor suppliers need to see what’s happening to their sensors while they are on the car. The platform that we have been putting together is really concerned with having the flexibility for those different use cases, the scalability to be able to support the data volumes of today, but also to grow — to be able to have the data volumes of the not-too-distant future.

The platform also supports the speed and data locality, so being able to provide high-speed parallel file systems, for example, to feed those ML development systems and help them train the models that they have.

So all of this pulls together the different components we have talked about with the different use cases, but at a scale that is larger than several of the other use cases put together.

Gardner: It strikes me that the ADT problem, if solved, enables so many other major opportunities. We are talking about micro-data centers that provide high-performance compute (HPC) at the edge. We are talking about the right hybrid approach to the data management problem — what to move, what to keep local, and how to then apply a lifecycle approach to it all. So, ADT is really a key use-case scenario.

Why is HPE uniquely positioned to solve ADT that will then lead to so many enabling technologies for other applications?

Longworth: Like you said, the micro-data center — every autonomous driving car essentially becomes a data center on wheels. So it’s about being able to provide that compute at the edge to enable the processing of all that sensor data.

If you look at the HPE portfolio of products, there are very few organizations that have edge compute solutions and the required processing power in such small packages. But it’s also about being able to wrap it up with not only the hardware, but the solution on top, the support, and a flexible delivery model.

Lots of organizations want to have a cloud-like experience, not just in the way they consume the technology, but also in the way they pay for it. By providing everything as a service, HPE allows you to pay for your autonomous driving platform as you use it. Again, there are very few organizations in the world that can offer that end-to-end value proposition.

Collaborate and corroborate 

Gardner: Iveta, why does it take a team-sport and solution-approach from the data science perspective to tackle these major use cases?

Lohovska: I agree with Andy. The way we approach those complex use cases, and the fact that you can have them as a service — and not only infrastructure-as-a-service (IaaS) or data-as-a-service (DaaS), but AI and modeling-as-a-service (MaaS) — means you can have a marketplace for models. Being able to plug-and-play different technologies, experiment, and rapidly deploy them allows you to rapidly get value out of those technologies. That is something we are doing on a daily basis with amazing experts and people with knowledge of the different layers. They can then attack the complexity of those use cases from each side, because it requires not just data science and the hardware, but a lot of domain-specific expertise to solve those problems. This is something we are looking at and doing in-house.

And I am extremely happy to say that I have the pleasure to work with all of those amazing people and experts within HPE.

Gardner: And there is a great deal more information available on each of these use cases for AI. There are white papers on the HPE website in Pointnext Services.

What else can people do, Andy, to get ready for these high-level AI use cases that lead to digital business transformation? How should organizations be setting themselves up on a people, process, and technology basis to become adept at AI as a core competency?

Longworth: It is about people, technology, process, and all these things combined. You don’t go and buy AI in a box. You need a structured approach. You need to understand what the use cases are that give value to your organization and to be able to quickly prototype those, quickly experiment with them, and prove the value to your stakeholders.

Where a lot of organizations get stuck is moving from that prototyping, proof of concept (POC), and proof of value (POV) phase into full production. It is tough getting the processes and pipelines that enable you to transition from that small POV phase into a full production environment. If you can crack that nut, then the next use cases you implement, and the next business problems you want to solve with AI, become infinitely easier. It is a hard step to go from POV through to full production because there are so many pieces involved.

You have that whole value chain, from grabbing hold of the data at the point of creation, to processing that data, to making sure you have the right people and process around that. And when you come out with an AI solution that produces some form of inference, some form of answer, you need to be able to act upon that answer.

You can have the best AI solution in the world that will give you the best predictions, but if you don’t build those predictions into your business processes, you might as well never have made them in the first place.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


Automation and connectivity will enable the modern data center to extend to many more remote locations

Enterprise IT strategists are adapting to new demands from the industrial edge, 5G networks, and hybrid deployment models that will lead to more diverse data centers across more business settings.

That’s the message from a broad new survey of 150 senior IT executives and data center managers on the future of the data center. IT leaders and engineers say they must transform their data centers to leverage the explosive growth of data coming from nearly every direction.

Yet, according to the Forbes-conducted survey, only a small percentage of businesses are ready for the decentralized and often small data centers that are needed to process and analyze data close to its source.

The next BriefingsDirect discussion on the latest data center strategies unpacks how more self-healing and automation will be increasingly required to manage such dispersed IT infrastructure and support increasingly hybrid deployment scenarios.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Joining us to help learn more about how modern data centers will efficiently extend to the computing edge is Martin Olsen, Vice President of Global Edge and Integrated Solutions at Vertiv. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Martin, what’s driving this movement away from mostly centralized IT infrastructure to a much more diverse topology and architecture?

Olsen: It’s an interesting question. The way I look at it is it’s about the cloud coming to you. It certainly seems that we are moving away from centralized IT or centralized locations where we process data. It’s now more about the cloud moving beyond that model.

We are on the front steps of a profound re-architecting of the Internet. Interestingly, there’s no finish line or prescribed recipe at this point. But we need to look at processing data very, very differently.

Over the past decade or more, IT has become an integral part of our businesses. And it’s more than just back-end applications like customer relationship management (CRM), enterprise resource planning (ERP), and material requirements planning (MRP) systems that service the organization. It’s also become an integrated fabric to how we conduct our businesses.

Meeting at the edge 

Gardner: Martin, Cisco predicts there will be 28.5 billion connected devices by 2022, and KPMG says 5G networks will carry 10,000 times more traffic than current 4G networks. We’re looking at an “unknown unknown” here when it comes to what to expect from the edge.

Olsen: Yes, that’s right. The starting point is well beyond just content distribution networks (CDNs); it’s also about home automation, such as accessing your home security cameras, adjusting the temperature, and other things around the home.

That’s now moving to business automation, where we use compute and generate data to develop, design, manufacture, deploy, and operate our offerings to customers in a much better and differentiated fashion.

We’re also trying to improve the customer experience and how we interact with consumers. So billions of devices are generating an unimaginable amount of data out there, and that has given rise to what is known as edge computing, which means more computing done at or near the source of the data.

In the past, we pushed that data out for consumption, but now it’s much more about data meeting people, data interacting with people in a distributed IT environment. And then, going beyond that, is 5G.

We see a paradigm shift in the way we use IT. Take, for example, the amount of tech that goes into a manufacturing facility, especially high-tech manufacturing. It’s exploding, with tens of thousands of sensors deployed in just one facility to help dramatically improve productivity, differentiate, and drive efficiency into the business.

Retail operations, from a compute standpoint, now require location services to offer a personalized experience in both the pre-shop phase as well as when you go into the store, and potentially in the post-shop, or follow-up experience.

We need to deliver these services quickly, and that requires lower latency and higher levels of bandwidth. It’s increasingly about pushing out from a central standpoint to a distributed fashion. We need to be rethinking how we deploy data centers. We need to think about the future and where these data centers are going to go. Where are we going to be processing all of this data?

Where does the data go?

Gardner: The complexity over the past 10 years about factoring cloud, hybrid cloud, private cloud, and multi-cloud is now expanding back down into the organization — whether it’s an environment for retail, home and consumer, and undoubtedly industrial and business-to-business. How are IT leaders and engineers going to update their data centers to exploit 5G and edge computing opportunities despite this complexity?

Olsen: You have to think about it differently around your physical infrastructure. You have the data aspect of where data moves and how you process it. That’s going to sit on physical infrastructure somewhere, and it’s going to need to be managed somehow.

Learn How Self-Healing and Automation 

Help Manage Dispersed IT Infrastructure 

You should, therefore, think differently about redesigning and deploying the physical infrastructure. How do you operate and manage it? The concept of a data center has to transform and evolve. It’s no longer just a big building. It could be 100, 1,000, or 10,000 smaller micro data centers. These small data centers are going to be located in places where we previously never imagined you would put IT infrastructure.

And so, the reliance on onsite technical and operational expertise has to evolve, too. You won’t necessarily have that technical support, a data center engineer walking the halls of a massive data center all day, for example. You are going to be in places like some backroom of a retail store, a manufacturing facility, or the base of a cell tower. It could be highly inaccessible.

You’ll need solutions that offer predictive operations, that have self-healing capabilities within them where they can fail in place but still operate as a function of built-in redundancy. You want to deploy solutions that have zero-touch provisioning, so you don’t have to go to every site to set it up and configure it. It needs to be done remotely and with automation built-in.
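
Purely as an illustration of the kind of automated, remote health check such unattended sites depend on (the site names, readings, thresholds, and alerting step below are hypothetical placeholders, not any vendor’s product), a sketch might look like this:

    # Hypothetical sketch of an automated remote health check for dispersed micro sites.
    # Site readings, thresholds, and the alerting step are illustrative placeholders only.
    SITES = {
        "retail-backroom-17": {"ups_load_pct": 62, "inlet_temp_c": 24.5, "battery_ok": True},
        "cell-tower-base-04": {"ups_load_pct": 91, "inlet_temp_c": 38.0, "battery_ok": False},
    }
    THRESHOLDS = {"ups_load_pct": 85, "inlet_temp_c": 32.0}

    def check_site(name, readings):
        """Return human-readable alerts for one unattended site."""
        alerts = []
        for metric, limit in THRESHOLDS.items():
            if readings[metric] > limit:
                alerts.append(f"{name}: {metric}={readings[metric]} exceeds {limit}")
        if not readings["battery_ok"]:
            alerts.append(f"{name}: battery self-test failed, schedule replacement")
        return alerts

    for site, readings in SITES.items():
        for alert in check_site(site, readings):
            print(alert)   # in practice this feeds a ticketing or dispatch system

The point is simply that the monitoring, decision, and dispatch logic runs without anyone walking the halls.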

You should also consider where the applications are going to be hosted, and that’s not clear now. How much bandwidth is needed? It’s not clear. The demand is not clear at this point. As I said in the beginning, there is no finish line. There’s nothing that we can draw up and say, “This is what it’s going to be.” There is a version of it out there that’s currently focused around home automation and content distribution, and that’s just now moving to business automation, but again, not in any prescribed way yet.

So it’s hard to know which are the “right” technologies to adopt now. And that becomes a real concern for your ability to compete over time, because you can outdate yourself really, really quickly if you don’t make the right choices.

Gardner: When you face such change in your architecture and potential decentralization of micro data centers, you still need to focus on security, backup and recovery, and contingency plans for emergencies. We still need to be mission-critical, even though we are distributed. And, as you point out, many of these systems are going to be self-healing and self-configuring, which requires a different set of skills.

We have a people, process, and technology sea change coming. You at Vertiv wanted to find out what people in the field are thinking and how they are reacting to such change. Tell us about the Vertiv-Forbes survey, what you wanted to accomplish, and the top-line findings.

Survey says seek strategic change

Olsen: We wanted to gauge the thinking and gain a sense of what the C-suite, the data center engineers, and the data center community were thinking as we face this new world of edge computing, 5G, and Internet of things (IoT). The top findings show a need for fundamental strategic change. We face a new mixture of architectures that is far more decentralized and with much more modularity, and that will mean a new way to manage and operate these data centers, too.

Based on the survey, only 11 percent of C-suite executives believe their data centers are currently ahead of current needs. They certainly don’t have the infrastructure ready for what’s needed in the future. It’s even fewer among the data center engineers we polled, with only 1 percent of them believing they are ready. That means the vast majority, 99 percent, don’t believe they have the right infrastructure.

There is also broad agreement that security and bandwidth need to be updated. Concern about security is a big thing. We know from experience that security concerns have stunted remote monitoring adoption. But the sheer quantity of disparate sites required for edge computing makes it a necessity to access, assess, and potentially reconfigure and remotely fix problems through remote monitoring and access.

Vertiv is driving a high level of configurability of instruments so you can take our components and products and put them together in a multitude of different ways to provide the utmost flexibility when you deploy. We are driving modularized solutions in terms of both modular data center and modularity in terms of how it all goes together onsite. And we are adding much more intelligence into our offerings for the remote sites, as well as the connectivity to be able to access, assess, and optimize these systems remotely.

Gardner: Martin, did the survey indicate whether the IT leaders in the field are anticipating or demanding such self-configuration technologies?

Olsen: Some 24 percent of the executives reported that they expect more than 50 percent of data centers will be self-configuring or have zero-touch provisioning by 2025. And about one-third of them say that more than 50 percent of their data centers will be self-healing by then, too.

That’s not to say that they have all of the answers. That’s their prediction and their responses to what’s going to be needed to solve their needs. So, 29 percent of engineers say they don’t know what percentage of the data centers will be self-configuring and self-healing, but there is an overwhelming agreement that it is a capability they need to be thinking about. Vertiv will develop and engineer our offerings going forward based on what’s going to be put in place out there.

Gardner: So there may be more potential points of failure, but there is going to be a whole new set of technologies designed to ameliorate problems, automate, and allow the remote capability to fix things as needed. Tell us about the proper balance between automation and remote servicing. How might they work together?

Make intelligent choices before you act 

Olsen: First of all, it’s not just a physical infrastructure problem. It has everything to do with the data and workloads as well. They go hand-in-hand; it certainly requires a partnership, a team of people and organizations that come together and help.

Driving intelligence into our products and taking that data off of our systems as they operate provides actionable data. You can then offer that analysis up to non-technical people on how to rectify situations and to make changes.

Learn How Self-Healing and Automation 

Help Manage Dispersed IT Infrastructure 

These solutions also need to communicate with the hypervisor platforms — whether that’s via traditional virtualization or containerization. Fundamentally, you need to be able to decide how and when to move your applications and workloads to the optimal points on the network.

We are trying to alleviate that challenge by making our offerings more intelligent and offering up actionable alarms, warnings, and recommendations to weigh choices across an overall platform. Again, it takes a partnership with the other vendors and services companies. It’s not just from a physical infrastructure standpoint.

Gardner: And when that ecosystem comes together, you can provide a constellation of data centers working in harmony to deliver services from the edge to the consumer and back to the data centers. And when you can do that around and around, like a circuit, great things can happen.

So let’s ground this, if we can, to the business reality. We are going to enable entirely new business models, with entirely new capabilities. Are there examples of how this might work across different verticals? Can you illustrate — when you have constructed decentralized data centers properly — the business payoffs?

Improving remote results 

Olsen: As you point out, it’s all about the business outcomes we can deliver in the field. Take healthcare. There is a shortage of healthcare expertise in rural areas. Being able to offer specialized doctors and advanced healthcare in places that you wouldn’t imagine today requires a new level of compute and network that delivers low latency all the way to the endpoints.

Imagine a truck fitted with a medical imaging suite. That’s going to have to operate somewhat autonomously. The 5G connectivity becomes essential as you process those images. They have to be loaded into a central repository to be accessed by specialists around the world who read the images.

That requires two-way connectivity. A huge amount of data from these images needs to move to provide that higher level of healthcare and a better patient experience in places where we couldn’t do it before.

So 5G plays into that, but it also means being able to process and analyze some of the data locally. There need to be aggregation points throughout the network. You will need compute to reside at multiple levels of the infrastructure. Places like the base of a cell tower could become a focal point for this.

You can imagine having four, five, six times as much compute power sitting in these places along a remote highway that is not easily accessible. So, having technical staff be able to troubleshoot those becomes vital.

There are also use cases that will use augmented reality (AR). Think of technicians in the field being able to use AR when a field engineer is dispatched to troubleshoot a system somewhere. We can make them as effective as possible and bring in expertise from around the world to help troubleshoot these sites. AR becomes a massive part of this because you can overlay what the onsite people are seeing through 3D glasses or virtual reality glasses and walk them through troubleshooting, fixing, and optimizing whatever system they might be working on.

Again, that requires compute right at the endpoint device. It requires aggregation points and connectivity all the way back to the cloud. So, it requires a complex network working together. The more advanced these use cases become — the more remote locations we have to think through — the more we are going to have to deploy infrastructure and be able to access it as well.

Gardner: Martin, when I listen to you describe these different types of data centers with increased complexity and capabilities in the networks, it sounds expensive. But are there efficiencies you gain when you have a comprehensive design across all of the parts of the ecosystem? Are there mitigating factors that help with the total cost?

Olsen: Yes, as the net footprint of compute increases, I don’t think the cost is linear with that. We have proven that with the Vertiv technologies we have developed and already deployed. As the compute footprint increases, there is a fundamental need for driving energy efficiency into the infrastructure. That comes in the form of using more efficient ways of cooling the IT infrastructure, and we have several options around that.

It’s also about new battery technologies. You start thinking about lithium-ion batteries, which Vertiv has solutions around. Lithium-ion batteries make the solution far more resilient and more compact, and they need much less maintenance when they sit out there.

Learn How Self-Healing and Automation 

Help Manage Dispersed IT Infrastructure 

So, the amount of infrastructure that’s going to go out there will certainly increase. We don’t think it’s necessarily going to be linear in terms of the cost when you pay close attention to how, as an organization, you deploy edge computing. By considering these new technologies, that’s going to help drive energy efficiency, for example.

Gardner: Were there any insights from the Forbes survey that went to the cost equation? How do the IT executives expect this to shake out?

Energy efficiency partnerships 

Olsen: We found that 71 percent of the C-suite executives said that future data centers will reduce costs. That speaks both to the fact that there will be more infrastructure out there and to the fact that it will be more energy efficient in how it’s run.

It’s also going to reduce the cost of the overall business. Going back to the original discussion around the business outcomes, deploying infrastructure in all these different places will help drive down the overall cost of doing business.

It’s an energy efficiency play both from a very fundamental standpoint in the way you simply power and cool the equipment, and overall, as a business, in the way you deliver improved customer experience and how you deliver products and services for your customers.

Gardner: How do organizations prepare themselves to get out in front of this? As we indicated from the survey findings, not that many say they are prepared. What should they be doing now to change that?

Olsen: Yes, most organizations are unprepared for the future — and not necessarily even in agreement on the challenges. A very small percentage of the respondents, just 11 percent of executives, believe that their data centers are ahead of current needs, and even fewer of the data center engineers do. Only 44 percent of them say that their data centers are updated regularly. Only 29 percent say their data centers even meet current needs.

To prepare going forward, they should seek partnerships. Get the data centers upgraded, but also think through how organizations like Vertiv bring decades of experience in designing, deploying, and operating large data centers from a physical infrastructure standpoint. We use that experience and knowledge base for the data center of tomorrow, which can be a single IT rack or two going to any location.

We take all of that learning and experience and drive it into what becomes the smallest common denominator data center, which could just be a rack. So it’s about working with someone who has that experience, already has the data, and offers configurable, modular solutions that are intelligent and provide the connectivity to access, assess, and optimize remotely. And it’s about managing the data that comes off these systems and extracting the value out of it, the way we do with some of our offerings around Vertiv LIFE Services, with very prescriptive, actionable alarms and alerts that we send from our systems.

Very few organizations can do this on their own. It’s about the ecosystem, working with companies like Vertiv, working closely with our strategic partners on the IT side, storage networks, and all the way through to the applications that make it all work in unison.

Think through how to efficiently add compute capacity across all of these new locations, what those new locations should look like, and what the requirements are from a security standpoint.

There is a resiliency aspect to it as well. In harsh environments such as high-tech manufacturing, you need to ensure the infrastructure is scalable and minimizes capital expenditure spending. The modular approach allows building for a future that may be somewhat unknown at this point. Deploying modular systems that you can easily augment and add capacity or redundancy to over time — and that operate via robust remote management platforms — is one of the things you want to be thinking about.

Gardner: This is one of the very few empirical edge computing research assets that I have come across, the Vertiv and Forbes collaboration survey. Where can people find out more information about it if they want more details? How is this going to be available?

Learn How Self-Healing and Automation 

Help Manage Dispersed IT Infrastructure 

Olsen: We want to make this available to everybody to review. In the interest of sharing the knowledge about this new frontier, the new world of edge computing, we will absolutely be making this research and study available. I want to encourage people to go visit vertiv.com to find more information and download the research results.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Vertiv.


How Intility uses HPE Primera intelligent storage to move to 100 percent data uptime

The next BriefingsDirect intelligent storage innovation discussion explores how Norway-based Intility sought and found the cutting edge of intelligent storage.

Stay with us as we learn how this leading managed platform services provider improved uptime — on the road to 100 percent — and reduced complexity for its end users.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To hear more about the latest in intelligent storage strategies that lead to better business outcomes, please welcome Knut Erik Raanæs, Chief Infrastructure Officer at Intility in Oslo, Norway. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Knut, what trends and business requirements have been driving your need for Intility to be an early adopter of intelligent storage technology?

Raanæs: For us, it is important to have good storage systems that are easy to operate, to lower our management costs. At the same time, they give great uptime for our customers.

Gardner: You are dealing not only with quality of service requirements; you also have very rapid growth. How does intelligent storage help you manage such rapid growth?

Raanæs: By easily seeing performance and capacity trends, we can spot when we are running full and react before we run out of capacity.
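
A simple way to picture that trend-spotting (the utilization figures below are invented, and real tooling does this far more thoroughly) is to fit a line to recent capacity readings and estimate when the array would run full:

    # Illustrative only: fit a straight line to recent utilization readings and
    # estimate when the array reaches 100 percent. The sample data is invented.
    import numpy as np

    weeks = np.array([0, 4, 8, 12, 16])                  # weeks since the first reading
    used_pct = np.array([61.0, 64.5, 68.0, 71.5, 75.0])  # percent of capacity used

    slope, intercept = np.polyfit(weeks, used_pct, 1)    # growth per week, starting point
    weeks_to_full = (100.0 - intercept) / slope

    print(f"Growing ~{slope:.2f} percent per week; projected full in ~{weeks_to_full:.0f} weeks")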

Gardner: As a managed cloud service provider, it’s important for you to have strict service level agreements (SLAs) met. Why are the requirements of cloud services particularly important when it comes to the quality of storage services?

Intelligent, worry-free storage 

Raanæs: It’s very important to have good quality-of-service separation, because we have lots of different kinds of customers. We don’t want the noisy-neighbor problem, where one customer affects another customer — or even where one customer’s virtual machine (VM) affects another VM. The applications should work independently of each other.

That’s why we have been using Hewlett Packard Enterprise (HPE) Nimble Storage. Without it, our quality of service at the VM disk level would be much worse. It’s very good technology.

Gardner: Tell us about Intility, your size, scope, how long you have been around, and some of the major services you provide.

Raanæs: Intility was founded in 2000. We have always been focused on being a managed cloud service provider. From the start, there have been central shared services, a central platform, where we on-boarded customers and they shared email systems, and Microsoft Active Directory, along with all the application backup systems.

Over the last few years, the public cloud has made our customers more open to cloud solutions in general, and to not having servers in the local on-premises room at the office. We have now grown to more than 35,000 users, spread over 2,000 locations across 43 countries. We have 11 shared services datacenters, and we also have customers with edge location deployments due to high latency or unstable Internet connections. They need to have the data close to them.

Gardner: What is required when it comes to solving those edge storage needs?

Raanæs: Those customers often want inexpensive solutions. So we have to look at different solutions and pick the one that gives the best stability but that also doesn’t cost too much. We also need easy remote management of the solution, without being physically present.

Gardner: At Intility, even though you’re providing infrastructure-as-a-service (IaaS), you are also providing a digital transformation benefit. You’re helping your customers mature and better manage their complexity as well as difficulty in finding skills. How does intelligent IaaS translate into digital transformation?

Raanæs: When we meet with potential customers, we focus on taking away concerns about infrastructure. They are just going to leave that part to us. The IT people can then just move up in [creating value] and focus on digitalizing the business for their customers.

Gardner: Of course, cloud-based services require overcoming challenges with security, integration, user access management, and single sign on. How are those higher-level services impacted by the need for intelligent storage?

Smart storage security 

Raanæs: With intelligent storage, we can focus on having our security operations center (SOC) monitor and respond the instant they see something on our platforms. We keep a keen eye on our storage systems so we notice anything unusual happening on the storage, because that can be an early signal of a problem.

Gardner: Please describe the journey you have been on when it comes to storage. What systems you have been using? Why have intelligence, insights, and analysis capabilities been part of your adoption?

Raanæs: We started back in 2013 with HPE 3PAR arrays. Before that we used IBM storage. We had multiple single-Redundant Array of Inexpensive Disks (RAID) sets and had to manage hotspots ourselves, so we had to try to balance things out manually by moving one VM at a time.

In 2013, when we went with the first 3PAR array, we had huge benefits. That 3PAR array used less space and at the same time we didn’t have to manage or even out the hotspots. 3PAR and its active controllers were a great plus for us for many years.

But about one-and-a-half years ago, we started using HPE Nimble arrays, primarily due to the needs of VMware vCenter and quality of service requirements. Also, with the Nimble arrays, the InfoSight technology was quite nice.

Gardner: Right. And, of course, HPE is moving that InfoSight technology into more areas of their infrastructure. How important has InfoSight been for you?

Raanæs: It’s been quite useful. We had some systems that required us to use other third-party applications to give an expansive view of the performance of the environment. But those applications were quite expensive and had functionality that we really didn’t need. So at first we pulled data from the vCenter database and visualized the data. That was a huge start for us. But when InfoSight came along later it gave us even more information about the environment.

Gardner: I understand you are now also a beta customer for HPE Primera storage. Tell us about your experience with Primera. How does that move the needle forward for you?

For 100 percent uptime 

Raanæs: Yes, we have been beta testing Primera, and it has been quite interesting. It was easy to set up. I think maybe 20 minutes from getting it into the rack and just clicking through the setup. It was then operational and we could start provisioning storage to the whole system.

And with Primera, HPE is going in with 100 percent uptime guarantee. Of course, I still expect to deal with some rare incidences or outages, but it’s nice to see a company that’s willing to put their money where their mouth is, and say, “Okay, if there is any downtime or an outage happens, we are going to give you something back for it.”

Gardner: Do you expect to put HPE Primera into production soon? How would you use it first?

Raanæs: So we are currently waiting for our next software upgrade for HPE Primera. Then we are going to look at putting it into production. The use case is going to be general storage, because we have so much more storage demand and need to try to keep it consistent, to make it easier to manage.

Gardner: And do you expect to be able to pass along these benefits of speed of deployment and 100 percent uptime to your end users? How do you think this will improve your ability to deliver SLAs and better business outcomes?

Raanæs: Yes, our end users are going to be quite happy with 100 percent uptime. No one likes downtime — not us, not our customers. And HPE Primera’s speed of deployment means that we have more time to manage other parts of the platform and to get better service out to the customers.

Gardner: I know it’s still early and you are still in the proof of concept stage, but how about the economics? Do you expect that having such high levels of advanced intelligence across storage will translate into your ability to do more for less, and perhaps pass some of those savings on?

Raanæs: Yes, I expect that’s going to be quite beneficial for us. Because we are based in Norway, one of our largest expenses is people. So, the more we can automate by using these systems, the better. I am really looking forward to seeing this improve, making the systems easier to manage and letting us analyze performance within a few hours.

Gardner: On that issue of management, have you been able to use HPE Primera to the degree where you have been able to evaluate its ease of management? How beneficial is that?

Work smarter, not harder 

Raanæs: Yes, the ease of management is quite nice. With Primera you can do the service upgrade more easily. With 3PAR, we had to schedule an upgrade with the upgrade team at HPE and then wait a few weeks. Now we can just do the upgrade ourselves.

And hardware replacements are easier, too. We can just get a nice PDF showing you how to replace the parts. So it’s also quite nice.

I also like that the separate service processor from 3PAR is now gone with Primera; it’s built into the array. So, that’s one less thing to worry about managing.

Gardner: Knut, as we look to the future, other technologies are evolving across the infrastructure scene. When combined with something like HPE Primera, is there a whole greater than the sum of the parts? How will you will be able to use more intelligence broadly and leverage more of this opportunity for simplicity and passing that onto your end users?

Raanæs: I’m hoping that more will come in the future. We are also looking at non-volatile memory express (NVMe) as a caching solution, and that’s ready to be built into HPE Primera, too. So it’s quite interesting to see what the future will bring there.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


A new status quo for data centers–seamless communication from core to cloud to edge

As 2020 ushers in a new decade, the forces shaping data center decisions are extending compute resources to new places.

With the challenging goals of speed, agility, and efficiency, enterprises and service providers alike will be seeking a new balance between the need for low latency and the optimal placement and utilization of workloads. Hybrid models will therefore include more distributed, confined, and modular data centers at or near the edge.

These are a few of the top-line predictions on the future state of modern data center design. The next BriefingsDirect data center strategies discussion with two leading IT and critical infrastructure executives examines how these new data center variations must nonetheless interoperate seamlessly from core to cloud to edge.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to help us learn more about the new state of extensible data centers is Peter Panfil, Vice President of Global Power at Vertiv, and Steve Madara, Vice President of Global Thermal at Vertiv. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: The world is rapidly changing in 2020. Organizations are moving past the debate around hybrid deployments, from on-premises to public clouds. Why do we need to also think about IT architectures and hybrid computing differently?

Panfil: We noticed a trend at Vertiv in our customer base. That trend is toward a new generation of data centers. We have been living with distributed IT, client-server data centers moving to cloud, either a public cloud or a private cloud.

But what we are seeing is the evolution of an edge-to-core, near-real-time data center generation. And it’s being driven by devices everywhere, the “connected-all-the-time” model that all of us seem to be going to.

And so, when you are in a near-real-time world, you have to have infrastructure that supports your near-real-time applications. And that is what the technology folks are facing. I refer to it as a pack of dogs chasing them — the amount of data that’s being generated, the applications running remotely, and the demand for availability, low latency, and driving cost down as much as you possibly can. This is what’s changing how they approach their critical infrastructure space.

Gardner: And so, a new equilibrium is emerging. How is this different from the past?

Madara: If we go back 20 years, everything was centralized at enterprise data centers. Then we decided to move to decentralized, and then back to centralized. We saw a move to colocation as people decided that’s where they could get lower cost to run their apps. And then things went to the cloud, as Peter said earlier.

And now, we have a huge number of devices connected locally. Cisco says that by late 2020 there are going to be 23 billion connected devices, and over half of those are going to be machine-to-machine communications, where, as Peter mentioned earlier, latency is going to be very, very critical.

An interesting read is Michael Lewis’s book Flash Boys, about the arbitrage that takes place with the low latency you have in stock market trading. I think we are going to see more of that moving to the edge. The edge is more like a smart rack or smart row deployment in an existing facility. It’s going to be multi-tenant, because it’s going to be deployed throughout large cities. There could be 20 or 30 of these edge data center sites hosting different applications for customers.

This move to the edge is also going to provide IT resources in a lot of underserved markets that don’t yet have pervasive compute, especially in emerging countries.

Gardner: Why is speed so important? We have been talking about this now for years, but it seems like the need for speed to market and speed to value continues to ramp up. What’s driving that?

Moving to the edge, with momentum 

Panfil: There is more than one kind of speed. There is speed of response of the application; that’s something all of us demand. I have to have low latency in the transactions I am performing with my data or with my applications. So there is the speed of the actual data being transmitted.

There is also speed of deployment. When Steve talked earlier about centralized cloud deployments in these core data centers, your data might be going over a significant distance, hopping along the way. Well, if you can’t live with that latency that gets inserted, then you have to take the IT application and put it closer to the source and consumer of the data. So there is a speed of deployment, from core to edge, that happens.

And the third type of speed is that you have to have low first cost, high asset utilization, and rapid scalability. So that’s the speed of infrastructure adaptation to the demands of the IT applications.

So when we mean speed, I often say it’s speed, speed, and speed. First, it’s the data speed. Once I have data speed, how did I achieve that? I did it by deploying fast, at the scale needed for the applications, and lastly at a cost and reliability that makes it tolerable for the business.

Gardner: So I guess it’s speed-cubed, right?

Panfil: At least, speed-cubed. Steve, if we had a nickel for every time one of our customers said “speed,” we wouldn’t have to work anymore. They are consumed with the different speeds that they have to deal with — and it’s really the demands of their customers.

Gardner: Vertiv for years has been looking at the data center of the future and making some predictions around what to expect. You have been rather prescient. To continue, you have now identified several areas for 2020, too. Let’s go through those trends.

Steve, Vertiv predicts that “hybrid architectures will go mainstream.” Why did you identify that, and what do you mean?

The future goes hybrid 

Madara: If we look at the history of going from centralized to decentralized, and going to colocation and cloud applications, it shows the ongoing evolution of Internet of Things (IoT) sensors, 5G networks, smart cities, autonomous cars, and how more and more of that data is generated and will need to be processed locally. A lot of that is from machine-to-machine applications.

So when we now talk about hybrid, we have to get very, very close to the source, as far as the processing is concerned. That’s going to be a large-scale evolution that’s going to drive the need for hybrid applications. There is going to be processing at the edge as well as centralized applications — whether it’s in a cloud or hosted in colocation-based applications.

Panfil: Steve, you and I both came up through the ranks. I remember when the data closet down the hall was basically a communications matrix. Its intent was to get communications from wherever we were to wherever our core data center was.

Well, the cloud is not going away. Number two, enterprise IT is not going away. What the enterprise is saying is, “Okay, I am going to take my secret sauce and I am going to put it in an edge data center. I am going to put the compute power as close to my consumer of that data and that application as I possibly can. And then I am going to figure out where the rest of it’s going to go.”

If I can live with the latency I get out of a core data center, I am going to stay in the cloud. If I can’t, I might even break up my enterprise data center into small or micro data centers that give me even better responses.

Dana, it’s interesting, there was a recent wholesale market summary published that said the difference between the smaller and the larger wholesale deals widened. So what that says is the large wholesale deals are getting bigger, the small wholesale deals are getting smaller, and that the enterprise-based demand, in deployments under 600 kilowatts, is focused on low-latency and multi-cloud access.

That tells us that our customers, the users of that critical space, are trying to place their IT appliances as close as they can to their customers, eliminating the latency, responding with speed, and then figuring out how to mesh that edge deployment with their core strategy.

Gardner: Our second trend gets back to the speed-cubed notion. I have heard people describe this as a new arms race, because while it might be difficult to differentiate yourself when everyone is using the same public cloud services, you can really differentiate yourself on how well you can conduct yourself at speed.

What kinds of capabilities across your technologies will make differentiation around speed work to an advantage as a company?

The need for speed 

Panfil: Well, I was with an analyst recently, and I said the new reality is not that the big will eat the small — it’s that the fast will eat the slow. And any advantage that you can get in speed of applications, speed of deployment, deploying those IT assets — or morphing the data center infrastructure or critical space infrastructure — helps improve capital efficiency. What many customers tell us is that they have to shorten the period of time between deciding to spend money on IT assets and the time that those assets start creating revenue.

They want help being creative in lowering their first-cost, in increasing asset utilization, and in maintaining reliability. If, holy cow, my application goes down, I am out of business. And then they want to figure out how to manage things like supply chains and forecasting, which is difficult to do in this market, and to help them be as responsive as they can to their customers.

Madara: Forecasting and understanding the new applications — whether it’s artificial intelligence (AI) or 5G — means the CIOs need to decide where to put those applications, whether they should be in the cloud or at the edge. Technology is changing so fast that nobody can predict far out into the future where I will need that capacity and what type of capacity I will need.

So, it comes down to being able to put that capacity in the place where I need it, right when I need it, and not too far in advance. Again, I don’t want to spend the capital, because I may put it in the wrong place. So it’s got to be about tying the demand with the supply, and that’s what’s key as far as the infrastructure.

And the other element I see is technology is changing fast, even on the infrastructure side. For our equipment, we are constantly making improvements every day, making it more efficient, lower cost, and with more capability. And if you put capacity in today that you don’t need for a year or two down the road, you are not taking advantage of the latest, greatest technology. So really it’s coupling the demand to the actual supply of the infrastructure — and that’s what’s key.

Another consideration is that many of these large companies, especially in the colocation market, have their financial structure as a real estate investment trust (REIT). As a result, they need to tie revenue with expenses tighter and tighter, along with capital spending.

Panfil: That’s a good point, Steve. We redesigned our entire large power portfolio at Vertiv specifically to be able to address this demand.

In previous generations, for example, the uninterruptible power supply (UPS) was built as a complete UPS. The new generation is built as a power converter, plus an I/O section, plus an interface section that can be rapidly configured to the customer, or, in some cases, put into a vendor-managed inventory program. This approach allows us to respond to the market and customers quicker.

We were forced to change our business model in such a way that we can respond in real time to these kinds of capacity-demand changes.

Madara: And to add to that, we have to put together more and more modules and solutions where we are bundling the equipment to deliver it faster, so that you don’t have to do testing on site or assembly on site. Again, we are putting together solutions that help the end-user address the speed of the construction of the infrastructure.

I also think that this ties into the relationship that the person who owns the infrastructure has with their supplier base. Those relationships have to build in, as Peter mentioned earlier, the ability to stock inventory and have parts available on-site to go fast.

Gardner: In summary so far, we have this need for speed across multiple dimensions. We are looking at more hybrid architectures, up and down the scale — from edge to core, on-premises to the cloud. And we are also looking at crunching more data and making real-time analytics part of that speed advantage. That means being able to have intelligence brought to bear on our business decisions and making that as fast as possible.

So what’s going on now with the analytics efficiency trend? Even if average rack density remains static due to a lack of space, how will such IT developments as high performance computing (HPC) help make this analysis equation work to the business outcome’s advantage?

High-performance, high-density pods 

Madara: The development of AI applications, machine learning (ML), and what could be called deep learning are evolving. Many applications are requiring these HPC systems. We see this in the areas of defense, gaming, the banking industry, and people doing advanced analytics and tying it to a lot of the sensor data we talked about for manufacturing.

It’s not yet widespread, it’s not across the whole enterprise or the entire data center, and these are often unique applications. What I hear in large data centers, especially from the banks, is that they will need to put these AI applications up on 30-, 40-, 50- or 60-kW racks — but they only have three or four of these racks in the whole data center.

The end-user will need to decide how to tune or adjust facilities to accommodate these small but growing pods of high-density compute. And if they are in their own facility, if it’s an enterprise that has its own data center, they will need to decide how they are going to facilitize for that type of equipment.

A lot of the colocation hosting facilities have customers saying, “Hey, in the future I am going to be bringing in a couple of racks that are very high density.” And a lot of these multi-tenant data centers are asking, “Oh, how do I provision for these, because my data center was laid out for an average of maybe 8 kW per rack? How do I manage that, especially in data centers that didn’t previously have chilled water to provide liquid to the rack?”

We are now seeing a need to provide chilled water cooling that goes to a rear door heat exchanger on the back of the rack. It could be chilled water that goes to the rack for chip-cooling applications. And again, it’s not the whole data center; it’s a small segment of the data center. But it raises the question of how to do that without overbuilding the infrastructure.
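
To put a rough sense of scale on that provisioning question, here is a minimal back-of-the-envelope sketch. The rack loads and the 6°C water temperature rise are assumed values, not figures from the discussion; they only show why a handful of 30 kW to 50 kW racks needs several times the chilled-water flow of an average 8 kW rack.

```python
# Back-of-the-envelope chilled-water flow for high-density racks.
# The rack loads and the 6 C temperature rise are assumptions used only
# to illustrate the provisioning question; they are not Vertiv figures.

def chilled_water_flow_lps(heat_kw, delta_t_c, specific_heat_kj_per_kg_c=4.186):
    """Liters per second of water needed to absorb heat_kw at a given
    temperature rise (1 liter of water is roughly 1 kg)."""
    return heat_kw / (specific_heat_kj_per_kg_c * delta_t_c)

for rack_kw in (8, 30, 50):
    print(f"{rack_kw} kW rack -> {chilled_water_flow_lps(rack_kw, delta_t_c=6):.2f} L/s")
```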

Gardner: Steve, do you expect those small pods of HPC in the data center to make their way out to the edge when people do more data crunching for low-latency requirements, where you can’t move the data to a data center? Do you expect this trend to grow more distributed?

Madara: Yes, I expect this will be for more than the enterprise data center and cloud data centers. I think you are going to see analytics applications developed that are going to be out at the edge because of the requirements for latency.

When you think about the autonomous car, none of us knows what’s going to be required for that high-performance processing, but I would expect there is going to be a need for it down at the edge.

Gardner: Peter, looking at the power side of things, at the batteries that help UPSes and systems remain mission-critical regardless of external factors, what’s going on with battery technology? How will we be using batteries differently in the modern data center?

Battery-powered savings 

Panfil: That’s a great question. Battery technology has been evolving at an incredibly fast rate, driven by electric vehicles. That growth is bringing to the market batteries that have a size and weight advantage. You can’t put a big, heavy pack of batteries in a car and hope to have it perform well.

It also brings a longer life expectancy. Data centers used to have to decide between long-life, high-maintenance wet cells and shorter-life, high-maintenance, valve-regulated lead-acid (VRLA) batteries. With lithium-ion batteries (LIBs) and thin plate pure lead (TPPL) batteries, the total cost of ownership (TCO) has started to become very advantageous.

Our sales leadership recently sent me the latest TCO comparison of TPPL and LIBs versus traditional VRLA batteries, and the TCO is a clear winner for the LIBs and the TPPL batteries. In some cases, over a 10-year period, the TCO is a factor of two lower for LIB and TPPL.
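
That factor-of-two claim is easy to sanity-check with simple arithmetic. The sketch below uses made-up capital, service-life, and maintenance figures (assumptions, not Vertiv data); the point is only that fewer replacement cycles and lighter maintenance can roughly halve a 10-year TCO even when the up-front price is higher.

```python
# Rough 10-year battery TCO sketch. All cost and life figures below are
# made-up assumptions, not Vertiv data; swap in real capital, replacement,
# and maintenance numbers to reproduce the kind of comparison described above.

def ten_year_tco(capex, service_life_yrs, annual_maintenance, horizon_yrs=10):
    """Capital cost of every battery set needed over the horizon, plus maintenance."""
    battery_sets = -(-horizon_yrs // service_life_yrs)  # ceiling division
    return capex * battery_sets + annual_maintenance * horizon_yrs

vrla = ten_year_tco(capex=40_000, service_life_yrs=4, annual_maintenance=3_000)
lib = ten_year_tco(capex=70_000, service_life_yrs=10, annual_maintenance=1_000)

print(f"VRLA 10-year TCO: ${vrla:,}")  # three battery sets plus heavier maintenance
print(f"LIB 10-year TCO:  ${lib:,}")   # one battery set plus lighter maintenance
```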

Where the cloud generation of data centers was all about lowest first cost, this edge-to-core generation of data centers is about TCO. There are other levers they can start to play with, too.

So, for example, they have life cycle and operating temperature variables. That used to be a real limitation. Nobody in the data center wanted their systems to go on batteries. They tried everything they could to not have their systems go on the battery because of the potential for shortening the life of their batteries or causing an outage.

Today we are developing IT systems infrastructure that takes advantage of not only LIBs, but also pure lead batteries that can increase the number of [discharge/recharge] cycles. Once you increase the number of cycles, you can think about deploying smart power configurations. That means using batteries not only in the critical infrastructure for a very short period of time when the power grid utility fails, but to use that in critical infrastructure to help offset cost.

If I can reduce utility use at peak demand periods, for example, or I can reduce stress on the grid at specified times, then batteries are not only a reliability play – they are also a revenue-offset play. And so, we’re seeing more folks talking to us about how they can apply these new energy storage technologies to change the way they think about using their critical space.

Also, folks used to think that the longer the battery time, the better off they were because it gave more time to react to issues. Now that folks know what they are doing, they are going with runtimes that are tuned to their operations team’s capabilities. So, if my operations team can do a hot swap over an IT application — either to a backup critical space application or to a redundant data center — then all of a sudden, I don’t need 5 to 12 minutes of runtime, I just need the bridge time. I might only need 60 to 120 seconds.

Now, if I can have these battery times tuned to the operations’ capabilities — and I can use the batteries more often or in higher temperature applications — then I can really start to impact my TCO and make it very, very cost-effective.
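
As a rough illustration of why tuned runtimes matter, here is a minimal sketch. The 500 kW load and the runtimes are hypothetical values; the takeaway is that cutting the bridge from ten minutes to 90 seconds cuts the stored-energy requirement by more than a factor of six.

```python
# Minimal bridge-time sizing sketch. The 500 kW load and the runtimes are
# hypothetical values, used only to show how tuning runtime to the operations
# team's switchover capability shrinks the stored-energy requirement.

def battery_energy_kwh(it_load_kw, runtime_seconds):
    """Energy the battery must deliver to carry the load for the bridge time."""
    return it_load_kw * runtime_seconds / 3600.0

load_kw = 500  # assumed critical IT load

for label, seconds in [("Traditional 10-minute runtime", 600),
                       ("Tuned 90-second bridge", 90)]:
    print(f"{label}: {battery_energy_kwh(load_kw, seconds):.0f} kWh")
```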

Gardner: It’s interesting; there is almost a power analog to hybrid computing. We can either go to the cloud or the grid, or we can go to on-premises or the battery. Then we can start to mix and match intelligently. That’s really exciting. How does lessening dependence on the grid impact issues such as sustainability and conserving energy?

Sustainability surges forward 

Panfil: We are having such conversations with our key accounts virtually every day. What they are saying is, “I am eventually not going to make smoke and steam. I want to limit the number of times my system goes on a generator. So, I might put in more batteries, more LIBs or TPPL batteries, in certain applications because if my TCO is half the amount of the old way, I could potentially put in twice as much, and have the same cost basis and get that economic benefit.”

And so from a sustainability perspective, they are saying, “Okay, I might need at some point in the useful life of that critical space to not draw what I think I need to draw from my utility. I can limit the amount of power I draw from that utility.”

This is not a criticism, because I love all of you out there in data center design, but most data centers are designed for peak usage. These changes allow them to design more for the norm of the requirements. That means they can put in less infrastructure and potentially less battery. They can right-size their generators; same thing on the cooling side, right-sizing the cooling to what they need and not for the extremes of what that data center is going to see.

From a sustainability perspective, we used to talk about the glass as half-full or half-empty. Now, we say there is too much of a glass. Let’s right-size the glass itself, and then all of the other things you have to do in support of that infrastructure are reduced.

Madara: As we look at the edge applications, many will not have backup generators. We will have alternate energy sources, and we will probably be taking more hits to the batteries. Is the LIB the better solution for that?

Panfil: Yes, Steve, it sure is. We will see customers with an expectation of sustainability, a path to an energy source that is not fossil fuel-based. That could be a renewable energy source. We might not be able to deploy that today, but they can now deploy what I call foundational technologies that allow them to take advantage of it. If I can have a LIB, for example, that stores excess energy and allows me to absorb energy when I’m creating more than I need — then I can consume that energy on the other side. It’s better for everybody.

Gardner: We are entering an era where we have the agility to optimize utilization and reduce our total costs. The thing is that it varies from region to region. There are some areas where compliance is a top requirement. There are others where energy issues are a top requirement because of cost.

What’s going on in terms of global cross-pollination? Are we seeing different markets react to their power and thermal needs in different ways? How can we learn from that?

Global differences, normalized 

Madara: If you look at the size of data centers around the world, the data centers in the U.S. are generally much larger than those in Europe. And what’s in Europe is much larger than what we have in other developed countries. So, there are a couple of factors, as you mentioned: energy availability, cost of energy, the size of the market, and the users it serves. We may be looking at more edge data centers in very underserved markets in underdeveloped countries.

So, you are going to see the size of the data center and the technology used vary to better fit the needs of specific markets and applications. Across the globe, certain regions will have different requirements with regard to security and sustainability.

Even with these differences, we can meet the end-user needs to right-size the IT resources in each region. We are more alike than we are different in many respects. We all have needs for security and for efficiency; it may just be to different degrees.

Panfil: There are different regional agency requirements and different governmental regulations that companies have to comply with. And so what we find, Dana, is that our customers are trying to normalize their designs. I won’t say they are standardizing their designs, because standardization says I am going to deploy exactly the same way everywhere in the world. I am a fan of Kit Kats, and Kit Kats are not the same globally; they vary by region. The same is true for data centers.

So, when you look at how customers are trying to deal with the regional and agency differences they have to live with, what they find themselves doing is trying to normalize their designs as much as they possibly can globally, realizing that they might not be able to use exactly the same power configuration or exactly the same thermal configuration. But we also see pockets where different technologies are moving to the forefront. For example, China has data centers running at high-voltage DC, 240 volts DC, while we have always had 48-volt DC IT applications in the Americas and in Europe. Customers are looking at three things — speed, speed, and speed.

And so when we look at the application, for example, of DC, there used to be a debate, is it AC or DC? Well, it’s not an “or” it’s an “and.” Most of the customers we talk to, for example, in Asia are deploying high-voltage DC and have some form of hybrid AC plus DC deployment. They are doing it so that they can speed their applications deployments.

In the Americas, the Open Compute Project (OCP) deploys either 12 or 48 volts to the rack. I look at it very simply. We have been seeing a move from 2N architecture to N+1 architecture in the power world for a decade; this is nothing more than adopting the N+1 architecture at the rack level instead of the 2N architecture at the rack level.

And so what we see is that when folks are trying to, number one, increase the speed; number two, increase their utilization; and number three, lower their total cost, they are going to deploy the infrastructure that is most advantageous for either the IT appliances they are deploying or the IT applications they are running, and it’s not the same for everybody, right, Steve?

You and I have been around the planet way too many times; you are a million-miler, and so am I. It’s amazing how a city might be completely different in a different time zone, but once you walk into that data center, you see how very consistent they have gotten, even though they have done it completely independently from anybody else.

Madara: Correct!

Consistency lowers costs and risks 

Gardner: A lot of what we have talked about boils down to a need to preserve speed-to-value while managing total cost of utilization. What is there about these multiple trends that people can consider when it comes to getting the right balance, the right equilibrium, between TCO and that all important speed-to-value?

Madara: Everybody strives to drive cost down. The more you can drive the cost down of the infrastructure, the more you can do to develop more edge applications.

I think we are seeing a very large rate of change of driving cost down. Yet we still have a lot of stranded capacity out there in the marketplace. And people are making decisions to take that down without impacting risk, but I think they can do it faster.

Peter mentioned standardization. Standardization helps drive speed, whether it’s normalization or similarity. What allows people to move fast is to repeat what they are doing instead of snowflake data centers, where every new one is different.

Repeating allows you to build a supply base ecosystem where everybody has the same goal, knows what to do, and can be partners in driving out cost and in driving speed. Those are some of the key elements as we go forward.

Gardner: Peter, when we look to that standardization, you also allow for more seamless communication from core to cloud to edge. Why is that important, and how can we better add intelligence and seamless communication among all these different distributed data centers?

Panfil: When we normalize designs globally, we take a look at the regional differences, sort out what the regional differences have to be, and then put in a proof-of-concept deployment. And out of that comes a consistent method of procedure.

When we talk about managing the data center effectively and efficiently, first of all, you have to know what you have. And second, you have to know what it’s doing. And so, we are seeing more folks normalizing their designs and getting consistency. They can then start looking at how much of their available capacity, from a design perspective, they are actually using, both on a normal basis and on a peak basis, and then they can determine how much of that they are willing to use.

We have some customers who are very risk-averse. They stay in the 2N world, which is a 50 percent maximum utilization. We applaud them for it because they are not going to miss a transaction.

There are others who will say, “I can live with the availability that an N+1 architecture gives me. I know I am going to have to be prepared for more failures. I am going to have to figure out how to mitigate those failures.”
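
A minimal sketch of the utilization math behind that choice, with illustrative module counts: 2N caps usable capacity at 50 percent, while an N+1 design lets utilization rise toward N/(N+1) at the cost of less failure headroom.

```python
# Maximum safe utilization under different redundancy schemes; module counts
# are illustrative, not taken from the discussion.

def max_utilization(active_modules, redundant_modules=1):
    """Fraction of installed capacity usable while still tolerating the loss
    of the redundant modules."""
    return active_modules / (active_modules + redundant_modules)

print(f"2N (1 active + 1 redundant): {max_utilization(1, 1):.0%}")  # 50%
print(f"N+1 with N=4 modules:        {max_utilization(4):.0%}")     # 80%
print(f"N+1 with N=7 modules:        {max_utilization(7):.1%}")     # 87.5%
```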

So they are constantly working to monitor what they have, figure out what the equipment is doing, and determine how they can best optimize its performance. We talked earlier about battery runtimes, for example. Sometimes they might be short and sometimes they might be long.

As these companies get into this step-and-repeat function, they are going to get consistency of their methods of procedure. They’re going to get consistency of how their operations teams run their physical infrastructure. They are going to think about running their equipment in ways that are nontraditional today but will become the norm in the next generation of data centers. And then they are going to look at us and say, “Okay, now that I have normalized my design, can I use rapid deployment configuration? Can I put it on a skid, in a container? Can I drop it in place as the complete data center?”

Well, we build it one piece of equipment at a time and stitch it all together. The question you asked about monitoring is interesting, because we talked to a major company just last month. Steve and I were visiting them at their site. And they said, “You know what? We spend an awful lot of time figuring out how our building management system and our data exchange happen at the site. Could Vertiv do some of that in the factory? Could you configure our data acquisition systems? Could you test them there in the factory? Could we know that when the stuff shows up on site it’s doing the things that it’s supposed to be doing, instead of us playing hunt and peck to figure out what the issues are?”

We said, “Of course.” So we are adding that capability now into our factory testing environment. What we see is a move up the evolutionary scale. Instead of buying separate boxes, we are seeing them buying solutions — and those solutions include both monitoring and controls.

Steve didn’t even get a chance to mention the industry-leading Vertiv Liebert® iCOM™ control for thermal. These controls and monitoring systems allow them to increase their utilization rates because they know what they have and what it’s doing.

Gardner: It certainly seems to me, with all that we have said today, that the data center status quo just can’t stand. Change and improvement is inevitable. Let’s close out with your thoughts on why people shouldn’t be standing still; why it’s just not acceptable.

Innovation is inevitable 

Madara: At the end of the day, the IT world is changing rapidly every day. Whether in the cloud or down at the edge, the IT world needs to adjust to those needs. They need to be able to cut enough out of the cost structure. There is always a demand to drive cost down.

If we don’t change with the world around us, if we don’t meet the requirements of our customers, things aren’t going to work out – and somebody else is going to take it and go for it.

Panfil: Remember, it’s not the big that eats the small, it’s the fast that eats the slow.

Madara: Yes, right.

Panfil: And so, what I have been telling folks is, you’ve got to go. The technology is there. The technology is there for you to cut your cost, improve your speed, and increase utilization. Let’s do it. Otherwise, somebody else is going to do it for you.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Vertiv.


Intelligent spend management supports better decision-making across modern business functions

The next BriefingsDirect discussion on attaining intelligent spend management explores the findings of a recent IDC survey on paths to holistic business process improvement.

We’ll now learn how a long history of legacy systems and outdated methods holds companies back from their potential around new total spend management optimization. The payoffs of gaining such a full and data-rich view of spend patterns across services, hiring, and goods include reduced risk, new business models, and better strategic decisions.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To help us chart the future of intelligent spend management, and to better understand how the market views these issues, we are joined by Drew Hofler, Vice President of Portfolio Marketing at SAP Ariba and SAP Fieldglass. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What trends or competitive pressures are prompting companies to seek better ways to get a total spend landscape view? Why are they incentivized to seek broader insights?

Hofler: After years of grabbing best-of-breed or niche solutions for various parts of the source-to-pay process, companies are reaching the limits of this siloed approach. Companies are now being asked to look at their vendor spend as a whole. Whereas before they would look just at travel and expense vendors, or services procurement, or indirect or direct spend vendors, chief procurement and financial officers now want to understand what’s going on with spend holistically.

And, in fact, from the IDC report you mentioned, we found that 53 percent of respondents use different applications for each type of vendor spend that they have. Sometimes they even use multiple applications within a process for specific types of vendor spend. We find that a lot of folks have cobbled together a number of different things — from in-house billing to niche vendors — to keep track of all of that.

Managing all of that when there is an upgrade to one particular system — and having to test across the whole thing — is very difficult. They also have trouble being able to reconcile data back and forth.

One of our competitors, for example — to show how this Frankenmonster approach has taken root — tried to build a platform for every source and category of spend across the entire source-to-pay process by acquiring 14 different companies in six years. That creates a patchwork of applications where there is a skin of user interfaces across the top for people to enter data, but the data is disconnected. The processes are disconnected. You have to manage all of the different code bases. It’s untenable.

Gardner: There is a big technology component to such a patchwork, but there’s a people level to this as well. More and more, we hear about the employee experience and trying to give people intelligent tools to make higher-level decisions and not get bogged down in swivel-ware and cutting and pasting between apps. What do the survey results tell us about the people, process, and technology elements of total spend management?

Unified data reconciliation

Hofler: It really is a combination of people, process, and technology that drives intelligent spend. It’s the idea of bringing together every source, every category, every buying channel for all of your different types of vendor spend so that you can reconcile on the technology side; you can reconcile the data.

For example, one of the things that we are building is master vendor unification across the different types of spend. A vendor that you see — IBM, for example — in one system is going to be the same as in another system. The data about that vendor is going to be enriched by the data from all of the other systems into a unified platform. But to do that you have to build upon a platform that uses the same micro-services and the same data that reconciles across all of the records so that you’re looking at a consistent view of the data. And then that has to be built with the user in mind.
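
As a toy illustration of what that unification involves, consider the same vendor appearing under three spellings in three spend systems. The normalization rules and the records below are invented for illustration; this is not how SAP implements it.

```python
# Illustrative-only sketch of vendor-master unification: normalize vendor names
# coming from different spend systems and merge their records. The normalization
# rules and records are invented for illustration; not SAP's implementation.

from collections import defaultdict

def normalize(name):
    """Crude matching key: lowercase, drop punctuation, strip common legal suffixes."""
    key = "".join(ch for ch in name.lower() if ch.isalnum() or ch == " ").strip()
    for suffix in (" inc", " corp", " ltd", " llc"):
        if key.endswith(suffix):
            key = key[: -len(suffix)]
    return key.strip()

records = [
    {"source": "travel_expense", "vendor": "IBM Corp", "spend": 120_000},
    {"source": "services",       "vendor": "I.B.M.",   "spend": 450_000},
    {"source": "indirect",       "vendor": "ibm",      "spend": 80_000},
]

unified = defaultdict(lambda: {"sources": set(), "total_spend": 0})
for rec in records:
    entry = unified[normalize(rec["vendor"])]
    entry["sources"].add(rec["source"])
    entry["total_spend"] += rec["spend"]

for vendor_key, entry in unified.items():
    print(vendor_key, sorted(entry["sources"]), entry["total_spend"])
```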

So when we talk about every source, category, and channel of spend being unified under a holistic intelligent spend management strategy, we are not talking about a monolithic user experience. In fact, it’s very important that the experience of the user be tailored to their particular role and to what they do. For example, if I want to do my expenses and travel, I don’t want to go into a deep, sourcing-type of system that’s very complex and based on my laptop. I want to go into a mobile app. I want to take care of that really quickly.

On the other hand, if I’m sourcing some strategic suppliers, I certainly can’t do that on just a mobile app. I need data, details, and analysis. And that’s why we have built the platform underneath it all to tie this together, even while the user interfaces and the experience of the user are exactly what they need.

When we did our spend management survey with IDC, we had more than 800 respondents across four regions. The survey showed a high level of dissatisfaction because of the wide-ranging nature of how expense management systems interact. Some 48 percent of procurement executives said they are dissatisfied with spend management today. It’s kind of funny to me because the survey showed that procurement itself had the highest level of dissatisfaction. They are talking about their own processes. I think that’s because they know how the sausage is being made.

Gardner: Drew, this dissatisfaction has been pervasive for quite a while. As we examine what people want, how did the survey show what is working? What gives them the data they need, and where does it go next?

Let go of patchwork 

Hofler: What came out of the survey is that part of the reason for that dissatisfaction is the multiple technologies cobbled together, with lots of different workflows. There are too many of those, too much data duplication, too many discrepancies between systems, and it doesn’t allow companies to analyze the data, to really understand in a holistic view what’s going on.

In fact, 47 percent of the procurement leaders said they still rely on spreadsheets for spend analysis, which is shocking to me, having been in this business for a long time. But we are much further along the path in helping that out by reconciling master data around suppliers so they are not duplicating data.

It’s also about tying together, in an integrated and seamless way, the entire process across different systems. That allows workflow to be based not on the application or the technology but on the required processes. For example, when it comes to installing some parts to fix a particular machine, you need to be able to order the right parts from the right suppliers but also to coordinate that with the right skilled labor needed to install the parts.

If you have separate systems for your services, skilled labor, and goods, you may be very disconnected. There may be parts available but no skilled labor at the time you need in the area you need. Or there may be the skilled labor but the parts are not available from a particular vendor where that skilled labor is.

What we’ve built at SAP is the ability to tie those together so that the system can intelligently see the needs, assess the risks such as fluctuations in the labor market, and plan and time that all together. You just can’t do that with cobbled together systems. You have to be able to have a fully and seamlessly integrated platform underneath that can allow that to happen.

Gardner: Drew, as I listen to you describe where this is going, it dovetails with what we hear about digital transformation of businesses. You’re talking not just about goods and services; you are talking about contingent labor and all the elements that come together in modern business processes, and they are definitely distributed, with lifecycles of their own. Managing all that is the key.

Now that we have many different moving parts and the technology to evaluate and manage them, how does holistic spend management elevate what used to be a series of back-office functions into a digital business transformation value?

Hofler: Intelligent spend management makes it possible for all of the insights that come from these various data points — by applying algorithms, machine learning (ML), and artificial intelligence (AI) — to look at the data holistically. It can then pull out patterns of spend across the entire company, across every category, and it allows the procurement function to be at the nexus of those insights.

If you think of all the spend in a company, it’s a huge part of their business when you combine direct, indirect, services, and travel and expenses. You are now able to apply those insights to price fluctuations and to peaks and valleys in purchasing, versus what the suppliers and their suppliers can provide at a certain time.

It’s an almost infinite amount of data and insights that you can gain. The procurement function is being asked to bring to the table not just the back-office operational efficiency but the insights that feed into a business strategy and the business direction. It’s hard to do that if you have disconnected or cobbled-together systems and a siloed approach to data and processes. It’s very difficult to see those patterns and make those connections.

But when you have a common platform such as SAP provides, then you’re able to get your arms around the entire process. The Chief Procurement Officer (CPO) can bring to the table quite a lot of data and insights that show the company what it needs to know in order to make the best decisions.

Gardner: Drew, what are the benefits you get along the way? Are there short-, medium-, and long-term benefits? Were there any findings in the IDC survey that alluded to those various success measurements?

Common platform benefits 

Hofler: We found that 80 percent of today’s spend managers’ time is spent on low-level tasks like invoice matching, purchase requisitioning, and vendor management. That came out of the survey. With the tying together of the systems and the intelligence technologies infused throughout, those things can be automated. In some cases, they can become autonomous, freeing up time for more valuable pursuits for the employees.

New technologies can also help, like APIs for ecosystem solutions. This is one of the great short-term benefits if you are on an intelligent spend management platform such as SAP’s. You become part of a network of partners and suppliers. You can tap into that ecosystem of partners for solutions aligned with core spend management functions.

Celonis, for example, looks at all of your workflows across the entire process because they are all integrated. It can see them holistically and show duplication and how to make those processes far more efficient. That’s something that can be accessed very quickly.

Longer-term, companies gain insights into the ebbs and flows of spending, cost, and risk. They can begin to make better decisions on who to buy from based on many criteria. They can better choose who to buy from. They can also in a longer-term situation start to understand the risks involved across entire supply chains.

One of the great things about having an intelligent spend platform is the ability to tie in through that network to other datasets, to other providers, who can provide risk information on your suppliers and on their suppliers. It can see deep into the supply chain and provide risk analytics to allow you to manage that in a much better way. That’s becoming a big deal today because there is so much information, and social media allows information to pass along so quickly.

When a company has a problem with their supply chain — whether that’s reputational or something that their suppliers’ suppliers are doing — that will damage their brand. If there is a disruption in services, that comes out very quickly and can very quickly hit the bottom line of a company. And so the ability to mitigate those risks, to understand them better, and to put strategies together for the long term and the short term makes a huge difference. An intelligent spend platform allows that to happen.

Gardner: Right, and you can also start to develop new business models or see where you can build out the top line and business development. It makes procurement not just about optimization, but with intelligence to see where future business opportunities lie.

Comprehend, comply, control 

Hofler: That’s right, you absolutely can. Again, it’s all about finding patterns, understanding what’s happening, and getting deeper understanding. We have so much data now. We have been talking about this forever, the amount of data that keeps piling up. But having an ability to see that holistically, have that data harmonized, and the technological capability to dive into the details and patterns of that data is really important.

And that data network has, in our case, more than 20 years’ worth of spend data, with more than $13 trillion in lifetime spend and more than $3 trillion a year of transactions moving through our network — the Ariba Network. So not only do companies have the technologies that we provide in our intelligent spend management platform to understand their own data, but there is also the capability to take advantage of rationalized data across multiple industries, benchmarks, and other things, too, that affect them outside of their four walls.

So that’s a big part of what’s happening right now. If you don’t have access into those kinds of insights, you are operating in the dark these days.

Gardner: Are there any examples that illustrate some of the major findings from the IDC survey and show the benefits of what you have described?

Hofler: Danfoss, a large Danish company and a customer of ours, produces heating and cooling drives and power solutions. They needed to standardize disparate enterprise resource planning (ERP) systems across 72 factories and implement services for indirect spend control and travel across 100 countries. So they had a very large challenge, with a very high probability of data becoming disconnected and broken down.

That’s really the key. They were looking for the ability to see one version of truth across all the businesses, and one of the things that really drives that need is the need for compliance. If you look at the IDC survey findings, close to half of executive officers are particularly concerned with compliance and auditing in spend management policy. Why? Because it allows both more control and deeper trust in budgeting and forecasting, but also because if there are quality issues they can make sure they are getting the right parts from the right suppliers.

The capability for Danfoss to pull all of that together into a single version of truth — as well as with their travel and expenses — gives them the ability to make sure that they are complying with what they need to, holistically across the business without it being spotty. So that was one of the key examples.

Another one of our customers, Swisscom, a telecommunications company in Switzerland, a large company also, needed intelligent spend management to manage their indirect spend and their contingent workforce.

They have 16,000 contingent workers, with 23,000 emails and a couple of thousand phone calls from suppliers on a regular basis. Within that supply chain they needed to determine supplier selection and rates on receipt of purchase requisitions. There were questions about supplier suitability in the subsequent procurement stages. They wanted a proactive, self-service approach to procurement to achieve visibility into all of that, as well as into their suppliers and the external labor that often uses and installs the things they procure.

So, by moving from a disconnected system to the SAP intelligent spend offering, they were able to gain cohesive information and a clear view of their processes, which includes those around consumer, supplier, procurement, and end user services. They said that using this user-friendly platform allowed them to quickly reach compliance and usability by all of their employees across the company. It made it very easy for them to do that. They simplified the user experience.

And they were able to link suppliers and catalogs very closely to achieve a vision of total intelligent spend management using SAP Fieldglass and SAP Ariba. They said they transformed procurement from a reactive processing role to one of proactively controlling and guiding, thanks to uniform and transparent data, which is really fundamental to intelligent spend.

Gardner: Before we close out, let’s look to the future. It sounds like you can do so much with what’s available now, but we are not standing still in this business. What comes next technologically, and how does that combine with process efficiencies and people power — giving people more intelligence to work with? What are we looking for next when it comes to how to further extend the value around intelligent spend management?

Harmony and integration ahead 

Hofler: Extending the value into the future begins with the harmonization of data and the integration of processes seamlessly. It’s process-driven, and it doesn’t really matter what’s below the surface in terms of the technology because it’s all integrated and applied to a process seamlessly and holistically.

What’s coming in the future on top of that, as companies start to take advantage of this, is that more intelligent technologies are being infused into different parts of the process. For example, chatbots and the ability for users to interact with the system in a natural language way. Automation of processes is another example, with the capability to turn some processes into being fully autonomous, where the decisions are based on the learning of the machines.

The user interaction can then become one of oversight and exception management, where the autonomous processes take over and manage when everything fits inside of the learned parameters. It then brings in the human elements to manage and change the parameters and to manage exceptions and the things that fall outside of that.

There is never going to be removal of the human, but the human is now able with these technologies to become far more strategic, to focus more on analytics and managing the issues that need management and not on repetitive processes that can be handled by the machine. When you have that connected across your entire processes, that becomes even more efficient and allows for more analysis. So that’s where it’s going.

Plus, we’re adding more ecosystem partners. When you have a networked ecosystem for intelligent spend, that allows for very easy connections to providers who can augment the core intelligent spend functions with data, such as global tax, compliance, risk, and VAT rules through partners like American Express and Thomson Reuters. All of these things can be added. You will see that ecosystem growing to continue to add exponential value to being part of an intelligent spend management platform.

Gardner: There are upcoming opportunities for people to dig into this and understand it and find the ways that it makes sense for them to implement, because it varies from company to company. What are some ways that people can learn details?

Hofler: There is a lot coming up. Of course, you can always go to ariba.com, fieldglass.com, or sap.com and find out about our intelligent spend management offerings. We will be having our SAP Ariba Live conference in Las Vegas in March, with tons of content there and lots of opportunity to interact with other folks who are in the same situation and implementing similar things. You can learn a lot.

We are also doing a webinar with IDC to dig into the details of the survey. You can find information about that on ariba.com, and certainly if you are listening to this after the fact, you can hear the recording of that on ariba.com and download the report.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: SAP Ariba.


How an MSP brings comprehensive security services to diverse clients

As businesses move more of their IT services to the cloud, reducing complexity and making sure that security needs are met throughout the migration process are now top of mind.

For a UK managed services provider (MSP), finding the right mix of security strength and ease-of-use for its many types of customers became a top priority.

The next managed services security management edition of BriefingsDirect explores how Northstar Services, Ltd., in the Bristol area of England, adopted Bitdefender Cloud Security for Managed Service Providers (MSPs) both to improve security for its end users and to make managing that security at scale and from the cloud easier than ever.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to discuss the role of the latest Bitdefender security technology — and making MSPs more like security services providers — is John Williams, Managing Director at Northstar Services, Ltd. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What are some of the top trends driving the need for an MSP such as Northstar to provide even better security services?

Williams: We used to get lots of questions regarding stability for computers. They would break fairly regularly and we’d have to do hardware changes. People were interested in what software we were going to load — what the next version of this, that, and the other was — but those discussions have changed a great deal. Now everybody is talking about security in one form or another.

Gardner: Whenever you change something — whether it’s configurations, the software, or the service provider, like a cloud — it leaves gaps that can create security problems. How can you be doubly sure when you make changes that the security follows through?

The value of visibility, 24-7 

Williams: We used to install a lot of antivirus software on centralized servers. That was very common. We would set up a big database and install security software on there, for example. And then we would deploy it to the endpoints from those servers, and it worked fairly well. Yet it was quite a lot of work to maintain it.

But now we are supporting people who are so much more mobile. Some customers are all out and about on the road. They don’t go to the office. They are servicing their customers, and they have their laptop. But they want the same level of security as they would have on a big corporate network.

So we have defined the security products that give us visibility into what’s happening. It means we don’t have to wonder whether they are up to date. We have to manage those clients wherever they are, on whatever device they have — all from one place.

Gardner: Even though these customers are on the frontline, you’re the one as the MSP they are going to call up when things don’t go right.

Williams: Yes, absolutely. We have lots of customers who don’t have on-site IT resources. They are not experts. They often have small businesses with hundreds of users. They just want to call us, find out what’s going on when they see a problem on their computers, and we have got to know whether that’s a security issue or an application that’s broken.

But they are very concerned that we have that visibility all of the time. Our engineers need to be able to access that easily and address it as soon as a call comes in.

Gardner: Before we learn more about your journey to solving those issues, tell us about Northstar. How long have you been around and what’s the mainstay of your business?

Williams: I have been running Northstar for more than 20 years now, since January 1999. I had been working in IT as an IT support engineer in large organizations for a few years, but I really wanted to get involved in looking after small businesses.

I like that because you get direct feedback. People appreciate it when you make an effort. They want to tell you that you did a good job, and they want to know that someone is paying attention to them.

So it was a joy to be able to get that up and going. We have a great team here now and that’s what gets me out of bed in the morning — working with our team to look after our customers.

Gardner: Smaller businesses moving to the cloud has become more the norm lately. How are your smaller organizations managing that? Sometimes the crossover — the hybrid period of having both on-premises devices and cloud services — can be daunting. Is that something you are helping them with?

Moving to cloud step-by-step 

Williams: Yes, absolutely. We often see circumstances where they want to move one set of systems to the cloud before they want to move everything to the cloud. So they generally are on a trend where they want to get rid of in-house services, especially at the smaller end of the market. But they often have legacy systems that they can’t easily move off of. Those might have been custom written, or they are older versions that they can’t afford to upgrade at this point. So we end up supporting them partly in the cloud and partly on-premises.

And for some customers, that’s their strategy. They take a particular workload, a database, for example, or some graphics software that they use, that runs brilliantly on servers in their offices. But they want to outsource other applications.

So, when we look at security, we need software that’s going to be able to work across those different scenarios. It can’t just be one or the other. It’s no good if it’s just on-premises, and no good if it’s just in the cloud. It has to be able to do all of that, all from one console because that’s what we are supporting.

Gardner: John, what were your requirements when you were looking for the best security to accomplish this set of requirements? What did you look for and how did your journey end?

Williams: Well, you can talk about the things being easy to manage, things being visible and with good reporting. All those things are important, and we assessed all of those. But the bottom line is, does it pick up infections? Is it able to keep those units secure and safe? And when an infection has happened, does it clean them up or stop them in their tracks quickly?

That has to be the number one thing, because whatever other savings you might make in looking after security, the fact that something that’s trying to do something bad is blocked — that has to be number one; stopping it in its tracks and getting it off that unit as quickly as possible. The sooner it’s stopped, the less damage and the less time the engineers have to spend rebuilding the units that have been killed by viruses or malware.

And we used to do quite a lot of that. With the previous antivirus security software we used, there was a constant stream of cleaning up after infections. Although it would detect and alert us, very often the damage was already done. So, we had a long period of repairing that, often rebuilding the whole operating system (OS), which is really inconvenient for customers.

And again, coming back to the small businesses, they don’t have spare PCs hanging around that they can just get out of the cupboard and carry on. Very often that’s the most vital kit that they own. Every moment it’s out of action directly affects their bottom line. So detecting infections and stopping them in their tracks was our number-one criterion when we were looking.

Gardner: In the best of all worlds, the end user is not even aware that they were infected, not aware it was remediated, not having to go through the process of rebuilding. That’s a win-win for everyone.

Automation around security is therefore top of mind these days. What have you been able to do with Bitdefender Cloud Security for MSPs that accomplishes that invisibility to the end user — and also helps you with automation behind the scenes?

Stop malware in its tracks 

Williams: Yes, the stuff was easy to deploy. But what it boils down to is that we just don’t get as many issues to have to automate the resolution for. So automation is important, and the things it does are useful. But the number of issues that we have to deal with is so low now that even if we were to 100 percent automate, it wouldn’t make a massive savings, because it’s not interrupting us very much.

It’s stopping malware in its tracks and cleaning it up. Most of the time we are seeing that it has done it, rather than us having to automate a script to do some removal or some changes or that kind of thing. It has already done it. I suppose that is automated, if you think about it, yes.

Gardner: You said it’s been a dramatic difference between the past and now with the number of issues to deal with. Can you quantify that?

Williams: In the three or four years we have used Bitdefender, when we look at the number of tickets that we used to get in for antivirus problems on people’s laptops and PCs, they have just dropped to such a low level now, it’s a tiny proportion. I don’t think it’s even coming up on a graph.

You record the type of ticket that comes in, and it’s a printer issue, a hardware issue. The virus-removal tickets don’t feature high enough to even appear on the graph, because Bitdefender is just dealing with those infections and fixing them without us having to get to them and rebuild PCs.

Gardner: When you defend a PC, Mac or mobile device, that can have a degradation effect. Users will complain about slow apps, especially when the antivirus software is running. Has there been an improvement in terms of the impact of the safety net when it comes to your use of Bitdefender Cloud Security for MSPs?

Williams: Yes, it’s much lighter on the OS than the previous software that we were using. We were often getting calls from customers to say that their units were running slowly because of the heavy load it was having to do in order to run the security software. That’s the exact opposite of what you want. You are putting this software on there so that they get a better experience; in other words, they are not getting infected as often.

But then you’re slowing down their work every day; that’s not a great trade-off. Security is vital, but if it has such a big impact on them that they are losing time just by having it on there — then that’s not working out very well.

Now [with Bitdefender Cloud Security for MSPs] it’s light enough that it just isn’t an issue. We don’t get customers saying, “Since you put the antivirus on my laptops, it seems to be slower.” In fact, it’s usually the opposite.

Gardner: I’d like to return to the issue of cloud migration. It’s such a big deal when people move across a continuum of on-premises, hybrid, and cloud – and need to maintain security while they move. It’s like changing the wings on an airplane and keeping it flying at the same time.

What is it about the way that Bitdefender has architected its solution that helps you, as a service provider, guide people through that transition but not lose a sense of security?

Don’t worry, be happy 

Williams: It’s because we are managing all of the antivirus licenses in the cloud. Whether the endpoints are on-premises inside an office or out and about, and whether it’s a client-server running in cloud services or on-premises, we are putting the same software on there and managing it in the same console. It means we don’t worry about that security piece. We know that whatever they change to, whatever they are coming from, we can put the same software on and manage it in the same place — and we are happy.

Gardner: As a service provider I’m sure that the amount of man hours you have to apply to different solutions directly affects your bottom line. Is there something about the administration of all of this across your many users that’s been an improvement? The GravityZone Cloud Management console, for example, has that allowed you to do more with less when it comes to your internal resources?

Williams: Yes, and the way that I gauge that is the amount of time. Engineers want to do an efficient job; that’s what they like. They want to get to the root of problems and fix them quickly. So for any piece of software or tool that doesn’t work efficiently for them, I get a long list of complaints on a regular basis. All engineers want to fix things fast because that’s what the customer wants, and they are working on their behalf.

Before, I would have constant complaints about how difficult it was to manage and deploy software on the units if they needed to be decommissioned. It was just troublesome. But now I don’t get any complaints over it. The staff is nothing but complimentary about the software. That just makes me happy because I know that they are able to work with it, which means that they are doing the job that they want to do, which is helping our customers and keeping them happy. So yes, it’s much better.

Gardner: Looking to the future, is there something that you are interested in seeing more of? Perhaps around encryption or the use of machine learning (ML) to give you more analytics as to what’s going on? What would you like to see out of your security infrastructure and services from the cloud in the next couple of years?

The devil’s in the data detail 

Williams: One thing that customers are talking to us about quite a bit now is data security. So they are thinking more about the time when they are going to have to report the fact that they’ve been attacked. And no software on earth is perfect. The whole point of security is that the threat continually evolves.

At the point where you’ve had a breach of some kind, you want to understand what’s happened. And so, having information back from the security software that helps you to understand how the breach happened — and the extent of it — is becoming really important to customers. When they submit those reports, as legally they have to do, they want accurate information to say, “We had an infection, and that’s it.” If they don’t know exactly what the extent of it was — whether any data was accessed, infected, or encrypted — that’s a problem.

So the more information we can gain from the security software about the extent of a breach, the more important that’s going to be going forward.

Gardner: Anything else come to mind about what you’d like to see from the technology side?

Williams: Automation is important, and so is the artificial intelligence (AI) side of it, where the software itself learns about what’s happening and can give you an idea when it spots something out of the ordinary — that will be more useful as time goes on.

Gardner: John, what advice do you have for other MSPs when it comes to a better security posture?

Williams: Don’t be afraid of defining the security services. You have to lead that conversation, I think. That’s what customers want to know. They want to know that you have thought about it, and that it’s at the very forefront of your mind.

We go meet our customers regularly and we usually have a standard agenda that we use. The first item on the agenda is security. And that journey for each customer is different. They are starting from different places. So we like to talk about where they are, what’s the next thing that they can do to make sure they are doing everything they can to protect the data they have gathered from their customers, and to look after their data about their staff, too, and to keep their services running.

We put that at the top of the agenda for every meeting. That’s a great way of behaving as a service provider. But, of course, in order to do that, to deliver on that, you have to have the right tools. You have to say, “Okay, if I am going to be in that role to help people with a security, I have to have those tools in place.”

If they are complicated, difficult to use, and hard to implement — then that’s going to make it horrible. But if they are simple and give you great visibility, then you are going to be able to deliver a service that customers will really want to buy.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Bitdefender.


Better IT security comes with ease in overhead for rural Virginia county government

The next public sector security management edition of BriefingsDirect explores how managing IT for a rural Virginia county government means doing more with less — even as the types and sophistication of cybersecurity threats grow.

For County of Caroline, a small team of IT administrators has built a technically advanced security posture that blends the right amounts of automation with flexible, cloud-based administration.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to share their story on improving security in a local government organization are Bryan Farmer, System Technician, and David Sadler, Director of Information Technology, both for County of Caroline in Bowling Green, Virginia. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Dave, tell us about County of Caroline and your security requirements. What makes security particularly challenging for a public sector organization like yours?

Sadler: As everyone knows, small governments in the State of Virginia — and all across the United States and around the world — are being targeted by a lot of bad guys. For that reason, we have the responsibility to safeguard the data of the citizens of this county — and also of the customers and other people that we interact with on a daily basis. It’s a paramount concern for us to maintain the security and integrity of that data so that we have the trust of the people we work with.

Gardner: Do you find that you are under attack more often than you used to be?

Sadler: The headlines of nearly any major newspaper you see, or news broadcasts that you watch, show what happens when the bad guys win and the local governments lose. Ransomware, for example, happens every day. We have seen a major increase in these attacks, or attempted attacks, over the past few years.

Gardner: Bryan, tell us a bit about your IT organization. How many do you have on the frontlines to help combat this increase in threats?

Farmer: You have the pleasure today of speaking with the entire IT staff in our little neck of the woods. It’s just the two of us. For the last several years it was a one-man operation, and they brought me on board a little over a year-and-a-half ago to lend a hand. As the county grew, and as the number of users and data grew, it just became too much for one person to handle.

Gardner: You are supporting how many people and devices with your organization?

Small-town support, high-tech security

Farmer: We are mainly a Microsoft Windows environment. We have somewhere in the neighborhood of 250 to 300 users. If you wrap up all of the devices, Internet of Things (IoT) stuff, printers, and things of that nature, it’s 3,000 to 4,000 devices in total.

Sadler: But the number of devices that actually touch our private network is in the neighborhood of around 750.

Farmer: We are a rural area so we don’t have the luxury of having fiber in between all of our locations and sites. So we have to resort to virtual private networks (VPNs) to get traffic back and forth. There are airFiber connections, and we are doing some stuff over the air. We are a mixed batch. There is a little bit of everything here.

Gardner: Just as any business, you have to put your best face forward to your citizens, voters, and taxpayers. They are coming for public services, going online for important information. How large is your county and what sort of applications and services are you providing to your citizens?

Farmer: Our population is 30,000?

Sadler: Probably 28,000 to 30,000 people, yes.

Farmer: A large portion of our county is covered by a U.S. Army training base, so a lot of it is uninhabited area, so to speak. The population is condensed into a couple of small areas.

We host a web site and forum. It’s not as robust as what you would find in a big city or a major metropolitan area, but people can look up their taxes, permit prices, things of that nature; basic information that the average citizen will need such as utility information.

Gardner: With a potential of 30,000 end users — and just two folks to help protect all of the infrastructure, applications, and data — automation and easy-to-use management must be super important. Tell us where you were in your security posture before and how you have recently improved on that.

Finding a detection solution

Sadler: When I initially started here, having come over from the private sector, we were running a product from one of the big-name companies, but it was basically not giving us the level of protection we needed.

So we switched to a second company, Kaspersky, and immediately we started finding detections of existing malware and different anomalies in the network that had sat there for years without being caught by Symantec. So we settled on Kaspersky. And anytime you go to an enterprise-level antivirus (AV) endpoint solution, the setup, adjustment, and on-boarding process takes longer than a lot of people would lead you to believe.

It took us about six months with Kaspersky. I was by myself, so it took me that long to get everything set up and running like it should, and it performed extremely well. It gave me a lot of granularity over things like firewall control.

Unfortunately, when the US Department of Homeland Security decided to at first recommend that you not use [Kaspersky] and then later banned that product from use, we were forced to look for a replacement solution, and we evaluated multiple different products.

Again, what we were looking for was granularity, because we wanted to be able to address the needs of everyone under the umbrella with one particular product. Many of the different AV endpoint solutions we evaluated lacked that granularity. Each was, more or less, another version of the software that we started with. They didn’t give a real high level of protection or did not allow for adjustment.

When we started evaluating a replacement, we were finding things that we could not do with a particular product. We spent probably about six months evaluating different products — and then we landed on Bitdefender.

Now, coming from the private sector and dealing with a lot of home users, my feelings for Bitdefender were based on the reputation of their consumer-grade product. They had an extremely good reputation in the consumer market. Right off the bat, they had a higher score when we started evaluating. It doesn’t matter how easy a product is to use or adjust; if its basic detection level is low, then everything else is a waste of time.

Bitdefender right off the bat has had a reputation for having a high level of detection and protection as well as a low impact on the systems. Being a small, rural county government, we use machines that are unfortunately a little bit older than what would be recommended, five to six years old, with lower processing power. So we could not take a product that would kill the performance of the machine and make it unusable.

During our evaluations we found that Bitdefender performed well. It did not have a lot of system overhead and it gave us a high level of protection. What’s really encouraging is when you switch to a different product, start scanning your network, and find threats that had existed there for years undetected. Now you know at least you are getting something for your money, and that’s what we found with Bitdefender.

Gardner: I have heard that many times. It has to, at the core, be really good at detecting. All the other bells and whistles don’t count if that’s not the case. Once you have established that you are detecting what’s been there, and what’s coming down the wire every day, the administration does become important.

Bryan, what is the administration like? How have you improved in terms of operations? Tell us about the ongoing day-to-day life using Bitdefender.

Managing mission-critical tech

Farmer: We are Bitdefender GravityZone users. We host everything in the cloud. We don’t have any on-premises Bitdefender machines, servers, or anything like that, and it’s nice. Like Dave said, we have a wide range of users and those users have a wide range of needs, especially with regards to Internet access, web page access, stuff like that.

For example, a police officer or an investigator needs to be able to access web sites that a clerk in the treasurer’s office just doesn’t need to be able to access. To be able to sit at my desk, or take my laptop out anywhere that I have an Internet connection, and make an adjustment when someone cannot get to somewhere they need to go is invaluable. It saves so much time.

We don’t have to travel to different sites. We don’t have to log in to a server. I can make adjustments from my phone. It’s wonderful to be able to set up these different profiles and to have granular control over what a group of people can do.

We can adjust which programs they can run. We can remove printing from a network. There are so many different ways that we can do it, from anywhere as long as we have a computer and Internet access. Being able to do that is wonderful.
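
To make the per-group granularity Farmer describes more concrete, here is a minimal sketch, in Python, of how such profiles might be expressed as data and pushed from a cloud console. The group names, policy fields, and the apply_policy call are all hypothetical illustrations, not GravityZone's actual policy model or API.

```python
# Hypothetical per-group endpoint policies. Field names are illustrative,
# not an actual management-console schema.
POLICIES = {
    "investigators": {
        "web_categories_blocked": [],          # broad web access for investigative work
        "allowed_applications": "any",
        "printing_enabled": True,
    },
    "treasurer_clerks": {
        "web_categories_blocked": ["social-media", "file-sharing", "unknown"],
        "allowed_applications": ["office-suite", "tax-system", "browser"],
        "printing_enabled": True,
    },
    "public_kiosks": {
        "web_categories_blocked": ["everything-except-allowlist"],
        "allowed_applications": ["browser"],
        "printing_enabled": False,
    },
}

class FakeConsole:
    """Stand-in for a cloud management console (assumed, for illustration)."""
    def apply_policy(self, group, policy):
        pass  # a real console would persist and distribute the profile

def push_policies(console, policies):
    """Push each group's profile from wherever you have an Internet connection."""
    for group, policy in policies.items():
        print(f"Applying policy for {group}: {policy}")
        console.apply_policy(group=group, policy=policy)

if __name__ == "__main__":
    push_policies(FakeConsole(), POLICIES)
```

The point of the sketch is only the shape of the exercise: define one profile per role, then adjust it centrally rather than visiting each site.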

Gardner: Dave, there is nothing more mission-critical than a public safety officer and their technology. And that technology is so important to everybody today, including a police officer, a firefighter, and an emergency medical technician (EMT). Any feedback when it comes to the protection and the performance, particularly in those mission-critical use cases?

Sadler: Bitdefender has allowed us the granularity to be able to adjust so that we don’t interfere with those mission-critical activities that the police officer or the firefighter are trying to perform.

So initially there was an adjustment period. Thank goodness everybody was patient during that process. Now we are finally about a year into it, a little over a year, and we have gotten things set up pretty well. The adjustments that we have to make now are minor. Like Bryan said, we don’t have an on-premises security server here. Our service is hosted in the cloud, and we have found that to be an actual benefit. Before, with a security server and the software hosted on-premises, there were machines that didn’t touch the network. Probably 40 to 50 percent of our machines would have had to be managed and protected manually because they never touch our network.

The Bitdefender GravityZone cloud-based security product offers us the capability to be able to monitor for detections, as well as adjust firewalls, etc., on machines that we never touch or never see on our network. It’s been a really nice product for us and we are extremely happy with its performance.

Gardner: Any other metrics of success for a public sector organization like yours with a small support organization? In a public sector environment you have to justify your budget. When you tell the people overseeing your budget why this is a good investment, what do you usually tell them?

Sadler: The benefit we have here is that our bosses are aware of the need to secure the network. We have cooperation from them. Because we are diligent in our evaluation of different products, they pretty much trust our decisions.

Justifying or proving the need for a security product has not been a problem. And again, the day-to-day announcements that you see in the newspaper and on web sites about data breaches or malware infections — all that makes justifying such a product easier.

Gardner: Any examples come to mind that have demonstrated the way that you like to use these products and these services? Anything come to mind that illustrates why this works well, particularly for your organization?

Stop, evaluate, and reverse infections

Farmer: Going back to the cloud hosting, all a machine has to do is touch the Internet. We have a machine in our office here right now that one of our safety officials had. We received an email notification that something was going on, that the machine needed to be disinfected and we needed to take a look at it.

The end-user didn’t have to notice it. We didn’t have to wait until it was a huge problem or a ransomware thing or whatever the case may be. We were notified automatically in advance. We were able to contact the user and get to the machine. Thankfully, we don’t think it was anything super-critical, but it could have been.

That automation was fantastic, as was not having to react so aggressively, so to speak. The proactivity that a solution like Bitdefender offers is outstanding.
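
The kind of automated notification Farmer describes could be sketched along these lines: poll a detections feed and email the IT team when a machine needs attention. The fetch_detections source, the mail relay, and the addresses are assumptions for illustration; the real alerting is a built-in feature of the product rather than something the county had to script.

```python
import smtplib
from email.message import EmailMessage

def fetch_detections():
    """Stand-in for a real detections feed from the security console (assumed)."""
    return [
        {"host": "SAFETY-LAPTOP-07", "severity": "high",
         "summary": "Malware detected, disinfection required"},
    ]

def notify(detection, smtp_host="localhost", to_addr="it-team@example.gov"):
    """Email the IT team so they can act before the end user even notices."""
    msg = EmailMessage()
    msg["Subject"] = f"[Endpoint alert] {detection['host']}: {detection['severity']}"
    msg["From"] = "security-alerts@example.gov"
    msg["To"] = to_addr
    msg.set_content(detection["summary"])
    with smtplib.SMTP(smtp_host) as smtp:   # assumes a reachable SMTP relay
        smtp.send_message(msg)

if __name__ == "__main__":
    for d in fetch_detections():
        if d["severity"] in ("high", "critical"):
            notify(d)
```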

Gardner: Dave, anything come to mind that illustrates some of the features or functions or qualitative measurements that you like?

Sadler: Yes, Bitdefender GravityZone will sandbox suspicious activity, watch its actions, and then roll back the changes if something bad is going on.

We actually had a situation where a vendor we use on a regular basis, from a large, well-respected company, called in to support a machine that they had in one of our offices. We were immediately notified via email that a ransomware attack was being attempted.

So this vendor was using a remote desktop application. Somehow the end-user got directed to a bad site, and when it failed the first time on their end, all they could tell was, “Hey, my remote desktop software is not working.” They stopped and tried it again.

We were notified on our end that a ransomware attack had been stopped, evaluated, and reversed by Bitdefender. Not once, but twice in a row. So we were immediately able to contact that office and say, “Hey, stop what you are doing.”

Then we followed up by disconnecting that computer from the network and evaluating them for infection, to make sure that everything had been reversed. Thank goodness, Bitdefender was able to stop that ransomware attack and actually reverse the activity. We were able to get a clean scan and return that computer back to service fairly quickly.
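
Sadler's sequence (block the attack, isolate the machine, confirm a clean scan, and only then return it to service) reads like a small playbook. The sketch below encodes that flow; the isolate, full_scan, and release functions are hypothetical stand-ins for whatever isolation and scanning mechanisms an organization actually has.

```python
import time

# Hypothetical endpoint-control primitives; real tooling would provide these.
def isolate(host):
    print(f"{host}: disconnected from network")

def full_scan(host):
    print(f"{host}: running full scan")
    return "clean"   # stubbed result for the sketch

def release(host):
    print(f"{host}: returned to service")

def ransomware_playbook(host, max_rescans=2):
    """Isolate, verify, and only then return the machine to service."""
    isolate(host)
    for attempt in range(1, max_rescans + 1):
        if full_scan(host) == "clean":
            release(host)
            return True
        time.sleep(5)  # back off before rescanning
    print(f"{host}: still not clean after {max_rescans} scans; reimage and restore from backup")
    return False

ransomware_playbook("VENDOR-SUPPORT-PC-02")
```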

Gardner: How about looking to the future? What would you like to see next? How would you improve your situation, and how could a vendor help you do that?

Meeting government requirements

Sadler: The State of Virginia just passed a huge bill dealing with election security, and everybody knows that’s a huge, hot topic when it comes to security right now. And because most of the localities in Virginia are independent localities, the state passed a bill that allows the state Department of Elections and the US Department of Homeland Security to step in a little bit more with the local governments and monitor or control their security, which in the end is going to be a good thing.

But a lot of the requirements that we are now being asked to answer to are already addressed by the Bitdefender product. For example, automated patch management and notification of security issues.

So, Bitdefender right now is already answering a lot of the new requirements. The one thing that I would like to see … from what I understand the cloud-based version of Bitdefender does not allow you to do mobile device management. And that’s going to be required by some of these regulations that are coming down. So it would be really nice if we could have one product that would do the mobile device management as well as the cloud-based security protection for a network.

Gardner: I imagine they hear you loud and clear on that. When it comes to compliance like you are describing from a state down to a county, for example, many times there are reports and audits that are required. Is that something that you feel is supported well? Are you able to rise to that occasion already with what you have installed?

Farmer: Yes, Bitdefender is a big part of us being able to remain compliant. The Criminal Justice Information Services (CJIS) audit is one we have to go through on a regular basis. Bitdefender helps us address a lot of the requirements of those audits as well as some of the upcoming audits that we haven’t seen yet that are going to be required by this new regulation that was just passed this past year in the Commonwealth of Virginia.

But from the previews that we are getting on the requirements of those newly passed regulations, it does appear that Bitdefender is going to be able to help us address some of those needs, which is good. By far, Bitdefender’s capability to answer those needs is superior to the products we have been using in the past.

Gardner: Given that many other localities, cities, towns, municipalities, counties are going to be facing similar requirements, particularly around election security, for example, what advice would you give them, now that you have been through this process? What have you learned that you would share with them so that they can perhaps have an easier go at it?

Research reaps benefits in time, costs 

Farmer: I have seen in the past a lot of places that look at the first line item, so to speak, and then make a decision on that. Then when they get down the page a little bit and see some of the other requirements, they end up in situations where they have two, three, or four pieces of software, and a couple of different pieces of hardware, working together to accomplish one goal. Certainly, in our situation, Bitdefender checks a lot of different boxes for us. If we had not taken the time to research everything properly and get into the full breadth of what it’s capable of, we could have spent a lot more money and created a lot more work and headaches for ourselves.

A lot of people in IT will already know this, but you have to do your homework. You have to see exactly what you need, get a wide-angle view of it, and try to choose something that helps do all of those things. Then automate off-site, and automate as much as you can, to use your time wisely and efficiently.

Gardner: Dave, any advice for those listening? What have you learned that you would share with them to help them out?

Sadler: The breadth of the protection that we are getting from Bitdefender has been a major plus. So again, like Bryan said, find the product that you can put together under one big umbrella — so that you have one point of adjustment. For example, we are able to adjust firewalls, virus protection, and off-site USB protection — all this from one single control panel instead of having to manage four or five different control panels for different products.

It’s been a positive move for us, and we look forward to continuing to work with that product and we are watching the new product still under development. We see new features coming out constantly. So if anyone from Bitdefender is listening, keep up the good work. We will hang in there with you and keep working.

But the main thing for IT operators is to evaluate your possibilities, evaluate whatever possible changes you are going to make, before you do it. It can be an investment of money and time that goes to waste if you are not sure of the direction you are going in. Use a product that has a good reputation and one that checks off all the boxes, like Bitdefender.

Farmer: In a lot of these situations, when you are working with a county government or a school you are not buying something for 30, 60, or 90 days – instead you are buying a year at a time. If you make an uninformed decision, you could be putting yourself in a jam time- and labor-wise for the next year. That stuff has lasting effects. In most counties, we get our budgets and that’s what we have. There are no do-overs on stuff like this. So, it speaks back to making a well-informed decision the first time.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Bitdefender.


Delivering a new breed of patient access best practices requires an alignment of people, process, and technology

The next BriefingsDirect healthcare finance insights discussion explores the rapidly changing ways that caregiver organizations on-board and manage patients.

How patients access their healthcare is transitioning to the digital world — but often in fits and starts. This key process nonetheless plays a major role in how patients perceive their overall experiences and determines how well providers manage both care and finances.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

Stay with us to unpack the people, process, and technology elements behind modern patient access best practices. To learn more, we are joined by an expert panel: Jennifer Farmer, Manager of Patient Access and Admissions at Massachusetts Eye and Ear Infirmary in Boston; Sandra Beach, Manager of the Central Registration Office, Patient Access, and Services and Pre-Services at Cooley Dickinson Healthcare in Northampton, Mass., and Julie Gerdeman, CEO of HealthPay24 in Mechanicsburg, Penn. The panel is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Jennifer, for you and your organization, how has the act of bringing a patient into a healthcare environment — into a care situation — changed in the past five years?

Farmer: The technology has exploded and it’s at everyone’s fingertips. Five years ago, patients would come to us from referrals and use the old-fashioned way of calling to schedule an appointment. Today it is much easier for them. They can simply go online to schedule their appointments.

They can still do walk-ins as they did in the past, but it’s much easier access now because we have the ways and means for the patients to be triaged and given the appropriate information so they can make an appointment right then and there, versus waiting for their provider to call to say, “Hey, we can schedule your appointment.” Patients just have it a lot easier than they did in the past.

Gardner: Is that due to technology? It seems to me that when I used to go to a healthcare organization they would be greeting me by handing me a clipboard, but now they are always sitting at a computer. How has the digital experience changed this?

Farmer: It has changed it drastically. Patients can now complete their accounts online and so the person sitting at the desk already has that patient’s information. So the clipboard is gone. That’s definitely something patients like. We get a lot of compliments on that.

It’s easier to have everything submitted to us electronically, whether it’s medical records or health insurance. It’s also easier for us to communicate with the patient through the electronic health record (EHR). If they have a question for us or we have a question for them, the health record is used to go back and forth.

Gain a Detailed Look at Patient Financial Engagement Strategies

There are not as many phone calls as there used to be, not as many loose ends. There is also the advent of telemedicine these days, so doctors can have a discussion or a meeting with the patient on their cell phones. Technology has definitely changed how medicine is being delivered as well as improving the patient experience.

Gardner: Sandra, how important is it to get this right? It seems to me that first impressions are important. Is that the case with this first interception between a patient and this larger, complex healthcare organization and even ecosystem?

Beach: Oh, absolutely. I agree with Jennifer that so many things have changed over the last five years. It’s a benefit for patients because they can do a lot more online; they can electronically check in now, for example. That’s a new function that’s going to be coming with [our healthcare application] Epic, so that patients can do that all online.

The patient portal experience is really important too because patients can go in there and communicate with the providers. It’s really important for our patients as telemedicine has come a huge distance over the years.

Gardner: Julie, we know how important getting that digital trail of a patient from the start can be; the more data the better. How have patient access best practices been helped or hindered by technology? Are the patients perceiving this as a benefit?

Gerdeman: They are. There has been a huge improvement in patient experience from the advent and increase of technology. A patient is also a consumer. We are all just people, and in our daily lives we do more research.

So, for patient access, even before they book an appointment, either online or on the phone, they pull out their phones and do a ton of research about the provider institution. That’s just like folks do for anything personal, such as a local service like dry cleaning or a haircut. For anything in your neighborhood or community, you do the same for your healthcare because you are a consumer.

The same level of consumer support that’s expected in our modern daily lives has now come to be expected with our healthcare experiences. Leveraging technology for access, and, as Jennifer and Sandra mentioned, for the actual clinical experience — via telemedicine and digital transformation — is just getting started and will continue to impact healthcare.

Gardner: We have looked at this through the lens of the experience and initial impressions — but what about economics? When you do this right, is there a benefit to the provider organization? Is there a benefit to the patient in terms of getting all those digital bits and bytes and information in the right place at the right time? What are the economic implications, Jennifer?

Technology saves time and money

Farmer: They are two-fold. One, the economic implication for a patient is that they don’t necessarily have to take a day off from work or leave work early. They are able to continue via telemedicine, which can be done in the evening. When providers offer evening and weekend appointments, that’s to satisfy the patient so they don’t have to spend as much time trying to rearrange things, get daycare, or pay for parking.

For the provider organization, the economic implications are that we can provide services to more patients, even as we streamline certain services so that it’s all more efficient for the hospital and the various providers. Their time is just as valuable as anyone else’s. They also want to reduce the wait times for someone to see a patient.

The advent of using technology across different avenues of care reduces that wait time for available services. The doctors and technicians are able to see more patients, which obviously is an economic positive for the hospital’s bottom line.

Gardner: Sandra, patients are often not just having one point of intersection, if you will, with these provider organizations. They probably go to a clinic, then a specialist, perhaps rehabilitation, and then use pharmaceutical services. How do we make this more of a common experience for how patients intercept such an ecosystem of healthcare providers?

Beach: I go back to the EHRs that Jennifer talked about. With us being in the Partners system, no matter where you go — you could go to a rehab appointment, a specialist, to the cancer center in Boston — all your records are accessible for the physicians, and for the patients. That’s a huge step in the right direction because, no matter where the patient goes, you can access the records, at least within our system.

Gardner: Julie, to your point that the consumer experience is dictating people’s expectations now, this digital trail and having that common view of a patient across all these different parts of the organization is crucial. How far along are we with that? It seems to me that we are not really fully baked across that digital experience.

Gerdeman: You’re right, Dana. I think the Partners approach is an amazing exception to the rule because they are able to see and share data across their own network.

Throughout the rest of the country, it’s a bit more fractured and splintered. There remains a lot of friction in accessing records as you move — even in some cases within the same healthcare system — from a clinic or the emergency department (ED) into the facility or to a specialist.

The challenge is one of interoperability of data and integration of that data. Hospitals continue to go through a lot of mergers and acquisitions, and every acquisition creates a new challenge.

From the consumer perspective, they want that to be invisible. It should be invisible; the right data should be on their phones regardless of what the encounter was, what the financial obligation for the encounter was — all of it. That’s the expectation, but it’s not yet what’s happening everywhere. There is a way to go in terms of interoperability and integration on the healthcare side.

Gardner: We have addressed the process and the technology, but the third leg on the stool here is the people. How can the people who interact with patients at the outset foster a better environment? Has the role and importance of who is at that initial intercept with the patient been elevated? So much rides on getting the information up front. Jennifer, what about the people in the role of accessing and on-boarding patients, what’s changed with them?

Get off to a great start

Farmer: That is the crux of the difference between a good patient experience and a terrible patient experience, that first interaction. So folks who are scheduling appointments and maybe doing registration — they may be at the information desk — they are all the drivers to making sure that that patient starts off with a great experience.

Most healthcare organizations are delving into different facets of customer service in order to ensure that the patient feels great and like they belong when they come into an organization. Here at Mass. Eye and Ear, we practice something called Eye Care. Essentially, we think about how you would want yourself and your family members to be treated, to make sure that we all treat patients who walk in the door like they are our family members.

When you lead with such a positive approach, it carries downstream into that patient’s feeling of, “I am in the right place. I expect my care to be fantastic. I know that I’m going to receive great care.” Their positive initial outlook generally reflects the positive outcome of their overall visit.

This has changed dramatically even within the past two to three years. Most providers used to be siloed, with different areas or departments. That meant patients would hear, “Oh, sorry, we can’t help you. That’s not our area.” To make it a more inclusive experience, everyone in the organization is a brand ambassador.

We have to make sure that people understand that, to make it more inclusive for the patient and less hectic for the patient, no matter where you are within a particular organization. I’m sure Sandra can speak to this as well. We are all important to that patient, so if you don’t know the answer, you don’t have to say, “I don’t know.” You can say, “Let me get someone who can assist you. I’ll find some information for you.”

It shouldn’t feel like work for patients when they walk in the door. They should be treated as a guest, welcomed and treated as a family member. Three or four years ago, it was definitely the mindset of, “Not my job.” At other organizations that I visit, I do see more of a helpful environment, which has changed the patient perception of hospitals as well.

Beach: I couldn’t agree more, Jennifer. We have the same thing here as with your Eye Care. I ask our staff every day, “How would you feel if you were the patient walking in our door? Are we greeting patients with a nice, warm, friendly smile? Are we asking, ‘How can I help you today?’ Or, ‘Good morning, what can I do for you today?’”

Gain a Detailed Look at Patient Financial Engagement Strategies

We keep that at the forefront for our staff so they are thinking about this every time that they greet a patient, every day they come to work, because patients have choices, patients can go to other facilities, they can go to other providers.

We want to keep our patients within our healthcare system. So it’s really important that we have a really good patient experience on the front end because, as Jennifer said, it has a positive outcome on the back end. If they start off at the very beginning with a scheduler or a registrar or an ED check-in person, and they are not greeted in a friendly, warm atmosphere, then typically that sets the tone for their total visit. That first interaction is really what they remember.

Gardner: Julie, this reflects back on what’s been happening in the consumer world around the user experience. It seems obvious.

So I’m curious about this notion of competition between healthcare providers. That might be something new as well. Why do healthcare provider organizations need to be thinking about this perception issue? Is it because people could pick up and choose to go somewhere else? How has competition changed the landscape when it comes to healthcare?

Competing for consumers’ care 

Gerdeman: Patients have choices. Sandra described that well. Patients, particularly in metropolitan or suburban areas, have lots of options for primary care, specialty care, and elective procedures. So healthcare providers are trying to respond to that.

In the last few years you have seen not just consumerism from the patient experience, but consumerism in terms of advertising, marketing, and positioning of healthcare services — like we have never seen before. That competition will continue and become even more fierce over time.

Providers should put the patient at the center of everything that they do. Just as Jennifer and Sandra talked about, putting the patient at the heart and then showing empathy from the very first interaction. The digital interaction needs to show empathy, too. And there are ways to do that with technology and, of course, the human interaction when you are in the facility.

Patients don’t want to be patients most of the time. They want to be humans and live their lives. So, the technology supporting all of that becomes really crucial. It has to become part of that experience. It has to arm the patient access team and put the data and information at their fingertips so they can look away from a computer or a kiosk and interact with that patient on a different level. It should arm them to have better, empathic interactions and build trust with the patient, with the consumer.

Gardner: I have seen that building competition where I live in New Hampshire. We have had two different nationally branded critical-care clinics open up — popping up like mushrooms in the spring rain — in our neighborhood.

Let’s talk about the experience not just for the patient but for that person who is in the position of managing the patient access. The technology has extended data across the partner organization. But still technology is often not integrated in the back end for the poor people who are jumping between four and five different applications — often multiple systems — to on-board patients.

What’s the challenge from the technology for the health provider organization, Jennifer?

One system, one entry point, it’s Epic

Farmer: That used to be our issue until we gained the Epic system in 2016. People going into multiple applications was part of the issue with having a positive patient experience. Every entry point that someone would go to, they would need to repeat their name and date of birth. It looked one way in one system and it looked another way in a different system. That went away with Epic.

Epic is one system. It covers registration, or the patient access side, as well as coding, billing, medical records, clinical care, and medications. It’s everything.

So for us here at Mass. Eye and Ear, no matter where you go within the organization, and as Sandra mentioned earlier, we are part of the same Partners HealthCare system. You can actually go to any Partners facility and that person who accesses your account can see everything. From a patient access standpoint, they can see your address and phone number, your insurance information, and who you have as an emergency contact.

There isn’t that anger that patients had been feeling before, because now they are literally giving their name and date of birth only as a verification point. It does make it a lot easier for our patients to come through the door, go to different departments for testing, for their appointment, for whatever reason that they are here, and not have to show their insurance card 10 times.

If they get a bill in the mail and they are calling our billing department, they can see the notes that our financial coordinators, our patient access folks, put on the account when they were here two or three months ago and help explain why they might have gotten a bill. That’s also a verification point, because we document everything.

So, a financial coordinator can tell a patient they will get a bill for a co-pay or for co-insurance, and then when they get that bill and call our billing team and say, “I was never told that,” we have documentation that they were told. So, it’s really one-stop shopping for the folks who are working within Epic. For the patient, nine times out of 10 they can just go from floor to floor, doctor to doctor, and they don’t have to show ID again, because everything is already stored in Epic.

Beach: I agree because we are on Epic as well. Prior to that, three years ago, it would be nothing for my registrars to have six, seven systems up at the same time and have to toggle back and forth. You run a risk by doing that, because you have so many systems up and you might have different patients in the system, so that was a real concern.

If a patient came in and didn’t have an order from the provider, we would have to call their office. The patient would have to wait. We might call two or three times.

Now, we have one system. If the patient doesn’t have the order, it’s in the computer system. We just have to bring it up, validate it, patient gets checked in, patient has their exam, and there is no wait. It’s been a huge win for us for sure — and for our patients.

Gardner: Privacy and compliance regulations play a more important role in the healthcare industry than perhaps anywhere else. We have to not only be mindful of the patient experience, but also address these very important technical issues around compliance and security. How are you able to both accomplish caring for the patient and addressing these hefty requirements?

It’s healthy to set limits on account access

Farmer: Within Epic, access is granted by your role. Staff working in admitting, the ED, or anywhere within patient access don’t have access to someone’s medication list or their orders. However, another role may have that access.

Compliance is extremely important. Access is definitely something that is taken very seriously. We want to make sure that staff are accessing accounts appropriately and that there are guardrails built in place to prevent someone from accessing accounts if they should not be.

For instance, within the Partners HealthCare system, we do tend to get people of a certain status; we get politicians, we get celebrities, we get heads of state, public figures that go to various hospitals, even outside of Partners that are receiving care. So we have locks on those particular accounts for employees. Their accounts are locked.

Gain a Detailed Look at Patient Financial Engagement Strategies

So if you try to access the account, you get a hard stop. You have to document why you are accessing the account, and then it is reviewed immediately. And if it’s determined that your role has nothing to do with it and you should not be accessing this particular account, then the organization takes the necessary steps to investigate and determine either yes, they had a reason to be in this account, or no, they did not, and the potential of termination is there.
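
As a rough model of the hard stop Farmer describes, where opening a locked account requires a documented reason and every such access is reviewed, the sketch below combines role-based permissions with a break-the-glass audit trail. The roles, field names, and review mechanism are illustrative assumptions, not Epic's actual access model.

```python
from datetime import datetime, timezone

# Which record sections each role may see (illustrative only).
ROLE_PERMISSIONS = {
    "patient_access": {"demographics", "insurance", "emergency_contact"},
    "clinical":       {"demographics", "medications", "orders", "notes"},
}

LOCKED_ACCOUNTS = {"MRN-0042"}   # e.g., public figures flagged for extra protection
audit_log = []                   # every locked-account access goes here for review

def access_record(user, role, mrn, section, reason=None):
    """Enforce role limits, and require a documented reason on locked accounts."""
    if section not in ROLE_PERMISSIONS.get(role, set()):
        raise PermissionError(f"{role} may not view {section}")
    if mrn in LOCKED_ACCOUNTS:
        if not reason:
            # Hard stop: the user must state why before the record opens.
            raise PermissionError("Locked account: a documented reason is required")
        audit_log.append({
            "user": user, "role": role, "mrn": mrn, "section": section,
            "reason": reason, "time": datetime.now(timezone.utc).isoformat(),
        })  # reviewed by compliance immediately after the fact
    return f"{section} for {mrn}"

# A clerk updating insurance on a locked account must document the reason.
print(access_record("jdoe", "patient_access", "MRN-0042", "insurance",
                    reason="Patient called to update coverage"))
print(audit_log)
```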

But we do take privacy very seriously, within the system and then outside of the system. We make sure we are providing a safe space for people to be able to provide us with their information. It is at the forefront, it drives us, and folks definitely are aware because it is part of their training.

Beach: You said it perfectly, Jennifer. Because we do have a lot of people who are high profile and who do come through our healthcare systems, the security, I have to say, is extremely tight on records. And so it should be. If you are in a record and you shouldn’t be there, then there are consequences to that.

Gardner: Julie, in addition to security and privacy we have also had to deal with a significant increase in the complexity around finances and payments given how insurers and the payers work. Now there are more copays, more kinds of deductibles. There are so many different plans: platinum, gold, silver, bronze.

In order to keep the goal of a positive patient experience, how are we addressing this new level of complexity when it comes to the finances and payments? Do they go hand-in-hand, the patient experience, the access, and the economics?

A clean bill of health for payment

Gerdeman: They do, and they should, and they will continue to. There will remain complexity in healthcare. It will improve certainly over time, but with all of the changes we have seen complexity is a given. It will be there. So how to handle the complexity, with technology, with efficient process, and with the right people becomes more and more important.

There are ways to make the complex simple with the right technology. On the back end, behind that amazing patient experience — both the clinical experience and also the financial experience — we try to shield the patient from the complexity. At HealthPay24 we are focused on the financial experience, taking all of the data that’s behind there and presenting it very simply to a patient.

That means one small screen on the phone, pulling together different encounters and different back ends, and presenting it all very simply so our patients can meet their financial obligations. They are not concerned that the ED had a different electronic medical record (EMR) than the specialist. That’s really not the concern of the patient, nor should it be. It’s the concern of how the providers can use technology on the back end to then make it simple and change that experience.

We talked about loyalty, and that’s what drives loyalty. You are going to keep coming back to a great experience, with great care, and ease of use. So for me, that’s all crucial as we go forward with healthcare – the technology and the role it plays.

Gardner: And Jennifer and Sandra, how do you see the relationship between the proper on-boarding, access, and experience and this higher complexity around the economics and finance? Do you see more of the patient experience addressing the economics?

Farmer: We have done an overhaul of our system where it concerns patients paying bills or not having health insurance. Our financial coordinators are there to assist our patients, whether by phone, email, or in person. There are lots of different programs we can introduce patients to.

We are certified counselors for the Commonwealth of Massachusetts. That means we are able to help the patient apply for health insurance through the Health Connector for Massachusetts as well as for the state Medicaid program called MassHealth. And so we are here to help those patients go through that process.

We also have an internal program that can assist patients with paying their bills. We talk to patients about different credit cards that are available for those who may qualify. And sometimes the bottom line is simply setting somebody up on a payment plan. So, we take many factors into account, and we try to make it work as best as we can for the patient.

At the end of the day, it’s about that patient receiving care and making sure that they are feeling good about it. We definitely try to meet their needs and introduce them to different things. We are here to support them, and at the end of the day it’s again about their care. If they can’t pay anything right now, but they obviously need immediate medical services, then we assure them, let’s focus on your care. We can talk about the back end or we can talk about your bills at a different point.

We do provide them with different avenues, and we are pretty proud of that because I like to believe that we are successful with it and so it helps the patient overall.

Gerdeman: It really does come down to the fact that patients want to meet their obligations, but they need options to be able to do that. Those options become really important — whether it’s a loan program, a payment plan, or applying for financial assistance — and technology can enable all of these things.

For HealthPay24, we enable an eligibility check right in the platform so you don’t have to worry about others knowing. You can literally check for eligibility by clicking a button and entering a few fields to know if you should be talking to financial counseling at a provider.

You can apply for payment plans, if the providers opt for that. It will be proactively offered based on demographic data to a patient through the platform. You can also apply for loans, for revolving credit, through the platform. Much of what patients want and need financially is now available and enabled by technology.
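
Gerdeman's description, an eligibility check from a button click plus a few fields, with payment plans offered proactively, might reduce to something like the sketch below. The thresholds, income cutoffs, and plan terms are invented for illustration; they are not HealthPay24's actual rules.

```python
def check_financial_options(household_size, monthly_income, balance_due):
    """Very rough eligibility screen -- every threshold here is illustrative only."""
    # Assumed assistance cutoff, loosely scaled to household size.
    assistance_cutoff = 2000 + 700 * household_size
    if monthly_income <= assistance_cutoff:
        return {"refer_to": "financial_counseling"}
    if balance_due > monthly_income * 0.25:
        # Large balance relative to income: proactively offer a payment plan.
        months = min(12, max(3, round(balance_due / 100)))
        return {"offer": "payment_plan", "months": months,
                "monthly_payment": round(balance_due / months, 2)}
    return {"offer": "pay_in_full"}

# Example: a patient with a large balance relative to income gets a plan offered up front.
print(check_financial_options(household_size=3, monthly_income=5200, balance_due=1800))
```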

Gardner: Sandra, such unification across the financial, economic, and care giving roles strikes me as something that’s fairly new.

Beach: Yes, absolutely it is. We have a program in our ED, for example, that we instituted a year ago. We offer an ED discharge service so when the patient is discharged, they stop at our desk and we offer these patients a wide variety of payment options. Or maybe they are homeless and they are going through a tough time. We can tell them where they can go to get a free meal or spend the night. There are a whole bunch of programs available.

That’s important because we will never turn a patient away. And when patients come through our ED, they need care. So when they leave, we want to be able to help them as much as we can by supporting them and giving them these options. 

That’s important because we will never turn a patient away. And when patients come through our ED, they need care. So when they leave, we want to be able to help them as much as we can by supporting them and giving them these options.

We have also made phone calls for our patients as well. If they need to get someplace just to spend the night, we will call and we will make that arrangement for those patients. So when they leave, they know they have a place to go. That’s really important because people go through hard times.

Gardner: Sandra, do you have any other examples of processes or approaches to people and technology that you have put in place recently? What have been some of the outcomes?

Check-in at home, spend less time waiting 

Beach: Well, the ED discharge service has made a huge impact. We saw probably 7,000-8,000 patients through that desk over the last year. We really have helped a lot of patients. But we are also there just to lend an ear. Maybe they have questions about what the doctor just said to them, but they really weren’t sure what he said. So it’s just made a huge impact for our patients here.

Gardner: Jennifer, same question, any processes you have put in place, examples of things that have worked and what are the metrics of success?

Farmer: We just rolled out e-check-in. So I don’t have any metrics on it just yet, but this is a process where the patient can go to their MyChart or their EHR and check in for an appointment prior to the day. They can also pay their copay. They can provide us with updates to their insurance information, address, or phone number, so when they actually come to their appointment, they are not stopping at the desk to sign in or check in.

That seems to be a popular option at the office currently piloting this, and we are hoping for a big success. It will be rolled out to other entities, but right now that is something that we are working on. It’s tying in the technology and the patient care for patient access. It’s tying in the ease of the check-in with that patient. And so again, we are hoping that we have some really positive metrics on that.
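
A minimal model of the e-check-in flow Farmer describes, confirming the visit, collecting the copay, and updating contact and insurance details before the day of the appointment, might look like the sketch below. The field names and steps are assumptions for illustration, not MyChart's or Epic's actual workflow.

```python
def e_check_in(appointment, updates, copay_paid):
    """Mark a visit ready so the patient can walk straight in on the day."""
    record = dict(appointment)               # copy the scheduled visit
    record.update(updates)                   # new address, phone, or insurance details
    record["copay_collected"] = copay_paid
    record["status"] = "checked-in" if copay_paid else "pre-registered"
    return record

appt = {"patient": "MRN-1234", "provider": "Dr. Lee",
        "date": "2021-07-19", "copay_due": 25.00}
print(e_check_in(appt, {"phone": "555-0100"}, copay_paid=True))
```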

Gardner: What sort of timeframe are we talking about here in terms of start to finish from getting that patient into their care?

Farmer: If they are walking in the door having already done e-check-in, they go immediately in for their appointment. They are showing up on time, they are expected, and they go right in. So the time the patient spends waiting in line or sitting in the waiting area is being reduced, and the time they have to spend talking to someone about any changes, or confirming everything that we have on their account, is being reduced.

And then we are hoping to test this in a pilot program for the next month to six weeks to see what kind of data we can get. Hopefully that will, across the board, help with the check-in process for patients and reduce that time for the folks who are at the desk, so they can focus on other tasks as well. So we are giving them back their time.

Gardner: Julie, this strikes me in the parlance of other industries as just-in-time healthcare, and it’s a good move. I know you deal with a national group of providers and payers. Any examples, Julie, that demonstrate and illustrate the positive direction we are going with patient access and why technology is an important part of that?

Just-in-time wellness

Gerdeman: I refer to Christopher Penn’s model of People, Process, and Technology here, Dana, because when people touch process, there is scale, and when process and technology intersect, there is automation. But most importantly, when people intersect with technology, there is innovation, and what we are seeing is not just incremental innovation — but huge leaps in innovation.

What Jen just described, that experience of just-in-time healthcare, is literally a huge leap, right? We have come to expect it when we reserve a table via OpenTable or e-check-in for a hair appointment. I go back to that consumer experience, but that innovation is happening all across healthcare.

Gain a Detailed Look at Patient Financial Engagement Strategies

One of the things that we just launched, which we are really excited about, is predictive analytics tied to the payment platform. If you know and can anticipate the behaviors and the patterns of a demographic of patients, financially speaking, then it will help ease the patient experience in what they owe, how they pay, and what’s offered to them. It boosts the bottom line of providers, because they are going to get increased revenue collection.

So where predictive analytics is going in healthcare and tying that to the patient experience and to the financial systems, I think will become more and more important. And that leads to even more — there is so much emerging technology on the clinical side and we will continue to see more emerging technology on the back-end systems and the financial side as well.
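
As a hedged illustration of the predictive-analytics idea, the sketch below fits a tiny logistic-regression model on made-up history (balance due and prior on-time payments) to estimate how likely a patient is to need a payment plan, and uses that score to decide what to offer up front. The features, data, and threshold are fabricated for illustration only and are not HealthPay24's model.

```python
from sklearn.linear_model import LogisticRegression

# Fabricated history: [balance_due, prior_on_time_payments]; label = needed a payment plan.
X = [[200, 5], [1500, 1], [300, 4], [2500, 0], [800, 2], [3200, 1], [150, 6], [1900, 0]]
y = [0, 1, 0, 1, 0, 1, 0, 1]

model = LogisticRegression(max_iter=1000).fit(X, y)

def suggest_offer(balance_due, prior_on_time_payments, threshold=0.5):
    """Offer a plan up front when the model thinks one will likely be needed."""
    p = model.predict_proba([[balance_due, prior_on_time_payments]])[0][1]
    return {"plan_probability": round(p, 2),
            "offer": "payment_plan" if p >= threshold else "standard_statement"}

print(suggest_offer(balance_due=2100, prior_on_time_payments=1))
```

The design point is the one Gerdeman makes: anticipating financial behavior lets the platform present the right option proactively instead of waiting for a missed bill.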

Gardner: Before we close out, perhaps a look to the future, and maybe even a wish list. Jennifer, if you had a wish list for how this will improve in the next few years, what’s missing, what’s yet to come, what would you like to see available with people, process, and technology?

Farmer: I go back to just patient care, and while we are in a very good spot right now, it can always improve. We need more providers, we need more technicians, we need more patient access folks, and the capacity to take care of people, because the population is growing and, whether you know it or not, you are going to need a doctor at some point.

So I think we should continue on the path that we are on: providing excellent customer service, listening to patients, and being empathetic. Also providing them with options: different appointment times, different finance options, different providers. It can only get better.

Beach: I absolutely agree. We have a really good computer system, we have the EMRs, but I would have to agree with Jennifer as well that we really need more providers. We need more nurses to take care of our patients.

Gardner: So it comes down to human resources. How about those front-line people who are doing the patient access intercept? Should they have an elevated status, role, and elevated pay schedule?

Farmer: It’s really tough for the patient access people. They are working on that front line every minute of every day, eight to 10 hours a day, so sometimes that’s tough.

It’s really important that we keep training them. We give them the option of going to customer service classes, because their role has changed from basically checking in a patient to now making sure their insurance is correct. We have so many different insurance plans these days. Knowing each of those elevates the registrar to be almost an expert in that field, able to help the patient, get them through the registration process, and, the bottom line, get the services reimbursed. So it’s really come a long way.

Gardner: Julie, on this future perspective, what do you think will be coming down the pike for provider organizations like Jennifer and Sandra’s in terms of technology and process efficiency? How will the technology become even more beneficial?

Gerdeman: It’s going to be a big balancing act. What I mean by that is we are now officially more of an older country than a younger country in terms of age. People are living longer, they need more care than ever before, and we need the systems to be able to support that. So, everything that was just described is critical to support our aging population.

But what I mean by the balancing act is we have a whole other generation entering into healthcare as patients, as providers, and as technologists. This new generation has a completely different expectation of what that experience should and will be. They might expect their wearable device to feed all of that data to a provider, so that they wouldn't need to explain it; all the health data is communicated ahead of time, before they walk in, and the visit becomes a meaningful conversation about what to do.

This new generation is going to shift us to wellness care, not just care when we are sick or injured. I think that's all changing. We are starting to see the beginnings of that focus on wellness. Providers are going to have to juggle wearables and devices, and how they are used, with the aging population and traditional services as well as the new. Technology is going to be a key, core part of that going forward.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HealthPay24.


How security designed with cloud migrations in mind improves an enterprise’s risk posture top to bottom

The next BriefingsDirect data security insights discussion explores how cloud deployment planners need to be ever-vigilant for all types of cybersecurity attack vectors. Stay with us as we examine how those moving to and adapting to cloud deployments can make their data and processes safer and easier to recover from security incidents.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about taking the right precautions for cloud and distributed data safety we welcome two experts in this field, Mark McIntyre, Senior Director of Cybersecurity Solutions Group at Microsoft, and Sudhir Mehta, Global Vice President of Product Management and Strategy at Unisys. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Mark, what’s changed in how data is being targeted for those using cloud models like Microsoft Azure? How is that different from two or three years ago?

McIntyre: First of all, the good news is that we see more and more organizations around the world, including the US government but also globally, pursuing cloud adoption. I think that's great. Organizations around the world recognize the business value and, I think increasingly, the security value.

The challenge I see is one of expectations. Who owns what as you go to the cloud? We need to be crisper and clearer with our partners and customers as to who owns what responsibility in terms of monitoring and managing in a team environment as you transition from a traditional on-premises environment all the way up into a software-as-a-service (SaaS) environment.

Gardner: Sudhir, what’s changed from your perspective at Unisys as to what the cloud adoption era security requirements are?

Mehta: When organizations move data and workloads to the cloud, many of them underestimate the complexities of securing hybrid, on-premises, and cloud ecosystems. A lot of the failures, or what we would call security breaches or intrusions, you can attribute to inadequate security practices, policies, procedures, and misconfiguration errors.

As a result, cloud security breach reports have been on the rise. Container technology adds flexibility and speed-to-market, but it is also introducing a lot of vulnerability and complexity.

A lot of customers have legacy, on-premises security methodologies and technologies, which obviously they can no longer use or leverage in the new, dynamic, elastic nature of today’s cloud environments.

Gartner estimates that through 2022 at least 95 percent of cloud security failures will be the customers’ fault. So the net effect is cloud security exposure, the attack surface, is on the rise. The exposure is growing.

Change in cloud worldwide 

Gardner: People, process, and technology all change as organizations move to the cloud. And so security best practices can fall through the cracks. What are you seeing, Mark, in how a comprehensive cloud security approach can be brought to this transition so that cloud retains its largely sterling reputation for security?

McIntyre: I completely agree with what my colleague from Unisys said. Not to crack a joke — this is a serious topic — but my colleagues and I meet a lot with both US government and commercial counterparts. And they ask us, “Microsoft, as a large cloud provider, what keeps you awake at night? What are you afraid of?”

It’s always a delicate conversation because we need to tactfully turn it around and say, “Well, you, the customer, you keep us awake at night. When you come into our cloud, we inherit your adversaries. We inherit your vulnerabilities and your configuration challenges.”

As our customers plan a cloud migration, it will invariably include a variety of resources being left on-premises, in a traditional IT infrastructure. We need to make sure that we help them understand the benefits already built into the cloud, whether they are seeking infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS), or SaaS. We need to be really clear with our customers — through our partners, in many cases – about the technologies that they need to make themselves more secure. We need to give them awareness into their posture so that it is built right into the fabric of the cloud service.

Gardner: Sudhir, it sounds as if organizations who haven’t been doing things quite as well as they should on-premises need to be even more mindful of improving on their security posture as they move to the cloud, so that they don’t take their vulnerabilities with them.

From Unisys’s perspective, how should organizations get their housecleaning in order before they move to the cloud?

Don’t bring unsafe baggage to the cloud 

Mehta: We always recommend that customers first look at putting their house in order. Security hygiene is extremely important, whether you look at data protection, information protection, or your overall access exposure. That can be employees working at home, or vendors and third parties, wherever they have access to a lot of your information and data.

First and foremost, make sure you have the appropriate framework established. Then compliance and policy management are extremely important when you move to the cloud and to virtual and containerized frameworks. Today, many companies do their application development in the cloud because it's a lot more dynamic. We recommend that our customers make sure they have the appropriate policy management, assessments, and compliance checks in place both on-premises and for the journey to the cloud.

Learn More About  Cyber Recovery

With Unisys Stealth 

The net of it is, if you are appropriately managed when you are on-premises, chances are as you move from hybrid to more of a cloud-native deployment and/or cloud-native services, you are more likely to get it right. If you don’t have it all in place when you are on-premises, you have an uphill battle in making sure you are secured in the cloud.

Gardner: Mark, are there any related issues around identity and authentication as organizations move from on-premises to outside of their firewall into cloud deployment? What should organizations be thinking about specifically around identity and authentication?

Avoid an identity crisis

McIntyre: This is a huge area of focus right now. Even within our own company, at Microsoft, we as employees operate in essentially an identity-driven security model. And so it’s proper that you call this out on this podcast.

The idea that you can monitor and filter all traffic, and that you are going to make meaningful conclusions from that in real time — while still running your business and pursuing your mission — is not the best use of your time and your resources. It's much better to switch to a more modern, identity-based model where you can actually incorporate newer concepts.

Within Microsoft, we have a term called Modern Workplace. It’s a reflection of the fact that government organizations and enterprises around the world are having to anticipate and hopefully provide a collaborative work environment where people can work in a way that reflects their personal preferences around devices and working at home or on the road at a coffee shop or restaurant — or whatever. The concept of work has changed around enterprise and is definitely forcing this opportunity to look at creating a more modern identity framework.

If you look at some of the initiatives in the US government right now, we hear the term Zero Trust. That includes Zero Trust networking and micro-segmentation. Initiatives like these recognize that we know people need to keep working and doing their jobs wherever they are. The idea is to accept the fact that people will always cause some level of risk to the organization.

We are curious, reasonably smart, well-intentioned people, and we make mistakes, just like anybody else. Let’s create an identity-driven model that allows the organization to get better insight and control over authentications, requests for resources, end-to-end, and throughout a lifecycle.

Gardner: Sudhir, Unisys has been working with a number of public-sector organizations on technologies that support a stronger posture around authentication and other technologies. Tell us about what you have found over the past few years and how that can be applied to these challenges of moving to a cloud like Microsoft Azure.

Mehta: Dana, going back in time, one of the requests we had from the US Department of Defense (DoD), on the networking side, stemmed from a concern about access to sensitive information and data. Unisys was asked by the DoD to develop a framework and implement a solution. They were looking for a micro-segmentation solution, very similar to what Mark just described.

Fast forward: since then we have deployed and released a military-grade capability called Unisys Stealth®, which provides what we classify as key-based, encrypted micro-segmentation. It controls access to hosts and endpoints based on the identity of the user, permits only authorized users to communicate with approved endpoints, denies unauthorized communications, and so prevents the spread of east-west, lateral attacks.
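
To make that concrete, here is a minimal sketch, assuming invented users, hosts, and communities of interest, of how identity-based micro-segmentation decides whether two endpoints may talk. It illustrates the general technique, not Unisys Stealth's implementation.

```python
# Hypothetical sketch of identity-based micro-segmentation.
# The community names and membership data are invented for illustration;
# a real product would derive them from enterprise identity systems.

COMMUNITIES = {
    "finance": {"users": {"alice"}, "hosts": {"erp-db", "payroll-app"}},
    "devops":  {"users": {"bob"},   "hosts": {"build-server", "erp-db"}},
}

def is_authorized(user: str, host: str) -> bool:
    """Allow traffic only when the user and host share a community of interest."""
    return any(
        user in c["users"] and host in c["hosts"]
        for c in COMMUNITIES.values()
    )

def handle_request(user: str, host: str) -> str:
    if is_authorized(user, host):
        return f"establish encrypted tunnel: {user} -> {host}"
    # Deny everything else, which blocks east-west lateral movement.
    return "drop"

print(handle_request("alice", "erp-db"))        # establish encrypted tunnel
print(handle_request("alice", "build-server"))  # drop
```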

Gardner: Mark, for those in our audience who aren’t that technology savvy, what does micro-segmentation mean? Why has it become an important foundational capability for security across a cloud-use environment?

Need-to-know access 

McIntyre: First of all, I want to call out Unisys’s great work here and their leadership in the last several years. It means a Zero-Trust environment can essentially gauge or control east-to-west behavior or activity in a distributed environment.

For example, in a traditional IT environment, devices are not really managed in the modern sense; they are centralized, corporate-issued devices. You can't take them out of the facility, of course, and you don't authenticate once you are on the network because you are already in a physical campus environment. But it's different in a modern, collaborative environment. Enterprises are generally ahead on this change, but it's now coming into government requirements, too.

And so now, you essentially can parse out your subjects and your objects, your subjects trying to access objects. You can split them out and say, "We are going to create all user accounts with a certain set of parameters." It amounts to a privileged, need-to-know model. You can enforce strong controls with a certain set of least-privilege rights. And, of course, in an ideal world, you could go a step further and start implementing biometrics [to authenticate] to get off of password dependencies.

Learn How Unisys Stealth Security 

Simplifies Zero Trust Networks

But number one, you want to verify the identity. Is this a person? Is this the subject who we think they are? Are they that subject based on a corroborating variety of different attributes, behaviors, and activities? Things like that. And then you can also apply the same controls to a device and say, “Okay, this user is using a certain device. Is this device healthy? Is it built to today’s image? Is it patched, clean, and approved to be used in this environment? And if so, to what level?”

And then you can even go a step further and say, “In this model, now that we can verify the access, should this person be able to use our resources through the public Internet and access certain corporate resources? Should we allow an unmanaged device to have a level of access to confidential documents within the company? Maybe that should only be on a managed device.”

So you can create these flexible authentication scenarios based on what you know about the subjects at hand, about the objects, and about the files that they want to access. It’s a much more flexible, modern way to interact.
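
As a rough sketch of that kind of flexible, identity-driven decision, the example below combines hypothetical signals about the user, the device, and the resource into an allow, limited, or deny outcome; the signals and policy thresholds are invented and do not represent any particular vendor's rules.

```python
from dataclasses import dataclass

# Hypothetical Zero Trust access decision: identity, device health, and
# resource sensitivity together decide full, limited, or no access.

@dataclass
class AccessRequest:
    user_verified: bool        # e.g., password plus a second factor or biometrics
    device_managed: bool       # corporate-enrolled device
    device_healthy: bool       # patched and running an approved image
    resource_confidential: bool

def decide(req: AccessRequest) -> str:
    if not req.user_verified:
        return "deny"
    if req.resource_confidential and not (req.device_managed and req.device_healthy):
        # Confidential material only from a managed, healthy device.
        return "deny"
    if not req.device_managed:
        # Unmanaged devices get limited, web-only style access.
        return "limited"
    return "full"

print(decide(AccessRequest(True, True, True, True)))    # full
print(decide(AccessRequest(True, False, True, True)))   # deny
print(decide(AccessRequest(True, False, True, False)))  # limited
```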

Within Azure cloud, Microsoft Azure Active Directory services offer those capabilities – they are just built into the service. So micro-segmentation might sound like a lot of work for your security or identity team, but it’s a great example of a cloud service that runs in the background to help you set up the right rules and then let the service work for you.

Gardner: Sudhir, just to be clear, the Unisys Stealth(cloud) Extended Data Center for Microsoft Azure is a service that you get from the cloud? Or is that something that you would implement on-premises? Are there different models for how you would implement and deploy this?

A stealthy, healthy cloud journey 

Mehta: We have been working with Microsoft over the years on Stealth, and we have a fantastic relationship with Microsoft. If you are a customer going through a cloud journey, we deploy what we call a hybrid Stealth deployment. In other words, we help customers do isolation with the help of communities of interest that we create, which are basically groupings of hosts, users, and resources based on like interests.

Then, when there is a request to communicate, you create the appropriate Stealth-encrypted tunnels. If you have a scenario where an on-premises host needs to communicate with a cloud-based host, you do that through a secure, encrypted tunnel.

We have also implemented what we call cloaking. With cloaking, if someone is not authorized to communicate with a certain host or a certain member of a community of interest, you basically do not give a response back. So cloaking is also part of the Stealth implementation.
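
A toy way to picture cloaking, using invented addresses and with no claim to match the product's internals: an unauthorized request simply gets no reply, so the protected host never reveals that it exists.

```python
import socket

# Hypothetical sketch of "cloaking": respond only to members of the
# community of interest; everyone else gets silence, not a rejection.
AUTHORIZED_SOURCES = {"10.0.1.5", "10.0.1.6"}  # invented addresses

def serve(port: int = 9000) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    while True:
        data, (src_ip, src_port) = sock.recvfrom(4096)
        if src_ip in AUTHORIZED_SOURCES:
            sock.sendto(b"ack: " + data, (src_ip, src_port))
        # Unauthorized sources get no response at all; to a scanner the
        # host looks like it is not there.

# serve()  # uncomment to run the sketch
```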

And in working closely with Microsoft, we have further established an automated capability through a discovery API. So when Microsoft releases new Azure services, we are able to update the overall Stealth protocol and framework with the updated Azure services. For customers who have Azure workloads protected by Stealth, there is no disruption from a productivity standpoint. They can always securely leverage whatever applications they are running on Azure cloud.

The net of it is being able to establish the appropriate secure journey for customers, from on-premises to the cloud, the hybrid journey. For customers leveraging Azure cloud with different workloads, we maintain the appropriate level of secure communications just as they would have in an on-premises deployment.

Gardner: Mark, when does this become readily available? What’s the timeline on how these technologies come together to make a whole greater than the sum of the parts when it comes to hybrid security and authentication?

McIntyre: Microsoft is already offering Zero Trust, identity-based security capabilities through our services. We haven’t traditionally named them as such, although we definitely are working along that path right now.

Microsoft Chief Digital Officer and Executive Vice President Kurt DelBene is on the US Defense Innovation Board and is playing a leadership role in establishing essentially a DoD or US government priority on Zero Trust. In the next several months, we will be putting more clarity around how our partners and customers can better map capabilities that they already own against emerging priorities and requirements like these. So definitely look for that.

In fact, Ignite DC is February 6 and 7, in downtown Washington, DC, and Zero Trust is certainly on the agenda there, so there will be updates at that conference.

But generally speaking, any customer can take the underlying services that we are offering and implement this now. What’s even better, we have companies that are already out there doing this. And we rely greatly on our partners like Unisys to go out and really have those deep architecture conversations with their stakeholders.

Gardner: Sudhir, when people use the combined solution of Microsoft Azure and Stealth for cloud, how can they react to attacks that may get through to prevent damage from spreading?

Contain contagion quickly 

Mehta: Good question! Internally within Unisys’s own IT organization, we have already moved on this cloud journey. Stealth is already securing our Azure cloud deployments and we are 95 percent deployed on Azure in terms of internal Unisys applications. So we like to eat our own dog food.

If there is an incident of compromise, we have a capability called dynamic isolation. In a managed security operations center (SOC) situation, we have empowered the SOC to contain a risk very quickly.

We are able to isolate a user and their device within 10 seconds. If you have a situation where someone turns nefarious, intentionally or accidentally, we are able to isolate the user and then implement different thresholds of isolation. If a high threshold level, say 8 out of 10, is breached, we completely isolate that user.

Learn More About  Cyber Recovery

With Unisys Stealth 

If there is a threshold level of 5 or 6, we may still give the user certain levels of access. So within a certain group they would continue to be able to communicate.

Dynamic isolation isolates a user and their device at different threshold levels while the managed SOC goes through its cycles of identifying what really happened, as part of what we would call an advanced response. Unisys is the only solution that can actually isolate a user or device within a span of seconds; we can do it now within 10 seconds.
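
As an illustration only, with invented scores and thresholds rather than Unisys's actual policy, threshold-driven isolation can be sketched like this:

```python
# Hypothetical sketch of threshold-based dynamic isolation.
# Risk scores, thresholds, and actions are invented for illustration.

FULL_ISOLATION_THRESHOLD = 8      # out of 10: cut the user and device off entirely
PARTIAL_ISOLATION_THRESHOLD = 5   # out of 10: restrict to a small group

def isolation_action(risk_score: int) -> str:
    """Map a 0-10 risk score from the SOC to an isolation decision."""
    if risk_score >= FULL_ISOLATION_THRESHOLD:
        return "isolate user and device completely"
    if risk_score >= PARTIAL_ISOLATION_THRESHOLD:
        return "restrict communication to the user's own community of interest"
    return "allow normal access, keep monitoring"

for score in (9, 6, 2):
    print(score, "->", isolation_action(score))
```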

McIntyre: Getting back to your question about Microsoft’s plans, I’m very happy to share how we’ve managed Zero Trust. Essentially it relies on Intune for device management and Azure Active Directory for identity. It’s the way that we right now internally manage our own employees.

My access to corporate resources can come via my personal device or my work-issued device. I'm very happy with what Unisys already has available and what we have out there. It's a really strong reference architecture that's already generally available.

Gardner: Our discussion began with security for the US DoD, among the largest enterprises you could conceive of. But I’m wondering if this is something that goes down market as well, to small- to medium-sized businesses (SMBs) that are using Azure and/or are moving from an on-premises model.

Do Zero Trust and your services apply to the mom and pop shops, SMBs, and the largest enterprises?

All sizes of businesses

McIntyre: Yes, this is something that would be ideally available for an SMB because they likely do not have large logistical or infrastructure dependencies. They are probably more flexible in how they can implement solutions. It’s a great way to go into the cloud and a great way for them to save money upfront over traditional IT infrastructure. So SMBs should have a really good chance to literally, natively take an idea like this and implement it.

Gardner: Sudhir, anything to offer on that in terms of the technology and how it’s applicable both up and down market?

Mehta: Mark is spot on. Unisys Stealth resonates really well for SMBs and the enterprise. SMBs benefit, as Mark mentioned, in their capability to move quickly. And with Stealth, we have an innovative capability that can discover and visualize your users. Thereafter, you can very quickly and automatically virtualize any network into the communities of interest I mentioned earlier. SMBs can get going within a day or two.

If you’re a large enterprise, you can define your journey — whether it’s from on-premises to cloud — depending on what you’re actually trying to migrate or run in the cloud. So I would say absolutely both. And it would also depend on what you’re really looking at managing and deploying, but the opportunities are there for both SMBs and enterprises.

Gardner: As companies large and small are evaluating this and trying to discern their interest, let’s look at some of the benefits. As you pointed out, Sudhir, you’re eating your own dog food at Unisys. And Mark has described how this is also being used internally at Microsoft as well.

Do you have ways that you can look at before and after, and measure quantitatively, qualitatively, maybe anecdotally, why this has been beneficial? It's always hard in security to prove something that didn't happen and why it didn't happen. But what do you get when you do Stealth well?

Proof is in the protection 

Mehta: There are a couple of things, Dana. One is that there is certainly a reduction in cost. When we deploy for 20,000 Unisys employees, our Chief Information Security Officer (CISO) obviously has to be a big supporter of Stealth. His read, from a cost perspective, is that we have seen significant reductions in costs.

Prior to having Stealth implemented, we had a certain approach to network segmentation. From a network equipment perspective, we've seen a reduction of over 70 percent. If you look at server infrastructure, there has been a reduction of more than 50 percent. Maintenance and labor costs have seen a reduction north of 60 percent, and ongoing support labor cost has also seen a significant reduction. So that's one lens you could look at.

The other lens that has been interesting is the virtual private network (VPN) exposure. As many of us know, VPNs are perhaps the best breach route for hackers today. When we’ve implemented Stealth internally within Unisys, for a lot of our applications we have done away with the requirement for logging into a VPN application. That has made for easier access to a lot of applications – mainly for folks logging in from home or from a Starbucks. Now when they communicate, it is through an encrypted tunnel and it’s very secure. The VPN exposure completely goes away.

Those are the best two lenses I could give to the value proposition. Obviously there is cost reduction. And the other is the VPN exposure goes away, at least for Unisys that’s what we’ve found with implementing internally.

Gardner: For those using VPNs, should they move to something like Stealth? Does the way in which VPNs add value change when you bring something like Stealth in? How much do you reevaluate your use of VPNs in general?

Mehta: I would be remiss if I said you can completely do away with VPNs. If you go back in time and see why VPNs were created, the overall framework was created for secure access to certain applications. Since then, for whatever reasons, VPNs became the only way people communicate when working from home, for example. So the way we look at this is, for applications that are not limited to an extremely small group of people, you should look at options wherein you don't necessarily need a VPN. You could then look at a solution like Unisys Stealth.

And then if there are certain applications that are extremely sensitive, limited to only a few folks for whatever reason, that’s where potentially you could consider using an application like a VPN.

Gardner: Let’s look to the future. When you put these Zero Trust services into practice, into a hybrid cloud, then ultimately a fully cloud-native environment, what’s the next shoe to fall? Are there some things you gain when you enter into this level of micro-segmentation, by exploiting these newer technologies?

Can this value be extended to the edge, for example? Does it have a role in Internet of things (IoT)? A role in data transfers from organization to organization? What does this put us in a position to do in the future that we couldn’t have done previously?

Machining the future securely 

McIntyre: You hit on two really important points: devices, IoT devices for example, and data. On data, you see the T-shirts and slogans out there, "Data is the new oil," and such. From a security point of view there is no question this is becoming the case, with something like 44 to 45 zettabytes of data projected to be out there over the next few years.

You can employ traditional security monitoring practices, for example label-free detection, things like that. But it’s just not going to allow you to work quickly, especially in an environment where we’re already challenged with having enough security workforce. There are not enough people out there, it’s a global talent shortage.

It's a fantastic opportunity forced on us to rely more on modern authentication frameworks and on machine learning (ML) and artificial intelligence (AI) technologies to take a lot of that lower-level analysis, the log analysis work, out of human hands and let machines free people up for the higher-level work.

For example, we have a really interesting situation within Microsoft, and it goes across the industry as well. We have many organizations going into the cloud, but of course, as we mentioned earlier, the roles and responsibilities are still unclear. We're also seeing big gaps between the use of cloud resources and the security tools built into those resources.

And so we're really trying to make sure that as we deliver new services to the marketplace, for example, IoT, those are built in a way that you can configure and monitor them like any other device in the company. With Azure, for example, we have IoT Hub. We can literally, as you build an IoT device, make sure that it is being monitored in the same way as your traditional infrastructure.

There should not be a gap there. You can still apply the same types of logical access controls around them. There shouldn't be any tradeoffs on security for how you do security — whether it's IT or IoT.

Gardner: Sudhir, same question: what does the use of Stealth in conjunction with cloud activities get you in the future?

Mehta: Tagging on to what Mark said, AI and ML are becoming interesting. We obviously have a very big digital workplace solutions organization. We are a market leader for services, for helpdesk services. We are looking at the introduction of a lot of what you would call AIOps in automation, as it leads to robotic process automation (RPA) and voice assistance.

One of the things we are observing is that, as you go down this AI and ML path, there is a larger exposure, because you are focusing more on operationalizing automation and AI-ML, and there are areas you may not be able to manage, for instance, the way the training is done for your bots.

So that's where Stealth is a capability we are implementing right now with digital workplace solutions, as part of a journey toward AIOps automation, as an example. The other area we are working on very closely with some of our other partners, as well as Microsoft, is application security and hardening in the cloud.

How do you make sure that when you deploy certain applications in the cloud they are secure and are not being breached, and that there are no intrusions when you make changes to your applications?

Those are two areas we are currently working on, the AIOps and MLOps automation and then the application security and hardening in the cloud, working with Microsoft as well.

Gardner: If I want to be as secure as I can, and I know that I’m going to be doing more in the cloud, what should I be doing now in order to make myself in the best position to take advantage of things like micro-segmentation and the technologies behind Stealth and how they apply to a cloud like Azure? How should I get myself ready to take advantage of these things?

Plan ahead to secure success 

McIntyre: The first thing is to remember that how you plan and roll out your security estate should be no different from your larger IT planning; it's all digital transformation. So close that gap between the security teams and the rest of the organization. All the teams, business and IT, should be working together.

Learn How Unisys Stealth Security 

Simplifies Zero Trust Networks

We want to make sure that our customers go to the cloud in a secure way, without losing the ability to access their data. We continue to put more effort into very proactive services — architecture guidance, recommendations, things that can help people get started in the cloud. One example is Azure Blueprints: configuration guidance and predefined templates that can help an organization launch a resource in the cloud that's already compliant with FedRAMP, NIST, ISO, or HIPAA standards.
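
To illustrate the compliance-as-code idea in general terms, here is a generic sketch of a pre-deployment gate; the rules and resource fields are hypothetical and are not the Azure Blueprints format.

```python
# Generic, hypothetical compliance-as-code gate; the rules and resource
# fields are invented and do not correspond to any real template schema.

BASELINE_RULES = {
    "encryption_at_rest": True,
    "public_network_access": False,
    "diagnostic_logging": True,
}

def violations(resource: dict) -> list:
    """Return the baseline rules a proposed resource definition breaks."""
    return [
        rule for rule, required in BASELINE_RULES.items()
        if resource.get(rule) != required
    ]

proposed = {
    "name": "example-storage",       # hypothetical resource
    "encryption_at_rest": True,
    "public_network_access": True,   # violates the baseline
    "diagnostic_logging": True,
}

problems = violations(proposed)
if problems:
    print("blocked before deployment:", problems)
else:
    print("compliant, safe to deploy")
```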

We’ll continue to invest in the technologies that help customers securely deploy technologies or cloud resources from the get-go so that we close those gaps and configuration and close the gaps in reporting and telemetry as well. And we can’t do it without great partners that provide those customized solutions for each sector.

Gardner: Sudhir, last word to you. What’s your advice for people to prepare themselves to be ready to take advantage of things like Stealth?

Mehta: Look at a couple of things. One, focus on trusted identity in terms of who you work with and who you give access to. Even within your organization, you obviously need to make sure you establish that trusted identity, and the way you do it is to make sure it is simple. Second, look at an overlay, network-agnostic framework, which is where Stealth can help you. Make sure it is unique: one individual has one identity. Third, make sure it is irrefutable, so it's undeniable in terms of how you implement it. And then fourth, make sure it has the highest level of efficacy, both in how you deploy and in the way you architect your solution.

So, the net of it is, a) trust no one, b) assume a breach can occur, and then c) respond really fast to limit damage. If you do these three things, you can get to Zero Trust for your organization.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsors: Unisys and Microsoft.


SambaSafety’s mission to reduce risk begins in its own datacenter security partnerships

Security and privacy protection increasingly go hand in hand, especially in sensitive industries like finance and public safety.

For driver risk management software provider SambaSafety, protecting their business customers from risk is core to their mission — and that begins with protection of their own IT assets and workers.

Stay with us now as BriefingsDirect explores how SambaSafety adopted Bitdefender GravityZone Advanced Business Security and Full Disk Encryption to improve the end-to-end security of their operations and business processes.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To share their story, please welcome Randy Whitten, Director of IT and Operations at SambaSafety in Albuquerque, New Mexico. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Randy, tell us about SambaSafety, how big it is, and your unique business approach.

Whitten: SambaSafety currently employs approximately 280 employees across the United States. We have four locations. Corporate headquarters is in Denver, Colorado. Albuquerque, New Mexico is another one of our locations. There’s Rancho Cordova just outside of Sacramento, California, and Portland, Oregon is where our transportation division is.

We also have a handful of remote workers from coast to coast and from border to border.

Gardner: And you are all about making communities safer. Tell us how you do that.

Whitten: We work with departments of motor vehicles (DMVs) across the United States, monitoring the drivers for companies. We put a partnership together with state governments, and third-party information is provided to allow us to process reporting for critical driver information.

We seek to transform that data into action to protect the businesses and our customers from driver and mobility risk. We work to incorporate top-of-the-line security software to ensure that all of our data is protected while we are doing that.

Data-driven driver safety 

Gardner: So, it’s all about getting access to data, recognizing where risks might emerge with certain drivers, and then alerting those people who are looking to hire those drivers to make sure that the right drivers are in the right positions. Is that correct?

Whitten: That is correct. Since 1998, SambaSafety has been the pioneer and leading provider of driver risk management software in North America. SambaSafety has led the charge to protect businesses and improve driver safety, ultimately making communities safer on the road.

Our mission is to guide our customers, including employers, fleet managers, and insurance providers to make the right decisions at the right time by collecting, correlating and analyzing motor vehicle records (MVRs) and other data resources. We identify driver risk and enable our customers to modify their drivers’ behaviors, reduce the accidents, ensure compliance, and assist with lowering the cost, ultimately improving the driver and the community safety once again.

Gardner: Is this for a cross-section of different customers? You do this for public sector and private sector? Who are the people that need this information most?

Whitten: We do it across both sectors, public and private. We do it across transportation. We do it across drivers such as Lyft drivers, Uber drivers, and transportation drivers — our delivery carriers, FedEx, UPS, etc. — those types of customers.

Gardner: This is such an essential service, because so much of our economy is on four wheels, whether it’s a truck delivering goods and services, transportation directly for people, and public safety vehicles. A huge portion of our economy is behind the wheel, so I think this is a hugely important service you are providing.

Whitten: That's a good point, Dana. Yes, very much. Transportation drivers are delivering our commodities every day — the food that we consume, the clothes that we wear, the parts that fix our vehicles, plus the Christmas packages via UPS or FedEx — the items essential to our everyday living.

Gardner: So, this is mission-critical on a macro scale. Now, you also are dealing, of course, with sensitive information. You have to protect the privacy. People are entitled to information that's regulated, monitored, and provided accordingly. So you have to be, across the board, reducing risk and doing it the right way, and you also have to keep your own systems protected because you have that sensitive information going back and forth. Security and privacy are probably among your topmost mission-critical requirements.

Securing the sectors everywhere

Whitten: That is correct. SambaSafety has a SOC 2 Type II compliant certification. It actually is just the top layer of security we are using within our company, either for our endpoints or for our external customers.

Gardner: Randy, you described your organization as distributed. You have multiple offices, remote workers, and you are dealing with sensitive private and public sector information. Tell us what your top line thinking, your philosophy, about security is and then how you execute on that.

Whitten: Our top line essentially is to make sure that our endpoints are protected and that we are taking care of our employees internally, setting them up for success so they don't have to worry about security. All of our laptops are encrypted. We have different levels of security within our organization, which gives all of our employees comfort so that they can concentrate on taking care of our end customer.

Gardner: That’s right, security isn’t just a matter of being very aggressive, it also means employee experience. You have to give your people the opportunity to get their work done without hindrance — and the performance of their machine, of course, is a big part of that.

Tell us about the pain points, what were the problems you were having in the past that led you into a new provider when it comes to security software?

Whitten: Some of the things that we had to deal with within the IT department here at SambaSafety showed up in our tickets. They were typically about memory usage, applications locking up the computers, and how many resources it took just to launch an application.

We were also seeing threats getting through the previous antivirus solution, and the cost of that solution was increasing month over month. Every time we would add a new license, it seemed like the price point would jump.

Gardner: I imagine you weren’t seeing them as a partner as much as a hindrance.

Whitten: Yes, that is correct. It started to seem like it was a monthly call, then it turned into a weekly call to their support center just to be able to see if we could get additional support and help from them. So that brought up, “Okay, what do we do next and what is our next solution going to look like?”

Gardner: Tell me about that process. What did you look at, and how did you make your choices?

Whitten: We did an overall scoping session and brought in three different antivirus solution providers. It just so happened that any of them could have measured up to be the next vendor we were going to work with. Bitdefender came out on top: it was a solution we could put into our cloud-hosted environment, it worked on our endpoints, and it ensured that all of our employees are protected.

Gardner: So you are using GravityZone Advanced Business Security, Full Disk Encryption, and the Cloud Management Console, all from Bitdefender, is that correct?

Whitten: That is correct. The previous solution for our disk encryption is just about exhausted. Currently about 90 percent of our endpoints are on Bitdefender for disk encryption, and we have had zero issues with it.

Gardner: I have to imagine you are not just protecting your endpoints, but you have servers and networks, and other infrastructure to protect. What does that consist of and how has that been going?

Whitten: That is correct. We have approximately 280 employees, which equals 280 laptops to be protected. We also have a fair amount of additional hardware that has to be protected; those endpoints have to be secured. And about 30 percent of that additional hardware, i.e., the Macs within our organization, is also part of that Bitdefender protection.

Gardner: And everyone knows, of course, that management of operations is essential for making sure that nothing falls through the cracks — and that includes patch management, making sure that you see what's going on with machines, and getting alerts as to what might be your vulnerability.

So tell us about the management, the Cloud Console, particularly as you are trying to do this across a hybrid environment with multiple sites?

See what’s secure to ensure success 

Whitten: The console has been vital to the success of Bitdefender for us: we can log on and see what's happening. It has been key to that success; I can't say that enough.

And it goes as far as information gathering, dashboard, data analytics, network scanning, and the vulnerability management – just being able to ensure our assets are protected has been key.

Also, we can watch the alerting, which draws on machine intelligence and machine learning (ML) to ensure that behavior is not changing, so that our systems do not get infected in any way.

Gardner: And the more administration and automation you get, the more you are able to devote your IT operations people to other assets, other functions. Have you been able to recognize, not only an improvement in security, but perhaps an easing up on the man hours and labor requirements?

Whitten: Sure. Within the first 60 days of our implementation, I was able to show a return on investment (ROI) quickly. We were able to free up team resources to focus on other tickets and other items that came into our department's work scope.

Bitdefender was already out there, and it was managing itself, it was doing what we were paying for it to do — and it was actually a really good choice for us. The partnership with them is very solid, we are very pleased with it, it is a win-win situation for both of our companies.

Gardner: Randy, I have had people ask me, “Why do I need Full Disk Encryption? What does that provide for me? I am having a hard time deciding whether it’s the right thing for our organization.”

What were your requirements for widespread encryption and why do you think that’s a good idea for other organizations?

Whitten: The most common reason to have Full Disk Encryption is that you are at the store, someone breaks into your car, steals your laptop bag, or sees your computer lying out and takes it. As the Director of IT and Operations for SambaSafety, my goal is to ensure that our assets are protected. So having Full Disk Encryption on board that laptop gives me a chance to sleep a little easier at night.

Gardner: You are not worried about that data leaving the organization because you know it’s got that encryption wrapper.

Whitten: That is correct. It’s protected all the way around.

Gardner: As we start to close out, let’s look to the future. What’s most important for you going forward? What would you like to see improve in terms of security, intelligence and being able to monitor your privacy and your security requirements?

Scope out security needs

Whitten: The big trend right now is to ensure that we are staying up to date and Bitdefender is staying up to date on the latest intrusions so that our software is staying current and we are pushing that out to our machines.

Also just continue to be right on top of the security game. We have enjoyed our partnership with Bitdefender to date and we can’t complain, and for sure it has been a win-win situation all the way around.

Gardner: Any advice for folks that are out there, IT operators like yourself that are grappling with increased requirements? More people are seeing compliance issues, audit issues, paperwork and bureaucracy. Any advice for them in terms of getting the best of all worlds, which is better security and better operations oversight management?

Whitten: Definitely have a good scope of what you are looking for, for your organization. Every organization is different. What tends to happen is that you go in looking for a solution and you don't have all of the details that would meet the needs of your organization.

Secondly, get the buy-in from your leadership team. Pitch the case to ensure that you are doing the right thing, that you are bringing the right vendor to the table, so that once that solution is implemented, then they can rest easy as well.

For every company executive across the world right now that has any responsibility for data, security is definitely top of mind. Security is at the top of my mind every single day: protecting our customers, protecting our employees, making sure that our data stays protected and secured so that the bad guys can't have it.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Bitdefender.


Why flexible work and the right technology may just close the talent gap

Companies struggle to find qualified workers in the mature phase of any business cycle. Yet as we enter a new decade in 2020, they have more than a hyper-low unemployment rate to grapple with.

Businesses face a gaping qualitative chasm between the jobs they need to fill and the interest of workers in filling them. As a result, employees have more leverage than ever to insist that jobs cater to their lives, locations, and demands to be creatively challenged.

Accordingly, IDC predicts that by 2021, 60 percent of Global 2000 companies will have adopted a future workspace model — flexible, intelligent, collaborative, virtual, and physical work environments.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Stay with us now as BriefingsDirect explores how businesses must adapt to this new talent landscape and find the innovative means to bring future work and workers together. Our flexible work solutions panel consists of Stephane Kasriel, the former Chief Executive Officer and a member of the board at Upwork, and Tim Minahan, Executive Vice President of Strategy and Chief Marketing Officer at Citrix. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: If flexible work is the next big thing, that means we have been working for the past decade or two in an inflexible manner. What’s wrong with the cubicle-laced office building and the big parking lot next to the freeway model?

Minahan: Dana, the problem dates back a little further. We fundamentally haven’t changed the world of work since Henry Ford. That was the model where we built big work hubs, big office buildings, call centers, manufacturing facilities — and then did our best to hire as much talent around that.

This model just isn’t working anymore against the backdrop of a global talent shortage, which is fast approaching more than 85 million medium- to high-skilled workers. We are in dire need of more modern skill sets that aren’t always located near the work hubs. And to your earlier point, employees are now in the driver’s seat. They want to work in an environment that gives them flexible work and allows them to do their very best work wherever and whenever they want to get it done.

Gardner: Stephane, when it comes to flexible work, are remote work and freelance work the same? How wide is this spectrum of options when it comes to flexible work?

Kasriel: Almost by definition, most freelance work is done remotely. At this stage, freelancing is growing faster than traditional work, about three times faster, in fact. About 35 percent of US workers are doing some amount of freelancing. And the vast majority of it is skilled work, which is typically done remotely.

Increasingly what we see is that freelancers become full-time freelancers; meaning it’s their primary source of income. Usually, as a result of that, they tend to move. And when they move it is out of big cities like San Francisco and New York. They tend to move to smaller cities where the cost of living is more affordable. And so that’s true for the freelance workforce, if you will, and that’s pulling the rest of the workforce with it.

What we see increasingly is that companies are struggling to find talent in the top cities where the jobs have been created. Because they already use freelancers anyway, they are also allowing their full-time employees to relocate to other parts of the country, as well as to hire people away from their headquarters, people who essentially work from home as full-time employees, remotely.

Gardner: Tim, it sounds like Upwork and its focus on freelance might be a harbinger of what’s required to be a full-fledged, flexible work support organization. How do you view freelancing? Is this the tip of the arrow for where we are headed?

Minahan: Against the backdrop of a global talent shortage and an outdated, hub-and-spoke work model, the more innovative companies — the ones securing the best talent — go to where the talent is, whether using contingent or full-time workers.

They are also shifting from the idea of having a full-time employee staff to having pools of talent. These are groups that have the skills and capabilities to address a specific business challenge. They will staff up on a given project.

Read the Report: The Potential Economic

Impacts of a Flexible Working Culture

So, work is becoming much more dynamic. The leading organizations are tapping into that expertise and talent on an as-needed basis, providing them an environment to collaborate around that project, and then dissolving those teams or moving that talent on to other projects once the mission is accomplished.

Gardner: So, it’s about agility and innovation, being able to adapt to whatever happens. That sounds a lot like what digital business transformation is about. Do you see flexible work as supporting the whole digital transformation drive, too?

Minahan: Yes, I certainly do. In fact, what’s interesting is the first move to digital transformation was a shift to transforming customer experience, of creating new ways and new digital channels to engage with customers. It meant looking at existing product lines and digitizing them.

And along the way, companies realized two things. Number one, they needed different skills than they had internally. So the idea of the contingent worker or freelance worker who has that specific expertise becomes increasingly vital.

They also realized they had been asking employees to drive this digital transformation while anchoring them to archaic or legacy technology and a lot of bureaucracy that often comes with traditional work models.

And so there is now an increased focus at the executive C-suite level on driving employee experience and giving employees the right tools, the right work environment, and the flexible work models they need to ensure that they not only secure the best talent, but they can arm them to do their very best work.

Gardner: Stephane, for the freelance workforce, how have they been at adapting to the technologies required to do what corporations need for digital transformation? How does the technology factor into how a freelancer works and how a company can best take advantage of them?

Kasriel: Fundamentally, a talent strategy is a critical part of digital transformation. If you think about digital transformation, it is the what, and the talent strategy is the how. And increasingly, as Tim was saying, as businesses need to move faster, they realize that they don’t have all the skills internally that they need to do digital transformation.

They have to tap into a pool of workers outside of the corporation. And doing this in the traditional way, using staffing firms or trying to find local people that can come in part-time, is extremely inefficient, incredibly slow, and incompatible with the level of agility that companies need to have.

So just as there was a digital transformation of the business firm, there is now also a digital transformation of the talent strategy for the firm. Essentially work is moving from an offline model to an online model. The technology helps with security, collaboration, and matching supply and demand for labor online in real-time, particularly for niche skills in short-duration projects.

Increasingly companies are reassembling themselves away from the traditional Taylorism model of silos, org charts, and people doing the same work every single day. They are changing to much more self-assembled, cross-functional, agile, and team-based work. In that environment, the teams are empowered to figure out what it is that they need to do and what type of talent they need in order to achieve it. That’s when they pull in freelancers through platforms such as Upwork to add skills they don’t have internally — because nobody has those internally.

And on the freelancer side, freelancers are entrepreneurs. They are usually very good at understanding what skills are in demand and acquiring those skills. They tend to train themselves much more frequently than traditional full-time employees because there is a very logical return on investment (ROI) for them to do so.

If I learned the latest Java framework in a few weeks, for example, I can then bill at a much higher rate than I otherwise could if I didn't have those skills.

Gardner: Stephane, how does Upwork help solve this problem? What is your value-add?

Upwork secures hiring, builds trust 

Kasriel: We essentially provide three pieces of value-add. One is a very large database of freelancers on one side and a very large database of clients and jobs on the other side. With that scale comes the ability to have high liquidity. The median time to fill a job on Upwork right now is less than 24 hours, compared to multiple weeks in the offline world. That’s one big piece of it.

The second is around an end-to-end workflow and processes to make it easy for large companies to engage with independent contractors, freelancers, and consultants. Companies want to make sure that these workers don’t get misclassified, that they only have access to IT systems they are supposed to, that they have signed the right level of agreements with the company, and that they have been background checked or whatever other processes that the company needs.

Read the Report: The Potential Economic Impacts of a Flexible Working Culture

The third big piece is around trust and safety. Fundamentally, freelancers want to know that they are going to be working with reputable clients and that they are going to get paid. Conversely, companies are engaging with freelancers for things that might be highly strategic, have intellectual property associated with them, and so they want to make sure that the work is going to be done properly and that the freelancer is not going to be selling information from the company, as an example.

So, the three pieces around matching, collaboration and security software, and trust and safety are the things that large companies are using Upwork for to meet the needs of their hiring managers.

Fundamentally, we want to be invisible. We want the platform to look simple so that people can get things done with freelancers — and not have to think about all of the complexities of staying compliant with the various rules that large companies have as they relate to engaging with people in general, and with independent contractors in particular.

Mind the gap in talent, skills 

Gardner: Tim, a new study has been conducted by the Center for Business and Economic Research on these subjects. What are some of the findings?

Minahan: At Citrix, we are committed to helping companies drive higher levels of employee experience, using technology to create environments that allow much more flexible work models and empower employees to get their very best work done. So we are always examining the direction of overall work models in the market, and we partnered with the Center for Business and Economic Research to better understand how to solve this massive talent crisis.

Consider that there is a gap of close to 90 million medium- to high-skilled workers around the globe — that many unfilled jobs. There are a couple of ways to solve this. The best way is to expand the talent pool. So, as Stephane said, that can be through tapping into freelance marketplaces, such as Upwork, to find a curated path to the top talent, those who have the finest skills to help drive digital transformation.

But we can couple that with digital workspaces that allow flexible work models by giving the talent access to the tools and information they need to be productive and to collaborate. They can do that in a secure environment that leaves the company confident their information and systems remain secure.

The key findings of the Center for Business and Economic Research study are that we have an untapped market. Some 69 percent of people who currently are unemployed or economically inactive indicate that they would start working if given more flexible work models and the technology to enable them to work remotely.

Think about the massive shifts in the demographics of the workplace. We talk about millennials coming into the workforce, and new work models, and all of that’s interesting and important. But we have a massive other group of workers at the other end of the spectrum — the baby boomers — who have massive amounts of talent and knowledge and who are beginning to retire.

What if we could re-employ them on their own terms? Maybe a few days a week or a few hours a day, to contribute some of their expertise that is much needed to fill some of the skills gaps that companies have?

We are in a unique position right now and have an incredible opportunity to embrace these new work models, these new freelance marketplaces, and the technology to solve the talent gap.

Kasriel: We run a study every year called Freelancing in America; we have been running it for six years now. One of the highlights of the study is that 46 percent, so almost half of freelancers, say that they cannot take a traditional full-time job. That is primarily driven by health issues, by care duties, or by the fact that they live in a part of the US where there are no jobs for their skills. They tend to be more skilled and educated on average than non-freelancers, and they tend to be completely undercounted in the Bureau of Labor Statistics data every month.

So when we talk about no unemployment in the country, and when we talk about the skills gap, there is this other pool of talent that tends to be very resilient, really hardworking, and highly skilled — but who cannot commit to a traditional full-time job that requires them to be on-site.

Read the Report: The Potential Economic Impacts of a Flexible Working Culture

So, yes, there is a skills gap overall. If you look at the macro numbers, that is true. But at the micro level, at the individual firm level, it's much more of a gap of flexibility — and a gap of imagination — than anything else. Firms are competing for the same talent in the same way and then wondering why they are struggling to attract new fresh talent and improve their diversity.

I tell them to go online and look at the talent available there. You will find a wealth of people who are extremely eager to work for you. In fact, they are probably going to be much more loyal to your company than anybody else, because you are by far the best employer they could work for.

Gardner: To be clear, this is not North America or the US only. I have seen similar studies and statistics coming out of Europe and Japan. They differ from market to market, but it’s all about trying to solve the mismatch between employers and available potential talent.

Tim, people have been working remotely for quite a while now. Why is this no longer just an option, but a necessity, when it comes to flexible and remote work?

Minahan: It’s the market dynamics we have been talking about. Companies struggle to find the talent they need at scale in the locations where they traditionally have major office hubs. Out of necessity, to advance their business and access the skills they need, they must embrace more flexible work models. They need to be looking for talent in nontraditional ways, such as making freelance workers part of their regular talent strategies, and not an adjunct for when someone is out on sick leave.

And it’s really accelerating quite dramatically. We talk a lot about that talent crunch, but in addition, it’s also a skills gap. As Stephane was saying, so many of these freelance workers have the much-in-demand skills that people need.

Think about the innovators in the industry, folks like Amazon, who recently said, "Hey, we can't find all of the talent we need with the skills that we need, so we are going to spend close to $1 billion to retrain a third of our workforce."

They are expanding their talent pool. That’s what innovative companies are beginning to do. They are saying: “Okay, we have these constraints. What can we do, how can we work differently, how can we embrace technology differently, and how can we look at the workforce differently in order to expand our talent pool?”

Gardner: If you seek out the best technology to make that flexible workforce innovative, collaborative, and secure, are there other economic paybacks? If you do it right, can it also put money to the bottom line? What is the economic impact?

More remote workers, more revenue

Minahan: From the study that we did around remote workers and tapping into the untapped talent pool, the research found that this could equate to more than $2 trillion in added value per year — or a 10 percent boost to the US GDP. It’s because otherwise businesses are not able to deliver services because they don’t have the talent.

On a micro level, at an individual business level, when workers are engaged in these more flexible work models they are less stressed. They are far more productive. They have more time for doing meaningful work. As a result, companies that embrace these work models are seeing far higher revenue growth, sometimes upward of 2.5 times, along with far higher profitability and far greater worker retention than their peers.

Kasriel: It’s also important to remember that the average American worker spends more time commuting to work than on vacation in a given year. Imagine if all of that time could be reused to be productive at work, spend another couple of hours every day doing work for the company, or doing other things in their lives so they could consume more goods and services, which would drive economic growth.

Right now the amount of waste coming from companies requiring that their workers commute to work is probably the biggest amount of waste that companies are creating in the economy. By the way, it also causes income inequality, congestion, and pollution. So there are countless negative externalities that nobody is even taking into account. Yet the waste of time by forcing workers to commute to work is increasing every year when it doesn't need to be.

Some 20 years ago, when people were talking about remote work, it felt challenging from a cultural standpoint. We were all used to working face-to-face. It was challenging from a technological standpoint. We didn’t have broadband, secure application environments such as Citrix, and video conferencing. The tools were not in the cloud. A lot of things made it challenging to work remotely — but now that cultural barrier is not nearly as big.

We are all more or less digital natives; we all use these tools. Frankly, even when you are two floors away in the same building, how many times do you take the elevator to go down and meet somebody face-to-face versus chat with them or do a video conference with them?

At this stage, whether you are two floors away or 200 miles away makes almost no difference whatsoever. Where it does make a difference is in forcing people to come to work every single day, when that adds a huge amount of constraint to their lives and is fundamentally not productive for the economy.

Minahan: Building on what Stephane said, the study we did found that, in addition to unlocking that untapped pool of talent, 95 percent of those who currently have full-time jobs said they would work from home at least twice a week if given the opportunity. To Stephane's point, for that group alone the time saved from commuting adds up to 105 hours of newly free time per year, time they would no longer have to spend commuting and doing unproductive things. Most of them said that they would put more hours into work because they didn't have to deal with all the hassle of getting there.

Flexible work provides creativity 

Gardner: What about the quality of the work? It seems to me that creative work happens in its own ways, even in a state of leisure. I have to tell you some of the best creative thoughts I have occur when I’m in the shower. I don’t know why. So maybe creativity isn’t locked into a 9-to-5 definition.

Is there something in what we're talking about that caters to the way the human brain works? As we get into the age of robotic process automation (RPA), should we look more to the way that people are intrinsically creative and free that up?

Kasriel: Yes, the World Economic Forum has called attention to such changes in our evolution, the idea that progressively machines are going to be taking over the parts of our jobs that they can do better than we can. This frees us to be the best of ourselves, to be humans. The repetitive, non-cognitive work being done in a lot of offices is progressively going to be automated through RPA and artificial intelligence (AI). That allows us to spend more time on the creative work. The nature of creative work is such that you can’t order it on-demand, you can’t say, “Be creative in the next five minutes.”

It comes when it comes. It's the inspiration that comes. So putting in artificial boundaries and saying, "You will be creative from 9-to-5, and you will only do this in the office environment," is unlikely to be successful. Frankly, if you look at workplace management, you see companies increasingly trying to design work environments that are a mix between areas of the office where you can be very productive — by just doing the things that you need to do — and places where you can be creative and think.

And that’s just a band-aid solution. The real solution is to let people work from anywhere and let them figure out the time at which they are the most creative and productive. Hold people accountable for an outcome, as opposed to holding them accountable for the number of fixed-time hours they are giving to the firm. It is, after all, very weakly correlated to the amount of output, of what they actually generate for the company.

Minahan: I fully agree. If you look at the overall productivity and the GDP, productivity advanced consistently with each new massive innovation right up until recently. The advent of mobile devices, mobile apps, and all of the distractions from communications and chat channels that we have at work have reached a crescendo.

Read the Report: The Potential Economic Impacts of a Flexible Working Culture

On any given day, a typical employee spends nearly 65 percent of their time on such busy work. That means responding to Slack messages, being distracted by application alerts about tasks that may not be pertinent to their jobs, and spending another 20 percent of their time just searching for information. By some estimates, all of this leaves employees with less than two hours a day for the meaningful and rewarding work that they were hired to do.

If we can free them up from those distractions and give them an environment to work where and how they want, one of the chief benefits is the capability to drive greater innovation and creativity than they can in an interruptive office environment.

Gardner: We have been talking in general terms. Do we have any concrete examples, use cases perhaps, that illustrate what we have been driving at? Why is it good for business and also for workers?

Blended workforce wins 

Kasriel: If you look at tech companies created in the last 15 to 20 years, increasingly you see what people call remote-first companies, which try to hire people outside of their main headquarters first and only put people in the office if they happen to live nearby. And that leads to a blended workforce, a mix between full-time employees and freelancers.

The most visible such companies started in open-source software development. Look at Mozilla, the non-profit behind Firefox; the Wikimedia Foundation, the non-profit behind Wikipedia; Automattic, the for-profit open-source company that builds WordPress; or GitLab. And at Upwork, we ourselves are mostly distributed, with 2,000 people working in 800 different cities. InVision would be another example.

So these are very well-known tech companies that build products used by hundreds of millions of people; WordPress alone powers a large share of the web. These companies tend to have well over 100,000 workers between full-time employees and freelancers, and they either have no office or most of their people are not working in an office.

The companies that are a little bit more challenging are the ones that have grown in a world where everybody was a full-time employee. Everybody was on-site. But progressively they have made a shift to more flexible work models.

Probably the company I've seen be the most publicly vocal about this is Microsoft. Microsoft started using Upwork a few years ago. At this stage, they have thousands of different freelancers working on thousands of different projects. Partly they do it because they struggle to find great talent in Redmond, Wash., just like everybody else; there is a finite talent pool. But partly they are doing it because it's the right thing to do.

Increasingly we hear companies say, "We can do well, and we can do good at the same time." That means helping people who may be disabled, people who may have care duties, young parents with children at home, people who are retiring but are not willing to completely step out of the workforce, or people who just happen to live in smaller cities in the US where, increasingly, even if you have the skills, there are no local jobs.

And they have spoken about this in both terms: It's the right thing for their shareholders and their business, and it also helps society become more fair and distributed in a way that benefits workers outside of the big tech hubs of San Francisco, Seattle, Boston, New York, and Austin.

Gardner: Tim, any examples that demonstrate why a future workspace model helps encourage this flexible work and why it’s good for both the employees and employers?

May the workforce be with you

Minahan: Stephane did a great job covering the more modern companies built from the ground up on flexible work models. He brought up an interesting point. It’s much more challenging for traditional or established companies to transition to these models. One that stands out and is relevant is eBay.

eBay, as we all know, is one of the largest digital marketplaces in the world. Like many others, they built call centers in major cities and hired a whole bunch of folks to answer support calls from buyers and sellers as they conducted commerce in the marketplace. However, their competition was setting up call centers right down the street, so they were in constant churn — hiring, training, losing people, and needing to rehire. Finally they said, "This can't go on. We have to figure out a different model."

They embraced technology and consequently a more flexible work model. They went where the talent is: The stay-at-home parent in Montana, the retiree in Florida, the gig worker in New York or Boston. They armed them with a digital workspace that gave them the information, tools, and knowledge base they needed to answer questions from customers but in far more flexible work models. They could work three hours a day or maybe one day a week. eBay was able to Uberfy the workforce.

They started a year-and-a-half ago, and now they are close to having 4,000 of these call center workers as a remote workforce, and it's all transparent to the rest of us. They are delivering a higher level of service to customers by going to where the talent is, and it's completely transparent: We are unaware that they are not sitting in a call center somewhere. They are actually sitting in remote offices in all corners of the country.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Citrix.


As hybrid IT complexity ramps up, operators look to data-driven automation tools

The next edition of the BriefingsDirect Voice of the Innovator podcast series examines the role and impact of automation on IT management strategies.

Growing complexity from the many moving parts in today's IT deployments is forcing managers to seek new productivity tools. Moving away from manual processes to bring higher levels of automation to data center infrastructure has long been a priority for IT operators, but now new tools and methods are making composability and automation better options than ever.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to help us learn more about the advancing role and impact from IT automation is Frances Guida, Manager of HPE OneView Automation and Ecosystem Product Management at Hewlett Packard Enterprise (HPE). The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What are the top drivers, Frances, for businesses seeking higher levels of automation and simplicity in their IT infrastructure?

Guida: It relates to what’s happening at a business level. It’s a truism that business today is moving faster than it ever has before. That puts pressure on all parts of a business environment — and that includes IT. And so IT needs to deliver things more quickly than they used to. They can’t just use the old techniques; they need to move to much more automated approaches. And that means they need to take work out of their operational environments.

Gardner: What’s driving the complexity that makes such automation beneficial?

IT means business 

Guida: It again starts from the business. IT used to be a support function, to support business processes. So, it could go along on its own time scale. There wasn’t much that the business could or would do about it.

In 2020, technology is now part of the fabric of most of the products, services, and experiences that businesses offer. So when technology is part of an offering, all of a sudden technology is how a business is differentiated. And once technology differentiates the business, business leaders are not going to take, "Oh, we will get to it in 18 months," as an answer. If that's the answer they get from the IT department, they are going to go look for other ways of getting things done.

And with the advances of public cloud technology, there are other ways of getting things done that don’t come from an internal IT department. So IT organizations need to be able to keep up with the pace of business change, because businesses aren’t going to accept their historical time scale.

Gardner: Does accelerating IT via automation require an ecosystem of partners, or is there one tool that rules them all?

Guida: This is not a one-size-fits-all world. I talk to customers in our HPE Executive Briefing Centers regularly. The first thing I ask them is, “Tell me about the toolsets you have in your environment.” I often ask them about what kinds of automation toolsets they have. Do you have Terraform or Ansible or Chef or Puppet or vRealize Orchestrator or something else? It’s not uncommon for the answer to be, “Yes.” They have all of them.

So even within a customer’s environment, they don’t have a single tool. We need to work with all the toolsets that the customers have in their IT environments.

Gardner: It almost sounds like you are trying to automate the automation. Is that fair?

Guida: We definitely are trying to take some of the hard work that has historically gone into automation and make it much simpler.

Complexity is Growing in the Data Center: What's the Solution?

Gardner: IT operations complexity is probably only going to increase, because we are now talking about pushing compute operations — and even micro data centers — out to the edge in places like factories, vehicles, and medical environments, for example. Should we brace ourselves now for a continuing ramp-up of complexity and diversity when it comes to IT operations?

Guida: Oh, absolutely. You can’t have a single technology that’s going to answer everything. Is the end user going to interface through a short message service (SMS) or are they going to use a smartphone? Are they going to be on a browser? Is it an endpoint that interacts with a system that’s completely independent of any user base technology? All of this means that IT has to be multifaceted.

Even if we look at data center technologies, for the last 15 years virtualization has been pretty much the standard way that IT deploys new systems. Now, increasingly, organizations are looking at a set of applications that don’t run in virtual machines (VMs), but rather are container-based. That brings a whole other set of complexity they have to think about in their environments.

Complexity is like entropy; it just keeps growing. When we started thinking about bringing a lot more flexibility to on-premises data center environments, we looked holistically at the problem. I don’t think the problem can only be addressed through better automation; in fact, it has to be addressed at a deeper level.

And so with our composable infrastructure strategies, we thought architecturally about how we could bring the same kind of flexibility you have in a public cloud environment to on-premises data centers. We realized we needed a way to liberate IT beyond the boundaries of physical infrastructure by being able to group that physical infrastructure into pools of resources that could be much more fluid and where the physical aspects could be changed.

Now, there is some hardware infrastructure technology in that, but a lot of that magic is done through software, using software to configure things that used to be done in a physical manner.

So we defined a layer of software-defined intelligence that captures all of the things you need to know about configuring physical hardware — whether it's firmware levels, BIOS settings, or connections. We define and calculate all of that in software.

And automation is the icing on that cake. Once you have your infrastructure that can be defined in software, you can program it. That’s where the automation comes in, being able to use everyday automation tools that organizations are already using to automate other parts of their IT environment and apply that to the physical infrastructure without a whole bunch of unnatural acts that were previously required if you wanted to automate physical infrastructure.
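
To make that idea concrete, here is a minimal sketch of what "programming the infrastructure" can look like from Python. It assumes HPE's python-hpOneView SDK; the client configuration keys and resource methods shown are assumptions that can vary by SDK and API version, and the appliance address, credentials, template, and server names are invented for illustration.

```python
# Minimal sketch, assuming the python-hpOneView SDK; method names, dict-style
# access, and configuration keys may differ across SDK and API versions.
from hpeOneView.oneview_client import OneViewClient

config = {
    "ip": "oneview.example.com",  # hypothetical appliance address
    "credentials": {"userName": "automation", "password": "secret"},
    "api_version": 1200,
}
client = OneViewClient(config)

# The server profile template is where the software-defined intelligence lives:
# firmware baseline, BIOS settings, and network connections are defined once.
template = client.server_profile_templates.get_by_name("vm-host-template")

# Generate a profile from that template and bind it to a free compute module.
profile = client.server_profile_templates.get_new_profile(template["uri"])
profile["name"] = "vm-host-01"
profile["serverHardwareUri"] = "/rest/server-hardware/example-bay-1"  # hypothetical

client.server_profiles.create(profile)
```

Everyday tools such as Ansible or Terraform wrap the same kind of calls, which is why this automation can ride on whatever toolset a shop already uses.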

Gardner: Are we talking about a fundamental shift in how infrastructure should be conceived or thought of here?

Consolidate complexity via automation 

Guida: There has been a saying in the IT industry for a while about moving from pets to cattle. Now we even talk about thinking in terms of herds. You can brute-force that transition by trying to automate against all of the low-level application programming interfaces (APIs) in physical infrastructure today. Most infrastructure today is programmable, with rare exceptions.

But then you as the organization are doing the automation, and you must internalize that and make your automation account for all of the logic. For example, if you then make a change in the storage configuration, what does that mean for the way the network needs to be configured? What does that mean for firmware settings? You would have to maintain all of that in your own automation logic.

How to Simplify and Automate Your Data Center

There are some organizations in the world that have the scale of automation engineering to be able to do that. But the vast majority of enterprises don't have that capability. And so what we do with composable infrastructure, HPE OneView, and our partner ecosystem is encapsulate all of that in our software-defined intelligence. So all you have to do is take that configuration file and apply it to a set of physical hardware. It brings things that used to be extremely complex down to what a standard IT organization is capable of doing today.
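
As an illustration of that encapsulation, the sketch below treats the desired configuration as a plain data file and applies it across a pool of machines. The helper functions and the JSON fields are hypothetical stand-ins for what the software-defined layer actually exposes; the point is that the cross-dependencies live behind the helpers, not in the script.

```python
import json
from typing import Dict, List

# Placeholder helpers: in practice these calls would go to the software-defined
# layer (for example HPE OneView); the names and behavior here are hypothetical.
def list_available_hardware(pool: str) -> List[str]:
    """Return identifiers of unassigned compute modules in a pool (stubbed)."""
    return ["bay-1", "bay-2", "bay-3"]

def apply_profile(server: str, desired_state: Dict) -> None:
    """Hand the declarative definition to the infrastructure layer (stubbed)."""
    print(f"Applying '{desired_state['name']}' to {server}")

# The "configuration file": one declarative description of how a server should
# look (firmware baseline, BIOS settings, network connections), kept as data.
desired_state = json.loads("""
{
  "name": "container-host",
  "firmwareBaseline": "SPP-2020.03",
  "biosSettings": {"WorkloadProfile": "Virtualization-MaxPerformance"},
  "connections": [{"network": "prod-net", "type": "Ethernet"}]
}
""")

# Apply the same definition to every free machine in the pool; the dependencies
# between storage, network, and firmware are resolved behind apply_profile
# rather than re-derived in this script.
for server in list_available_hardware(pool="rack-a"):
    apply_profile(server, desired_state)
```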

Gardner: And not only is that automation going to appeal to the enterprise IT organizations, it’s also going to appeal to the ecosystem of partners. They now have the means to use the composable infrastructure to create new value-added services.

How does HPE’s composability benefit both the end-user organizations and the development of the partner ecosystem?

Guida: When I began the composable ecosystem program, we actually had two or three partners. This was about four years ago. We have now grown to more than 30 different integrations in place today, with many more partners that we are talking to. And those range from the big, everyday names like VMware and Microsoft to smaller companies that may be present in only a particular geography.

But what gets them excited is that, all of a sudden, they are able to bring better value to their customers. They are able to deliver, for example, an integrated monitoring system. Or maybe they are already doing application monitoring, and all of a sudden they can add infrastructure monitoring. Or they may already be doing facilities management, managing the power and cooling, and all of a sudden they get a whole bunch of data that used to be hard to put in one place. Now they can get a whole bunch of data on the thermals, of what’s really going on at the infrastructure level. It’s definitely very exciting for them.

Gardner: What jumps out at you as a good example of taking advantage of what composable infrastructure can do?

Guida: The most frequent conversations I have with customers today begin with basic automation. They have many tools in their environment; I mentioned many of them earlier: Ansible, Terraform, Chef, Puppet, or even just PowerShell or Python; or in the VMware environment, vRealize Orchestrator.

They have these tools and really appreciate what we have been able to do by publishing these integrations on GitHub, for example, having a community, and having direct support back to our engineers who are doing this work. They are able to pretty straightforwardly add that into their tools environment.
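
For teams that script directly rather than through one of those tools, the integrations ultimately come down to ordinary REST calls. The fragment below is a rough sketch assuming HPE OneView's documented REST interface (the /rest/login-sessions and /rest/server-hardware endpoints, plus the X-API-Version and Auth headers), with a hypothetical appliance address and credentials; endpoint paths and API versions should be verified against the appliance's own documentation.

```python
# Rough sketch only: assumes OneView's REST interface as publicly documented;
# verify endpoint paths, API versions, and TLS settings for your appliance.
import requests

APPLIANCE = "https://oneview.example.com"  # hypothetical appliance address
HEADERS = {"X-API-Version": "1200", "Content-Type": "application/json"}

# 1. Authenticate and obtain a session token.
resp = requests.post(
    f"{APPLIANCE}/rest/login-sessions",
    json={"userName": "automation", "password": "secret"},
    headers=HEADERS,
    verify=False,  # lab-only shortcut; use proper CA verification in production
)
resp.raise_for_status()
HEADERS["Auth"] = resp.json()["sessionID"]

# 2. Ask the appliance what physical hardware it knows about.
inventory = requests.get(
    f"{APPLIANCE}/rest/server-hardware", headers=HEADERS, verify=False
).json()

for member in inventory.get("members", []):
    print(member.get("name"), member.get("powerState"))
```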

How a Software-Defined Data Center Lets the Smartest People Work for You

And we at HPE have also done some of the work ourselves in the open source tools projects. Pretty much every automation tool that’s out there in mainstream use by IT — we can handle it. That’s where a lot of the conversations we have with customers begin.

If they don’t begin there, they start back in basic IT operations. One of the ways people take advantage of the automation in HPE OneView — but they don’t realize they are taking advantage of automation — is in how OneView helps them integrate their physical infrastructure into a VMware vCenter or a Microsoft System Center environment.

Visualize everything, automatically

For example, in a VMware vCenter environment, an administrator can use our plug-in and it automatically sucks in all of the data from their physical infrastructure that’s relevant to their VMware environment. They can see things in their vCenter environment that they otherwise couldn’t see.

They can see everything from a VM, to the VM host it sits on, through the host bus adapters (HBAs), out to the storage array and the logical volume. They can very easily visualize the entire logical as well as physical environment. That's automation, but you are not necessarily perceiving it as automation. You are perceiving it as simply making an IT operations environment a lot easier to use.

For that level of IT operations integration, VMware and Microsoft environments are the poster children. But for other tools, like Micro Focus and some of the capacity planning tools, and event management tools like ServiceNow – those are another big use case category.

The automation benefits – instead of just going down into the IT operations – can also go up to allow more cloud management. Another way IT organizations take advantage of the HPE automation ecosystem is when they say, "Okay, it's great that you can automate a piece of physical infrastructure, but what I really need to do — and what I really care about — is automating a service. I want to be able to provision my SQL database server that's in the cloud."

That not only affects infrastructure pieces, it touches a bunch of application pieces, too. Organizations want it all done through a self-service portal. So we have a number of partners who enable that.

Morpheus comes to mind. We have quite a lot of engagements today with customers who are looking at Morpheus as a cloud management platform and taking advantage of how they can not only provision the logical aspects of their cloud, but also the physical ones through all of the integrations that we have done.

How to Simplify, Automate, and Develop Faster

Gardner: How do HPE and the partner ecosystem automate the automation, given the complexity that comes with the newer hybrid deployment models? Is that what HPE OneView is designed to help do these days?

Automatic, systematic, cost-saving habit 

Guida: I want to talk about a customer that is an online retailer. The retail world is obviously a highly dynamic one, and technology is at the very forefront of the product they deliver; in fact, technology is the product they deliver.

They have a very creative marketing department that is always looking for new ways to connect to their customers. That marketing department has access to a set of application developers who are developing new widgets, new ways of connecting with customers. Some of those developers like to develop in VMs, which is more old school; some of the developers are more new school and they prefer container-based environments.

The challenge the IT department has is that from one week to the next they don’t fully know how much of their capacity needs to be dedicated to a VM versus a container environment. It all depends on which promotions or programs the business decides it wants to run at any time.

So the IT organization needed a way to quickly switch an individual VM host server to be reconfigured as a bare-metal container host. They didn’t want to pay a VM tax on their container host. They identified that if they were going to do that manually, there were dozens and dozens — I think they had 36 or 37 — steps that they needed to do. And they could not figure out a way to automate individually each one of those 37 steps.

When we brought them an HPE Synergy infrastructure — managed by OneView, automated with Ansible — they instantly saw how that was going to help solve their problems. They were going to be able to change their environment from one personality to another in a completely automated fashion. And now they are able to do that changeover in just 30 minutes. Instead of needing dozens of manual steps, they have zero manual steps; everything is fully automated.

And that enables them to respond to the business requirements. The business needs to be able to run whatever programs and promotions it is that they want to run — and they can’t be constrained by IT. Maybe that gives a picture of how valuable this is to our customers.
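
A stripped-down sketch of the changeover Guida describes might look like the following. The function bodies are placeholders standing in for real OneView SDK or REST calls, and the template and bay names are invented; what matters is the shape of the flow: power the server down, swap the profile that defines its personality, and power it back up.

```python
# Hypothetical sketch of the VM-host -> container-host changeover described
# above. power_off, replace_profile, and power_on stand in for real OneView
# SDK or REST calls; names and arguments are illustrative only.
import time

def power_off(server: str) -> None:
    print(f"Powering off {server}")  # placeholder

def replace_profile(server: str, template: str) -> None:
    print(f"Applying profile from template '{template}' to {server}")  # placeholder

def power_on(server: str) -> None:
    print(f"Powering on {server}")  # placeholder

def repurpose(server: str, target_template: str) -> None:
    """Swap a server's 'personality' by reapplying a different profile template."""
    power_off(server)
    # The template carries the dozens of low-level steps (firmware, BIOS,
    # boot order, network connections) that would otherwise be manual.
    replace_profile(server, target_template)
    power_on(server)

if __name__ == "__main__":
    start = time.time()
    # Hypothetical template name for the bare-metal container "personality".
    repurpose("bay-7", target_template="bare-metal-container-host")
    print(f"Changeover completed in {time.time() - start:.1f}s (stubbed)")
```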

Gardner: Yes, it speaks to the business outcomes, which are agility and speed, and at the same time the IT economics are impacted there as well.

Speaking of IT economics and IT automation, we have been talking in terms of process and technology. But businesses are also seeking to simplify and automate the economics of how they acquire and spend on IT, perhaps more on a pay-per-use basis.

Is there alignment between what you are doing in automation and what HPE is doing with HPE GreenLake? Do the economics and automation reinforce one another?

How to Drive Innovation and Automation in Your Data Center

Guida: Oh, absolutely. We bring physical infrastructure flexibility, and HPE GreenLake brings financial flexibility. Those go hand in hand. In fact, the example that I was just speaking about, the online retailer, they are very, very busy during the Christmas shopping season. They are also busy for Valentine’s Day, Mother’s Day, and back-to-school shopping. But they also have times where they are much less busy.

They have HPE GreenLake integrated into their environment, so in addition to having physical flexibility in their environment, they are financially aligned through a flexible capacity program, paying for technology in the way that their business model works. So, these things go hand-in-hand.

As I said earlier, I talk to a lot of HPE customers because I am based in the San Francisco Bay Area, where we have our corporate headquarters, and I am in our Executive Briefing Center two to three times a week. There are almost no conversations I am part of that don't eventually lead to the financial aspects, as well as the technical aspects, of how all the technology works.

Gardner: Because we have opened IT automation up to the programmatic level, a new breed of innovation can be further brought to bear. Once people get their hands on these tools and start to automate, what have you seen on the innovation side? What have people started doing with this that you maybe didn’t even think they would do when you designed the products?

Single infrastructure signals innovation

Guida: Well, I don’t know that we didn’t think about this, but one of the things we have been able to do is make something that the IT industry has been talking about for a while in an on-premises IT environment.

There are lots of organizations that have IT capacity that is only used some of the time. A classic example is an engineering organization that provides a virtual desktop infrastructure (VDI) capability for engineers. These engineers need a bunch of analytics applications — maybe it’s genomic engineering, seismic engineering, or fluid dynamics in the automotive industry. They have multiple needs. Typically they have been running those on different sets of physical infrastructures.

With our automation, we can enable them to collapse that all into one set of infrastructure, which means they can be much more financially efficient. Because they are more financially efficient on the IT side, they are able to then devote more of their dollars to driving innovation — finding new ways of discovering oil and gas under the ground, new ways of making automobiles much more efficient, or uncovering new secrets within our DNA. By spending less on their IT infrastructure, they are able to spend more on what their core business innovation should be.

Gardner: Frances, I have seen other vendors approach automation with a tradeoff. They say, “Well, if you only use our cloud, it’s automated. If you only use our hypervisor, it’s automated. If you only use our database, it’s automated.”

But HPE has taken a different tack. You have treated heterogeneity as the norm, and the complexity that results from heterogeneity as what automation needs to focus on. How far ahead is HPE on composability and automation? How differentiated are you from others who have put a tradeoff in place when it comes to solving automation?

Guida: We have had composable infrastructure on the market for three-plus years now. Our HPE Synergy platform, for example, now has a more than $1 billion run rate for HPE. We have 3,600 customers and counting around the world. It’s been a tremendously successful business for us.

I find it interesting that we don’t see a lot of activity out there, of people trying to mimic or imitate what we have done. So I expect composability and automation will remain fundamentally differentiating for us from many of our traditional on-premises infrastructure competitors.

It positions us very well to provide an alternative for organizations that like the flexibility of cloud services but prefer to have them in their on-premises environments. It's been tremendously differentiating for us. I am not seeing anyone else coming on strong with anything comparable.

Gardner: Let’s take a look to the future. Increasingly, not only are companies looking to become data-driven, but IT organizations are also seeking to become data-driven. As we gather more data and inference, we start to be predictive in optimizing IT operations.

I am, of course, speaking of AIOps. What does that bring to the equation around automation and composability? How will AIOps change this in the coming couple of years?

Automation innovation in sight with AIOps

Guida: That’s a real opportunity for further innovation in the industry. We are at the very early stages about how we take advantage in a symptomatic way of all of the insights that we can derive from knowing what is actually happening within our IT environments and mining those insights. Once we have mined those insights, it creates the possibility for us to take automation to another level.

We have been throwing around terms like self-healing for a couple of decades, but a lot of organizations are not yet ready for something like self-healing infrastructure. There is a lot of complexity within our environments. And when you put that into a broader heterogeneous data center environment, there is even more complexity. So there is some trepidation.

How to Accelerate to a Self-Driving Data Center

Over time, for sure, the industry will get there. We will be forced to get there, because we are going to be able to do that in other execution venues like the public cloud. The whole notion of what we have done with the automation of composable infrastructure is absolutely a great foundation for us as we take our customers on these next journeys around automation.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.
