How to migrate your organization to a more security-minded culture

Bringing broader awareness of security risks and building a security-minded culture within any public or private organization has been a top priority for years.

Yet halfway through 2021, IT security threats loom as large as ever — with major breaches and attacks costing tens of millions of dollars occurring nearly weekly.

Why are the threat vectors not declining? Why, with all the tools and investment, are businesses still regularly being held up for ransom or having their data breached? To what degree are behavior, culture, attitude, and organizational dissonance to blame?

Join us here as BriefingsDirect probes into these more human elements of IT security with a leading chief information security officer (CISO).

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about adjusting the culture of security to make organizations more resilient, please welcome Adrian Ludwig, CISO at Atlassian. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Adrian, we are constantly bombarded with headlines showing how IT security is failing. Yet many people continue on their merry way — business as usual.

Are we now living in a world where such breaches amount to acceptable losses? Are people not concerned because the attacks are perceived as someone else’s problem?

Ludwig: A lot of that is probably true, depending on whom you ask and what their state of mind is on a given day. We’re definitely seeing a lot more than we’ve seen in the past. And there are some interesting twists to the language. What we’re seeing does not necessarily imply that there is more exploitation going on or that there are more problems — but it’s definitely the case that we’re getting a lot more visibility.

I think it’s a little bit of both. There probably are more attacks going on, and we also have better visibility.

Gardner: Isn’t security something we should all be thinking about, not just the CISOs?

Ludwig: It’s interesting how people don’t want to think about it. They appoint somebody, give them a title, and then say that person is now responsible for making security happen.

But the reality is, within any organization, doing the right thing — whether that be security, keeping track of the money, or making sure that things are going the way you’re expecting — is a responsibility that’s shared across the entire organization. That’s something that we are now becoming more accustomed to. The security space is realizing it’s not just about the security folks doing a good job. It’s about enabling the entire organization to understand what’s important to be more secure and making that as easy as possible. So, there’s an element of culture change and of improving the entire organization.

Gardner: What’s making these softer approaches — behavior, culture, management, and attitude — more important now? Is there something about security technology that has changed that makes us now need to look at how people think?

Ludwig: We’re beginning to realize that technology is not going to solve all our problems. When I first went into the security business, the company I worked for, a government agency, still had posters on the wall from World War II: Loose lips sink ships.

The idea of security culture is not new, but what is new is the awareness across organizations that any person could be subject to phishing or have their credentials taken — that those mistakes could originate anywhere in the organization. That broad-based awareness is relatively recent. It probably helps that we’ve all been locked in our houses for the last year, paying a lot more attention to the media, and hearing about the attacks on governments, the hacking, and all those things. That has raised awareness as well.

Gardner: It’s confounding that people authenticate better in their personal lives. They don’t want their credit cards or bank accounts pillaged. They have a double standard when it comes to what they think about protecting themselves versus protecting the company they work for.

Data safer at home or work?

Ludwig: Yes, it’s interesting. We used to think enterprise security could be more difficult from a user-experience standpoint — that people would put up with it because it was work.

But the opposite might be true, that people are more self-motivated in the consumer space and they’re willing to put up with something more challenging than they would in an enterprise. There might be some truth to that, Dana.

Gardner: The passwords I use for my bank account are long and complex, and the passwords I use when I’m in the business environment … maybe not so much. It gets us back to how you think and your attitude for improved security. How do we get people to think differently?

Ludwig: There are a few different things to consider. One is that the security people need to think differently. It’s not necessarily about changing the behavior of every employee in the company. Some of it is about figuring out how to implement critical solutions that provide security without changing behavior.


There is a phrase, the “paved path” or “paved road”: making the secure way the easy way to do something. When people started using YubiKey U2F [an open authentication standard that enables internet users to securely access any number of online services with a single security key] as a second factor for authentication, it was actually a lot easier than having to input your password all over the place — and it’s more secure.

That’s the kind of thing we’re looking for. How do we enable enhanced security while also having a better user experience? What’s true in authentication could be true in any number of other places as well.
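To make the “paved path” idea concrete: the episode doesn’t include code, but a second factor can fit in a few lines. The sketch below implements a time-based one-time password (TOTP, RFC 6238) rather than the U2F keys Ludwig mentions; TOTP is used here only because it makes a short, self-contained illustration.

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, timestamp: int, digits: int = 6, step: int = 30) -> str:
    """Compute an RFC 6238 time-based one-time password (SHA-1 variant)."""
    counter = timestamp // step                      # 30-second time window
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: this secret at T=59 yields "94287082"
print(totp(b"12345678901234567890", 59, digits=8))
```

The point of the paved path is that the verifier, not the user, carries this complexity: the user types (or taps) a short code instead of juggling long passwords everywhere.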

Second, we need to focus on developers. We need to make the developer experience more secure and build more confidence and trustworthiness in the software we’re building, as well as in the types of tools used to build.

Developers find strength

Gardner: You brought up another point of interest to me. There’s a mindset that when you hand something off in an organization — it could be from app development into production, or from product design into manufacturing — people like to move on. But with security, that type of hand-off can be a risk factor.

Beginning with developers, how would you change that hand-off? Should developers be thinking about security in the same way that the IT production people do?

Ludwig: It’s tricky. Security is about having the whole system work the way that everybody expects it to. If there’s a breakdown anywhere in that system, and it doesn’t work the way you’re expecting, then you say, “Oh, it’s insecure.” But no one has figured out what those hidden expectations are.


A developer expects the code they write isn’t going to have vulnerabilities. Even if they make a mistake, even if there’s a performance bug, that shouldn’t introduce a security problem. And there are improvements being made in programming languages to help with that.

Certain languages are highly prone to security failures. I grew up using C and C++; security wasn’t something that was even thought of in the design of those languages. With Java, a lot more security went into the design of the language, so it’s intrinsically safer. Does that mean there are no security issues that can happen if you’re using Java? No.

Similar types of expectations exist at other places in the development pipeline as well.

Gardner: I suppose another shift has been from applications developed to reside in a data center, behind firewalls and security perimeters. But now — with microservices, cloud-native applications, and multiple application programming interfaces (APIs) being brought together interdependently — we’re no longer aware of where the code is running.

Don’t you have to think differently as a developer because of the way applications in production have shifted?

Ludwig: Yes, it’s definitely made a big difference. We used to describe applications as being monoliths. There were very few parts of the application that were exposed.

At this point, most applications are microservices. And that means across an application there might be 1,000 different parts that are publicly exposed. Each of them must have some level of security checks in place, so that an input — which might be coming from the other side of the world — is handled correctly.
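What “handled correctly” looks like varies per service, but the shape is consistent: validate every field of an untrusted payload at the boundary before acting on it. This Python sketch uses a hypothetical “transfer” payload; the field names and limits are illustrative, not from any real API.

```python
def validate_payload(payload: dict) -> list:
    """Return validation errors for an untrusted request body (empty list = OK)."""
    errors = []

    # account_id: short alphanumeric string only (rejects path tricks, injection)
    account = payload.get("account_id")
    if not (isinstance(account, str) and account.isalnum() and 0 < len(account) <= 32):
        errors.append("account_id must be a short alphanumeric string")

    # amount: a positive integer within a sane business limit (bool is excluded
    # explicitly because bool is a subclass of int in Python)
    amount = payload.get("amount")
    if not (isinstance(amount, int) and not isinstance(amount, bool)
            and 0 < amount <= 1_000_000):
        errors.append("amount must be a positive integer within limits")

    return errors

print(validate_payload({"account_id": "acct42", "amount": 100}))       # []
print(validate_payload({"account_id": "../etc/passwd", "amount": -5}))
```

Multiplied across a thousand exposed endpoints, the discipline matters more than the cleverness: every boundary gets the same dull, explicit checks.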

So, yes, the design and the architecture have definitely exposed a lot more of the app’s surface. There’s been a bit of a race to make the tools better, but the architectures are getting more complicated. And I don’t know, it’s neck and neck on whether things are getting more secure or they’re getting less secure as these architectures get bigger and more exposed.

We have to think about that. How do we design processes to deal with that? How do you design technology, and what’s the culture that needs to be in place? I think part of it is having a culture of every single developer being conscious of the fact that the decisions they’re making have security implications. So that’s a lot of work to do.

Gardner: Another attitude adjustment that’s necessary is assuming that breaches are going to happen and stifling them as quickly as possible. It’s a little different mindset, but it makes sense to have more people involved in looking for anomalies — and willing to have their data and behaviors examined for them.

Is there a needed cultural shift that goes with assuming you’re going to be breached and making sure the damage is limited?

Assume the worst to limit damage 

Ludwig: Yes. A big part of the cultural shift is being comfortable taking feedback from anybody that you have a problem and that there’s something that you need to fix. That’s the first step.

Companies should let anybody identify a security problem — and that could be anybody inside or outside of the company. Bug bounties are one example. We’re in a bit of a revolution in terms of enabling better visibility into potential security problems.

But once you have that sort of culture, you start thinking, “Okay. How do I actually monitor what’s going on in each of the different areas?” With that visibility, exposure, and understanding what’s going in and out of specific applications, you can detect when there’s something you’re not expecting. That turns out to be really difficult, if what you’re looking at is very big and very, very complicated.

Decomposing an application down into smaller pieces, being able to trace the behaviors within those pieces, and understanding which APIs each of those different microservices is exposing turns out to be really important.

If you combine decomposing applications into smaller pieces with monitoring what’s going on in them and creating a culture where anybody can find a potential security flaw, surface it, and react to it — those are good building blocks for having an environment where you have a lot more security than you would have otherwise.

Gardner: Another shift we’ve seen in the past several years is the advent of big data. Not only can we manage big data quickly, but we can also do it at a reasonable cost. That has brought about machine learning (ML) and movement to artificial intelligence (AI). So, now there’s an opportunity to put another arrow in our quiver of tools and use big data ML to buttress our security and provide a new culture of awareness as a result.


Ludwig: I think so. There are a bunch of companies trying to do that, to look at the patterns that exist within applications, and understand what those patterns look like. In some instances, they can alert you when there’s something not operating the way that is expected and maybe guide you to rearchitecting and make your applications more efficient and secure.

There are a few different approaches being explored. Ultimately, at this point, most applications are so complicated — and have been developed in such a chaotic manner — that it’s impossible to understand what’s going on inside of them. That’s the right time to give the robots a shot and see if we can figure it out by turning the machines on themselves.

Gardner: Yes. Fight fire with fire.

Let’s get back to the culture of security. If you ask the people in the company to think differently about security, they all nod their heads and say they’ll try. But there has to be a leadership shift, too. Who is in charge of such security messaging? Who has the best voice for having the whole company think differently and better about security? Who’s in charge of security?

C-suite must take the lead 

Ludwig: That’s a realization that took me several years to reach. If the security person keeps saying, “The sky is falling, the sky is falling,” people aren’t going to listen. They say, “Security is important.” And the others reply, “Yes, of course, security is important to you — you’re the security guy.”

If the head of the business, or the CEO, consistently says, “We need to make this a priority. Security is really important, and these are the people who are going to help us understand what that means and how to execute on it,” then that ends up being a really healthy relationship.

The companies I’ve seen turn themselves around to become good at security are the ones such as Microsoft, Google, or others where the CEO made it personal, and said, “We’re going to fix this, and it’s my number-one priority. We’re going to invest in it, and I’m going to hire a great team of security professionals to help us make that happen. I’m going to work with them and enable them to be successful.”

Alternatively, there are companies where the CEO says, “Oh, the board has asked us to get a good security person, so I’ve hired this person and you should do what he says.” That’s the path to a disgruntled bunch of folks across the entire organization. They will conclude that security is just lip service, it’s not that important. “We’re just doing it because we have to,” they will say. And that is not where you want to end up.

Gardner: You can’t just talk the talk, you have to walk the walk and do it all the time, over and over again, with a loud voice, right?

Ludwig: Yes. And eventually it gets quieter. Eventually, you don’t need the top level saying this is the most important thing. It becomes part of the culture. People realize it’s not just the way we do things — it’s a number-one value for us. It’s the number-one thing for our customers, too, and so the culture shift ends up happening.

Gardner: Security mindfulness becomes the fabric within the organization. But getting there requires change, and changing behavior has always been hard.

Are there carrots? Are there sticks? When the top echelon of the organization, public or private, commits to security, how do you then execute on that? Are there some steps that you’ve learned or seen that help people get incentivized — or whacked upside the head, so to speak, when necessary?

Talk the security talk and listen up

Ludwig: We definitely haven’t gone for “whacked upside the head.” I’m not sure that works for anybody at this point, but maybe I’m just a progressive when it comes to how to properly train employees.

What we have seen work is just talking about it on a regular basis, asking about the things that we’re doing from a security standpoint. Are they working? Are they getting in your way? Honestly, showing that there’s thoughtfulness and concern going into the development of those security improvements goes a long way toward making people more comfortable with following through on them.

A great example is … You roll out two-factor authentication, and then you ask, “Is it getting in the way? Is there anything that we can do to make this better? This is not the be-all and end-all. We want to improve this over time.”


That type of introspection by the security organization is surprising to some people. The idea that the security team doesn’t want it to be disruptive, that they don’t want to get in the way, can go a long way toward it feeling as though these new protections are less disruptive and less problematic than they might otherwise feel.

Gardner: And when the organization is focused on developers? Developers can be, you know … 

Ludwig: Ornery?

Gardner: “Ornery” works. If you can make developers work toward a fabric of security mindedness and culture, you can probably do it to anyone. What have you learned on injecting a better security culture within the developer corps?

Ludwig: A lot of it starts, again, at the top. You know, we have core values that invoke vulgarity to emphasize both how important they are and how simple they are.

One of Atlassian’s values is, “Don’t fuck the customer.” And as a result of that, it’s very easy to remember, and it’s very easy to invoke. “Hey, if we don’t do this correctly, that’s going to hurt the customer.” We can’t let that happen as a top-level value.

We also have “Open company, no bullshit.” If somebody says, “I see a problem over here,” then we need to follow up on it, right? There’s no temptation to cover it up, to hide it, to pretend it’s not an issue. It’s about driving change and making sure that we’re implementing solutions that actually fix things.

There are countless examples of a feature that was built, and we really want to ship it, but it turns out it’s got a problem and we can’t do it because that would actually be a problem for the customer. So, we back off and go from there.

How to talk about security

Gardner: Words are powerful. Brands are powerful. Messaging is powerful. What you just said made me think, “Maybe the word security isn’t the right word.” If we use the words “customer experience,” maybe that’s better. Have you found that? Is “security” the wrong word nowadays? Maybe we should be thinking about creating an experience at a larger level that connotes success and progress.

Ludwig: Super interesting. Apple doesn’t use the word “security” very much at all. As a consumer brand, what they focus on is privacy, right? The idea that they’ve built highly secure products is motivated by the users’ right to privacy and the users’ desire to have their information remain private. But they don’t talk about security.


I always thought that was a really interesting decision on their part. When I was at Google, we did some branding analysis, and we came up with insights about how we talked about security. It’s a negative from a customer’s standpoint. And so, most of the references that you’ll see coming out of Google are to security and privacy; they always attach those two things together. It’s not a coincidence. I think you’re right that the branding is problematic.

Microsoft uses “trustworthy,” as in Trustworthy Computing. So, I guess the rest of us are a little slow to pick up on that, but ultimately it’s a combination of security and a bunch of other things that we’re trying to enable to make sure the products do what we’re expecting them to do.

Gardner: I like resilience. I think that cuts across these terms because it’s not just the security; it’s how well the product is architected and how well it performs. Is it hardened, in a sense, so that it performs in trying circumstances — even when there are issues of scale or outside threats, and so forth? How do you like “resilience,” and how does that notion of business continuity come into play when we are trying to improve the culture?

Ludwig: Yes, “resilience” is a pretty good term. It comes up in the pop psychology space as well. You can try to make your children more resilient. Those are the ones that end up being the most successful, right? It certainly is an element of what you’re trying to build.

A “resilient” system is one in which there’s an understanding that it’s not going to be perfect. It’s going to have some setbacks, and you need to have it recoverable when there are setbacks. You need to design with an expectation that there are going to be problems. I still remember the first time I heard about a squirrel shorting out a data center and taking down the whole data center. It can happen, right? It does happen. Or, you know, you get a solar event and that takes down computers.

There are lots of different things that you need to build to recover from accidental threats, and there are ones that are more intentional — like when somebody deploys ransomware and tries to take your pipeline offline.

Gardner: To be more resilient in our organizations, one of the things that we’ve seen with developers and IT operations is DevOps. Has DevOps been a good lesson for broader resilience? Is there something we can do with other silos in organization to make them more resilient?

DevOps derives from experience

Ludwig: I think so. Ultimately, there are lots of different ways people describe DevOps, but I think of it as taking what used to be a very big thing, acknowledging that you can’t comprehend the complexity of that big thing, and choosing instead to embrace the idea of doing lots of little things that, in aggregate, end up being the big thing.

And that is a core ethos of DevOps, that each individual developer is going to write a little bit of code and then they’re going to ship it. You’re going to do that over and over and over. You are going to do that very, very, very quickly. And they’re going to be responsible for running their own thing. That’s the operations part of the development. But the result is, over time, you get closer to a good product because you can gain feedback from customers, you’re able to see how it’s working in reality, and you’ll be able to get testing that takes place with real data. There are lots of advantages to that. But the critical part of it, from a security standpoint, is it makes it possible to respond to security flaws in near real-time.


Often, organizations just aren’t pushing code frequently enough to be able to know how to fix a security problem. They are like, “Oh, our next release window is 90 days from now. I can’t possibly do anything between now and then.” Getting to a point where you have an improvement process that’s really flexible and that’s being exercised every single day is what you get by having DevOps.

And so, if you think about that same mentality for other parts of your organization, it definitely makes them able to react when something unexpected happens.

Gardner: Perhaps we should be looking to our software development organizations for lessons on cultural methods that we can apply elsewhere. They’re on the bleeding edge of being more secure, more productive, and they’re doing it through better communications and culture.

Ludwig: It’s interesting to phrase it that way because that sounds highfalutin, and that they achieved it out of expertise and brilliance. What it really is, is the humbleness of realizing that the compiler tells you your code is wrong every single day. There’s a new user bug every single day. And eventually you get beaten down by all those, and you decide you’re just going to react every single day instead of having this big thing build up.

So, yes, I think DevOps is a good example but it’s a result of realizing how many flaws there are more than anything highfalutin, that’s for sure.

Gardner: The software doesn’t just eat the world; the software can show the world the new, better way.

Ludwig: Yes, hopefully so.

Future best security practices

Gardner: Adrian, any thoughts about the future of better security, privacy, and resilience? How will ML and AI provide more analysis and improvements to come?

Ludwig: Probably the most important thing going on right now in the context of security is the realization by the senior executives and boards that security is something they need to be proponents for. They are pushing to make it possible for organizations to be more secure. That has fascinating ramifications all the way down the line.

If you look at the best security organizations, they know the best way to enable security within their companies and for their customers is to make security as easy as possible. You get a combination of the non-security executive saying, “Security is the number-one thing,” and at the same time, the security executive realizes the number-one thing to implement security is to make it as easy as possible to embrace and to not be disruptive.

And so, we are seeing faster investment in security that works because it’s easier. And I think that’s going to make a huge difference.


There are also several foundational technology shifts that have turned out to be very pro-security, which wasn’t why they were built — but it’s turning out to be the case. For example, in the consumer space the move toward the web rather than desktop applications has enabled greater security. We saw a movement toward mobile operating systems as a primary mechanism for interacting with the web versus desktop operating systems. It turns out that those had a fundamentally more secure design, and so the risks there have gone down.

The enterprise has been a little slow, but I see the shift away from behind-the-firewall software toward cloud-based and software as a service (SaaS) software as enabling a lot better security for most organizations. Eventually, I think it will be for all organizations.

Those shifts are happening at the same time as we have cultural shifts. I’m really optimistic that over the next decade or two we’re going to get to a point where security is not something we talk about. It’s just something built-in and expected in much the same way as we don’t spend too much time now talking about having access to the Internet. That used to be a critical stumbling block. It’s hard to find a place now that doesn’t or won’t soon have access.

Gardner: These security practices and capabilities become part-and-parcel of good business conduct. We’ll just think of it as doing a good job, and those companies that don’t do a good job will suffer the consequences and the Darwinian nature of capitalism will take over.

Ludwig: I think it will.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: TraceableAI.


●      How API security provides a killer use case for ML and AI

●      Securing APIs demands tracing and machine learning that analyze behaviors to head off attacks

●      Rise of APIs brings new security threat vector — and need for novel defenses

●      Learn More About the Technologies and Solutions Behind

●      Three Threat Vectors Addressed by Zero Trust App Sec

●      Web Application Security is Not API Security

●      Does SAST Deliver? The Challenges of Code Scanning.

●      Everything You Need to Know About Authentication and Authorization in Web APIs

●      Top 5 Ways to Protect Against Data Exposure

●      TraceAI: Machine Learning-Driven Application and API Security


Citrix research shows those ‘Born Digital’ can deliver superlative results — if leaders know what makes them tick

Self-awareness as an individual attribute provides the context to better understand others and to find common ground. But what about self-awareness of entire generations?

Are those born before the mass appeal and distribution of digital technology able to make the leap in their awareness of those who have essentially been Born Digital? Does the awareness gap extend to an even more profound disconnect between how today’s younger generations think and those more likely to be in the leadership positions in businesses?

Do the bosses really get their entry-level cohorts? And what, if any, impact has the COVID-19 pandemic had in amplifying these perception and cognition gaps?

Stay with us as BriefingsDirect explores new research into what makes the Born Digital generation tick. And we’ll also unpack ways that the gap between those born analog and more recently can be closed. 

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To learn more about the paybacks and advantages of understanding and embracing the Born Digital Effect, please welcome Tim Minahan, Executive Vice President of Business Strategy and Chief Marketing Officer at Citrix, and Amy Haworth, Senior Director of Employee Experience at Citrix. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tim, your latest research into what makes those Born Digital tick bucks conventional wisdom. Why did Citrix undertake this research in the first place?

Minahan: This is the first generation to grow up in an entirely digital world. In another decade or so, the success or failure of businesses – the entire global economy — will be in the hands of this Born Digital generation. 

We wanted to get inside their heads to see what makes them tick. That helps us to help our customers design their post-pandemic work environments and work models to best support the needs of this emerging group of leaders. 

The good news is that the Born Digital generation — those born after 1997 – is primed to deliver significant economic gains — some $1.9 trillion in corporate profits. But there certainly were some divergences in what they need to do that and how they view work.

Certainly, the pandemic has forever changed the way we all work, but it had a particularly profound impact on the Born Digital generation. Many of them began or had their early careers during the crisis. Remote and technology-driven work is all that they have ever known. Organizations need to be aware of these scenarios as they plan for the future so as to not leave out or disengage from this future generation of leaders.

The Born Digital difference

Gardner: Tim, like me, you were born analog. What surprised you most about this generation? 

Minahan: Certain key findings debunked a lot of the myths around what motivates these workers. Our research reveals a fundamental disconnect. First, job stability and work-life balance are what matter most to these employees.


Largely faced with an uncertain job environment, these younger workers are most focused on fundamental work factors like career stability and security. They also want to work in their own way. So, they are looking for a good work-life balance and more flexible work models.

And this is poorly understood by leaders, who, in the same research, indicated they believe that, behind access to technology, the Born Digital generation most values opportunities for training and meaningful, impactful work. And, while those are important, they're further down the list.

It turns out that job satisfaction, career stability and security, and a good work-life balance rank above compensation and the manager they work with.

And it’s become very clear — business leaders overestimate the appeal of the office. Ninety percent of Born Digital generation employees do not want to return to the office full-time post-pandemic. They prefer a more flexible or hybrid model, which is in stark contrast to the leadership where 58 percent believe that young workers will want to spend most or all their time working in an office. And this is a real Catch-22 that we’re all going to need to grapple with not years from now but in the next few months.

Gardner: Amy, does the way companies misinterpret their employees mean we need an employee experience reboot? 

Haworth: After reading this research, I felt an overwhelming sense of the importance of listening. That means getting really curious, and not only curious at big moments, like returning to the office or moving a vast number of employees out of offices — but getting curious all the time. 

It was so clear to me that if we are designing employee experience strategies around old assumptions, we’re missing each other in the workplace. One of the frameworks for employee experience we use heavily at Citrix is the idea that experiences are built in the day-to-day moments. The touchpoints that employees have in the human space, the physical space, and the digital space. At Citrix, we have rethought and rebooted our own experience, coming back into a hybrid workplace, and built around the idea of trust and inclusivity.

And it’s interesting in this research how much trust in autonomy and in inclusivity emerged as critical components for the Born Digitals. Interestingly, that seems to extend into other generations as well. It became the framework for us and our approach to hybrid work — a philosophy — and a way to build the infrastructure for that. We wanted to record and cultivate trust in our own culture.

Work together, even when apart

Minahan: Visionary leaders are using this moment in time to rethink future of work models and turn their work environments to competitive advantage. A growing number of our customers are now trying to navigate through these situations in their post-pandemic work model planning.

One of the big topics is not just about where people work. I think some executives are operating on the false assumption that everyone wants to get back to the office full-time. Because the initial burst of productivity has declined, they're using the last 15 months as a proxy for what remote work is.

Let’s be clear. The last year and a half has not been remote work, it’s been remote isolation. There needs to be a deeper level of understanding, as Amy said, as you move into your planning of what truly motivates people. You need to truly understand what’s going to attract the right talent and importantly what’s going to engage them and allow them to be successful in driving the business outcomes that you’re hoping for them to achieve.

Gardner: I find it not just a little ironic that we’re going to be seeking to better listen and better communicate when we’re not together in an office. There may be an inability to see the trees for the forest when you’re in the same office going through the same work patterns. Maybe breaking that pattern leads to even better communication. Amy?

Haworth: I think you are spot-on, Dana. One of the metaphors I’ve come to love is the idea of being at the ocean. If you’ve ever been anywhere where the tide comes in, at first you can’t see certain things. Then as the tide goes back out, there are tide pools full of life and vibrancy. They have been there all along, but you just couldn’t see them.

And that clearly mirrors what is happening in organizations. These opportunities around hybrid work give us another chance to break the script. It helps us discover pieces of our organizations that may not have been working that well to start with and were causing friction all along.

Distributed work is happening. We’re having to be more explicit about the conversations around communication, collaboration, the expectations of each other, and what it means to help each other. Raising up those things anew is so important no matter the setting, no matter the workplace.

We’re now in a unique environment where we have this window of time to get very specific and not take it for granted – but to rebuild with intention. I truly hope that organizations are smart and do that with a concerted effort, with concerted energy, and then reap the rewards.

Distributed and dynamic workplaces

Minahan: Amy hits on two great points. One is there's a real risk, as we move to hybrid work, that we create a culture of unintentional biases favoring office-first-focused folks, with meeting formats or collaboration styles that preclude, or don't include, folks working remotely.

It isn’t just about having the right technology in place, it’s also about having the right policies in place. The cultural aspects and expectations need to create a workplace that has inclusivity and equality — no matter where work is done. The reality is we are going to continue to work in a very distributed mode, where certain team members won’t all be in the same room.

You must harness technology, institute policies, and set the expectation that remote workers are still active participants in the process and that information flows freely. That means investing in collaborative work management solutions that create a secure digital collaboration environment. These solutions align people around shared goals and objectives and key results (OKRs), with visibility into status and how projects are progressing, whether you're in the office or somewhere else.

By understanding the dependencies between the dispersed teams and other actions that need to be done, you create the business outcomes you want. These are the types of tools and policies that support the hybrid work environments that people are so desperately trying to create right now.

Gardner: The last year and a half has given us an opportunity to change the playbook. What we’re hearing from the younger generations is they’re not opposed to that. As we seek to best change the playbook, what has the Citrix research told you? 

Born free to choose how to work

Minahan: We engaged with two external research partners on this, Coleman Parkes Research and Oxford Analytica. They surveyed and did qualitative interviews with more than 1,000 business leaders and more than 2,000 knowledge workers across 10 countries. To prepare for the future, it was very clear that leaders need to get a grip on the expectations and motivations of this Born Digital generation and adapt their work models, workplaces, and work practices to better cultivate them.

There were three primary findings. You should focus on where this generation wants to work. Prepare them for success in distributed work environments. Companies need to give employees freedom to choose where they work best. 

To Amy’s point, it’s about fit and function. Sometimes it is important to come together in offices for collaboration and social and cultural connections. For other forms of work, it is optimal for individuals to have the space they need to think, be creative, and succeed. The Born Digital cohort wants and needs that flexibility — to have both work environments purpose-fit for the work they need to get done.

Secondly, beyond where they work, the five-day work week, a vestige of the industrial revolution, is probably not appropriate. The same goes for the 9 am to 5 pm workday. We're finding that a lot of folks need to take a break mid-day to recharge. So instead of thinking about one big block of time, think about sub-blocks that allow workers to optimize their work-life balance and to recharge. That drives the best energy to do your best work. And this is a very clear finding from the study on how the Born Digital want to work.

The last part is about how they work. They want autonomy and the opportunity to work in a high-trust environment. They want to have the right tools to have transparency, collaborate, and drive connectivity with their co-workers and peers — even if they’re not physically in the room together. They want compensation that recognizes and rewards performance, as well as strong and visible leadership.
And so those are some of the key attributes that are important as companies design their new work models.

Gardner: Amy, we’re now talking about things like trust and motivation. It seems to me that those are universally important, whether you’re born with digital technology or not.

Why does the digital technology generation have a stronger concept around trust and motivation? Is there a connection between being Born Digital and those intrinsic-but-profound values?

Haworth: Think about how these Born Digital knowledge workers have come into the workforce. Most have had some level of college education. They were used to being very autonomous university students as they figured out their activity-based work habits. How do they get the most done? Where does work happen best — in the library, or in their dorm rooms, or apartments? 

The transition into an office is simply another step in developing a capability that they’ve been building for years. And so, if organizations are not leading with trust, transparency, autonomy, and allowing the digital tools they’ve come to expect and leverage in their educational path, that feels like there’s a massive disconnect. They’re not only undoing some of the amazing self-leadership that these Born Digitals have grown within themselves, but organizations are also depriving themselves of rethinking the ideas that the Born Digital generation is coming up with.

They are more accustomed than some of their predecessor generations to having seniority when it comes to using digital tools. And as we take an opportunity to flip our mindset, most of the time business leaders with more seniority are thinking, “Well, we have to groom this next generation of leaders.”

We may want to flip that mindset. Instead, think about how this new generation of leaders can groom the current leadership through things like reverse-mentorships or by sharing their voices. A manager with a team that includes Born Digitals can ask for their input and give permission for them to help shape the future of work together.

The organizations that do so are going to be much better positioned to reap the economic benefits of this talent, as Tim highlighted at the beginning. It's latent talent until we unlock it. It will take a conscious decision by leadership to think about how we can best learn from this generation. They have a whole lot of things to teach us about what they envision as the future of work.

Increase your app-titude

Minahan: Amy brings up a good point that showed up in the research. That is the dissonance between what older workers and leaders perceive as their experience and that of the Born Digital generation. That gap extends both to the tools they use to do their work and to how they communicate.

On the technology side, for example, young workers and leaders inhabit very different digital worlds. The research found that only 21 percent of business leaders use instant messaging apps such as Slack or WhatsApp for work, as compared with 81 percent of Born Digital employees.

If you want to build trust and communication, it’s very hard if you are not hanging out in the same places. Similarly, only 26 percent of business leaders like using these apps for work compared to 82 percent of the Born Digitals. Clearly, there are very different work habits and work tools that the Born Digitals prefer. As leaders look to cultivate, engage with, and recruit these Born Digital workers, they are going to need to understand what tools to use to communicate to foster the next generation of leaders.

Haworth: That statistic also caught my eye; that 26 percent of business leaders like using these apps for work compared with 82 percent of Born Digital workers. Every organization that I have spoken with in my career, honestly, but especially in the last 36 months, has talked about how hard it is to get messages out into the organization. And when you step back and say, “Well, how are you trying to communicate that message?” Oftentimes what I hear is a company intranet or email.

If we take something as incredibly important as communication and think about what could be applied from this data to specific segments — to communication, to leadership, to recruiting — this becomes a really salient point and very relevant for the planning and strategy of how to best reach these workers. 

In the employee experience space, one of the key ideas is not everybody is the same. Employee experience is built around personalization. Much of this research data is rich with aligning a strategy to personalize the experience for the Born Digitals for both their own benefit as well as the benefit of the organization. If people only take one thing from this report, to me that could be it right there.

Minahan: Yes, we could fill up a whole list of Slack conversations with that topic, absolutely!

Gardner: It strikes me that there is a propensity for these younger workers to naturally innovate. If you give them a task, they are ready and willing to figure out how to do it on their own. Older workers wait around to be told how to do things.

I wonder if this innovation propensity in the younger workers is an untapped productivity boon, and whether allowing people to do things their own way, as long as the job gets done, is a huge benefit to all.

Innovation generation integrates AI

Minahan: I think you are onto something there. With the do-it-yourself or YouTube generation, you see it in your own children, they teach themselves or find ways to figure things out — whether it’s a math problem or a hobby. 

Amy mentioned earlier the importance of reverse-mentoring, and that’s no joke. We first talked about it as teaching the older generation how to use technology. But there is a best-practice-sharing benefit as applies to solving problems, of how to constantly adapt, and continue to learn. That reverse mentoring, whether it’s formal or informal, has a real big opportunity to lift all boats.

Gardner: As these folks innovate, we also now have the means to digitally track what they are doing. We can learn, on a process basis through the data, what works better, which allows us to improve our processes constantly and iteratively. Before, we were all told how to do things. We did it, and then we redid it, and not much changed.

Is there an opportunity here to create a new business style combining the data-driven capability to measure what people are doing as well as having them continue to do it in an experimental fashion?

Haworth: Yes, there is now an amazing opportunity to think about how machine learning (ML) and artificial intelligence (AI) can become a guide. As the data fuels insights, those insights can help make workers more effective and potentially far more productive.

When I think of reverse-mentoring, I not only would love to have a Born Digital mentor me on technology, but I also wouldn’t mind having an AI coach tap into places where I’m missing things. They could intervene and help me find a better way, to guide my work, or to think about who else might be interested in this topic. That could fuel an interesting discussion and help me make connections within my organization.

The Born Digital generation also specified in the Citrix report how distinct their experiences are when it comes to building new connections within organizations. Technology can play a role in that, not only by removing friction to give us time to connect with other human beings, but to also guide us to where those connections might be productive ones. And by productive, I don’t necessarily mean only output, but where it leads to idea generation, further innovation, scaling, and to creating coalitions and influence that lead to desirable outcomes.

Minahan: The world is moving so quickly today. Technology is advancing at such a rapid pace that it's changing how we engage and do business. The growth-hack skill for any individual career right now is the ability to continuously learn and quickly adapt. That's going to be critical.

We think the Born Digital generation has a lot to offer on that front, and they can teach the entire culture to support that. As Amy said, then augmenting that culture with AI or ML and other tools so that it becomes an institutional upgrade in skills, knowledge, and best-practice-sharing — so that everyone is absolutely performing at their best and everyone can begin to see around corners and adapt much quicker — that’s what’s going to create the high-performing, curious, and growth-oriented organizations of the future.

Gardner: How do we now take this research into action? How do we move from the observation that there is an awareness and perceptions gap — and maybe $1.9 trillion at stake — and go about self-evaluating and changing?

Listen fully to learn and lead

Haworth: Number one for me is to listen. And listening is hard for some. It requires time, but I will advocate that it doesn’t take a lot of time.

I have a little game to offer everyone. It’s called 5 for 5, which means talk to five people with five questions, and ask those five questions to all of them. Don’t defend. Don’t explain. Just get genuinely curious — and start with your Born Digitals. Most organizations have an easy way for leaders to find them. They might be on your team. They might be your kids, your nieces, your nephews, or a neighbor down the street. But spend a little bit of time just listening.

And from just those five conversations, you are likely to find some themes. Then put it on your calendar to do that at least once a quarter. These are the most interesting opportunities leaders have to inform strategy, to think about what's next, and to learn something about a person that they may never have known before.

We recently went through a cycle of this internally at Citrix as part of our hybrid philosophy building and to help develop the capabilities and tools we need in the organization for teams to be effective. I happened to be assigned to interview our Born Digital segment. Most of them were fairly new in their careers, and some had started during COVID.

My favorite question was, “If you were a manager right now, what would you be focused on?” Across the board, each of these interviewees, employees at our organization, said, “I would be very clear on what’s expected as far as working hours and when it’s okay to log off.”

That insight alone was validated in the research. Not only is this generation looking for job stability and security, but they are also very likely to not be the ones to ask for permission. They are looking around to figure out what’s okay and not okay.

We need to be clear about helping them define boundaries and to model those boundaries because Born Digital doesn’t mean born burnt out. We want to be sure that we keep the engagement, curiosity, innovation, creativity, and energy that the Born Digital population brings into organizations. We need to help them be successful by developing a sustainable pattern for work.  

Gardner: Tim, how do you see us closing the gap in the near term? 

Keep it simple to reduce daily din

Minahan: The convergence of the digital workspace demands tools that facilitate open and equitable collaboration and transparency across teams, whether they are in the office or working remotely. That includes driving continuous learning and best-practice-sharing and achieving better business outcomes together. The physical workplace needs to be fitted for purpose for when it is important to come together and we do benefit from that, whether it's for collaborative projects or for the social aspects, such as creating that water-cooler dynamic.

As Amy just mentioned, and I think this is so critically important, ultimate success is going to come down to how you transition your culture. How do you make it okay for people to turn off in this always-connected world? How do you set norms for creating an equitable, inclusive workplace for those who work in the office and those who work remotely?

Amy has put in place here at Citrix a very good framework. Similarly, we are advising our customers to think in terms of a Venn diagram: creating the right digital workplace, coupling it with the right purpose-built workspace, and enabling it all with common policies and a culture that foster equality, inclusiveness, and a focus on business outcomes.

Gardner: Is there something about the way technology itself has been delivered into the marketplace by vendors, including Citrix, that also needs to change? When we talk about culture, behavior, and motivations, that’s not the way that technology has been shaped and delivered. Is there a lesson from this research?

Haworth: Great employee experiences are shaped by empowering employees at a very personal level. The noise and friction of having to log in to multiple tools and to context switch create a draining effect on a human. Technology is now positioned to remove that friction by doing what technology does best, which is to automate, guide, and organize based on personal preferences.

New innovations from platforms such as Citrix help unite work all in one place to simplify tasks for the employee. It means there is more that the employee doesn’t have to think about. It’s seamless. That quality of interaction is a key lever in creating positive employee experiences, which lead to engagement and commitment to an organization in a world that is fraught right now with finding talent, with fighting attrition, and cultivating the right talent to innovate into the future. All of these elements really matter, and technology has a big role to play.  

Gardner: It sounds like automation is another word we should be using. We talked about using ML and AI to help, but automating more, even though that sounds at odds with allowing people to be flexible, is important.

Minahan: Amy hit the nail on the head. It is about automating and guiding employees, but it’s also removing the noise from their day. The dirty little secret in business is each of these individual tools that we have introduced into our workday on their own added productivity, helping us do our jobs, but collectively they have created such a cacophony of noise and distraction in our day, it’s actually frustrating employees. 

If you think back to pre-pandemic, a Gallup study showed that employees were more disengaged than at any other time in history. Some 86 percent of employees felt they were disengaged at work because they were frustrated with the complexity of the work environment: all the tools, apps, and chat channels that were interrupting them from doing their jobs. And that's only been exacerbated throughout the pandemic as people don't even have a clearly defined beginning and end to their days. And so it continues.

One of the things we need to be thinking about as technologists, as we introduce technology or we build solutions, is how do you mute this noise? How do you automate some of the mundane tasks so that employees don’t need to switch context every two seconds? How do you create a unifying workspace that allows them to have access to all the tools, all the apps, all the content, all the business services they need to get their job done without needing to remember multiple passwords and go everywhere else?

And how do you begin to literally use things like AI and ML to guide them through their day, presenting them with the right information at the right time, not all the information, allowing them to execute tasks without needing to navigate multiple different environments? Then, how do you create a collaborative workspace that is equitable and provides transparency and a common place for folks to align around common goals, execute against projects, understand the status, no matter whether they are working in an office in a conference room together or are distributed to all corners of the globe? 

Gardner: For those older leaders or younger entrants into the workspace who want to learn more about this research, how can they? And what comes next for Citrix research? 

Design the future of work 

Minahan: This research effort, as well as future research efforts, is part of an initiative we began well over a year ago together with academia, research organizations, and governments called the Work 2035 Project, to try to understand the skills, organizational structures, and role technology plays in shaping the future of work. The only difference is the future of work is arriving a heck of a lot faster than any of us ever expected.

Next, in October we are hosting a thought leadership event based in part on the latest research: a virtual summit we are calling Fieldwork, where we will bring together industry thought leaders for an open dialogue on how the future of work is evolving. We will be providing more information on that as we get closer.

Gardner: Amy, for organizations interested in setting up an employee experience function, what advice do you have around governance, leadership, and management?

Haworth: First, I say congratulations to those organizations for taking the time to invest in understanding what employee experience means in the context of their particular desired business outcomes and their particular culture.

Citrix published some very helpful research this year around the employee experience operating model. It can be found in the Fieldwork section. I personally have leveraged it in setting up some of the key pillars of our own philosophy and approach to employee experience. It is deep, and it will also be a great springboard for establishing both a mindset and the practices and programs that lead to stronger employee experiences.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Citrix.

How financial firms are blazing a trail to more predictive and resilient operations come what may

The last few years have certainly highlighted the need for businesses of all kinds to build up their operational resilience. With a rising tide of pandemic waves, high-level cybersecurity incidents, frequent technology failures, and a host of natural disasters — there’s been plenty to protect against.

As businesses become more digital and dependent upon end-to-end ecosystems of connected services, the responsibility for protecting critical business processes has clearly shifted. It’s no longer just a task for IT and security managers but has become top-of-mind for line-of-business owners, too.

Stay with us now as BriefingsDirect explores new ways that those responsible for business processes specifically in the financial sector are successfully leading the path to avoiding and mitigating the impact and damage from these myriad threats.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about the latest in rapidly beefing-up operational resilience by bellwether finance companies, BriefingsDirect welcomes Steve Yon, Executive Director of the EY ServiceNow Practice, and Sean Culbert, Financial Services Principal at EY. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Sean, how have the risks modern digital businesses face changed over the past decade? Why are financial firms at the vanguard of identifying and heading off these pervasive risks?

Culbert: Financial firms span a broad scope of types. The risks for a consumer bank, for example, are going to be different than the risks for an investment bank or a broker-dealer. But they all have some common threads. Those include the expectation to be always-on, at the edge, and able to get to your data in a reliable and secure way.

There’s also the need for integration across the ecosystem. Unlike with earlier product sets, such as in retail brokerage or insurance, customers now expect to be brought together in one cohesive services view. That includes more integration points and more application types.

This all needs to be on the edge and always-on, even as it includes, increasingly, reliance on third-party providers. They need to walk in step with the financial institutions in a way that they can ensure reliability. In certain cases, there’s a learning curve involved, and we’re still coming up that curve.

It remains a shifting set of expectations at the edge. It’s different by category, but the themes of integrated product lines, and being able to move across those product lines and integrate with third parties, have certainly created complexity.

Gardner: Steve, when you’re a bank or a financial institution that finds itself in the headlines for bad things, that is immediately damaging for your reputation and your brands. How are banks and other financial organizations trying to be rapid in their response in order to keep out of the headlines?

Interconnected, system-wide security

Yon: It’s not just about having the wrong headline on the front cover of American Banker. As Sean said, the taxonomy of all these services is becoming interrelated. The suppliers tend to leverage the same services.

Products and services tend to cross different firms. The complexity of the financial institution space right now is high. If something starts to falter — because everything is interconnected — it could have a systemic effect, which is what we saw several years ago that brought about Dodd-Frank regulations.

So having a good understanding of how to measure and get telemetry on that complex makeup is important, especially in financial institutions. It’s about trust. You need to have confidence in where your money is and how things are going. There’s a certain expectation that must happen. You must deal with that despite mounting complexity. The notion of resiliency is critical to a brand promise — or customers are going to leave.

One, you should contain your own issues. But the Fed is going to worry about it if it becomes broad because of the nature of how these firms are tied together. It’s increasingly important — not only from a brand perspective of maintaining trust and confidence with your clients — but also from a systemic nature; of what it could do to the economy if you don’t have good reads on what’s going on with support of your critical business services.

Gardner: Sean, the words operational resilience come with a regulatory overtone. But how do you define it?

The operational resilience pyramid

Culbert: We begin with the notion of a service. Resilience is measured, monitored, and managed around the availability, scalability, reliability, and security of that service. Understanding what the service is from an end-to-end perspective, how it enters and exits the institution, is the center of our universe.

Around that we have inbound threats to operational resilience. From the threat side, you want the capability to withstand a robust set of inbound threats. And for us, one of the important things that has changed in the last 10 years is the sophistication and complexity of the threats. And the prevalence of them, quite frankly.

If you look at the four major threat categories we work with — weather, cyber, geopolitical, and pandemics — pick any one of those and there has been a significant change in those categories. We have COVID, we have proliferation of very sophisticated cyber attacks that weren’t around 10 years ago, often due to leaks from government institutions. Geopolitically, we’re all aware of tensions, and weather events have become more prevalent. It’s a wide scope of inbound threats.

And on the outbound side, businesses need the capability not only to report on those things but to make decisions about how to prevent them. There's a hierarchy in operational resilience. Can you remediate it? Can you fix it? Once it's been detected, how can you minimize the damage? And, at the top of the pyramid, can you prevent it before it hits?

So, there’s been a broad scope of threats against a broader scope of service assets that need to be managed with remediation. That was the heritage, but now it’s more about detection and prevention.

Gardner: And to be proactive and preventative, operational resilience must be inclusive across the organization. It’s not just one group of people in a back office somewhere. The responsibility has shifted to more people — and with a different level of ownership.

What’s changed over the past decade in terms of who’s responsible and how you foster a culture of operational resiliency?

Bearing responsibility for services

Culbert: The anchor point is the service. And services are processes: It’s technology, facilities, third parties, and people. The hard-working people in each one of those silos all have their own view of the world — but the services are owned by the business. What we’ve seen in recognition of that is that the responsibility for sustaining those services falls with the first line of business [the line of business interacting with consumers and vendors at the transaction level].

Yon: There are a couple of ways to look at it. One, as Sean was talking about, the lines of defense have evolved and risk responsibilities have been divvied up. Each line has line-of-sight ownership over certain sets of accountabilities, but you also have triangulation from others needing to inspect and audit those things as well.

The time is right for the new type of solution that we’re talking about now. One, because the nature of the world has gotten more complex. Two, the technology has caught up with those requirements.

The move within the tech stack has been to become more utility-based, service-oriented, and objectified. The capability to get signals on how everything is operating, and its status within that universe of tech, has become a lot easier. And with the technology now being able to integrate across platforms and operate at the service level — versus at the component level — it provides a view that would have been very hard to synthesize just a few years ago.

What we’re seeing is a big shot in the arm to the power of what a typical risk resilience compliance team can be exposed to. They can manage their responsibilities at a much greater level.

Before they would have had to develop business continuity strategies and plans to know what to do in the event of a fault or a disruption. And when those things come out, the three-ring binders, the war room gets assembled and people start to figure out what to do. They start running the playbook.

The problem with that is that while they’re running the playbook, the fault has occurred, the destruction has happened, and the clock is ticking for all those impacts. The second-order consequences of the problem are starting to amass with respect to value destruction, brand reputational destruction, as well as whatever customer impacts there might be.

But now, because of technology and a move toward Internet of things (IoT) thinking across assets, people, facilities, and third-party services, those components can self-declare their state. That data can be synthesized to say, "Okay, I can start to pick up a signal that's telling me a fault is inbound." Or something looks like it's falling outside of the control thresholds that have been set.

That tech now gives me the capability to get out in front of something. That would be almost unheard-of years ago. The nexus of tech, need, and complexity are all hitting right now. That means we’re moving and pivoting to a new type of solution rising out of the field.

Gardner: You know, many times we’ve seen such trends happen first in finance and then percolate out to the rest of the economy. What’s happened recently with banking supervision, regulations, and principles of operational resilience?

Financial sector leads the way

Yon: There are similar forms of pressure coming from all regulatory-intense industries. Finance is a key one, but there’s also power, utilities, oil, and gas. The trend is happening primarily first in regulatory-intensive industries.

Culbert: A couple of years ago, the Bank of England and the Prudential Regulation Authority (PRA) put out a consultation paper that was probably the most prescriptive out of the UK. We have the equivalent over here in the US around expectations for operational resiliency. And that has just made its way into policy or law. For the most part, on a principles basis, we all share a common philosophy in terms of what's prudent.

A lot of the major institutions, the ones we deal with, have looked at those major tenets in these policies and have said they will be practiced. And there are four fundamental areas that the institutions must focus on.

One is, can it declare and describe its critical business services? Does it have threshold parameter logic assigned to those services so that it knows how far it can go before it sustains damage across several different categories? Are the assets that support those services known and mapped? Are they in a place where we can point to them and to their health? If there's an incident, can people collaborate around sustaining those assets?

As I said earlier, those assets generally fall into a few categories: people, facilities, third parties, and technology. And, finally, do you have the tools in place to keep those services within those tolerance parameters, and alerting systems to let you know which of the assets may be failing you and whether the services are at risk?
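The four focus areas Culbert lists (declared services, tolerance thresholds, mapped assets, and tooling to check them) can be sketched as a simple data model. This is an illustrative Python sketch; the class and field names are invented for the example, not EY's or ServiceNow's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    name: str
    category: str          # "people", "facilities", "third parties", "technology"
    healthy: bool = True

@dataclass
class CriticalService:
    name: str
    impact_tolerance_minutes: int      # how long a disruption can last before damage
    assets: list = field(default_factory=list)

    def unhealthy_assets(self):
        """Point to the mapped assets whose failure puts the service at risk."""
        return [a for a in self.assets if not a.healthy]

    def at_risk(self):
        return bool(self.unhealthy_assets())

# A declared critical service with its supporting assets mapped
payments = CriticalService("payments", impact_tolerance_minutes=30, assets=[
    Asset("core banking platform", "technology"),
    Asset("card network provider", "third parties", healthy=False),
    Asset("operations team", "people"),
])

print(payments.at_risk())                             # True
print([a.name for a in payments.unhealthy_assets()])  # ['card network provider']
```

Even this toy model captures the regulatory questions: which services are declared, what their tolerances are, and which mapped assets are currently failing them.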

That’s a lay-person, high-level description of the Bank of England policy on operational risks for today’s Financial Management Information Systems (FMIS). Thematically most of the institutions are focusing on those four areas, along with having credible and actionable testing schemes to simulate disruptions on the inbound side.

In the US, Dodd-Frank mandated that institutions declare which of those services could disrupt critical operations and, if those operations were disrupted, whether they could in turn disrupt the general economy. The operational resilience rules and regulations fall back on that. So, now that you know what they are, can you risk-rate them based on the priorities of the bank and its counterparties? Can you manage them correctly? That's the letter-of-the-law-type regulation here. In Japan, it's more principles-based regulation, like the Bank of England's. It all falls into those common categories.

Gardner: Now that we understand the stakes and imperatives, we also know that the speed of business has only increased. So has the speed of expectations for end consumers. The need to cut time to discovery of the problems and to find root causes also must be as fast as possible.

How should banks and other financial institutions get out in front of this? How do we help organizations move faster to their adoption, transform digitally, and be more resilient to head off problems fast?

Preventative focus increases

Yon: Once there’s clarity around the shift in the goals, knowing it’s not good enough to just be able to know what to do in the event of a fault or a potential disruption, the expectation becomes the proof to regulatory bodies and to your clients that they should trust you. You must prove that you can withstand and absorb that potential disruption without impact to anybody else downstream. Once people get their head around the nature of the expectation-shifting to being a lot more preventative versus reactive, the speeds and feeds by which they’re managing those things become a lot easier to deal with.

Back when I was running the technology at a super-regional bank, you’d get the phone call at 3 a.m. that a critical business service was down. You’d have the tech phone call that people are trying to figure out what happened because they started to notice at the help desk that a number of clients and customers were complaining. The clock had been ticking before 3 a.m. when I got the call. And so, by now, by that time, those clients are upset.

Yet we were spending our time trying to figure out what happened and where. What’s the overall impact? Are there other second-order impacts because of the nature of the issue? Are other services disrupted as well? Again, it gets back to the complexity factor. There are interrelationships between the various components that make up any service. Those services are shared because that’s how it is. People lean on those things — and that’s the risk you take.

Before, the lack of speed literally killed because you had to figure a lot of those things out while the clock was ticking and the impact was going on. But now, you’re allowing yourself time to figure things out. That’s what we call a decision-support system. You want to alert ahead of time to ensure that you understand the true blast area of what the potential destruction is going to be.

Secondly, can I spin up the right level of communications so that everybody who could be affected knows about it? And thirdly, can I now get the right people on the call — versus hunting and pecking to determine who has a problem on the fly at 3 a.m.?

Speed matters because it buys firms time to deal with an issue intelligently, versus taking a shotgun approach without truly understanding the nature of the impact until the next day.

Gardner: Sean, it sounds like operational resiliency is something that never stops. It's an ongoing process. That's what buys you the time, because you're always trying to anticipate. Is that the right way to look at it?

Culbert: It absolutely is the way to look at it. A time objective may be specific to the type of service, and obviously it's going to be different from a consumer bank to a broker-dealer. You will have a time objective attached to a service, but is that a critical service that, if disrupted, could further disrupt critical operations that could then disrupt the real economy? That's come into focus in the last 10 years. It has forced people to think through: If you were a broker-dealer and you couldn't meet your hedge fund positions, or if you were a consumer bank and you couldn't get folks their paychecks, does that put people in financial peril?

These involve very different processes and have very different outcomes. But each has a tolerance of a certain amount of time. So now it's more a matter of being accountable for those times. There are two things: There's the customer expectation that you won't reach those tolerances and that you'll meet the time objective to meet the customers' needs.

And the second is that technology has made the domino, or contagion, effect of one service tipping over another more manageable. So now it's not just, "Is your service ready to go within its objective of half an hour?" It's about the knock-on effect to other services as well.

So, it’s become a lot more correlated, and it’s become regional. Something that might be a critical service in one business, might not be in another — or in one region, might not be in another. So, it’s become more of a multidimensional management problem in terms of categorically specific time objectives against specific geographies, and against the specific regulations that overhang the whole thing.
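The knock-on, or contagion, effect described above amounts to walking a dependency graph: given which services consume which others, a fault in one service can be traced to everything downstream it could disrupt. The sketch below is illustrative; the service names and dependency map are invented.

```python
from collections import defaultdict, deque

# downstream[x] = services that consume x, i.e. are disrupted if x fails
downstream = defaultdict(list)
edges = [
    ("market data feed", "trading"),
    ("trading", "settlement"),
    ("settlement", "regulatory reporting"),
    ("core banking", "payments"),
]
for provider, consumer in edges:
    downstream[provider].append(consumer)

def knock_on(failed_service):
    """Breadth-first walk of everything a failure could cascade into."""
    seen, queue = set(), deque([failed_service])
    while queue:
        svc = queue.popleft()
        for consumer in downstream[svc]:
            if consumer not in seen:
                seen.add(consumer)
                queue.append(consumer)
    return sorted(seen)

print(knock_on("market data feed"))  # ['regulatory reporting', 'settlement', 'trading']
```

The same walk can be run per region or per business line, which is where the multidimensional management problem Culbert mentions comes from.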

Gardner: Steve, you mentioned earlier about taking the call at 3 a.m. It seems to me that we have a different way of looking at this now — not just taking the call but making the call. What’s the difference between taking the call and making the call? How does that help us prepare for better operation resiliency?

Make the call, don’t take the call

Yon: It's a fun way of looking at a day in the life of your chief resiliency officer or chief risk officer (CRO) and how it could go when something bad happens. You could take the call from the CEO or someone from the board as they wonder why something is failing. What are you going to do about it?

You’re caught on your heels trying to figure out what was going on, versus making the call to the CEO or the board member to let them know, “Hey, these were the potential disruptions that the firm was facing today. And this is how we weathered through it without incident and without damaging service operations or suffering service operations that would have been unacceptable.”

We like to think of it as not only trying to prevent the impact to the clients but also the possibility of a systemic problem. It could potentially increase the lifespan of a CRO by showing they can be responsible for the firm's up-time, versus just answering questions post-disruption. It provides a little bit of levity, but it's also true that there are consequences not just for the clients, but also for the people responsible for that function within the firm.

Gardner: Many leading-edge organizations have been doing digital transformation for some time. We’re certainly in the thick of digital transformation now after the COVID requirements of doing everything digitally rather than in person.

But when it comes to finance and the services that we’re describing — the interconnections in the interdependencies — there are cyber resiliency requirements that cut across organizational boundaries. Having a moat around your organization, for example, is no longer enough.

What is it about the way that ServiceNow and EY are coming together that helps make operational resiliency an ongoing process possible?

Digital transformation opens access

Yon: There are two components. You need to ask yourself, “What needs to be true for the outcome that we’re talking about to be valid?” From a supply-side, what needs to be true is, “Do I have good signal and telemetry across all the components and assets of resources that would pose a threat or a cause for a threat to happen from a down service?”

With the move to digital transformation, more of the assets and resources that compose any organization are now able to be accessed. That means the state of any particular asset, in terms of its preferred operating condition, is going to be known. I need to have that data, and that's what digital transformation provides.

Secondly, I need a platform that has wide integration capabilities and that has workflow at its core. Can I perform business logic and conditional synthesis to interpret the signals that are coming from all these different systems?

That’s what’s great about ServiceNow — there hasn’t been anything that it hasn’t been able to integrate with. Then it comes down to, “Okay, do I understand the nature of what it is I’m truly looking for as a business service and how it’s constructed?” Once I do that, I’m able to capture that control, if you will, determine its threshold, see that there’s a trigger, and then drive the workflows to get something done.

For a hypothetical example, say we've had an event and we're losing the trading floor in city A. I therefore know that I need to bring city B and its employees online and make them active so I can get trading up and running again. ServiceNow can drive that all automatically, within the Now Platform itself, or drive a human to provide the approvals or notifications that drive the workflows as part of your business continuity plan (BCP) going forward. You will know what to do by being able to detect and interpret the signals and then, based on that, act on them.

That’s what ServiceNow brings to make the solution complete. I need to know what that service construction is and what it means within the firm itself. And that’s where EY comes to the table, and I’ll ask Sean to talk about that.

Culbert: ServiceNow brings to the table what we need to scale and integrate in a logical and straightforward way. Without having workflows that are cross-silo and cross-product at scale — and with solid integration of capabilities — this just won’t happen.

When we start talking about the signals from everywhere against all the services — it’s a sprawl. From an implementation perspective, it feels like it’s not implementable.

The regulatory burden requires focus on what’s most important, and why it’s most important to the market, the balance sheet, and the customers. And that’s not for the 300 services, but for the one or two dozen services that are important. Knowing that gives us a big step forward by being able to scope out the ServiceNow implementation.

And from there, we can determine which dimensions associated with that service we should be capturing on a real-time basis. To progress from remediation to detection and on to prevention, we must be judicious about which signals we're tracking. We must be correct.

We have the requirement and obligation to declare and describe what is critical using a scalable and integrable technology, which is ServiceNow. That’s the big step forward.

Yon: The Now platform also helps us to be fast. If you look under the hood of most firms, you’ll find ServiceNow is already there. You’ll see that there’s already been work done in the risk management area. They already know the concepts and what it means to deal with policies and controls, as well as the triggers and simulations. They have IT and other assets under management, and they know what a configuration management database (CMDB) is.

These are all accelerants that not only provide scale to get something done but provide speed because so many of these assets and service components are already identified. Then it’s just a matter of associating them correctly and calibrating it to what’s really important so you don’t end up with a science fair integration project.

Gardner: What I’m still struggling to thread together is how the EY ServiceNow alliance operational resiliency solution becomes proactive as an early warning system. Explain to me how you’re able to implement this solution in such a way that you’re going to get those signals before the crisis reaches a crescendo.

Tracking and recognizing faults

Yon: Let's first talk about EY, which brings an understanding from the industry of what good looks like with respect to what a critical business service needs to be. We're able to hone in on a specific service, such as payments or trading. We also bring a map of the deconstruction of that service as an accelerant.

We know what it looks like — all the different resources, assets, and procedures that make that critical service active. Then, within ServiceNow, it manages and exposes those assets. We can associate those things in the tool relatively quickly. We can identify the signal that we’re looking to calibrate on.

Then, based on what ServiceNow knows how to do, I can put a control parameter on this service or component within the threshold. It then gives me an indication whether something might be approaching a fault condition. We basically look at all the different governance, risk management, and compliance (GRC) leading indicators and put telemetry around those things when, for example, it looks like my trading volume is starting to drop off.

Long before it drops to zero, is there something going on elsewhere? It delivers up all the signals about the possible dimensions that can indicate something is not operating per its normal expected behavior. That data is then captured, synthesized, and displayed either within ServiceNow or it is automated to start running its own tests to determine what’s valid.

But at the very least, the people responsible are alerted that something looks amiss. It’s not operating within the control thresholds already set up within ServiceNow against those assets. This gives people time to then say, “Okay, am I looking at a potential problem here? Or am I just looking at a blip and it’s nothing to worry about?”
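The control-threshold idea, alerting when a metric such as trading volume falls outside its normal operating range long before it drops to zero, can be sketched in a few lines. This is a simplified illustration; a real implementation would derive thresholds from richer baselines than a short average.

```python
def baseline(history):
    """Average of recent observations, standing in for a proper baseline."""
    return sum(history) / len(history)

def check_control(history, current, floor_ratio=0.6):
    """Flag a potential inbound fault when the metric falls below a
    fraction of its recent baseline -- not only when it hits zero."""
    threshold = baseline(history) * floor_ratio
    if current < threshold:
        return f"ALERT: {current} below control threshold {threshold:.0f}"
    return "OK"

recent_volumes = [1000, 1040, 980, 1010]   # normal operating range (invented data)
print(check_control(recent_volumes, 990))  # OK
print(check_control(recent_volumes, 450))  # ALERT, well before volume reaches zero
```

The point of the early alert is exactly what Yon describes: the people responsible get time to decide whether they are looking at a potential problem or just a blip.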

Gardner: It sounds like there’s an ongoing learning process and a data-gathering process. Are we building a constant mode of learning and automation of workflows? Do we do get a whole greater than the sum of the parts after a while?

Culbert: The answer is yes and yes. There’s learning and there’s automation. We bring to the table some highly effective regulatory risk models. There’s a five-pillar model that we’ve used where market and regulatory intelligence feeds risk management, surveillance, analysis, and ultimately policy enforcement.

And those five pillars work together within ServiceNow — and within the business processes of the organization. That's where we get intelligence feeding risk management, surveillance, analysis, and enforcement. That workflow is the differentiator, allowing rapid understanding of whether it's an immediate risk or a concentrating risk.

And obviously, no one is going to be 100 percent perfect, but having context and perspective on the origin of the risk helps determine whether it’s a new risk — something that’s going to create a lot of volatility — or whether it’s something the institution has faced before.

We rationalize that risk — and, more importantly, rationalize the lack of a risk — to know at the onset if it's a false positive. It's an essential market and regulatory intelligence mechanism. Are they feeding us only the stuff that's really important?

Our risk models tell us that. The risk model usually takes on a couple of different flavors. One flavor is similar to a FICO score. Have you seen the risk? Have you seen it before? Can it be characterized by how it presented and how it was managed in the past?

And then some models are more akin to a value at risk (VaR) calculator. What kind of volatility is this risk going to bring to the bank? Is it somebody recreationally trying to get into the bank, or is it a state actor?

Once the false-positive gets escalated and disposed of — if it’s, in fact, a false positive — are we able to plug it into something robust enough to surveil for where that risk is headed? That’s the only way to get out in front of it.
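The two model flavors Culbert mentions, a FICO-like familiarity score and a VaR-like volatility estimate, might be combined roughly as follows. The scoring rules, weights, and categories are invented stand-ins, not the actual EY models.

```python
KNOWN_RISKS = {"phishing wave": 5, "dns outage": 3}   # times seen before (invented)

def familiarity_score(risk_name):
    """FICO-like flavor: higher means the risk has been seen often,
    so its handling history is well characterized."""
    return min(KNOWN_RISKS.get(risk_name, 0) * 20, 100)

def volatility_estimate(actor):
    """VaR-like flavor, reduced to crude severity bands: a state actor
    brings far more volatility than a recreational intruder."""
    return {"recreational": 1, "criminal": 5, "state": 10}.get(actor, 5)

def triage(risk_name, actor):
    fam, vol = familiarity_score(risk_name), volatility_estimate(actor)
    if fam == 0 and vol >= 10:
        return "escalate: novel, high-volatility risk"
    if fam >= 60:
        return "run the known playbook"
    return "surveil and re-score"

print(triage("phishing wave", "criminal"))   # run the known playbook
print(triage("zero-day exploit", "state"))   # escalate: novel, high-volatility risk
```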

The next phase of the analysis says, “Okay, who should we talk to about this? How do we communicate that this is bigger than a red box, much bigger than a red box, a real crisis-type risk? What form does that communication take? Is it a full-blown crisis management communication? Is it a standing management communication or protocol?”

And then ultimately, this goes to ServiceNow, so we take that affected function and very quickly understand the health or the resiliency of other impacted functions. We use our own proprietary model. It's a military model used for nuclear power plants, and it helps to shift from primary states to alternative states, as well as to contingency and emergency states.

At the end, the person who oversees policy enforcement must have the tools to understand whether they should be fixing the primary-state issue or moving on from it — whether to step aside or shift into an emergency state.

From our perspective, it is constant learning. But there are fundamental pillars that these events flow through that deliver the problem to the right person and give that person options for minimizing the risk.

Gardner: Steve, do we have any examples or use cases that illustrate how alerting the right people with the right skills at the right time is an essential part of resuming critical business services or heading off the damage?

Rule out retirement risks

Yon: Without naming names, we can look at a client within Europe, the Middle East and Africa (EMEA). One of the things the pandemic brought to light is the need to know your posture for continuing to operate the way you want. Getting back to integration and integrability, where are we going to get a lot of that information about personnel? From Workday, their human resources (HR) system of record, of course.

Now, they had a critical business service owner who was going to be retiring. That sounds great — wonderful to hear. But one of the things that must be true for this critical business service to be considered operating in its normal state is that it has an owner: someone who will cut through the issues and process and lead going forward.

If there isn't an owner identified for the service, the service would be considered at risk. It may not be capable of maintaining its continuity. So, here's a simple use case where someone could be looking at a trigger from Workday that asks whether this leadership person is still in the role and active.

Is there a control around identifying whether they are going to become inactive within x number of months? If so, get on it, because the regulators will look at these processes as potentially being out of control.

There’s a simple use case that has nothing to do with technology but shows the integrability of ServiceNow into another system of record. It turns ServiceNow into a decision-support platform that drives the right actions and orchestrates timely actions — not only to detect a disruption but anything else considered valid as a future risk. Such alerts give the time to get it taken care of before a fault happens.
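That Workday use case, a control that checks whether a critical service still has an active owner and flags a risk if the owner will become inactive within x months, could look something like this. The field names are hypothetical, not Workday's or ServiceNow's actual API.

```python
from datetime import date, timedelta

OWNER_RUNWAY_MONTHS = 6  # hypothetical control threshold

def owner_control(service, owner_record, today=None):
    """Flag the service as at risk if its owner is missing, inactive,
    or due to become inactive within the runway window."""
    today = today or date.today()
    horizon = today + timedelta(days=OWNER_RUNWAY_MONTHS * 30)
    end = owner_record.get("employment_end_date")
    if not owner_record.get("active"):
        return f"{service}: at risk - no active owner"
    if end and end <= horizon:
        return f"{service}: at risk - owner inactive by {end}"
    return f"{service}: in control"

# Owner is active today but retiring within the runway window
retiring_owner = {"active": True, "employment_end_date": date(2021, 9, 1)}
print(owner_control("payments", retiring_owner, today=date(2021, 6, 1)))
# payments: at risk - owner inactive by 2021-09-01
```

Wired into a workflow platform, a check like this fires long before the retirement date, which is exactly the get-ahead-of-the-fault behavior the use case describes.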

Gardner: The EY ServiceNow alliance operational resilience solution is under the covers but it’s powering leaders’ ability to be out in front of problems. How does the solution enable various levels of leadership personas, even though they might not even know it’s this solution they’re reacting to?

Leadership roles evolve

Culbert: That's a great question. For the last six to seven years, we've all heard about the shift of primary ownership from the second to the first line in the private sector. I've heard on many occasions a first-line business manager say, "If it is my job, first I need to know the scope of my responsibilities and to have the tools to do my job." That persona of the frontline manager needs good data: not false positives, not something eating at his or her ability to make money, but options for where to go to minimize the issue.

The personas are clearly evolving. It was difficult for risk managers to move solidly into the first line without these types of tools. And there were interim management levels, too: someone who sat between the first and the second line, call it line 1.5. It's clearly pushing into the first line. How do they know their own scope as it relates to the risk to the services?

Now there's a tool that these personas can use to not only be responsible for risk but responsive as well. And that's a big thing in terms of the solution design. With ServiceNow over the last several years, if the base data is correctly managed, then reconfiguring the data and recalibrating the threshold logic to accommodate a certain persona is not a coding exercise. It's a no-code step forward to say, "Okay, this is now the new role and scope, and that role and scope will be enabled in this way." And this power is going to direct the latest signals and options.

But it’s all about the definition of a service. Do we all agree end-to-end what it is, and the definition of the persona? Do we all understand who’s accountable and who’s responsible? Those two things are coming together with a new set of tools that are right and correct.

Yon: Just to go back to the call at 3 a.m., that was a tech call. But typically, what happens is there’s also going to be the business call. So, one of the issues we’re also solving with ServiceNow is in one system we manage the nature of information irrespective of what your persona is. You have a view of risk that can be tailored to what it is that you care about. And all the data is congruent back and forth.

It becomes a lot more efficient and accurate for firms to build a shared understanding of what's happening when it's not just the tech community talking. The business community wants to know what's happening — and what's next. And then someone can translate in between. This is a real-time way for all those personas to become aligned around the nature of the issue from their own perspectives.

Gardner: I really look forward to the next in our series of discussions around operational resilience because we’re going to learn more about the May announcement of this solution.

But as we close out today’s discussion, let’s look to the future. We mentioned earlier that almost any highly regulated industry will be facing similar requirements. Where does this go next?

It seems to me that the more things like machine learning (ML) and artificial intelligence (AI) analyze the many sources of data, they will make it even more powerful. What should we look for in terms of even more powerful implementations?

AI to add power to the equation

Culbert: When you set up the framework correctly, you can apply AI to the thinning out of false positives and to tagging certain events as credible risk events or not. AI can also be used to direct these signals to the right decision makers. But instead of taking the human analyst out of the equation, AI is going to help us. You can’t do it without that framework.

Yon: When you enable these different sets of data coming in for AI, you start to say, “Okay, what do I want the picture to look like, and what is my ability to simulate these things?” That capability all goes up, especially using ServiceNow.

But back to the comment on complexity and the fact that suppliers don’t just supply one client, they connect to many. As this starts to take hold in the regulated industries — and it becomes more of an expectation for a supplier to be able to operate this way and provide these signals, integration points, telemetry, and transparency that people expect — anybody else trying to lever into this is going to get the lift and the benefit from suppliers who realize that the nature of playing in this game just went up. Those benefits become available to a much broader landscape of industries and for those suppliers.

Gardner: When we put two and two together, we come up with a greater sum. We’re going to be able to deal rapidly with the known knowns, as well as be better prepared for the unknown unknowns. So that’s an important characteristic for a much brighter future — even if we hit another unfortunate series of risk-filled years such as we’ve just suffered.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: ServiceNow and EY.



How API security provides a killer use case for ML and AI

While the use of machine learning (ML) and artificial intelligence (AI) for IT security may not be new, the extent to which data-driven analytics can detect and thwart nefarious activities is still in its infancy.

As we’ve recently discussed here on BriefingsDirect, an expanding universe of interdependent application programming interfaces (APIs) forms a new and complex threat vector that strikes at the heart of digital business.

How will ML and AI form the next best security solution for APIs across their dynamic and often uncharted use in myriad apps and services? Stay with us now as we answer that question by exploring how advanced big data analytics forms a powerful and comprehensive means to track, understand, and model safe API use.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To learn how AI makes APIs secure and more resilient across their life cycles and ecosystems, BriefingsDirect welcomes Ravi Guntur, Head of Machine Learning and Artificial Intelligence at Traceable. The interview is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Why does API security provide such a perfect use case for the strengths of ML and AI? Why do these all come together so well? 

Guntur: When you look at the strengths of ML, the biggest strength is to process data at scale. And newer applications have taken a turn in the form of API-driven applications.

Large pieces of applications have been broken down into smaller pieces, and these smaller pieces are being exposed as even smaller applications in themselves. To process the information going between all these applications, to monitor what activity is going on, the scale at which you need to deal with them has gone up many fold. That’s the reason why ML algorithms form the best-suited class of algorithms to deal with the challenges we face with API-driven applications. 

Gardner: Given the scale and complexity of the app security problem, what makes the older approaches to security fall short? Why don’t we just scale up what we already do with security?

More than rules needed to secure apps

Guntur: I’ll give an analogy as to why older approaches don’t work very well. Think of the older approaches as a big box with, let’s say, a single door. For attackers to get into that big box, all they must do is crack through that single door. 

Now, with the newer applications, we have broken that big box into multiple small boxes, and we have given a door to each one of those small boxes. If attackers want to get into the application, they only have to get into one of these smaller boxes. And once they get into one of the smaller boxes, they need to take a key out of it and use that key to open another box.

By creating API-driven applications, we have exposed a much bigger attack surface. That’s number one. Number two, of course, we have made it more challenging for attackers, but the attack surface, being so much bigger now, needs to be dealt with in a completely different way.

The older class of applications took a rules-based system as the common approach to solve security use cases. Because they just had a single application and the application would not change that much in terms of the interfaces it exposed, you could build in rules to analyze how traffic goes in and out of that application.

Now, when we break the application into multiple pieces, and we bring in other paradigms of software development, such as DevOps and Agile development methodologies, this creates a scenario where the applications are always rapidly changing. There is no way rules can catch up with these rapidly changing applications. We need automation to understand what is happening with these applications, and we need automation to solve these problems, which rules alone cannot do. 

Gardner: We shouldn’t think of AI here as replacing old security or even humans. It’s doing something that just couldn’t be done any other way.

Guntur: Yes, absolutely. There’s no substitute for human intelligence, and there’s no substitute for the thinking capability of humans. If you go deeper into the AI-based algorithms, you realize that these algorithms are very simple in terms of how the AI is powered. They’re all based on optimization algorithms. Optimization algorithms don’t have thinking capability. They don’t have creativity, which humans have. So, there’s no way these algorithms are going to replace human intelligence.

They are going to work alongside humans to make all the mundane activities easier for humans and help humans look at the more creative and the difficult aspects of security, which these algorithms can’t do out of the box.

Gardner: And, of course, we’re also starting to see that the bad guys, the attackers, the hackers, are starting to rely on AI and ML themselves. You have to fight fire with fire. And so that’s another reason, in my thinking, to use the best combination of AI tools that you can.

Guntur: Absolutely.

Gardner: Another significant and growing security threat is bots, and the scale that threat vector takes. It seems like only automation and the best combination of humans and machines can ferret out these bots.

Machines, humans combine to combat attacks

Guntur: You are right. Most of the best detection cases we see in security are a combination of humans and machines. The attackers are also starting to use automation to get into systems. We have seen such cases where the same bot comes in from geographically different locations and is trying to do the same thing in some of the customer locations.

The reason they’re coming from so many different locations is to challenge AI-based algorithms. One of the oldest schools of algorithms looks at rate anomalies: how quickly requests come in from a particular IP address. The moment you spread the IP addresses across the globe, you don’t know whether it’s different attackers or the same attacker coming from different locations. This kind of challenge has been brought on by attackers using AI. The only way to respond is by building algorithms to counter them.
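
To make the distributed-bot problem concrete, here is a minimal, hypothetical sketch (the IP addresses and endpoints are invented, and this is not Traceable's actual algorithm) of why a classic per-IP rate rule misses an attack that grouping by shared behavior catches:

```python
from collections import Counter

# Hypothetical request log: (source_ip, requested_action)
requests = (
    # A distributed bot: 9 IPs, each probing the same login endpoint once
    [(f"203.0.113.{i}", "POST /login") for i in range(9)]
    # A legitimate user browsing from one IP
    + [("198.51.100.7", "GET /products"), ("198.51.100.7", "GET /cart")]
)

RATE_LIMIT = 5  # per-IP threshold a classic rate-anomaly rule might use

# Classic per-IP rate check: the distributed bot stays under the radar
per_ip = Counter(ip for ip, _ in requests)
flagged_by_rate = {ip for ip, n in per_ip.items() if n > RATE_LIMIT}

# Behavioral grouping: aggregate by what is being done, not by who does it
per_action = Counter(action for _, action in requests)
flagged_by_behavior = {a for a, n in per_action.items() if n > RATE_LIMIT}

print(flagged_by_rate)       # set() -> the bot evades per-IP limits
print(flagged_by_behavior)   # {'POST /login'} -> the shared behavior stands out
```

The point of the sketch is only that spreading an attack across IPs defeats rules keyed on source identity, so the detection has to be keyed on behavior instead.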

One thing is for sure: algorithms are not perfect. Algorithms can generate errors. Algorithms can create false positives. That’s where the human analyst comes in, to determine whether what the algorithm discovered is a true positive or a false positive. That means digging back into exactly how the algorithm figured out an attack was being launched. And some insights can’t be discovered by algorithms; only humans, correlating different pieces of information, can find them. So, it requires a team. Algorithms and humans work well as a team.

Gardner: What makes the way in which Traceable is doing ML and AI different? How are you unique in your vision and execution for using AI for API security?

Guntur: When you look at any AI-based implementation, you will see that there are three basic components. The first is about the data itself. It’s not enough if you capture a large amount of data; it’s still not enough if you capture quality data. In most cases, you cannot guarantee data of high quality. There will always be some noise in the data. 

But more than volume and quality of data, what is more important is whether the data you’re capturing is relevant to the particular use case you’re trying to solve. We want to use the data that is helpful in solving security use cases. Traceable built a platform from the ground up to cater to those security use cases. Right from the foundation, we began looking at the specific type of data required to solve modern API-based application security use cases. That’s the first challenge we address; it’s very important, and it brings strength to the product.

Seek differences in APIs

Once you address the proper data issue, the next is about how you learn from it. What are the challenges around learning? What kind of algorithms do we use? What is the scenario when we deploy that in a customer location?

We realized that every customer is completely different and has a completely different set of APIs, too, and those APIs behave differently. The data that goes in and out is different. Even if you take two e-commerce customers, they’re doing the same thing. They’re allowing you to look at products, and they’re selling you products. But the way the applications have been built, and the API architecture — everything is different.

We realized it’s no use to build supervised approaches. We needed to come up with an architecture where, from the day we deploy at the customer location, the algorithm self-learns. The whole concept of being able to learn on its own just by looking at data is core to the way we build security using the AI algorithms we have.

Finally, the last step is to look at how we deliver security use cases. What is the philosophy behind building a security product? We knew that rules-based systems are not going to work. The alternate system is modeled around anomaly detection. Now, anomaly detection is a very old subject, and we have used anomaly detection in various things. We have used it to understand whether machinery is going to go down, we have used them to understand whether the traffic patterns on the road are going to change, and we have used it for anomaly detection in security.

But within anomaly detection, we focused on behavioral anomalies. We realized that APIs and the people who use APIs are the two key entities in the system. We needed to model the behavior of these two groups — and when we see any deviation from this behavior, that’s when we’re able to capture the notion of an attack.
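
The behavioral-baseline idea above can be sketched in a few lines. This is a toy illustration with invented users and endpoints, not the product's algorithm: learn how often each (user, API) pair occurs, then flag calls the user has never made before.

```python
from collections import Counter

def build_profile(events):
    """Learn a behavioral baseline: how often each (user, api) pair occurs."""
    return Counter(events)

def is_anomalous(profile, user, api):
    """Flag a call this user has never been seen making before."""
    return profile[(user, api)] == 0

# Hypothetical training traffic for two users
baseline = build_profile([
    ("alice", "GET /orders"), ("alice", "GET /orders"), ("alice", "POST /cart"),
    ("bob", "GET /products"),
])

print(is_anomalous(baseline, "alice", "GET /orders"))  # False: known behavior
print(is_anomalous(baseline, "alice", "GET /admin"))   # True: a deviation
```

A real system would score degrees of deviation rather than a binary never-seen check, but the principle is the same: model normal behavior of users and APIs, then look for departures from it.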

Behavioral anomalies are important because if you look at the attacks, they’re so subtle. You just can’t easily find the difference between the normal usage of an API and abnormal usage. But very deep inside the data and very deep into how the APIs are interacting, there is a deviation in the behavior. It’s very hard for humans to figure this out. Only algorithms can tease this out and determine that the behavior is different from a known behavior.

We have addressed this at all levels of our stack: The data-capture level, and the choice of how we want to execute our AI, and the choice of how we want to deliver our security use cases. And I think that’s what makes Traceable unique and holistic. We didn’t just bolt things on, we built it from the ground up. That’s why these three pieces gel well and work well together.

Gardner: I’d like to revisit the concept you brought up about the contextual use of the algorithms and the types of algorithms being deployed. This is a moving target, with so many different use cases and company by company.

How do you keep up with that rate of change? How do you remain contextual? 

Function over form delivers context 

Guntur: That’s a very good question. The notion of context is abstract. But when you dig deeper into what context is and how you build context, it boils down to basically finding all factors influencing the execution of a particular API. 

Let’s take an example. We have an API, and we’re looking at how this API functions. It’s just not enough to look at the input and output of the API. We need to look at something around it. We need to see who triggered that input. Where did the user come from? Was it a residential IP address that the user came in from? Was it a hosted IP address? Which geolocation is the user coming from? Did this user have past anomalies within the system?

You need to bring all of these factors into the notion of context when we’re dealing with API security. Now, the context is a moving target, because the data is constantly changing. But there comes a moment when you have fixed this context, when you can say that you know where the users are coming from and you know what the users have done in the past. Then there is some amount of determinism to whatever detection you’re performing on these APIs.

Let’s say an API takes in five inputs, and it gives out 10 outputs. The inputs and outputs are a constant for every user, but the values that go into the inputs vary from user to user. Your bank account is different from my bank account. The account number you put in is different from the one I put in. If you build a naive algorithm that looks for an anomaly, it will say, “Hey, you know what? For this field, I’m seeing many different bank account numbers.”

It will flag that as a problem, but that’s not true. That field is meant to have many variations in the account number, and that determination comes from context. Building a context engine is unique in our AI-based system. It helps us tease out false positives and learn that some variations are genuine.

That’s how we keep up with this constant changing environment, where the environment is changing not just because new APIs are coming in. It’s also because new data is coming into the APIs.
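
The bank-account example can be sketched as a toy "field variability" profile. This is an illustrative assumption about how such a context engine might work, with invented field names, not Traceable's implementation: fields that varied widely in baseline traffic (account numbers) are expected to vary, so only variation in historically stable fields is suspicious.

```python
def learn_field_variability(samples):
    """For each input field, the ratio of distinct values seen in baseline traffic."""
    fields = {}
    for record in samples:
        for field, value in record.items():
            fields.setdefault(field, set()).add(value)
    return {f: len(vals) / len(samples) for f, vals in fields.items()}

# Hypothetical baseline: account numbers vary per user; currency is stable
baseline = [
    {"account": "111", "currency": "USD"},
    {"account": "222", "currency": "USD"},
    {"account": "333", "currency": "USD"},
    {"account": "444", "currency": "USD"},
]
variability = learn_field_variability(baseline)

def variation_is_suspicious(field, threshold=0.5):
    """New values are only anomalous in fields that were stable in the baseline."""
    return variability[field] < threshold

print(variation_is_suspicious("account"))   # False: this field is *meant* to vary
print(variation_is_suspicious("currency"))  # True: new values here deserve a look
```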

Gardner: Is there a way for the algorithms to learn more about what makes the context powerful to avoid false positives? Is there certain data and certain ways people use APIs that allow your model to work better?

Guntur: Yes. When we initially started, we thought of APIs as rigidly designed. We thought of an API as a small unit of execution. When developers use these APIs, they’ll all be focused on very precise execution between the APIs.

But we soon realized that developers bundle various additional features within the same API. We started seeing that they just provide a few more input options, and by triggering those extra input options you get completely different functionality from the same API.

We had to come up with algorithms that discover that a particular API can behave in multiple ways — depending on the inputs being transmitted. It’s difficult for us to predict whether and how an API is going to change. So when we built our algorithms, we assumed that an API is going to have multiple manifestations, and we need to figure out which manifestation is currently being triggered by looking at the data.

We solved it differently by creating multiple personas for the same API. Although it looks like a single API, we have an internal representation of an API with multiple personas.
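
One simple way to sketch the multiple-personas idea is to key each persona on the set of input options actually supplied. This is a hypothetical illustration with an invented /search endpoint, not the product's internal representation:

```python
def persona_key(params):
    """An API 'persona': identified by the set of input options actually supplied."""
    return frozenset(params)

# Hypothetical traffic to a single /search API that hides two behaviors
calls = [
    {"q": "shoes"},                    # plain product search
    {"q": "boots"},
    {"q": "hats", "export": "csv"},    # extra option triggers a bulk-export path
]

personas = {persona_key(call) for call in calls}
print(len(personas))  # 2: one endpoint, two distinct behavioral personas
```

Each persona can then carry its own behavioral baseline, so that a bulk-export call is not judged against the statistics of ordinary searches.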

Gardner: Interesting. Another thing that’s fascinating to me about the API security problem is that hackers often don’t overtly abuse the API. Instead, they mount subtle logic abuse attacks where they’re basically doing what the API is designed to do, but using it as a tool for their nefarious activities.

How does your model help fight against these subtle logic abuse attacks?

Logic abuse detection

Guntur: When you look at the way hackers are getting into distributed applications and APIs using these attacks – it is very subtle. We classify these attacks as business logic abuse. They are using the existing business logic, but they are abusing it. Now, figuring out abuse to business logic is a very difficult task. It involves a lot of combinatorial issues that we need to solve. When I say combinatorial issues, it’s a problem of scale in terms of the number of APIs, the number of parameters that can be passed, and the types of values that can be passed.

When we built the platform, it was not enough to just look at the front-facing APIs, we call them the external APIs. It’s also important for us to go deeper into the API ecosystem.

We have two classes of APIs. One, the external facing APIs, and the other is the internal APIs. The internal APIs are not called by users sitting outside of the ecosystem. They’re called by other APIs within the system. The only way for us to identify the subtle logic attacks is to be able to follow the paths taken by those internal APIs.

If an internal API reaches a resource like a database, down to a particular row and column, and returns a value, only by following that path will you be able to figure out that there was a subtle attack. We’re able to figure this out only because of the capability to trace the data deep into the ecosystem.

If we had done everything at the API gateway, if we had done everything at external facing APIs, we would not have figured out that there was an attack launched that went deep into the system and touched a resource it should never have touched.

It’s all about how well you capture the data and how rich your data representation is to capture this kind of attack. Once you capture this, using tons of data, and especially graph-like data, you have no option but to use algorithms to process it. That’s why we started using graph-based algorithms to discover variations in behavior, discover outliers, and uncover patterns of outliers, and so on.
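
The value of tracing internal call paths can be sketched as a baseline of known end-to-end paths through the ecosystem. The service and table names below are invented for illustration; a real system would work on rich trace graphs rather than flat tuples:

```python
# Each trace records the path a request takes through internal APIs to a resource.
baseline_traces = [
    ("GET /profile", "svc.users", "db.users.read"),
    ("GET /orders", "svc.orders", "db.orders.read"),
]
known_paths = set(baseline_traces)

def path_anomaly(trace):
    """Flag an end-to-end path never observed during the learning phase."""
    return trace not in known_paths

# An external call that internally reaches a table it never touched before
suspicious = ("GET /profile", "svc.users", "db.payments.read")
print(path_anomaly(("GET /profile", "svc.users", "db.users.read")))  # False
print(path_anomaly(suspicious))                                      # True
```

Note that a gateway watching only the external call would see a perfectly normal GET /profile; it is the internal hop to db.payments.read that gives the attack away.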

Gardner: To fully tackle this problem, you need to know a lot about data integration, a lot about security and the vulnerabilities, as well as a lot about algorithms, AI, and data science. Tell me about your background. How are you able to keep these big, multiple balls in the air at once when it comes to solving this problem? There are so many different disciplines involved.

Multiple skills in data scientist toolbox

Guntur: Yes, it’s been a journey for me. When I initially started in 2005, I had just graduated from university. I used a lot of mathematical techniques to solve key problems in natural language processing (NLP) as part of my thesis. I realized that even security use cases can be modeled as a language. If you take any operating system (OS), we typically have a few system calls, right? About 200 system calls, or maybe 400 system calls. All the programs running in the operating system are using about 400 system calls in different ways to build the different applications.

It’s similar to natural languages. In natural language, you have words, and you compose the words according to a grammar to get a meaningful sentence. Something similar happens in the security world. We realized we could apply techniques from statistical NLP to security use cases. Way back then, for example, we discovered certain Solaris login buffer-overflow vulnerabilities.
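
The analogy between system calls and language can be sketched with a tiny bigram "grammar" learned from normal runs. This is a toy illustration of the statistical-NLP idea, not the original research code, and the call sequences are invented:

```python
def bigrams(seq):
    """Adjacent pairs: the call-to-call transitions in an execution."""
    return list(zip(seq, seq[1:]))

# Hypothetical "grammar" of a program: which system calls follow which
normal_runs = [
    ["open", "read", "close"],
    ["open", "read", "read", "close"],
]
vocabulary = {bg for run in normal_runs for bg in bigrams(run)}

def surprising_transitions(run):
    """Count transitions never seen in normal executions."""
    return sum(1 for bg in bigrams(run) if bg not in vocabulary)

print(surprising_transitions(["open", "read", "close"]))   # 0: fluent
print(surprising_transitions(["open", "exec", "socket"]))  # 2: ungrammatical
```

A run full of transitions the "grammar" has never produced reads like an ungrammatical sentence, which is exactly the signal an exploit tends to leave.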

That’s how the journey began. I then went through multiple jobs and worked on different use cases. I learned if you want to be a good data scientist — or if you want to use ML effectively — you should think of yourself as a carpenter, as somebody with a toolbox with lots of tools in it, and who knows how to use those tools very well.

But to best use those tools, you also need the experience from building various things. You need to build a chair, a table, and a house. You need to build various things using the same set of tools, and that took me further along that journey.

While I began with NLP, I soon ventured into image processing and video processing, and I applied that to security, too. It furthered the journey. And through that whole process, I realized that almost all problems can be mapped to canonical forms. You can take any complex problem and break it down into simpler problems. Almost all fields can be broken down into simple mathematical problems. And if you know how to use various mathematical concepts, you can solve a lot of different problems.

We are applying these same principles at Traceable as well. Yes, it’s been a journey, and every time you look at data you come up with different challenges. The only way to overcome that is to get your hands dirty and solve it. That’s the only way to learn, and the only way we could build this new class of algorithms: by taking a piece from here, a piece from there, putting it together, and building something different.

Gardner: To your point that complex things in nature, business, and technology can be brought down to elemental mathematical understandings: once you’ve attained that with APIs, applying it first to security is the obvious low-hanging fruit.

But over time, you also gain mathematical insights into how microservices are used and how they could be optimized, or even how the relationship between developers and the IT production crews might be optimized.

Is that what you’re setting the stage for here? Will that mathematical foundation be brought to a much greater and potentially more productive set of problem-solving?

Something for everybody

Guntur: Yes, you’re right. If you think about it, we have embarked on that journey already. When you look at what we have achieved as of today, and at the foundations on which we have built it, you see that we have something for everybody.

For example, we have something for the security folks as well as for the developer folks. The system gives insights to developers as to what happens to their APIs when they’re in production. They need to know that. How is it all behaving? How many users are using the APIs? How are they using them? Mostly, they have no clue.

And on the other side, the security team doesn’t know exactly what the application is. They can see lots of APIs, but how are the APIs glued together to form this big application? Now, the mathematical foundation under which all these implementations are being done is based on relationships, relationships between APIs. You can call them graphs, you can call them sequences, but it’s all about relationships. 

One aspect we are looking at is how to expose these relationships. Today this relationship is buried deep inside our implementations, inside our platform. But how do you take it out and make it visual so that you can better understand what’s happening? What is this application? What happens to the APIs?

By looking at these visualizations, you can easily figure out if there are bottlenecks within the application, for example. Is one API constantly being hit on? If I always go through this API, but the same API is also leading me to a search engine or a products catalog page, why does this API need to go through all these various functions? Can I simplify the API? Can I break it down and make it into multiple pieces? These kinds of insights are now being made available to the developer community.

Gardner: For those listening to or reading this interview, how should they prepare themselves to better leverage and take advantage of what Traceable is providing? How can developers and security teams, as well as IT operators, get ready?

Rapid insights result in better APIs

Guntur: The moment you deploy Traceable in your environment, the algorithms kick in and start learning about the patterns of traffic in your environment. Within a few hours — or if your traffic has high volume, within 48 hours — you will receive insights into the API landscape of your environment. This insight starts with how many APIs are in your environment. That’s a fundamental problem a lot of companies face today: They just don’t know how many APIs exist in their environment at any given point in time. Once you know how many APIs there are, you can figure out how many services there are. What are the different services, and which APIs belong to which services?

Traceable gives you the entire landscape within a few hours of deployment. Once you understand your landscape, the next interesting thing to see is your interfaces. You can learn how risky your APIs are. Are you exposing sensitive data? How many of the APIs are external facing? Which APIs use authentication to control access, and why do some APIs not have authentication? How are you exposing APIs without authentication?
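
Discovery of this kind reduces, at its simplest, to collecting the distinct endpoints actually seen on the wire and triaging the ones ever reached without authentication. A toy sketch with invented endpoints (a real deployment works from traced traffic, not a hand-built list):

```python
# Hypothetical observed traffic: (method, path, request_had_auth_header)
traffic = [
    ("GET", "/api/users", True),
    ("GET", "/api/users", True),
    ("POST", "/api/orders", True),
    ("GET", "/api/health", False),
    ("GET", "/api/export", False),
]

# Discovery: the inventory is every distinct endpoint seen on the wire
inventory = {(method, path) for method, path, _ in traffic}

# Risk triage: endpoints ever reached without authentication
unauthenticated = {(m, p) for m, p, auth in traffic if not auth}

print(len(inventory))         # 4 distinct endpoints discovered
print(sorted(unauthenticated))
```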

All these questions are answered right there in the user interface. After that, you can look at whether your development team is in compliance. Do the APIs comply with the specifications in the requirements? Because development teams are rapidly churning out code, they almost never maintain the API spec. They will have a draft spec and build against it, but by the time you deploy, the spec looks very different. But who knows it’s different? How do you know it’s different?

Traceable’s insights tell you whether your spec is compliant. You get to see that within a few hours of deployment. In addition to knowing what happened to your APIs and whether they are compliant with the spec, you start seeing various behaviors.
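
Spec-drift detection of this kind can be sketched as set differences between the endpoints a spec declares and the endpoints observed in production. The paths below are invented for illustration:

```python
# Hypothetical declared spec: the endpoints the draft API document promises
declared = {("GET", "/api/users"), ("POST", "/api/orders")}

# Endpoints actually observed in production traffic
observed = {("GET", "/api/users"), ("POST", "/api/orders"), ("GET", "/api/debug")}

undocumented = observed - declared  # shipped but never specified ("shadow" APIs)
unused = declared - observed        # specified but never called

print(sorted(undocumented))  # [('GET', '/api/debug')]
print(sorted(unused))        # []
```

The undocumented set is usually the interesting one for security: endpoints that exist in production but that no reviewer ever signed off on.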

People think that when you have 100 APIs deployed, all users use those APIs the same way. But you’d be surprised to learn how many different ways users actually use the apps. Sometimes the APIs are accessed through computational means; sometimes they are accessed via user interfaces. The development team now has insight into how users actually use the APIs, which in itself is a great help in building better APIs, building better applications, and simplifying application deployments.

All of these insights are available within a few hours of the deployment. And I think that’s very exciting. You just deploy it and open the screen to look at all the information. It’s just fascinating to see how different companies have built their API ecosystems.

And, of course, you have the security use cases. You start seeing what’s at work. We have seen, for example, what Bingbot from Microsoft looks like. How active is it? Is it coming from 100 different IP addresses, or is it always coming from one part of a geolocation?

You can see, for example, what search spiders’ activity looks like. What are they doing with your APIs? Why is a search engine starting to look at APIs that are internal-only and have no information for it? Why is it crawling those APIs? All this information is available to you within a few hours. It’s really fascinating when you just deploy and observe.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Traceable.



Securing APIs demands tracing and machine learning to analyze behaviors and head off attacks

The burgeoning use of application programming interfaces (APIs) across cloud-native computing and digital business ecosystems has accelerated rapidly due to the COVID-19 pandemic.

Enterprises have had to scramble to develop and procure across new digital supply chains and via unproven business-to-business processes. Companies have also extended their business perimeters to include home workers as well as to reach more purely online end-users and customers.

In doing so, they may have given short shrift to protecting against the cybersecurity vulnerabilities inherent in the expanding use of APIs. The cascading digitization of business and commerce has unfortunately led to an increase in cyber fraud and data manipulation.

Stay with us for Part 2 in our series, where BriefingsDirect explores how APIs, microservices, and cloud-native computing require new levels of defense and resiliency.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about the latest innovations for making APIs more understood, trusted, and robust, we welcome Jyoti Bansal, Chief Executive Officer and Co-Founder at Traceable. The interview is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Jyoti, in our last discussion, we learned how the exploding use of cloud-native apps and APIs has opened a new threat vector. As a serial start-up founder in Silicon Valley, as well as a tech visionary, what are your insights and experience telling you about the need for identifying and mitigating API risks? How is protecting APIs different from past cybersecurity threats?

Bansal: Protecting APIs is different in one fundamental way — it’s all about software and developers. APIs are created so that you can innovate faster. You want to empower your developers to move fast using DevOps and CI/CD, as well as microservices and serverless.

You want developers to break the code into smaller parts, and then connect those smaller pieces via APIs – internally, externally, or through third parties. That’s the future of how software innovation will be done.

Now, the way you secure these APIs is not by slowing down the developers. That’s the whole point of APIs. You want to unleash the next level of developer innovation and velocity. Securing them must be done differently. You must do it without hurting developers and by involving them in the API security process. 

Gardner: How has the pandemic affected the software development process? Is the shift left happening through a distributed workforce? How has the development function adjusted in the past year or so?

Software engineers at home

Bansal: The software development function in the past year has become almost completely work-from-home (WFH) and distributed. The world of software engineering was already on that path, but software engineering teams have become even more distributed and global. The pandemic has forced that to become the de facto way to do things.

Now, everything that software engineers and developers do will have to be done completely from home, across all their processes. Most times they don’t even use VPNs anymore. Everything is in the cloud. You have your source code, build systems, and CI/CD processes all in the cloud. The infrastructure you are deploying to is also in a cloud. You don’t really go through VPNs nor use the traditional ways of doing things anymore. It’s become a very open, connect-from-everywhere software development process.

Gardner: Given these new realities, Jyoti, what can software engineers and solutions architects do to make APIs safer? How are we going to bring developers more of the insights and information they need to think about security in new ways?

Bansal: The most important thing is to have the insights. The fundamental problem is that people don’t even know what APIs are being used and which APIs have a potential security risk, or which APIs could be used by attackers in bad ways.

And so, you want to create transparency around this. I call it turning on the lights. In many ways, developers are operating in the dark – and yet they’re building all these APIs.

Normally, these days you have a software development team of maybe five to 10 engineers, but across a company building many APIs you might end up with 200 or 500 engineers. They’re all working on their own pieces, which are normally one or two microservices, and they’re all exposing them through APIs.

It’s very hard for them to understand what’s going on. Not only with their own stuff, but the bigger picture across all the engineering teams in the company and all the APIs and microservices that they’re building and using. They really have no idea.

For me, the first thing you must do is turn on the lights so that everyone knows what’s going on — so they’re not operating in the dark. They can then know which APIs are theirs and which APIs talk to other APIs. What are the different microservices? What has changed? How does the data flow between them? They can have a real-time view of all of this. That is the number one thing to begin with.

We like to call it a Google Maps kind of view, where you can see how all the traffic is flowing, where the red lights are, and how everything connects. It shows the different highways of data going from one place to another. You need to start with that. It then becomes the foundation for developers to be much more aware and conscious about how to design the APIs in a more secure way.
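As a rough sense of how such a traffic map might be assembled, the sketch below aggregates hypothetical call records into a service-to-service graph. The record shape, service names, and paths are illustrative assumptions, not how any particular product ingests data:

```python
from collections import defaultdict

# Hypothetical call records as a tracing agent might observe them:
# (caller service, callee service, endpoint, bytes transferred)
observed_calls = [
    ("web-frontend", "orders-svc", "/api/orders", 1200),
    ("orders-svc", "payments-svc", "/api/charge", 300),
    ("orders-svc", "payments-svc", "/api/charge", 450),
    ("web-frontend", "users-svc", "/api/profile", 800),
]

def build_service_map(calls):
    """Aggregate raw call records into per-edge traffic statistics."""
    edges = defaultdict(lambda: {"calls": 0, "bytes": 0, "endpoints": set()})
    for caller, callee, endpoint, nbytes in calls:
        edge = edges[(caller, callee)]
        edge["calls"] += 1
        edge["bytes"] += nbytes
        edge["endpoints"].add(endpoint)
    return dict(edges)

service_map = build_service_map(observed_calls)
for (src, dst), stats in sorted(service_map.items()):
    print(f"{src} -> {dst}: {stats['calls']} calls, "
          f"{stats['bytes']} bytes via {sorted(stats['endpoints'])}")
```

Each edge in the resulting map is one "highway" in the Google Maps analogy: who calls whom, how often, and how much data flows over it.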

Gardner: If developers benefit from such essential information, why don’t the older solutions like web application firewalls (WAFs) or legacy security approaches fit the bill? Why do developers need something different?

Bansal: They need something that’s designed to understand and secure APIs. If you look at a WAF, it was designed to protect systems against attacks on legacy web apps, like a SQL injection.

Normally a WAF will just look at whether you have a form field on your website where someone who can type in a SQL query and use it to steal some data. WAFs will do that, but that’s not how attackers steal data from APIs. They are completely different kinds of attacks.
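To illustrate the difference: a signature rule catches the classic injection string, while an API abuse can be a perfectly well-formed request that simply asks for another user's data. The sketch below contrasts the two checks; the regex, paths, and field names are hypothetical simplifications, not any real WAF's rule set:

```python
import re

# A simplified WAF-style rule: flag classic SQL-injection payloads in input.
SQLI_PATTERN = re.compile(r"('|--|;|\b(union|select|drop)\b)", re.IGNORECASE)

def waf_blocks(form_value: str) -> bool:
    return bool(SQLI_PATTERN.search(form_value))

# The injection attempt is caught by string matching...
assert waf_blocks("1' OR '1'='1")

# ...but an API abuse is syntactically clean: a valid request whose path
# names a user record the caller does not own. No signature fires on it.
api_request = {"path": "/api/v1/users/4212/billing", "token_user_id": 9001}

def request_is_suspicious(req) -> bool:
    # Detecting this needs the API's object-ownership semantics, not
    # string matching: the user in the path isn't the token's user.
    path_user = int(req["path"].split("/")[4])
    return path_user != req["token_user_id"]

assert not waf_blocks(api_request["path"])   # WAF sees nothing wrong
assert request_is_suspicious(api_request)    # context-aware check does
```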

Most WAFs work to protect against legacy attacks but they have had challenges of how to scale, and how to make them easy and simple to use.

But when it comes to APIs, WAFs really don’t have any kind of solution to secure APIs.

Gardner: In our last discussion, Jyoti, you mentioned how the burden for API security falls typically on the application security folks. They are probably most often looking at point solutions or patches and updates.

But it sounds to me like the insights you provide are more of a horizontal, one-size-fits-all solution approach. How does that approach work? And how is it better than spot application security measures?

End-to-end app security

Bansal: We take a platform approach to application security. We think application security starts with securing two parts of your app.

One is the APIs your apps are exposing, and those APIs could be internal, external, and third-party APIs.

The second part is the clients that you yourselves build using those APIs. They could be web application clients or mobile clients that you’re building. You must secure those as well because they are fundamentally built on top of the same APIs that you’re exposing elsewhere for other kind of clients.

When we look at securing all of that, we think of it in a classic way. We think security is still about understanding and taking inventory of everything. What are all of the things that are there? Then, once you have an inventory, you look at protecting those things. Thirdly, you look to do it more proactively. Instead of just protecting the apps and services, can you go in and fix things where and when the problem was created?

Our solution is designed as an end-to-end, comprehensive platform for application security that can do all three of these things. All three must be done in very different ways. Compared to the legacy web application firewalls or legacy Runtime Application Self-Protection (RASP) solutions that security teams use, we take a very different approach. RASPs also have weaknesses that can introduce their own vulnerabilities.

Our fundamental approach builds a layer of tracing and instrumentation, and we make these tracing and instrumentation capabilities extremely easy to use, thanks to the lightweight agents we provide. We have agents that run in different programming environments, like Java, .NET, PHP, Node.js, and Python. These agents can also be put in application proxies or Kubernetes clusters. In just a few minutes, you can install these agents and not have to do any further work.

We then begin instrumenting your runtime application code automatically and assess everything that is happening. First thing, in just a minute or two, based on your real-time traffic, we draw a picture of everything: the APIs in your system, all the external APIs, your internal microservices, and all the internal API endpoints on each of the microservices.

This is how we assess the data flows between one microservice to a second and to a third. We begin to help you understand questions such as — What are the third-party APIs you’re invoking? What are the third-party systems you are invoking? And we’ll draw all of that in a Google Maps kind of traffic picture in just a matter of minutes. It shows you how everything flows in your system.

The ability to understand and embrace all of that is the solution’s first part, which is very different from any kind of legacy RASP app security approach out there.

Once we understand that, the second part starts in our system that creates a behavioral learning model around the actual use of your APIs and applications to help you understand answers to questions such as – Which users are accessing which APIs? Which users are passing what data into it? What is the normal sequence of API calls or clicks in the web application that the users do? What internal microservices are invoked by every API? What pieces of data are being transferred? What volume of data is being transferred?

All of that comes together into a very powerful machine learning (ML) model. Once that model is built, we learn the n-dimensional behavior around everything that is happening. There is often so much traffic, that it doesn’t take us long to build out a pretty accurate model.

Now, every single call that happens after that, we then compare it against the normal behavior model that we built. So, for example, normally when people call an API, they ask for data for one user. But if suddenly a call to the same API asks for data for 100,000 users, we will flag that — there is something anomalous about that behavior.
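As a toy illustration of that baseline comparison, the sketch below flags calls whose volume deviates wildly from learned history using a simple z-score. A production behavioral model would be far richer (per-user, per-endpoint, sequence-aware); the history values and threshold here are illustrative assumptions:

```python
import statistics

# Hypothetical per-call history: how many user records each call returned.
baseline_counts = [1, 1, 2, 1, 1, 3, 1, 2, 1, 1]

def is_anomalous(records_requested: int, history, threshold: float = 6.0) -> bool:
    """Flag a call whose volume deviates far from the learned baseline."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1.0  # guard a constant history
    return (records_requested - mean) / stdev > threshold

assert not is_anomalous(2, baseline_counts)     # normal usage
assert is_anomalous(100_000, baseline_counts)   # "data for 100,000 users"
```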

Next, we develop a scoring mechanism whereby we can figure out what kind of attack someone might be trying to do. Are they trying to steal data? And then we can create a remediation mechanism, such as blocking that specific user or blocking that particular way of invoking that API. Maybe we alert your engineering team to fix the bug there that allows this in the first place. 

Gardner: It sounds like a very powerful platform — with a lot of potential applications. 

Jyoti, as a serial startup founder you have been involved with AppDynamics and Harness. We talked about that in our first podcast. But one of the things I’ve heard you talk about as a business person is the need to think big. You’ve said, “We want to protect every line of code in the world,” and that’s certainly thinking big.

How do we take what you just described as your solution platform, and extrapolate that to protecting every line of code in the world? Why is your model powerful enough to do that?

Think big, save the world’s code

Bansal: It’s a great question. When we began, that was the mission we started with. We have to think big because this is a big problem.

If I fast-forward to 10 years from now, the whole world will be running on software. Everything we do will be through interconnected software systems everywhere. We have to make sure that every line of code is secure, and the way we can ensure that is by doing a few fundamental things, which are hard to do but simple in concept.

Can we watch every line of code when it runs in a runtime environment? If an engineer wrote a thousand lines of code, and it’s out there and running, can we watch the code as it is running? That’s where the instrumentation and tracing part comes in. We can find where that code is running and watch how it is run. That’s the first part.

The second part is, can we learn the normal behavior of how that code was supposed to run? What did the developer intend when they wrote the code? Learning that is the second part.

And the third component is, if you see anything abnormal, you flag it or block it, or do something about it. Even if the world has trillions and trillions of lines of code, that’s how we operate.

Every single line of code in the world should have a safety net built around it. Someone should be watching how the code is used and learn what is the normal developer intent of that code. And if some attacker, hacker, or a malicious person is trying to use the code in an unintended way, you just stop it.

That to me is a no-brainer — if we can make it possible and feasible from a technology perspective. That’s the mission we are on: to make it possible and feasible.

Gardner: Jyoti, one of the things that’s implied in what we’ve been talking about that we haven’t necessarily addressed is the volume and speed of the data. It also requires being able to analyze it fast to stop a breach or a vulnerability before it does much damage.

You can’t do this with spreadsheets and sticky notes on a whiteboard. Are we so far into artificial intelligence (AI) and ML that we can take it for granted that this is going to be feasible? Isn’t a high level of automation also central to having the capability to manage and secure software in this fashion?

Let machines do what they do 

Bansal: I’m with you 100 percent. In some ways, we have machines to protect against these threats. However, the amount of data and the volume of things is very high. You can’t have a human, like a security operations center (SOC) person, sitting at a screen trying to figure out what is wrong.

That’s where the challenge is. The legacy security approaches don’t use the right kind of ML and AI; it’s still all about rules, and that generates numerous false positives. Every application security, bot security, RASP, and legacy app security approach defines rule sets to flag certain variables as bad. That approach creates so many false positives and junk alerts that they drown the humans monitoring them; it’s just not possible for humans to go through it all. You must build a very powerful layer of learning and intelligence to figure it out.

The great thing is that it is possible now. ML and AI are at a point where you can build the right algorithms to learn the behavior of how applications and APIs are used and how data flows through them. You can use that to figure out the normal usage behaviors and stop them if they veer off – that’s the approach we are bringing to the market.

Gardner: Let’s think about the human side of this. If humans can’t necessarily get into the weeds and deal with the complexity and scale, what is the role for people? How do you oversee such a platform and the horizontal capabilities that you’re describing?

Do we need a new class of security data scientist, or does this fit into a more traditional security management persona?

Bansal: I don’t think you need data scientists for APIs. That’s the job of products like ours. We do the data science and convert it into actionable things. The technology itself acts as the data scientist inside the product.

But what is needed from the people side is the right model of organizing your teams. You hear about DevSecOps, and I do think that that kind of model is really needed. The core of DevSecOps is that you have your traditional SecOps teams, but they have become much more developer, code, and API aware, and they understand it. Your developer teams have become more security-aware than they have been in the past.

Both sides have to come together and bridge the gap. Unfortunately, what we’ve had in the past are developers who don’t care about security, and security people who don’t care about code and APIs. They care about networks, infrastructures, and servers, because that’s where they spend most of their time trying to secure things. From an organization and people perspective, we need to bridge that from both sides.

We can help, however, by creating a high level of transparency and visibility by understanding what code and APIs are there, which ones have security challenges, and which ones do not. You then give that data to developers to go and fix. And you give that data to your operations and security teams to manage risk and compliance. That helps bridge the gap as well.

Gardner: We’ve traditionally had cultural silos: a developer silo and a security silo. They haven’t always spoken the same language, never mind worked hand-in-hand. How do the data and analytics generated from such a platform help bind these cultures together?

Bridge the SecOps divide

Bansal: I will give you an example. There’s this new pattern of exposing data through GraphQL. It’s like an API technology. It’s very powerful because you can expose your data into GraphQL where different consumers can write API queries directly to GraphQL.

Many developers who write these GraphQL APIs don’t understand the security implications. They write the API, and they don’t understand that if they don’t put in the right kind of checks, someone can go and attack them. The challenge is that most SecOps people don’t understand how GraphQL APIs work, or even that they exist.

So now we have a fundamental gap on both sides, right? A product like ours helps bridge that gap by identifying your APIs and flagging that there are GraphQL APIs with security vulnerabilities where sensitive data can potentially be stolen.
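To make that gap concrete, here is a plain-function sketch of a resolver missing its authorization check. This is not real GraphQL (a Python implementation would typically use a library such as graphene); the user store, the "ssn" field, and both resolver names are hypothetical stand-ins for sensitive data behind an API:

```python
# Hypothetical user store; "ssn" stands in for sensitive PII.
USERS = {
    1: {"name": "Ada", "email": "ada@example.com", "ssn": "xxx-xx-1111"},
    2: {"name": "Bob", "email": "bob@example.com", "ssn": "xxx-xx-2222"},
}

def resolve_user_unsafe(user_id: int, requester_id: int) -> dict:
    # The bug: the whole record is returned and requester_id is never used,
    # so any authenticated caller can read anyone's sensitive fields.
    return USERS[user_id]

SENSITIVE_FIELDS = {"ssn"}

def resolve_user_safe(user_id: int, requester_id: int) -> dict:
    record = USERS[user_id]
    if requester_id != user_id:
        # Strip sensitive fields unless the requester owns the record.
        return {k: v for k, v in record.items() if k not in SENSITIVE_FIELDS}
    return record

# Requester 2 queries user 1:
leaked = resolve_user_unsafe(1, requester_id=2)
guarded = resolve_user_safe(1, requester_id=2)
assert "ssn" in leaked       # the vulnerability
assert "ssn" not in guarded  # the fix
```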

And we will also tell you if there is an attack happening. We will tell you that someone is trying to steal data. Once developers see that data, they become much more security-conscious, because the dashboard shows them that the GraphQL APIs they built have 10 security vulnerabilities and that two attacks are happening.

And the SecOps team sees the same dashboard. They know which APIs exist, and from these patterns they know which attackers and hackers are trying to exploit them. Having that common, shared data in a shared dashboard between the developers and the SecOps team creates the visibility and the shared language on both sides, for sure.

Gardner: I’d like to address the timing of the solution and entry into the market.

It seems to me we have a level of trust when it comes to the use of APIs. But with the vulnerabilities you’ve described that trust could be eroded, which could be very serious. Is there a race to put in the solutions that keep APIs trustworthy before that trust gets eroded?

A devoted API security solution

Bansal: We are in the middle of the API explosion. Unfortunately, when people adopt a new technology, they think about its operational elements first, and about security, performance, and scalability after that. Once they start running into those problems, they start addressing them.

We are at a point of time where people are seeing the challenges that come with API security and the threat vectors that are being opened. I think the timing is right. People, the market, and the security teams understand the need and feel the pain.

We already have had some very high-profile attacks in the industry where attackers have stolen data through improperly secured APIs. So, it’s a good time to bring a solution into the market that can address these challenges. I also think that CI/CD in DevOps is being adopted at such a rapid pace that API security and securing cloud-native microservices architectures are becoming a major bottleneck.

In our last discussion, we talked about Harness, another company that I have founded, which provides the leading CI/CD platform for developers. When we talk to our customers at Harness and ask, “What is the blocker in your adoption of CI/CD? What is the blocker in your adoption of public cloud, of more microservices, or of serverless architectures?”

They say that they are hesitant due to their concerns around application security, securing these cloud-native applications, and securing the APIs that they’re exposing. That’s a big part of the blocker.

Yet this resistance to change and modernization is having a big business impact. It’s beginning to reduce their ability to move fast. It’s impacting the very velocity they seek, right? So, it’s kind of strange. They should want to secure the APIs – secure everything – so that they can gain risk mitigation, protect their data, and prevent all the things that can burn your users.

But there is another timing aspect to it. If they can’t soon figure out the security, the businesses really don’t have any option other than to slow down their velocity and slow down adoption of cloud-native architectures, DevOps, and microservices, all of which will have a huge business and financial impact.

 So, you really must solve this problem. There’s no other solution or way out.

Gardner: I’d like to revisit the concept of your platform as a horizontal capability.

Once you’ve established the ML-driven models and you’re using all that data, constantly refining the analytics, what are the best early use cases? And where do you see these horizontal analytics of code generation and apps production going next?

Inventory, protection, proactivity

Bansal: There’s a logical progression to it. The low-hanging fruit is to assume you may have risky APIs with improper authentication that can expose personally identifiable information (PII) and data. The API doesn’t have the right authorization control inside of it, for example. That becomes the first low-hanging fruit. Once you put it in your environment, we can look at the traffic, and the learning models will tell you very quickly if you have these challenges. We make it very simple for a developer to fix that. So that’s the first level.

The second level is, once you protect against those issues, you next want to look for things you may not be able to fix. These might be very sophisticated business logic abuses that a hacker is trying to insert. Once our models are built, and you’re able to compare how people are using the services, we also create a very simple model for flagging and attributing any bad behaviors to a specific user.

This is what we call a threat actor. It could be a bot, a particular authenticated user, or a non-authenticated user trying to do something that is not normal behavior. We see the patterns of such abuses around data theft or something that is happening around the data. We can alert you and we can block the threat actor. So that becomes the second part of the value progression.
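A minimal sketch of that attribution-and-block step, assuming anomalous events have already been tied to actor identifiers (tokens, API keys, client fingerprints); the identifiers and threshold below are illustrative, and a real scoring model would weight event severity rather than just count:

```python
from collections import Counter

# Hypothetical stream of flagged anomalies, attributed to actors.
anomalies = ["bot-77", "user-42", "bot-77", "bot-77", "user-13", "bot-77"]

BLOCK_THRESHOLD = 3

def actors_to_block(events, threshold=BLOCK_THRESHOLD):
    """Score each actor by anomaly count; block those at or over threshold."""
    scores = Counter(events)
    return {actor for actor, score in scores.items() if score >= threshold}

blocked = actors_to_block(anomalies)
assert blocked == {"bot-77"}  # repeated abuse crosses the line; one-offs don't
```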

The third part then becomes, “How do we become even more proactive?” Let’s say you have something in your API that someone is trying to abuse through a sophisticated business logic approach. It could be fraud, for example. Someone could create a fraudulent transaction because the business logic in the APIs allows for that. This is a very sophisticated hacker.

Once we can figure that abuse out, we can block it, but the long-term solution is for the developers to go and fix the code logic. That then becomes the more proactive approach. By bringing in that level of learning, that a particular API has been abused, we can identify the underlying root cause and show it to a developer so that they can fix it. That’s becoming the more progressive element of our solution.

Eventually you want to put this into a continuous loop. As part of your CI/CD process, you’re finding things, and then in production, you are also finding things when you detect an attack or something abnormal. We can give it all back to the developers to fix, and then it goes through the CI/CD process again. And that’s how we see the progression of how our platform can be used.

Gardner: As the next decade unfolds, and organizations are even more digital in more ways, it strikes me that you’re not just out to protect every line of code. You’re out there to protect every process of the business.

Where do the use cases progress to when it comes to things like business processes and even performance optimization? Is the platform something that moves from a code benefit to a business benefit? 

Understanding your APIs

Bansal: Yes, definitely. We think that the underlying model we are building will understand every line of code and how it is being used. We will understand every single interaction between different pieces of code in the APIs, and we will understand the developer intent around those. How did the developers intend for these APIs and that piece of code to work? Then we can figure out anything that is abnormal about it.

So, yes, we are using the platform to secure the APIs and pieces of code. But we can also use that knowledge to figure out if these APIs are not performing in the right kinds of way. Are there bottlenecks around performance and scalability? We can help you with that.

What if the APIs are not achieving the business outcomes they are supposed to achieve? For example, you may build different pieces of code and have them interact with different APIs. In the end, you want a business process, such as someone applying for a credit card. But if the business process is not giving you the right outcome, you want to know why not? It may be because it’s not accurate enough, or not fast enough, or not achieving the right business outcome. We can understand that as well, and we can help you diagnose and figure out the root cause of that as well.

So, definitely, we think that eventually, in the long term, this is a platform that understands every single line of code in your application. It understands the intent and normal behaviors of every single line of code, and it understands every time there is something anomalous, wrong, or different about it. You then use that knowledge to give you a full understanding across these different use cases over time.

Gardner: The lesson here, of course, is to know yourself by letting the machines do what they do best. It sounds like the horizontal capability of analyzing and creating models is something you should be doing sooner rather than later.

It’s the gift that keeps giving. There are ever-more opportunities to use those insights, for even larger levels of value. It certainly seems to lead to a virtuous adoption cycle for digital business.

Bansal: Definitely. I agree. It unlocks and removes the fear of moving fast by giving developers freedom to break things into smaller components of microservices and expose them through APIs. If you have such a security safety net and the insights that go beyond security to performance and business insights, it reduces the fear because you now understand what will happen.

When people start thinking of serverless functions or similar technologies, the idea is that you take those 200 microservices and break them into 2,000 micro-functions. And those functions all interact with each other. You can deploy them independently, and every function is just a few hundred lines of code at most.

So now, how do you start to understand the 2,000 moving parts? There is a massive advantage of velocity, and reusability, but you will be challenged in managing it all. If you have a layer that understands and reduces that fear, it just unlocks so much innovation. It creates a huge advantage for any software engineering organization. 

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.


Rise of reliance on APIs brings new security vector — and need for novel defenses

Thinking of IT security as a fortress or a moat around your compute assets has given way to a more realistic and pervasive posture.

Such a cybersecurity perimeter, it turns out, was only an illusion. A far more effective extended-enterprise strategy protects business assets and processes wherever they are — and wherever they might reach.

As businesses align to new approaches such as zero trust and behavior-modeling to secure their data, applications, infrastructure, and networks, there’s a new, rapidly expanding digital domain that needs such pervasive and innovative protection.

The next BriefingsDirect security trends discussion explores how application programming interfaces (APIs), microservices, and cloud-native computing form a new frontier for cybersecurity vulnerabilities — as well as opportunities for innovative defenses and resilience.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about why your expanding use of APIs may be the new weak link in your digital business ecosystem, please welcome Jyoti Bansal, Chief Executive Officer and Co-Founder. The interview is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Jyoti, has the global explosion of cloud-native apps and services set us up for a new variety of security vulnerability? How serious is this new threat?

Bansal: Well, it’s definitely new and it’s quite serious. Every time we go through a change in IT architectures, we get a new set of security challenges. The adoption of cloud-native architectures brings challenges in a few areas.

One, you have a lot of APIs and these APIs are doors and entryways into your systems and your apps. If those are not secured properly, you have more opportunities for attackers to steal data. You want to open the APIs so that you can expose data, but attackers will try to exploit that. We are seeing more examples of that happening.

The second major challenge with cloud-native apps is around the software development model. Development now is more high-velocity, more Agile. People are using DevOps and continuous integration and continuous delivery (CI/CD). That creates the velocity. You’re changing things once every hour, sometimes even more often.

That creates new kinds of opportunities for developers to make mistakes in their apps and in their APIs, and in how they design a microservice; or in how different microservices or APIs interact with each other. That often creates a lot more opportunity for attackers to exploit.

Gardner: Companies, of course, are under a lot of pressure to do things quickly and to react to very dynamic business environments. At the same time, you have to always cover your backside with better security. How do companies face the tension between speed and safety?

Speed and safety for cloud-native apps

Bansal: That’s the biggest tension, in many ways. You are forced to move fast. The speed is important. The pandemic has been even more of a challenge for a lot of companies. They had to move to more of a digital experience much faster than they imagined. So speed has become way more prominent.

But that speed creates a challenge around safety, right? Speed creates two main things. One is that you have more opportunity to make mistakes. If you ask people to do something very fast because there’s so much business and consumer pressure, sometimes you cut corners and make mistakes.

Not deliberately. It’s just that software engineers can never write completely bug-free code. But if you have more bugs in your code because you are moving very, very fast, it creates a greater challenge.

So how do you create safety around it? By catching these security bugs and issues much earlier in your software development life cycle (SDLC). If a developer creates a new API and that API could be exploited by a hacker — because there is a bug in that API’s authentication check — you have to try to find it in your test cycle and your SDLC.
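One way such a check can surface in the test cycle is a CI gate over the app’s route table that fails the build when an endpoint skips authentication. The registry shape, paths, and allowlist below are hypothetical; real frameworks expose route metadata differently:

```python
# Hypothetical endpoint registry, as it might exist in an app's route table.
# Each entry records whether the handler enforces an authentication check.
ENDPOINTS = {
    "/api/orders": {"requires_auth": True},
    "/api/profile": {"requires_auth": True},
    "/api/export": {"requires_auth": False},   # the bug slipped in here
}

PUBLIC_ALLOWLIST = {"/api/health"}  # endpoints deliberately left open

def unauthenticated_endpoints(endpoints, allowlist=PUBLIC_ALLOWLIST):
    """List endpoints that skip auth and aren't explicitly public."""
    return sorted(
        path for path, meta in endpoints.items()
        if not meta["requires_auth"] and path not in allowlist
    )

offenders = unauthenticated_endpoints(ENDPOINTS)
assert offenders == ["/api/export"]  # a CI job would fail on a non-empty list
```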

The second way to gain security is by creating a safety net. Even if you find things earlier in your SDLC, it’s impossible to catch everything. In the most ideal world, you’d like to ship software that has zero vulnerabilities and zero gaps of any kind when it comes to security. But that doesn’t happen, right?

You have to create a safety net so that, if vulnerabilities slip through because of the business pressure to move fast, the safety net can still block attackers and stop them from doing things you didn’t intend with your APIs and applications.

Gardner: And not only do you have to be thinking about APIs you’re generating internally, but there are a lot of third-party APIs out there, along with microservices, when doing extended-enterprise processes. It’s a bit of a Wild West environment when it comes to these third-party APIs.

Bansal: Definitely. The APIs you are building and using internally through your microservices may also have an external consumer calling those APIs. Other microservices may also be calling them — and so there is exposure around that.


Third-party APIs manifest in two different ways. One is that you might be using a third-party API or library in your microservice. There might be a security gap there.

The second way comes when you're calling third-party APIs. Almost everything is now exposed as an API: checking for data somewhere, calling a software as a service (SaaS) offering, a cloud service, or a payment service. Everything is an API, those APIs are not always called properly, and not all of them are secure, so your system can fundamentally become more insecure.

It is getting close to a wild, Wild West with APIs. I think we have to take API security quite seriously at this point.

Gardner: We’ve been talking about API security as a function of growing pains, that you’re moving fast, and this isn’t a process that you might be used to.

But there’s also malice out there. We’ve seen advanced, persistent threats in such things as zero-day exploits and with Microsoft Exchange Server recently. We’ve certainly seen with the SolarWinds exploits how a supply chain can be made vulnerable.

Have we seen people take advantage of APIs, too, or is that something that we should expect?

API attacks a global threat

Bansal: Well, we should definitely expect that. We are seeing people take advantage of these APIs. Gartner has predicted that by 2022, API abuses will move from an infrequent to the most frequent attack vector, resulting in data breaches for enterprise web applications. That is the direction things are headed because of how applications are now consumed through APIs.

APIs have naturally become a more frequent attack vector now.

Gardner: Do you expect, Jyoti, that this is going to become mission-critical? We’re only part way into the “software eats the world” thing. As we expect software to become more critical to the world, APIs are becoming more part of that. Could API vulnerabilities become a massive, global threat vector?

Bansal: Yes, definitely. We are, as you said, only partially into the software-eats-the-world trend. We are still not fully there. We are only 30 to 40 percent there. But as we see more and more APIs, those will create a new kind of attack vector.

It’s a matter of now taking these threats seriously. For a long time, people didn’t think about APIs. People only thought about APIs as internal APIs; that you will put internal APIs between your code and different internal services. The external APIs were very few. Most of your users were coming through a web application or a mobile application, and so you were not exposing your APIs as much to external applications.

If you look at banking, for example, most of the bank services software was about online banking. End users came through a bank web site, and then users came through mobile apps. They didn’t have to worry too much about APIs to do their business.

Now, that’s no longer the case. For any bank, APIs are a major way other systems integrate with them. Banks used to expose their services only through the apps they built themselves, but now a lot of third-party apps are written on top of their APIs — from wallet apps, to different kinds of payment systems, to all sorts of things out there — because that’s what consumers are looking for. So the traffic coming through APIs, rather than directly through the web or mobile front-ends, is increasing significantly.

The general use of internal APIs is increasing, too. With the adoption of cloud-native and microservices architectures, the internal-to-external boundary is starting to blur. An internal API can become external at any point, because the same microservice your engineering team wrote is being used by other internal microservices inside your company while also being exposed to partners or other third-party systems.

More and more APIs are being exposed out there. We will see this continued explosion of APIs because that’s the nature of how modern software is built. APIs are the building block of modern software systems.

I think we have two options as an industry. Either we say, “Okay, APIs could be risky or someone could attack them, so let’s not use APIs.” But that to me is completely wrong because APIs are what’s driving the flexibility and fluidity of modern software systems and the velocity that we need. We have to just learn as an industry to instead secure APIs and be serious about securing them.

Gardner: Jyoti, this is not your first rodeo as CEO and co-founder. You’ve been a serial startup leader and a Silicon Valley tech visionary. Tell us about your other major companies, AppDynamics in particular, and why that puts you in a position to recognize API vulnerabilities, but also to come up with novel ways of making APIs more secure.

History of troubleshooting

Bansal: When I started AppDynamics, we were starting to see a lot of service-oriented architectures (SOA). People were struggling when something was slow and users experienced slowdowns on their websites. How do you figure out where the slowdown is? How do you find the root cause?

That space eventually became what is called application performance management (APM). What we came up with was: how about we instrument what’s going on inside the code in production? How about we trace the flow of a request from one service to another service, or to a third service or a database? Then we can figure out where the slowdowns and bottlenecks are.


By understanding what’s happening in these complex software systems, you can figure out where the performance bottleneck is. We were quite successful as a company. We were acquired by Cisco just a day before we were about to go IPO.

The approach we used there solves problems around performance: monitoring, diagnosing, and troubleshooting. The fundamental approach was about instrumenting and learning what was going on inside the systems.

That’s the same approach we apply to solving the problems around API security. We have all these challenges around APIs; they’re everywhere, and it’s the wild, Wild West of APIs.

So how do you get in control? You don’t want to ask developers to slow down and not do any APIs. You don’t want to reduce the velocity. The way you get control over it is fundamentally a very similar approach to what we used at AppDynamics for performance monitoring and troubleshooting. And that is by understanding everything that can be instrumented in the APIs’ environment.

That means for all external APIs, all internal APIs, and all the third-party APIs. It means learning how the data flows between these different APIs, which users call different APIs, what they are trying to achieve out of it, what APIs are changed by developers, and which APIs have sensitive data in them.
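As a rough illustration of that idea, here is a hedged sketch of instrumentation: a decorator that records, for every call, which API was hit, by whom, and whether the payload carries fields commonly treated as sensitive. The field list, registry, and handler names are invented for illustration; a real agent works at the runtime or proxy level, not as an explicit decorator.

```python
from collections import defaultdict
from functools import wraps

SENSITIVE_FIELDS = {"ssn", "email", "card_number"}  # illustrative list

# Inventory built up automatically as traffic flows: API name ->
# its observed callers and whether sensitive data ever passed through.
api_inventory = defaultdict(lambda: {"callers": set(), "sensitive": False})

def instrument(api_name):
    """Wrap a handler so every call is recorded in the inventory."""
    def decorator(handler):
        @wraps(handler)
        def wrapper(caller, payload):
            entry = api_inventory[api_name]
            entry["callers"].add(caller)
            if SENSITIVE_FIELDS & set(payload):
                entry["sensitive"] = True
            return handler(caller, payload)
        return wrapper
    return decorator

@instrument("POST /users")
def create_user(caller, payload):
    return {"created": True}

create_user("service-a", {"name": "Ada", "email": "ada@example.com"})
# The inventory now records that "POST /users" is called by service-a
# and carries sensitive data, with no manual cataloging by developers.
```

The same pattern applies whether the call is an external API, an internal microservice, or an outbound third-party call: observe everything, and the inventory falls out of the traffic itself.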

Once you automatically understand all of that, across all of the APIs, you start to get in control of what is there. Once you are in control, you can tell if some user is trying to use these APIs in a bad way. You know what looks like an attack, or if something wrong is happening, such as a data breach. Then you can quickly go into prevention mode and block that attack.

There are a lot of similarities from my experience at my last company, AppDynamics, in terms of how we solve challenges around API security. I also started a second company, Harness. It’s in a different space, targeting DevOps and software developers, and helping them with CI/CD. Harness is now one of the leading platforms for CI/CD or DevOps.

So I have a lot of experience from the vantage point of what modern software engineering organizations have to do from a CI/CD and DevOps perspective, and what security challenges they start to run into.

We talk to Harness customers doing modern CI/CD about application and API security. And it almost always comes as one big challenge. They are worried about microservices, about cloud-native architectures, and about moving more to APIs. They need to get in control and to create a safety net around all of this.

Gardner: Does your approach of trace, monitor, and understand the behavior apply to what’s going on in operations as well as what goes on in development? Is this a one-size-fits-all solution? Or do you have to attack those problems separately?

One-size-fits-all advantages

Bansal: That’s the beauty of this approach. It is in many ways a one-size-fits-all approach. It’s about how you use the data that comes out of this trace-everything instrumentation. Fundamentally, it works in all of these areas.

It works because the engineering teams put in what we call a lightweight agent. That agent goes inside the runtime of the code itself, running in different programming languages, such as Java, PHP, and Python. The agents can also run in the application proxies in your environment.

You put the same kinds of instruments, lightweight agents, in for your external APIs, your internal microservices APIs, as well as the third-party APIs that you’re calling. It’s all the same.

When you have such instrumentation tracing, you can take the same approach everywhere. Ideally, you put the same in a pre-production environment while you are going through the software testing lifecycle in a CI/CD system. And then, after some testing, staging, and load testing, you start putting the same instrumentation into production, too. You want the same kind of approach across all of that.

In the testing cycle, based on all the instrumentation and tracing of the calls made during your tests, we will tell you which places are vulnerable: these are the APIs that have gaps and could be exploited by someone.

Then, once you do the same approach in production, we tell you not only about the vulnerabilities but also where to block attacks that are happening. We say, “This is the place that is vulnerable, right now there is an attacker trying to attack this API and steal data, and this is how we can block them.” This happens in real-time, as they do it.

But it’s fundamentally the same approach being used across your full SDLC.

Gardner: Let’s look at the people in these roles or personas, be it developer, operations, SecOps, and traditional security. Do you have any examples or metrics of where API vulnerabilities have cropped up? What vulnerabilities are these people already seeing?

Vulnerable endpoints to protect

Bansal: A lot of API vulnerabilities crop up around unauthenticated endpoints, where an API is exposed without the right kind of authentication. Second is around not using the right authorization: an API that is supposed to give you data as user 1 has an authorization flaw that can be exploited to take data not just as user 1 but as someone else, a user 2, or maybe even a large number of users. That’s a common problem that happens too often with APIs.
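That authorization flaw is often called broken object-level authorization. A minimal sketch of the pattern, with hypothetical account data and handler names: the buggy version trusts whatever ID the caller supplies, while the fixed version also checks ownership.

```python
ACCOUNTS = {
    "acct-1": {"owner": "user-1", "balance": 100},
    "acct-2": {"owner": "user-2", "balance": 250},
}

def get_account_buggy(authenticated_user, account_id):
    # Authenticated, but never checks WHO owns the account:
    # user-1 can read user-2's data simply by changing the ID
    # in the request.
    return 200, ACCOUNTS[account_id]

def get_account_fixed(authenticated_user, account_id):
    account = ACCOUNTS.get(account_id)
    if account is None or account["owner"] != authenticated_user:
        # Object-level authorization: only the owner may read it.
        return 403, {"error": "forbidden"}
    return 200, account
```

Note that the buggy handler passes any simple "is the user logged in" test; only a test that requests someone else's object catches it, which is why authorization bugs slip through so often.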


There are also leaky APIs that give you more data than they should, such as it’s only supposed to give the name of someone, but it also includes more sensitive data.

In the world of application security, we have the OWASP Top Ten list that app security teams and security teams have followed for a long time. It normally includes things like SQL injection and cross-site scripting, which were always on that list.

Now there’s an additional list called the OWASP API Security Top Ten, which lists the top threats when it comes to APIs. Some of the threats I described are key parts of it. And there are a lot of examples of these API-involved attacks these days.

Just recently, in 2020, there was a Starbucks vulnerability in API calls that potentially exposed 100 million customer records; it was an authentication vulnerability. In 2019, Capital One was a high-profile example: an Amazon Web Services (AWS) configuration API wasn’t secured properly, an attacker got access to it, and it exposed the AWS resources that Capital One had.

There was a very high-profile attack on T-Mobile in 2018, where an API was leaking more data than it was supposed to; data on some 2.3 million customers was stolen. In another high-profile incident, at Venmo, a public API exposed transaction data without restricting it to the right users, and data on 200 million transactions was taken. As you can see from these examples, patterns are emerging in the vulnerabilities attackers exploit in APIs.

Gardner: Now, these types of attacks and headlines are going to get the attention of the very top of any enterprise, especially now where we’re seeing GDPR and other regulations require disclosure of these sorts of breaches and exposures. This is not just nice to have. This sounds like potentially something that could make or break a company if it’s not remediated.

Bansal: Definitely. No one should take API security lightly these days. A lot of the traditional cybersecurity teams have put a lot of their focus and energy in securing the networks and infrastructure. And many of them are just starting to get serious about this next API threat vector. It’s a big mistake if companies are not getting to this faster. They are exposing themselves in a big way.

Gardner: The top lesson for security teams, as they have seen in other types of security vulnerabilities, is you have to know what’s there, protect it, and then be proactive. What is it about the way that you’re approaching these problems that set you up to be able to be proactive — rather than reactive — over time?

Know it, protect it, and go proactive

Bansal: Yes, the fundamentals of security are the same. You have to know what is there, you have to protect it, and then you become proactive about it. And that’s the approach we have taken in our solution.

Number one is all about API discovery and risk assessment. You put us in your environment, and very quickly we’ll tell you what all the APIs are. It’s all about discovery and inventory as the very first thing: these are all your external APIs, these are all your internal APIs, and these are all the third-party APIs you are invoking. It starts with discovery. You have to know what is there, and you create an inventory of everything.


The second part, once you create that inventory, is to give each API a risk score: internal, external, and third-party, all of them. The risk score is based on many dimensions, such as which APIs have sensitive data flowing through them, which are exposed publicly and which are not, what kind of authentication they have, and which ones internally use your critical database systems and read data from them. Based on all of these factors, we create a risk heat map of all of your APIs.
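A toy version of such a multi-dimension score, purely to illustrate how several risk factors can combine into one heat map; the specific weights, factor names, and APIs here are invented, not the actual scoring model:

```python
def api_risk_score(api):
    """Combine illustrative risk factors into a 0-100 score."""
    score = 0
    if api.get("handles_sensitive_data"):
        score += 40
    if api.get("publicly_exposed"):
        score += 30
    if api.get("auth") == "none":
        score += 20
    elif api.get("auth") == "api_key":
        score += 10          # weaker than token-based auth
    if api.get("touches_critical_database"):
        score += 10
    return min(score, 100)

apis = [
    {"name": "GET /health", "publicly_exposed": True, "auth": "none"},
    {"name": "GET /users/{id}", "publicly_exposed": True,
     "auth": "api_key", "handles_sensitive_data": True,
     "touches_critical_database": True},
]
# Sort highest-risk first to form the heat map.
heat_map = sorted(apis, key=api_risk_score, reverse=True)
# The sensitive user-data API sorts above the public health check.
```

Because the inputs come from continuous discovery rather than a static catalog, re-scoring on every deployment keeps the heat map current as the APIs change.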

The most important part of API security is to do this continuously. Because you’re living in the world of CI/CD, API discovery and assessment cannot be static, something you do once a month, once a quarter, or even once a week. You have to do it dynamically, all the time, because the code is changing. Developers are continuously putting new code out there, so the APIs are changing, with new microservices. All of the discovery and risk assessment has to happen continuously. That’s really the first challenge we handle.

The second problem we handle is building a learning model. That model is based on a sophisticated machine learning (ML) approach to the normal usage behavior of each of these APIs. Which users call an API? In what sequence do the calls happen? What kind of data passes through them? What data do they fetch, and from where? And so on.

We learn all of that automatically. Once you have learned it, you start comparing every new API request with the model of how your APIs are normally used.

Now, if an attacker tries to use an API to extract much more data than is normal for that API, you know something is abnormal. You can flag it, and that’s a key part of the second piece: protecting these APIs from bad behavior.

That cannot be done with a traditional web application firewall (WAF) or runtime application self-protection (RASP) and those kinds of approaches, which are very rule-based and static. For APIs, you have to build a behavioral, learning-based system. That’s what our solution is about, and that’s how we get a very high degree of protection for these APIs.

The third element of the solution is the proactive part. After all of this learning, we also examine the behavior of these APIs and their potential vulnerabilities, based on the models. The right way to proactively use our system is to feed those findings into your testing and development cycle. That brings the issues back to the developers to fix the vulnerabilities. We can help find them earlier in the lifecycle, so you can integrate this into your application security testing processes. It closes the loop on all of this, only proactively now.

Gardner: Jyoti, what should businesses do to prepare themselves at an early stage for API security? Who should be tasked with kicking this off?

Build your app security team

Bansal: API security falls under the umbrella of app security. In many businesses, app security teams are now tasked to secure the APIs in addition to the traditional web applications.

The first thing every business has to do is to create clear responsibility for securing APIs: someone has to own the problem.

In many places, we also see businesses create teams around what they call product security. If you are a company with FinTech products, your product essentially is an API, because your product is primarily exposed through APIs. So people start building product security teams tasked with securing all of these APIs. In some cases, we see the software engineering team directly responsible for securing APIs.

The problem is they don’t even know what all of their APIs are. They may have 500 or 2,000 developers in the company, all building APIs, and they can’t even track them. So most businesses first have to get an understanding of, and some control over, the APIs that are there. Then you can start securing them and improving your security posture around them.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.



Creating business advantage with technology-enabled flexible work

As businesses plan for a future where more of their workforce can be located just about anywhere, how should they rethink hiring, training, and talent optimization? This major theme for 2021 and beyond poses major adjustments for both workers and savvy business leaders.

The next BriefingsDirect modern workplace strategies discussion explores how a global business process outsourcing leader has shown how distributed employees working from a “Cloud Campus” are improving productivity and their end users’ experience. 

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about best practices and advantageous outcomes from a broadly dispersed digital workforce, we are now joined by José Güereque, Executive Vice President of Infrastructure and Nearshore Chief Information Officer at Teleperformance SE in Monterrey, Mexico; Lance Brown, Executive Vice President Global Network, Telecom, and Architecture at Teleperformance; and Tim Minahan, Executive Vice President of Business Strategy and Chief Marketing Officer at Citrix. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tim, when it comes to flexible and hybrid work models we often focus on how to bring the work to the at-home workforce. But this new level of flexibility also means that we can find and attract workers from a much broader potential pool of talent.

Are companies fully taking advantage of this decentralized talent pool yet? And what benefits are those who are not yet expanding their workforce horizons missing out on?

Pick your talent anywhere

Minahan: We are at a very interesting inflection point right now. If there is any iota of a silver lining in this global pandemic, it’s that it has opened people’s minds both to accelerating the digitization of their business and to new ways of working. It’s now been proven that work can indeed occur outside the office. Smart companies like Teleperformance are beginning to look at their entire workforce strategies and work models in different ways.

It’s not about should Sam or Susie work in the office or work at home. It’s, “Gee, now that I can enable everyone with the work resources they need, and in a secure workspace environment to do their best work wherever it is, does that allow me to do new things, such as tap into new talent pools that may not be within commuting distance of my work hubs?”

This now allows me to even advance sustainability initiatives or, in some cases, we have companies now saying, “Hey, now I can also reach workers that allow me to bring more diversity into my workforce. I can enable people to work from inner cities or other locations — rural locations — that I couldn’t reach before.”

This is the thought process that a lot of forward-thinking companies are going through right now. 

Gardner: It seems that a remote, hybrid, flexible work capability is the gift that keeps giving. In many cases we have seen projections of shortages of skilled workers and gaps between labor demand and supply. Are we in just the early innings of what we can expect from the benefits of remote work? 

Minahan: Yes. If you think way back in history, about a year ago, that’s exactly what the world was grappling with. There was a global shortage of skilled workers. In fact, McKinsey estimated that there was a global shortage of 95 million medium- to high-skilled workers. So managers were trying to hire amid all that. 

But, in addition, there was a shortage of the actual modern skills that a lot of companies need to advance their business, to digitize their business. And the third part is a lot of employees were challenged and frustrated with the complexity of their work environment.

Now, more flexible work models enabled by a digital workspace that ensures employees have access to all the work resources they need, wherever work needs to get done, begins to address each of those issues. Now you can reach into new areas to find new talent. You can reach skills that you couldn’t before because you were competing in a very competitive market.

Now you can enable your employees to work where and how they want, in new ways that don’t limit them. They no longer have a long commute adding stress to their lives. In fact, our research found that 80 percent of workers feel they are as, if not more, productive working remotely than they could be in the office.

Gardner: Let’s find out from an organization that’s been doing this. José, at Teleperformance, tell us the types of challenges you faced in terms of the right fit between your workforce and your demands for work. How have you been able to use technology to help solve that?

Güereque: Our business was mostly a finite structure of brick-and-mortar operations. When COVID struck, we realized that we faced a challenge of not being able to move people to and from the work centers. So, we rushed to move all of our people, as much as possible, to work from home (WFH).

Technically, the first challenge was to restructure our network, services, and all kinds of resources to move the workforce to WFH. As you can imagine, that came in hand with security measures. Security is one of the most important things we need to address and have in place. 

But while there were big challenges, big opportunities also arose for us. The new model allows us to be more flexible in how we look for new talent. We can now find that talent in places we didn’t search before.

Our team has helped expedite this work-at-home model for us. It was not embraced in the massive way it is right now. 

Gardner: Lance, tell us about Teleperformance, your workforce, your reach, and your markets.

Remote work: Simpler, faster, safer

Brown: Teleperformance is a global customer experience company based in France. We have more than 383,000 employees worldwide in 83 countries serving over 170 markets. So it’s a very large corporation. We have a number of agents who support many Fortune 500 companies all over the world, and our associates obviously have to be able to connect and talk [in over 265 languages and dialects] to customers. 

We sent more than 220,000 of these associates home in a very quick time frame at the onset of the pandemic.

Our company is all about being simpler, faster, and safer — and working with Citrix allowed us to meet all of our transition goals. Remote work is now a simpler, faster process — and it’s a safer process. All of our security that Citrix provides is on the back end. We don’t have to worry as much with the security on our endpoint as we would in other traditional models. 

Gardner: As José mentioned, you had to snap to it and solve some major challenges from the crisis. Now that you have been adjusting to this, do you agree that it’s the gift that keeps giving? Is flexible work here to stay from your perspective?

Brown: Yes, from Teleperformance’s perspective, we are fully committed to having a large percentage of the workforce for our clients remain at WFH. We don’t ever see going back to 100 percent brick and mortar, or even mostly brick and mortar. We were at 90 percent on-site before the pandemic. At the end of the day, that will settle at between 50 percent and 65 percent working at home.

Gardner: Tim, because they have 390,000 people, there is going to be a great diversity of how people will react to this. One of the nice things about remote work and digital workspaces is you can be dynamic. You can adjust, change, and innovate.

How are organizations such as Teleperformance breaking new ground? Are they finding innovation that goes beyond what they may have expected from flexible work at the outset? 

Minahan: Yes, absolutely. This isn’t just about can we enable ourselves to tap into new talent in some remote locations or for disenfranchised parts of the workforce. It’s about creating an agile workforce model. Teleperformance is on the frontlines of enabling that for its own workforce. But Teleperformance is also part of the solution, due to their business process outsourcing (BPO) solutions and how they serve their clients. You begin to rethink the workforce. 

We did a study as part of our Work 2035 Project, in which we went out over the past year-and-a-half and interviewed tens of thousands of employees, thousands of senior executives, and probed into what the world of work will look like in 2035. A lot of things we are talking about here have been accelerated by the pandemic.

One of those things is moving to a more agile workforce model, where you begin to rethink your workforce strategies, and maybe where you augment full-time employees with contractors or gig workers, so you have that agility to dial up your workforce. 

Maybe it’s due to seasonality, and you need for a call center or other services to be able to dial up or back down. Or work locations shift, moving due to certain needs or responses to certain catastrophes. And like I said, that’s what a lot of forward-thinking companies are doing.

What’s so exciting about Teleperformance is they are not only doing it for their own organization — but they are also providing the solution for their own clients.

Gardner: José, please describe for us your Cloud Campus concept. Why did you call it Cloud Campus and what does it do? 

Cloud Campus engages worldwide

Güereque: Enabling people to WFH is only part of what you need. You also need to guarantee that the processes in place perform as well as they used to in a brick-and-mortar environment. So our cloud solution pushes subsets of those processes and enables control, maintaining the operational procedures at a level where our clients feel confident in how we are managing their operations.

In the past, you needed to do a lot of things if you were an agent in our company. You needed to physically go to a central office to fulfill processes, and then you’d be commuting. Today, the Cloud Campus digitalizes these processes. Now a new employee, in many different countries, can be hired, trained, and coached — everything — on a remote basis.

We use video technology to do virtual face-to-face interactions, which we believe is important to be successful. We still are a very human-centric company. If we don’t have this face-to-face contact, we won’t succeed. So, the Cloud Campus, which is maintained by a really small team, guarantees the needed processes so people can WFH on a permanent basis. 

Gardner: Lance, it’s impressive to think about you dealing face-to-face virtually with your clients in 83 different countries and across many cultures and different ways of doing business. How have you been able to use the same technology across such a diversity of business environments? 

Brown: That’s an excellent question. As José said, the Teleperformance Cloud Campus gives us the flexibility and availability to do just that. For our employees, it just becomes a one-on-one human interaction. Our employees are getting the same coaching, counseling, and support from all aspects of the business – just as they were when they were in the brick-and-mortar office.

We are leveraging, like José said, video technology and other technologies to deliver the same user experience for our associates, which is key. Once we deliver that, then that translates out to our clients, too, because once we have a good associate experience, that experience is the same for all of the clients that the associate is handling. 

Gardner: Lance, when you are in a brick-and-mortar environment, a physical environment, you don’t always have the capability to gather, measure, and digitize these interactions. But when you go to a digital workspace, you get an audit trail of data.

Is that something you have been able to utilize, or how do you expect that to help you in the future? 

Digital workspaces offer data insights 

Brown: Another really good question. We continue to gather data, especially as the world becomes fully digitized. And, like you said, we provide many digital solutions for our clients. Now we are taking those same solutions and leveraging them internally for our employees.

We continue to see a large amount of data that we can put to work: for our process improvements, for our technology, analysis, and process excellence (T.A.P.) teams, and for the transformation work our agents do for our clients every day.

Gardner: Tim, when it comes to translating the value through the workforce to the end user, are there ways we can measure that productivity benefit?

Minahan: One of the key things that came up early on in the pandemic was a huge spike in worker productivity. Companies settled into a hybrid work model, and that phase was about unifying work and providing employees in a remote environment with reliable access to all the resources they needed.

The second part was, as José said, ensuring that all employees can safely access applications and information — that our corporate information remains secure.

Now we have moved into the simplify-and-optimize phase. A lot of companies are asking, “Gee, what are the tools I need to introduce to remove the noise from my employees’ day? How do I guide them to the right information and the right decisions? How do I support more collaboration or collaborative work execution, even in a distributed environment?”

If you have a foundation of a solid digital workspace environment that delivers all the work resources, that secures all the work resources, and then leverages things like machine learning (ML), virtual assistants, and new collaborative work management tools that we are introducing — it provides an environment where employees can perform at their best and can collaborate from the most remote locations.

Gardner: José, most businesses nowadays want to measure everything. With things like Net Promoter Scores (NPS) from your agents and employees, when it comes to looking for the metrics of whether your return on investment (ROI) or return on innovation is working, what have you found? Have you been able to verify what we have been talking about? Does this move beyond theory into practice, and can it be measured well?

Güereque: Yes, that’s very important. As I mentioned, being able to create a Cloud Campus concept, which has all the processes and metrics in place, allows us to compare apples with apples in a way that we can understand the behavior and the performance of an agent at home — same as in brick-and-mortar. We can compare across those models and understand exactly how they are performing. 

We found that a lot of our agents live in cities, which have a lot of traffic. The commuting time for them, believe it or not, was around one-and-a-half hours – as many as two hours for some of them — just going to and from work. Now, all that commuting time is eliminated when they WFH.

People started to give a lot of value to those things because they can spend their time smarter — or have more family time. So from a customer, client, and employee satisfaction standpoint, those employees are more motivated — and they are performing great. Their scores are similar to — and in some cases better than — before.

So, again, if you are able to measure everything through the digitalization of the processes, you can understand the small things you need to tweak in order to maintain better satisfaction and improve all scores across both clients and employees.
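The kind of measurement José describes can be made concrete. As a small illustration (a generic sketch, not Teleperformance’s actual tooling), Net Promoter Score is the percentage of promoters (scores of 9 or 10) minus the percentage of detractors (scores of 0 to 6), and computing it the same way for every cohort makes WFH and brick-and-mortar agents directly comparable:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    if not scores:
        raise ValueError("need at least one survey response")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

# Hypothetical survey results for two cohorts of agents
wfh_scores = [10, 9, 9, 8, 7, 10, 9, 6]
office_scores = [9, 8, 7, 6, 5, 9, 8, 7]

print(f"WFH NPS:    {nps(wfh_scores):+.1f}")
print(f"Office NPS: {nps(office_scores):+.1f}")
```

The survey numbers above are invented; the point is that identical, digitized metrics allow the apples-to-apples comparison across work models that José describes.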

Gardner: Lance, over the past 30 years in IT, we’ve been very fortunate that we can often do more with less, whether it’s the speed of the processor or the size of the disk drive. I’m wondering if that’s translating into this new work environment.

Are you able to look at cost savings when it comes to the type of client devices for your users? Are your networks more efficient? Is there a similar benefit of doing more with less when we get to remote work and digital workspaces?

Cost savings accumulate via BYOD

Brown: Yes, especially for the endpoint device costs. It becomes an interesting conversation when you’re leveraging technology like Citrix. For that [thin client] endpoint, all of the compute is back in the data center or in the cloud.

Your overall total cost of ownership continues to go down because you’re not spending as much money on your endpoint, as you had in the past. The other thing is the technology allows us to take an existing PC and make it a thin client, too. That gives you a longer life of that endpoint, which, overall, reduces your cost.

It’s also much, much safer. I can’t stress the security benefits enough, especially in the current environment. It just makes you so much safer because your target environment and exposed landscape are reduced. Your data center houses all the proprietary information, and your endpoint is just a dumb endpoint, for lack of a better word. It doesn’t present a large attack surface. So you really reduce your attack surface by leveraging Citrix and putting more IT infrastructure in your data center and in your cloud.

Güereque: There is another really important factor, which is enabling bring your own device (BYOD) to become a reality. With the pandemic, equipment manufacturers, the PC makers and everyone else, have had much longer delivery times.

What used to take two to three weeks to deliver now takes up to 10 weeks. Sometimes the only way to be on time is to leverage the employees’ own equipment and enable its use in a secure way. So this is not just about the economics of avoiding investment in the end device; it’s an opportunity to get people working faster rather than waiting on delivery of new equipment.

Minahan: At Citrix, we’re seeing other clients do that, too. I was recently talking with the CIO of a financial services company. For them, as the world moved through the pandemic, they saw the demand for their digital banking services quadruple or more. They needed to hire thousands of new financial guidance agents to support that.

And, to José’s point, they couldn’t be bothered with sending each one a new laptop. So BYOD allowed them to gain a distributed digital workspace and to onboard these folks very quickly. They attained the resources they needed to service their end banking clients much faster.

Güereque: Just following on Tim’s comments, I want to give you an example. Two weeks ago we were contacted by a client who needed to have 1,200 people up and running within a week. At the beginning, we were challenged: we had to put 1,200 new employees, with equipment, in place. Our team came back with a plan, and I can tell you that last week they were all in production. So without this flexibility, and enablers like Citrix, we wouldn’t have been able to do it in such a short time frame.

Gardner: Lance, as we seek work-from-home solutions, we’re using words like “life” and “work balance.” We’re talking about employee behaviors and cultures. It sounds like IT is closer to human resources (HR) than ever.

Has the move to remote work using Citrix helped bond major parts of your organization — your IT capability and your HR capability, for example?

IT enables business innovation

Brown: Yes, now they’re seeing IT as an enabler. We are the enabler to allow those types of successes, from a work-life balance and human standpoint. We’re in constant contact with our operations team, our HR team, and our recruiting team. We are the enabler to help them deliver everything that we need to deliver to our clients.

In the old days, IT wasn’t viewed as an enabler. Now we’re viewed as an enabler, and José and I are at the table for every conversation that’s happening in the company. We come up with innovative solutions to enable the business to meet those business needs.

Gardner: Tim, I’m going to guess that this is a nice way of looking at the glass as half full. IT enabling such business innovation is going to continue. How do you expect in the future that we’re going to continue the trend of IT as an enabler? What’s in the pipeline, if you will, that’s going to help foster that?

Minahan: With the backdrop of the continued global shortage of skills, particularly the modern skills that are needed, companies such as Teleperformance are looking at what it means for their workforce strategies. What does it mean for their customer success strategies? Employee experience is certainly becoming a top priority to recruit the best talent, but also to ensure that they can perform at their best and deliver the best services to clients.

In fact, if you look at what employees are looking for going forward, there’s salary, and there’s the emergence of purpose: Is this company doing something I believe in, something that’s contributing to the world and the environment?

But right behind that is, “What are the tools and resources? How effectively are they delivering them to me so I can perform at my best?” And so IT, to Lance’s point, is a critical pillar, a key enabler, of ensuring that every company can work on making employee experience a competitive advantage.

Gardner: José, for other companies trying to make the most of a difficult situation and transitioning to more flexible work models, what would you recommend to them now that you’ve been through this at such a large, global scale? What did you learn in the process that you think they should be mindful of?

Change, challenge, partner up

Güereque: First of all, be willing to change and to challenge yourself. We can sometimes do much more than we believe. It’s easy to be skeptical of that, because of the legacy we have been working through over many years, but today we have been challenged to reinvent ourselves.

The second is that there is a ton of public information we can leverage to find successful use cases and learn from them. And the third is to approach a consultant or partner with experience in putting all of these things in place. Because, as I mentioned, it is not just a matter of enabling people to WFH; it’s a matter of putting the whole security environment in place, along with all of the tools required to perform as a team and deliver results.

Brown: I’ll add one thing to that. It was about a year ago that I was visiting with Tim, and the pandemic was beginning to unfold. It had started overseas and was rapidly moving toward the US and other regions.

I met with Tim at Citrix and I said, “I’m not sure exactly what’s going to happen. I don’t know if this is going to be 100 people that go home or 300,000 people. But I know we need a partner to work with, and I know we have to partner through this process.”

So the big thing is that Citrix was that partner for us. You have to rely on your partners to do this because you just can’t simply do it by yourself.

Gardner: Tim, it sounds like an IT organization within Teleperformance is much more of an enabler to the rest of the organization, but you, at Citrix, are the enabler to the IT department at Teleperformance.

Minahan: Dana, to borrow a phrase, “It takes an ecosystem.” You move up that chain. We certainly partner with Teleperformance to enable their vision for a more agile workforce.

But, again, I’ll repeat that they’re doing that for their clients, allowing them to dial up and dial down resources as they need, to work-shift around the globe. So it is a true kind of agile workforce value chain that we’re creating together.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Citrix.



Disaster Recovery to Cyber Recovery — The New Best Future State

The clear and present danger facing businesses and governments from cybersecurity threats has only grown more clear and ever-present in 2021. As the threats from runaway ransomware attacks and state-sponsored backdoor access to networks deepen, too many businesses have a false sense of quick recovery using traditional business continuity and backup measures.

That’s because the criminals are increasingly compromising vulnerable backup systems and data first — before they attack. As a result, visions of flipping a switch to get back to a ready state may be a dangerous illusion that keeps leaders under a false sense of business as usual. 

The next BriefingsDirect security strategies discussion explores new ways of protecting backups first and foremost so that cyber recovery becomes an indispensable tool in any IT and business security arsenal. We will now uncover how Unisys and Dell Technologies are elevating what was data redundancy to protect against natural disasters into something much more resilient and powerful.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about the latest in rapid cyber recovery strategies and technologies, please welcome Andrew Peters, Director of Global Business Development for Security at Unisys, and David Finley, Director of Information Assurance and Security in the Data Protection Division at Dell Technologies. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: David, what’s happened during the last few years — and now especially with the FireEye and SolarWinds attacks — that makes cyber recovery as opposed to disaster recovery (DR) so critical?

Best defense is good offense

Finley: I have been asked that question a few times just in the last few weeks, as you might imagine. And there are a couple of things to note with these attacks, SolarWinds and FireEye.

One, especially with FireEye, the attack demonstrated to the entire world something we didn’t really have our eyes on, so to speak. There are folks with really good security, where the chief information security officer (CISO) and the security team sit back and say, “We have really good security, we spent a lot of money, we have done a lot of things, we feel pretty good about what we have done.” That’s all great, but what FireEye demonstrated is that even the best can be compromised.

If you have a nation state-led attack or you are targeted by a cybercrime family, then all bets could be off. They can get in and they have demonstrated that with these latest attacks. 

The other thing is, they were able to steal tools. Nothing worse can happen than the bad guys having new toolsets they can actually use. We believe that, with the increased threat from bad actors because of these things, we really need the notion of a cyber vault, or the third copy, if you will. Think about the 3-2-1 rule: three copies, on two different types of media, with one off-site or offline. This is really where we need to be.
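A rule like 3-2-1 can also be checked mechanically against a backup inventory. Here is a minimal sketch in Python; the inventory format and field names are hypothetical, not a Dell or Unisys API:

```python
def satisfies_3_2_1(copies):
    """Check a backup inventory against the 3-2-1 rule: at least
    3 copies, on at least 2 different media types, with at least
    1 copy off-site or offline."""
    enough_copies = len(copies) >= 3
    enough_media = len({c["media"] for c in copies}) >= 2
    has_isolated = any(c["offsite"] or c["offline"] for c in copies)
    return enough_copies and enough_media and has_isolated

# Hypothetical inventory: primary disk, DR replica, offline vault copy
inventory = [
    {"media": "disk", "offsite": False, "offline": False},
    {"media": "disk", "offsite": True,  "offline": False},
    {"media": "tape", "offsite": True,  "offline": True},
]
print("3-2-1 compliant:", satisfies_3_2_1(inventory))
```

Note that the first two copies alone (a primary and an always-connected DR replica) would fail the check, which is exactly the gap the cyber vault closes.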

Gardner: Andrew, it sounds like we have to assume that we are going to be or are already attacked. Just having a good defense isn’t enough. What’s the next level that we need to attain? 

Peters: A lot of times organizations think their security and their defenses are strong enough to mitigate virtually anything that happens to the organization. But what’s been proven now is that the bad guys are clever and are finding ways in. With SolarWinds, they found a backdoor into organizations and are coming in as a trusted entity.

Just because you have signed Security Assertion Markup Language (SAML) tokens and signed certificates, you are still letting them in; it’s just been proven that you can’t exactly trust them. And when they come inside an organization and they win, what do you do next? What do you do when you lose? The concept here is to plan to win, but at the same time prepare to lose.

Gardner: David, we have also seen an uptick in the success of ransomware payouts. How is that also changing the landscape for how we protect ourselves? 

Finley: I was recently thinking about that, and I saw something written on security; it might have been a Wall Street Journal article. They said CISOs have a decision to make after these kinds of attacks. The decision really becomes pretty simple: Do they pay the ransom or do they not pay the ransom?

We would all like to say, “Don’t pay the ransom.” The FBI says don’t pay the ransom, because of the obvious reasons. If you pay it, they may come back, they are going to want more, and it sets a bad precedent, all those things. But the reality is when this actually happens to a company, they have to sit down and make the hard decision: Do I pay or do I not pay? It’s based upon getting the business running again. 

We want to position ourselves together with Unisys to create a cyber vault that is secured in a way that our customers will never have to pay the ransom.

If we have a protected set of data that is the most important data to the firm – the stuff that they have to have tomorrow morning to actually run the business — and it’s in a protected vault secured by zero trust, through Unisys Stealth software, to be able to secure it and get it back out and put it back into play, that’s the best answer.

So that means not paying the ransom and still having the data available to bring the business back into play the next day. A lot of these attacks, as we know, are not only stealing data, like they did recently with FireEye, but also encrypting, deleting, and destroying the data.

Gardner: Another threat vector these days is that more people are working remotely, so there are more networks involved and more vulnerable endpoints. People are having to be their own IT directors in their own homes, in many cases. How does the COVID-19 work-from-home (WFH) trend impact this, Andrew? 

Work from home opens doors 

Peters: There are far more points of entry. Whereas you might have had anywhere from 10 percent to 15 percent of your workforce remotely accessing the network, and that access was fairly controllable, now you have up to 100 percent of your knowledge workers working remotely and accessing the network. There are more points of entry. From a security perspective, more rules need to be addressed to control access into the network and into operations. 

Then one of the challenges an organization has is that once the bad guys are inside these big, flat networks, they can map the network. They learn the systems that are there, they learn the operations extremely well, and they manipulate them, taking advantage of zero-day vulnerabilities to operate within that environment without even being discovered. Once again, going back to SolarWinds, they were operating for about eight months before they were eventually discovered.

Gardner: And so, going on 30 years into using wide area networks (WANs), are we still under a false sense of security? David, do we not understand the threats around us?

Finley: There is the notion within our organizations and within the public sector that we believe what we have done is good enough. And good enough can be our enemy. I can’t tell you the number of times I have spoken with folks during incident response or after incident response from a cyberattack where they said, “We thought we were secured. We didn’t know that this could happen to us, but it did happen to us.”

That false sense of security is very real, evidenced by these high-level attacks on firms that we never thought it would happen to. It’s not just FireEye and it’s not just SolarWinds. We have had attacks on COVID-19 clinical trial providers, we have had attacks on our own government entities. Some of these attacks have been successful. And a lot of these attacks don’t even get publicized.

Here is the most dangerous thing in this false sense of security we are talking about. I ask customers, “What percentage of the attacks do you actually believe you have visibility into within your own region?” And the honest answer is usually less than 20 percent.

Because I do this every day for a living, as does Andrew, we probably have visibility into maybe 50 percent, because a lot of times these attacks happen and they get swept under the rug. They quietly get cleaned up, right? So we don’t know what’s happening. That also leads us to a false sense of security.

So again, I believe that we do everything we can upfront to secure our systems, but in the event that something does get through, we need to make sure that we have a secure offline copy of these backups and of our data.

Be prepared to resist ransom

Peters: An interesting dynamic I have noticed since the pandemic is that organizations, while they recognize it’s important to have that cyber recovery third copy to bring themselves back from the brink of extinction, say they can’t afford to do it right now. The pandemic has squeezed them so much. 

Well, we know that they are invested in backup. We know they are invested in DR, but they say, “Okay, we may table this one because it’s something that is a bit too expensive right now.”

However, on the other side, there are organizations that are picking up on this at this time, saying, “You know what? We see this is way more critical because we know the attacks are picking up.”

But the challenge here is the organizations that are feeling squeezed, that feel they can’t afford to invest in a solution like this. The question is, can they afford not to invest, given all the exposure to threats their organizations face? And we keep going back to SolarWinds, which is a big wake-up call.

But go back to other attacks on organizations in the recent past, such as the WastedLocker backdoor, and the procedures the bad guys are using: they get into organizations to learn how they operate, find additional backdoors, and even learn to avoid the security technologies that were put in specifically to detect such breaches. They can operate with impunity within that environment. Eventually they learn it well enough to shut things down to the point where the company has two choices: pay the ransom or go out of business.

And if you are a bad guy, what would be your goal? Do you want to expose the company’s information and embarrass them? No, you want to make money. And how do you make money? You squeeze an organization as much as possible. That’s what ransomware and these backdoors are designed to do: squeeze an organization to the point where it is forced to pay the ransom.

Gardner: So we need a better, fuller digital insurance policy. Yet many organizations have insurance in the form of DR designed for business continuity, but that might not be enough.

So what are we talking about when we make this shift from business continuity to cyber recovery, David? What are the fundamental challenges organizations need to overcome to make that transition? 

Cyber more likely than natural disaster

Finley: The number-one challenge I have seen over the past four or five years is that we need to realize that DR — and all the tenets of DR — will not cover us in the event of a cyber disaster. So those are two very different things, right? 

Oftentimes I challenge people with the notion of how they differ. And just to paint a picture, we have been doing DR basically the same way for many decades. The way it normally works is we have our key systems and their data connected to another site outside of a disaster radius, such as for earthquakes, floods, tornados, and hurricanes. We copy that data through a wide-open pipe to the other side on a regular basis. It’s an always-open circuit to the other side, and we have been doing it that way for 40 years.
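David’s contrast can be reduced to a toy model: an always-open replication pipe mirrors whatever is on the primary, encrypted files included, while a third copy taken behind an air gap before the attack stays clean. The file names and states below are purely illustrative:

```python
# Toy model of DR replication vs. an air-gapped vault copy.
primary = {"orders.db": "clean", "users.db": "clean"}

# Third copy taken into the vault while the data was good; gap then closed.
vault = dict(primary)

# Ransomware encrypts everything on the primary...
primary = {name: "encrypted" for name in primary}

# ...and the always-open replication circuit mirrors it to the DR site.
dr_site = dict(primary)

print("DR site:", dr_site)   # polluted along with production
print("Vault:  ", vault)     # still clean and usable for cyber recovery
```

This is the FDIC observation quoted later in the discussion: malware in the primary location very often makes its way to the DR location over exactly this kind of open circuit.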

What I often ask customers is based on that, how much do you spend every year to do DR? What does it really cost? Do you test? What are the real costs for DR for you? And there is usually a tangible answer.

With that in mind, the next question is, “If you look at the probability of something happening to you in the future, what do you think is more probable — a natural disaster or a cyber disaster?” And the answer, unanimously — it’s been 100 percent in recent years — is a cyber disaster.

Of course, the next question is, “How do you deal with cyber recoveries, and is it a function of DR within your organization?” And the answer usually is, “Well, we don’t deal with it very well.”

So the IT infrastructure and security groups have, in the last year, been making cyber recovery part of DR planning — and it’s taken a long time to get there. When you think about it, if the probability of cyber events is much higher than that of disaster events, and we spend $1 million a year on DR, how much do we spend on cyber recovery? The answer, historically, has been very little.

That’s what has to change. We have to change how we approach this. We have to bring the security and risk folks into those decisions on protecting data. We need to look at it through the lens of a cyber event destroying all of the data, just as a hurricane may destroy all of the data. 

Peters: You know, Dave, in talking to a lot of organizations on what exactly they are going to do if they have a ransomware meltdown, we ask, “How are you going to recover?” They say, “We are going to go to our DR.” 

Hmm, okay. But what if you discover in your recovery process that those files are polluted? That’s going to be a bad situation. Then they may go find some tapes and such. I ask, “Okay, do you have a runbook for this?” They say, “No.” Then how will they know exactly what to do?

And then the corollary to that is, how long is this recovery going to take? How long can you sustain your operations? How long can you sustain your company, and what kinds of losses are you prepared to sustain? 

Wow, and you are going to figure this all out when you are going through the process of trying to bring your organization back after a meltdown? That’s usually the tipping point where you are going to say, like other organizations have said, “You know what? We are just going to have to pay the ransom.”

Finley: Yes, and that raises a question we often see folks miss. And that is, “Do you believe that your CEO and/or your board of directors — the folks who don’t do IT as an everyday job, the folks who are running the business — understand the difference between DR and cyber recovery?”

If I were to ask people on the board of any organization if they were secure in their DR plans, most of them would say, “Yes, that’s what we pay our teams to do.”

If I were to ask them, “Well, do you believe that being able to recover from cyber disasters is included in that and done well?” The answer would also be, “Yes.” But oftentimes that is simply not the truth.

They don’t understand the difference between DR and cyber recovery. The data can all be gone from a cyber event just as easily as it can be gone from a hurricane or a flood. We have to approach it from that perspective and start thinking through these things.

We have to take that to our boards and have them understand, “You know what? We’ve spent a lot of money for 40 years on DR, but we really need to start spending money on cyber recovery.”

Yet we still get a lot of pushback from customers saying, “Well, yes, of course making a third copy and storing it somewhere secure in a way that we can always get it back — that’s a great idea — but that costs money.”

Well, you have been spending millions of dollars on DR, so make cyber recovery part of that effort.

Gardner: To what degree are the bad guys already targeting this discrepancy? Do they recognize a capability to go in and compromise the backups, the DR, in such a way that there is no insurance policy? How clever have the bad guys become at understanding this vulnerability?

Bad guys targeting backups

Peters: What would you do if you were the bad guy and you wanted to extort money from an organization? If you know they have any way of quickly recovering, then it’s going to be pretty hard to extort from them. It’s going to be hard to squeeze them. 

These guys are not broke; they are often professional organizations. There’s a lot of focus on Russia’s GRU, and groups like Cozy Bear and a number of these other organizations are well-funded. They have very clever people there. They are able to obtain technologies, reverse engineer them, understand how the security technologies operate, and understand how to build tools to avoid them. They want to get inside organizations, learn how the operation runs, and learn specifically what’s key and critical to an organization.

The second thing is that, while they want to take out the primary systems, they also want to make sure you are not able to restore them. This is not rocket science.

So, of course they are going to target backups. They are going to pollute the files that you actually put into your backups, so that if an organization tries to recover, it finds itself in a situation as bad as — if not worse than — it was previously. What would you do? You have to figure that this is exactly what the bad guys are doing in organizations — and they are getting better at it.

Finley: Andrew, they are getting better at it. We have been watching this pretty closely for the last year now. If you go out to any of the pundits, subscribe to outlets like Bleeping Computer, or follow the security and CISO communities, you see the same thing. They talk about it getting worse, and it is getting worse on a regular basis.

They are targeting backups. We are finding it actually written in the code. The first thing the malware does when they drop it on the network is seek out security tools and disable them. Then it seeks out shadow copies and backup catalogs and takes them out, too.

And this is the one that a lot of people miss. I read this recently from the FDIC, in guidance they are publishing to their member banks. They said DR has been done well for a number of decades. You copy information from one bank to another, or from one banking location to another, and you are able to recover from disasters and spin up applications and data in a secondary location. That’s all great.

But realize that if you have malware attacking you in your primary location, it very often will make its way to your DR location, too. The FDIC said this pointblank, they said, “And you will get infected in both locations.”

A lot of people don’t think about that. I had a conversation last year with a CISO who said that if an attack gets to your production environment, the attackers can manage to move laterally and get to your DR site. And then the data is gone. And this particular CISO said, “You know, we call that an ‘Oh, crap’ moment because there is nothing we can do.”

That’s what we now need to protect against. We have to have a third copy. I can’t stress it nearly enough.

Gardner: We have talked about this third copy concept quite a bit. Let’s hear more about the Dell-Unisys partnership. What’s the technology and strategy for getting in front of this so that cyber recovery becomes your main insurance policy, not your afterthought insurance policy?

Essential copy keeps data dynamic

Finley: We want everyone to understand the reality. The bad guys can get in, they can destroy DR data, we have seen it too many times. It is real. These backups can be encrypted, deleted, or exfiltrated. And that is the fact, so why not have that insurance policy of a third copy?

There’s only one way to truly protect this information. If the bad guys can see it, get to the machines that hold it, and get to the data, whether the data is locked on disk or not, they can destroy it. It’s a really simple proposition.


We identified many years ago that the only way to really, truly protect against that is to make a copy of the data and get it offline. That is evidenced today by the guidance being given to us by the US federal government, the Department of Homeland Security, and the FBI. Everybody is giving us the same guidance. They are saying take the backups, the copies of your data, and store them somewhere away from the data that you are protecting, ideally on the other side of an air gap and offline.

When we create this third copy with our Dell solution for cyber recovery, we take the data that we back up every day and move that key data to another site, across an air gap. The idea is that the connection between the two locations is dark until we run a job to actually move the data from production to a cyber recovery vault.

With that in mind, there is no way in until we bring up that connection. Now, that connection is secured through Unisys Stealth and through key exchanges and certificate exchanges to where the bad guys can’t get across that connection. They can’t get in. In other words, if you have a vault that’s going to hold all your important data, the bad guys can’t get in. They can’t get through the door. Even though we open a connection, they can’t use that connection to ride into our vault. 

And with that in mind we can take that third copy and store it in this cyber vault and keep it safe. Now, getting the data there and having the systems outside the vault communicate to the machines inside the vault – to make sure that all of that is secure – is something we partnered with Unisys on. I will let Andrew tell you about how that works.
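To make the idea concrete, here is a minimal, purely hypothetical sketch of a replication link that stays dark except while a sync job runs. None of these class or function names come from the actual Dell or Unisys products; they are invented for illustration only.

```python
from datetime import datetime, timezone

# Toy simulation of the "dark until the job runs" replication link.
# All names here are hypothetical; the real implementation is proprietary.

class VaultLink:
    """A replication link that is down (dark) except during a sync job."""
    def __init__(self):
        self.is_up = False

    def bring_up(self):
        self.is_up = True

    def tear_down(self):
        self.is_up = False

def sync_to_vault(link, production_data, vault):
    """Open the link only long enough to copy critical data, then go dark."""
    link.bring_up()
    try:
        stamp = datetime.now(timezone.utc).isoformat()
        for name, blob in production_data.items():
            vault[name] = {"data": blob, "copied_at": stamp}
    finally:
        link.tear_down()  # vault is unreachable again, even if the copy fails

vault = {}
link = VaultLink()
sync_to_vault(link, {"color_match_db": b"\x00\x01"}, vault)
print(link.is_up)                 # False: the connection is dark again
print("color_match_db" in vault)  # True: the third copy is in the vault
```

The key design point the speakers describe is the `finally` clause: the link is torn down whether or not the copy succeeds, so the vault never stays reachable by accident.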

Secure data swiftly in cyber vault

Peters: Okay. First off, Dave, you are not talking about putting all of the data into the vault, right? Specifically people are looking at only the data that’s critical to an operation, right?

Finley: Yes. And a quick example of that, Andrew, is an unnamed company in the paint industry. They create paint around the world and one of their key assets is their color-matching databases. That’s the data they put into the cyber vault, because they have determined that if that proprietary data is gone, they can lose $1 million per day.

We can take a third copy and store it in the cyber vault and keep it safe. We have partnered with Unisys on getting the data there and making the communication with all of the machines secure. 

Another example is an investment firm we work with. This investment firm puts their trade databases inside of the cyber vault because they have discerned that if their trade databases are infected, affected, or deleted or encrypted – and they go down – then they lose multiple millions of dollars per hour. So, to your point, Andrew, it’s usually about the critical business systems and essential information, things like that. But we also have to be concerned with the critical IT materials on your networks, right?

Peters: That’s right: other key assets like your Active Directory and your domain servers. If you are a bad guy, what are you going to attack? They want to cripple you so badly that even if you still had your essential data, you couldn’t use it. They are going to try and stop you in your tracks.

From a security perspective, there are a few things that are important, and one is data efficacy. But first is knowing what I am going to protect. Next, how best am I going to securely move that critical data to a cyber vault? There has to be automation, so I am not depending on somebody to do this. This should happen automatically.

So, to be clear, I am going to move it into the secure vault, and I want that vault to be air gapped. I want it to be abstracted from the network and the environment so bad guys can’t find it. Even if they could find it, they can’t see anything, and they can’t talk to it. 

The second thing I want is to make sure that the data I’m moving has high efficacy. I want to know that it’s not been polluted because bad guys are going to want to pollute that data. Typically, the things you put into the backup – you don’t know, is it good, is it bad, has it been corrupted? So if it’s going to be moved into the vault, we want to know if it’s good or if it’s bad. That way, if we are going to be going into a recovery, I can select the files that I know are good and I can separate them from the bad.

This is really important. That’s one of the critical things when you’re going into any form of cyber recovery. Typically you aren’t going to know what’s good data unless you have a system designed to discern good from bad.
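As a toy illustration of the “last known good” idea, the sketch below flags backup copies with a suspicious integrity signal and picks the newest unflagged one. The fields and threshold are invented for illustration; real analytics, such as the CyberSense software mentioned later in this discussion, use far richer signals.

```python
# Toy sketch of "last known good" selection over dated backup copies.
# A large jump in file entropy is one common hint of mass encryption.

backup_copies = [
    {"taken": "2021-05-01", "entropy_jump": 0.02},
    {"taken": "2021-05-02", "entropy_jump": 0.03},
    {"taken": "2021-05-03", "entropy_jump": 0.91},  # suspicious: mass encryption?
]

SUSPICION_THRESHOLD = 0.5  # hypothetical cutoff for flagging a copy

def last_known_good(copies, threshold=SUSPICION_THRESHOLD):
    """Return the newest copy whose integrity signal is below the threshold."""
    good = [c for c in copies if c["entropy_jump"] < threshold]
    return max(good, key=lambda c: c["taken"]) if good else None

best = last_known_good(backup_copies)
print(best["taken"])  # 2021-05-02: the newest copy not flagged as polluted
```

The point is the separation Peters describes: recovery starts from the newest copy the analysis believes is clean, not simply from the newest copy.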


You don’t want to be rebuilding your domain server and have the thing find out that it’s been polluted, that it’s locked, and that it has ransomware embedded in it. Bad guys are clever. You have to ask, “What would I do if I were a clever bad guy?” Sometimes it’s hard to think like that unless you put your bad guy hat on. 

There’s another important element here, too: the element of time. How quickly am I going to get to this protected data? I have all of this data, these files and these applications, and they’re in my protected vault. Now, how am I going to move them back into my production environment?

But my production environment actually might still be polluted. I might still have IT and security personnel trying to clean up that environment. At the same time, I have to get my services back up and running, but I have a compromised network. And what’s the problem? The problem is time.

Ultimately, all of this comes down to business continuity and time. How quickly can I continue my critical operations? How quickly am I going to be able to get them up and running – despite the fact that I still have a lot of issues with ransomware and with hackers inside my IT operations?

From a security and rapid recovery perspective, there are some unique things that we can do with a cyber recovery approach. A cyber recovery solution automates the movement of your critical data into a secure vault, then analyzes it for data efficacy to determine if the data has been compromised. It also provides you with a runbook so you know how you’re going to get that data back out and get those systems operating so you can get users back online.

So even with a zero-day attack, by being able to use things like cryptography, cloaking, and basically hiding things from the rest of the network, I can get cryptographic micro-segmentation to restore the operations of critical services and get users back up on those services. Even if my network is compromised, I can start doing that very, very quickly.

When you put the whole cyber recovery solution that we have together – with automation, the security built in, to get to the critical data on a daily basis, move it into a vault, analyze it, and then obtain a runbook capability – you can quickly move it all back out and get those critical services back up and running. 

Manage, monitor, and restore data

Finley: One of the things that I hope everyone understands is that we can create a secure vault, put information in it, and do all of that securely. But as Andrew was saying, most folks also want the ability to monitor, manage, and update that secure vault from their security operations center (SOC) or their network operations center (NOC).

When we first began our relationship with Unisys, around the Stealth software, I was very excited. For a couple years before that, we were working with folks to show them how to use firewalls to protect information going in and out of our cyber vault, or how to configure virtual private networks (VPNs) to make that happen.

But when we got together a few years ago and I looked at the Unisys Stealth software, it was different. From a zero trust networking perspective, with nothing more than agents on the machines, the vault becomes invisible.

When I saw the tunnels that Unisys creates to our Dell vault I realized it not only allows us to have a new way to manage everything from the outside, it allows us to take clean data inside the vault and restore it quickly through the secure tunnels back to the outside.

When I first saw how secure those tunnels Unisys creates to our Dell vault are, I quickly realized that not only does it give us a new way to manage everything from the outside, we can also monitor everything from the outside. It allows us to take what we know is clean data inside the vault and restore it quickly through one of those secure Stealth tunnels back to the outside.

That is hugely important. We all know there are various ways to secure communications like this. Probably the least secure nowadays are VPNs, or remote access, if you will. The next most secure, quite frankly, is firewall access, or port access, and the most secure is, I believe, zero trust software like we get with Unisys Stealth.

Peters: It’s not that I want to beat down on firewalls, because firewalls and ancillary technologies are very effective in protecting organizations – but they’re not 100 percent effective. If they were, we wouldn’t be talking about ransomware at all. The reason that we are is because breaches occur. The bad guys go after the low-hanging fruit, and they’re going to hit those organizations first. Then they’re going to get better at their craft and they’re going to go after more and more organizations.

Even when organizations have excellent security, you can’t always prevent against the things that people do. Or now, with SolarWinds, you can’t even trust the software that you’re supposed to trust. There are more avenues into an organization. There are more means to compromise. And the bad guys can monetize what they are doing through Bitcoin in these demands for ransoms.

So, at the end of the day, the threats to organizations are changing. They’re evolving, and even with the best defenses an organization has, you’re probably going to have to plan on being compromised. When the compromise happens, you have to ask, “What do we do now?”

Gardner: Are there any examples that you can point to and show how well recovery can work? Do we have use cases or actual customer stories that we can relate to show how zero trust cyber recovery works when it’s done properly?

Get educated on recovery processes

Finley: Sure, one happened not too long ago. It was a school system in California, one of the biggest school systems in that part of the state. That school system worked with us to procure the cyber recovery solution, created a cyber vault with the third copy, and secured all of that. We installed it, got it all up and running, and moved data into the vault on a Thursday. And then, over that very weekend, the school system had a cyber event. They had just gotten the vault up and running and had copied all of the critical data into it.

The data in the vault was secure. They were able to recover it as soon as they forensically could, according to the FBI, because the data was secure. It saved a bunch of time and a lot of effort and money.

Now, I contrast that with a couple of other major attacks on other companies in the last 120 days. In one, they had no cyber vault; the customer data was attacked in production, and a lot of the DR data was attacked as well. That particular set of events was carried out through a whole series of social engineering. Their systems were taken down and encrypted, and a lot of the data was destroyed.


It took them days, if not weeks, to begin the recovery process, because of a number of things that we all need to be aware of. If you don’t have data that you know is secured somewhere else and is clean, you’re going to have to verify that it’s clean before you can recover it. You’re going to have to do test recoveries to systems and make sure you’re not restoring malware. That’s going to take a long time. And you’re not even going to be able to do that until law enforcement tells you that you can.

Also, when you’re in the middle of an incident response, regardless of who you are, the last thing you’re going to do is connect to the Internet. So if your data is stuck somewhere in a public cloud or clouds, you’re not going to be able to get it while you’re in the middle of an incident response.

The FBI characterizes your systems as a crime scene, right? They put up yellow tape around the crime scene, which is your network. They are not going to allow anybody in or out until they’re satisfied they’ve gathered all the data to be able to figure out what happened. A lot of folks don’t know that, but it is simply true.

So having your critical data accessible offline, on the other side of the crime scene, and having it scrubbed every day to make sure it is absolutely clean, is very important.

In the case of the second company, it took days, if not weeks, before they could recover information.

There is a third example. The IT people there told me the cyber vault saved their company. “It saved our butts,” they said. In this particular case, the data was encrypted in all of their systems. They were using backup software to write to a virtual client, and they were copying that data from virtual clients into our cyber vault.

They also had our physical clients, called Data Domain from Dell, in production and writing into the cyber vault. They did not have our analytics software to scrub and make sure it was clean because it was an older implementation. But at the end of the day, everything in production was gone. But they went to the vault data and realized that the data there was all still good.

The bad guys couldn’t get there. They couldn’t see the cyber vault, didn’t know how to get there, and so there was no way they could get to that information. In this case, they were able to spin up and restore it rather quickly.

In another incident, the customer had our CyberSense software in the cyber vault, which does cyber analytics on the data being stored. We can verify the data is clean at a 99.7 percent effectiveness level and tell the customer the data is restorable and clean. In this case the FBI got involved.

The FBI actually used the information from our CyberSense software to help them ascertain the who, what, when, and where of what happened. Once they knew that, they knew the stored data was clean, and we were able to do a more rapid recovery.

Plan ahead with precise processes

Peters: What’s important, too, is knowing what to do. For example, what applications are you going to recover first? What do you need to do to get your operations running? Where are you going to find the needed files? Who’s going to actually do the work? What systems are you going to recover them onto?

Have a plan of action versus, “Okay, we’re going to figure this out right now.” Have a pre-prescribed runbook that’s going to take you through the processes, procedures, and decisions that need to be made. Where is the data going to be recovered from? What’s going to be determined? How is it recovered? Who’s going to get access to it?

This is different than DR. This is different than backup, it’s way different. It’s its own animal. You can define the runbook so that you can recover fully.

All of these things. There’s a whole plan that goes into this. This is different than DR. This is different than backup, it’s way different, it’s its own animal. And this is another place where Dell expertise comes in, being able to do the consulting work with an organization to define the plan or the runbook so that they can recover.
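As a rough illustration of what a runbook encodes, the sketch below models it as an ordered list of recovery steps, each with an owner and a source, executed in sequence. The steps and structure here are hypothetical, not an actual Dell consulting deliverable.

```python
# Illustrative runbook skeleton: a pre-prescribed, ordered recovery plan.
# Step names, owners, and sources are invented for illustration.

runbook = [
    {"step": "restore Active Directory", "owner": "identity team", "source": "cyber vault"},
    {"step": "restore DNS zone tables",  "owner": "network team",  "source": "cyber vault"},
    {"step": "restore trading database", "owner": "apps team",     "source": "cyber vault"},
    {"step": "reconnect user services",  "owner": "service desk",  "source": "production"},
]

def execute(runbook, restore_action):
    """Walk the runbook in order; rebuild materials come before applications."""
    completed = []
    for item in runbook:
        restore_action(item)          # e.g., kick off a restore job
        completed.append(item["step"])
    return completed

done = execute(runbook, restore_action=lambda item: None)  # stub action
print(done[0])  # restore Active Directory: critical rebuild materials first
```

The value is in deciding the order, owners, and sources before the incident, so nobody is improvising those answers during a compromise.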

Finley: I also wanted to point out a consideration about ransomware payments. It’s not always a clean option to actually make the payment, because of the U.S. Treasury’s Office of Foreign Assets Control (OFAC). If an organization pays the ransom, and the recipients of that payoff are considered a threat to the United States, the organization may be breaking another law by paying them.

So that needs to be taken into consideration if an organization is breached for ransom. If they pay, they may be breaking a federal law.

Gardner: Do the Dell cyber recovery vault and Unisys Stealth technologies enable a crawl, walk, and run approach to cyber recovery? Can you identify those corporate jewels and intellectual property assets, and then broaden it from there? Is there a way to create a beachhead and then expand?

Build the beachhead first

Finley: Yes, we like to protect what we call critical rebuild materials first. Build the beachhead around those critical materials first, then get those materials, such as Active Directory and DNS zone tables, into the vault.

Next put the settings for networks, security logs, and event logs into the vault — the stuff in your production environment that you could get out of the vault and make everything work again.

If you have studied the Maersk attack in 2017, they didn’t have any of that, and it was a very bad day. They finally found those copies in Africa, but if they hadn’t found them, it would have been a very bad month or year. And this kind of thing has happened to many folks besides them; theirs just happened to be the most publicized.

So with that in mind, get those materials into the vault as a beachhead, if you will. Let’s build together the notion of this third location. Let’s secure it with Unisys Stealth and with an air gap that’s engulfed in Stealth, with all of the connections in and out of the vault protected by Stealth using zero trust. Let’s take those critical materials and build that beachhead there. I’ve seen great success doing that, and then gathering maybe a total of three to five of the most critical business applications that a firm may have and concentrating on them first.


Here’s what we don’t want to do. I see no success in sitting down and saying, “Okay, we’re going to go through 150 different applications, with all of their dependencies, and we’re going to decide which of those pieces go into the cyber vault.”

It can be done, it has been done, and we have consulting that can help do that between Dell and Unisys, but let’s not start that way. Let’s instead start like we did recently with a big, big company in the U.S. We started with critical materials, we chose five major applications first, and for the first six months that’s what we did.

We protected that environment and those five major applications. And as time goes on, we will move other key applications into that cyber vault. But we decided not to boil the ocean, not look at 2,000 different applications and put all that data into the vault.

I recently talked to a firm that does pharmaceuticals. Intellectual property is huge for them. Putting their intellectual property into the cyber vault is really key. It doesn’t mean all of their systems. It means they want intellectual property in the vault, those critical materials. So build the beachhead and then you can move any number of things into it over time.

Peters: We have a demonstration to show what this whole thing looks like. We can show what it looks like to make things disappear on your network through cloaking; to move data from a production environment into a vault and retention-lock it; to analyze the data and find out if something bad is on it; and to select the last known good copy of data and start to rebuild systems in your production environment.

If malware somehow manages to slip inside an environment you’re recovering, we can detect that and shut it down in about 10 to 15 seconds. For organizations interested in seeing this working in real time, we have a real, live demo.

Finley: That’s a powerful, powerful demo for all of the folks who are listening. You can see this thing work from beginning to end: how the pieces fit together, how the data moves, and the scrubbing of the data to make sure it’s clean. It was fascinating for me the first time I saw it. It was great.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Unisys and Dell Technologies.



Rethinking employee well-being means innovative new support for the digital work-life balance

The tumultuous shift over the past year to nearly all-digital and frequently at-home work has amounted to a rapid-fire experiment in human adaptability.

While there are many successful aspects to the home-exodus experiment, as with all disruption to human behavior, there are also some significant and highly personal downsides.

The next BriefingsDirect work-from-home strategies discussion explores the current state of employee well-being and examines how new pressures and complexity from distance working may need new forms of employer support, too.

 Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To learn more about coping in an age of unprecedented change in blended work and home life, we’re now joined by Carolina Milanesi, Principal Analyst at Creative Strategies and founder at The Heart of Tech; Amy Haworth, Senior Director, Employee Experience at Citrix; and Ray Wolf, Chief Executive Officer at A2K Partners. The panel discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Amy, how are predominantly digital work habits adding to employee pressures and complexities? And at this point, is it okay not to be okay with all of these issues and new ways of doing things?

Haworth: Thanks, Dana. It’s such an important question. What we have witnessed in the last 12 months is an unfolding of the humanness of a very powerful transformational experience in the world. It is absolutely okay not to be okay. To be able to come alongside those who are courageous enough to admit it is one of the most important roles that organizations are being called upon to play in the lives of our employees. 

Oftentimes, I think about what’s happened in 2020 and 2021. It’s as if the tide went out, exposing fissures in our connectedness, in the way organizations operate, and even in the support systems we have in place for employees.

We’ve learned that unless employees are okay, our organizational health is at risk, too. Taking care of employees and enabling employees to take care of themselves shifts the conversation to new, innovative ways of doing that.

The last 12 months have shown us that we’ve never faced something like this before, so it’s only natural that we lacked a lot of the support systems and mechanisms to enable us to get through it.

There has been some amazing innovation to help close that gap. But it’s also been as if we’ve been flying the plane, while also figuring out how to do this all better. So, absolutely, yes, there are new challenges — but also a lot of growth. Being able to come alongside and being able to raise the white flag when needed makes it worth doing.

Gardner: Carolina, the idea for corporations of where their responsibility is has shifted a great deal. It used to be that employees would drive out of the parking lot — and they’d be off on their way, and there was no further connection. But when they’re working at home and facing new forms of fatigue or emotional turmoil, the company is part of that process, too. Do you see companies recognizing that?

Milanesi: Absolutely. To be honest with you, it’s been a long time in coming because although I might drive away from the parking lot — for a lot of employees — that’s not when the work stops.

Either because you’re working across different time zones or because you’re on call, if you’re a knowledge worker, chances are that your days are not a nine-to-five kind of experience. That had not been fully understood. The balance that people have to find in working and their private life has been under strain for quite some time.

Now that we’ve been at home, there’s no escape [from work]. That’s the realization companies have come to — that we are in this changed world and we are all at home. It’s not just that I decided to be a remote worker, and it’s just me. It’s me and whoever else is living with me — a partner, or maybe parents that I’m looking after, and children, all co-sharing apartments and all of that.

So, the stress is not just mine. It’s the stress of all of the people living with me. That is where more attentiveness needs to come in, to understand the personal situations that individuals are in — especially for under-represented groups.

For example, if you think about women and how they feel about talking — or not talking — about their children or caregiver responsibilities, they often shy away from talking about it. They may think it reflects badly on them.

All of those stresses were there before, but they became exacerbated during the pandemic. This has made organizations realize how much weight is on the shoulders of their employees, who are human beings after all.

Gardner: Ray at A2K Partners, you probably find yourself between the companies and their employees, helping with the technology that joins them and makes them productive. How are you seeing the reaction of both the employees and the businesses? Are they coming together around this — or are we just starting that process?

Wolf: I think we’re only in the second inning here, Dana. In our conversations with chief human resources officers (CHROs), they come to the conversation saying, “Ray, is there a better way? Do we really need to live with the way things are for our employees, particularly with the way they interface with technology and the applications that we give them to get their jobs done?”

We’re able to reassure them that, yes, there is a better way. The level of dissatisfaction and anxiety that employees have working with technology doesn’t have to be there anymore. What’s different now is that people are not accepting the status quo. They’re asking for a better way forward. The great news — and we’ll get into this a little bit later — is there are a lot of things that can be done.

The concept of work-life balance has changed, right? It’s no longer two elements at the ends of a seesaw, held in balance. It looks more like a puzzle, where you’re shifting in and out, often in 15-minute or 30-minute intervals, between your personal life and your work life.

So how can technology better facilitate that? How can we put people into their flow state so they have a clear cognitive view of what they need to get done, set the priorities, and lead them into a good state when they need to return to their family activities and duties?

Gardner: Amy, what hasn’t changed is the fundamental components of this are people, process, and technology. The people part, the human resources (HR) part, perhaps needs to change because of what we’ve seen in the last year.

Do you see the role of HR changing? Is it even being elevated in importance within the organization?

Empowered employees blend life, work 

Haworth: The role of HR really has elevated. I see it as an amplification of employee voice. HR is the employee advocate and the employee’s voice into the organization.

It’s one thing to be the voice when no one’s listening. It’s much more interesting to be the voice when people are listening and to steer the organization in the direction that puts talent at the center, with talent first.

We’re having discussions and dialog about what’s needed to create the most powerful employee experience, one where employees are seen and heard and feel included in the path forward. One thing that’s clear is that we are shaping this all together; we are collectively shaping the future in which we will all live.

Being able to include that employee voice as we craft what it means to go to work or to do work in the years ahead means in many ways that it’s an open canvas. There are many ways to do hybrid work.

Being able to include that employee voice as we craft what it means to go to work or to do work in the years ahead means in many ways that it’s an open canvas. There are many ways to do hybrid work, which clearly seems to be the direction most organizations are going. Hybrid is quite possibly the future direction education is heading, too.

A lot of rethinking is happening. As we harness that collective voice, HR’s leadership is bringing it to the table, bringing it into decisions, and entering into a more experimental mindset. How we look to the future and find ways to innovate around hybrid work is increasingly important.

Gardner: Carolina, when we look at the role of technology in all of this, how should an HR organization such as Amy’s use technology to help — rather than contribute to the problem?

Milanesi: That’s the key question, right? Technology cannot come as another burden that I have to deal with when it comes to employees.

I love Ray’s analogy of the puzzle of the life we live. I stopped talking about work-and-life balance years ago and started talking instead about working-life-blend because if you blend there’s room to maneuver and change. You can compromise and put less stress on one area versus the other.

So, technology needs to come in to help us create that blend – and it has to be very personal. The most important thing for me is that one size doesn’t fit all. We’re all individuals, we’re all different. And although we might share some commonalities, the way that my workflow is set up is very different from yours. It has to speak to me, because otherwise it becomes another burden.

So, one part is helping with that blend. Another part for technology to play is not making me feel that the tool I’m using is an overseer. There are a lot of concerns when it comes to remote working, that organizations are giving you tools to manage you — versus help you. That’s where the difference lies, right? For me, as an employee, I need to make sure that the tool is there to just help me do my work.


It doesn’t have to be difficult. It has to be straightforward. It keeps me in the flow and helps me with my blended life. I also think the technology needs to be context-aware. For example, what I need in the office is different from what I need when I’m at home or at the airport — or wherever I might be doing work.

The idea that your task is dependent or is influenced by the context you’re in is important as well. But simplicity, security, and my privacy are all three components that are important to me and should be important to my organization.

Gardner: Ray, Carolina just mentioned a few key words: context, feelings, and the idea of an experience rather than fitting into what the coder had in mind. It wasn’t that long ago that applications pretty much forced people to behave in certain ways in order to fit set processes. 

What I’m hearing, though, is that we have to have more adaptable processes and technologies to account for a person’s experiences and feelings. Is that not possible? Or is it pie-in-the-sky to bring the human factor and the technology together?

Technology helps workers work better

Wolf: Dana, the great news is the technology is here today with the capability to do that. The sad part is that the benchmark is still pretty low. The fact is, when it comes to providing technology to enable workers to get their jobs done, there is really very little forethought as to how it’s architected and orchestrated.

People are often simply given login information to the multiple applications they need to use to get things done during the day. The most we do in terms of consideration for these employees is create a single sign-on. So, for the first five minutes of your day, we have a streamlined, productive, and secure way to log in — but then it’s a free-for-all. Processes are standard across employee types. There’s no consideration for how the individual employee wants to get work done, or what works best for them.

We subject very highly talented and creative people to a lot of low-value, repetitive tasks. Citrix Workspace allows you to automate out those mundane tasks, allowing workers to contribute more to critical business needs.

In addition, we subject very highly talented and creative people to a lot of low-value, repetitive tasks. One of the things that CHROs bring up to me all the time is, “How can I get my employees working at the top of their skills range, as opposed to the bottom of their skills range?”

Today there are platforms such as Citrix Workspace that allow you to automate out those mundane tasks, take into consideration where the employees should be spending their time, and allowing them to contribute more to the critical business needs of an organization.

Gardner: Amy, to that point of how employees perceive their work value, are you seeing people mired in doing task-based work? Or are you seeing the opportunity for people to move past that, and for the organization to support them so they can do the work they feel most empowered by? How are organizations helping them move past task to talent?

Haworth: Great question, and I love how you phrase that move from task to talent. So a couple of things come to mind. Number one, organizations are looking to take friction out of the workday. That friction is energy, and that energy could be better spent by an employee doing something they love to do — something that is their core skill set, or why they were hired into that organization to start with.

A recent statistic I heard was that average workflow tends to involve at least four different stops along an application’s path. Think about what it takes to submit an expense report.

As much as possible, we’re looking for ways that take friction out of those interactions so employees get a sense of progress at the end of the day. The energy they’re expending in their jobs and roles should feel like it’s coming back threefold.


Ray touched on the idea of flow, but the conversation in 2021, based on the data we’ve seen, shows that employees feel fatigued because of the workload. What emerged from a lot of the survey work across multiple research firms last year was this sense of fatigue. You know, “My workload doesn’t match the hours that I have in the day.”

So, in HR circles, we’re beginning to think about, “Well, what do we do about that?” Is this a conversation more about energy and energetic spend? Initially [in the pandemic] there was a lot of energy spent just transforming how things were done. And now we get to think about when things are done. When do I have the most energy to do that hard thing? And then, “How is the technology helping me to do it? And is it telling me when it’s probably time to take a break?”

At Citrix we’ve recently introduced some really interesting notifications to help with this idea of well-being, so that the integration of technology into the workday helps as an employee manages their energy – to take, for example, a five-minute meditation break because they have been working solidly for three hours. That might be a better idea than that next cup of coffee, for example.

So we’re starting to see the helpfulness of technology offered in a way that’s invited by employees. Carolina makes a great point about the privacy concerns, so the help has to be invited rather than imposed. That ultimately enables a state of flow, and that feeling of progress and good use of the talent each employee brings into the organization.

Gardner: Carolina, when we think about technology of 10 or more years ago, oftentimes developers would generate a set of requirements, create the application, and throw it over the wall. People would then use it. 

But what I just heard from Amy was much more about the employee having agency in how they use the technology, maybe even designing the flow and processes based on what works for them.

Have we gotten to the point where the technology is adaptive and people have a role in how services — maybe micro-services — are assembled? Are people becoming more like developers, rather than waiting for developers to give them the technology to use?

Optimize app workflows

Milanesi: Absolutely. Not everybody is in that kind of no-code environment yet to create their own applications from scratch, but certainly a lot of people are using micro-apps that come together into a workflow in both their private and work lives. 

Smartphone growth marked the first time that each of us started to be more in control of the applications that create workflows in our private lives. The arrival of bring-your-own-device into the enterprise also meant bringing your own applications into the enterprise.

As you know, it was a bit of the Wild West for a while, and then we harnessed that. Organizations that are most successful are the ones that stopped fighting this change and actually embraced it. To Amy’s point, there are ways to diminish and lower the friction that we feel as employees when we want to work in a certain way and to use all of the applications and tools, even ones that an IT department may not want us to. 

There is more friction and time loss in someone trying to go around that problem and creating back doors that bypass IT than for IT to empower me to do that work, as long as my assets and data are secure. As long as it’s secure, I should have a list of applications and tools that I can choose from and create my own best workflows.

Gardner: Ray, how do you see that balance between employee-agency and -agility and what the IT department allows? How do we keep the creativity flowing from the user, but at the same time put in the necessary guardrails?

Wolf: You can achieve both. This is not employee workflow at the sacrifice of security. That’s the state of technology today. Just in terms of where to get started with the idea of employees designing their workflows, this is exactly how we’re going about it with many customers today.

I mean, what an ironic thought: To actually ask the people involved in the day-to-day work what’s working for them and what’s not. What’s causing you frustration and is high-value to the company? So you can easily identify five places to go get started to automate and streamline.

What an ironic thought: To actually ask the people in the day-to-day work what’s working for them and what’s not. What’s causing you frustration and is high-value to the company? 

And the beautiful thing about it is that when you ask the worker where that frustration is, and you solve it, two things happen. One, they have ownership, and adoption is very high as opposed to leadership-driven decisions. And we see this happening every day. It’s kind of the “smart guy in the room” syndrome, where the people who don’t actually have to do the work are telling everybody how the workers actually want to get things done. It doesn’t work that way. 

The second is, once employees see — with as little as two to three changes in their daily workflow — what’s possible, their minds open up in terms of all the automation capabilities, all the streamlining that can occur, and they feel invigorated and energized. They become a much more productive and engaged member of the team. And we’re seeing this happen. It’s really an amazing process overall.


We used to think of work as 9 am to 5 pm — eight hours out of your awake hours. Today, work occurs across every waking hour. This is something that remote workers have known for a long time. But now some 45 percent to 50 percent of the workforce is remote. Now it’s coming to light. Many more people are feeling like they need to do something about it.

So we need to sense what’s going on with those employees. Some of the technology that we’re working on is evaluating and looking at someone’s schedule. How many back-to-back meetings have they had? And then enforcing a cognitive break in their schedules so people can take a breather — maybe take care of something in their personal lives.

And then, even beyond that — with today’s technology such as smart watches — we could look at things such as blood pressure and heart rates and decide if the anxiety level is too high or if an employee is in their proper flow. Again, we can then make adjustments to schedules, block out times on their calendars — or, you know, even schedule some well-being visits with someone who could help them through the stresses in their lives.
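The scheduling idea Ray describes — watch a calendar for long runs of back-to-back meetings and then carve out a cognitive break — can be sketched in a few lines. This is only an illustration of the logic, not any vendor's implementation; the `Meeting` type, thresholds, and function name are all hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Meeting:
    start: datetime
    end: datetime

def suggest_breaks(meetings, max_block=timedelta(hours=2),
                   break_len=timedelta(minutes=5)):
    """Suggest break slots after any unbroken run of back-to-back
    meetings that reaches max_block; a gap in the calendar resets the run."""
    suggestions = []
    run_start = None
    prev_end = None
    for m in sorted(meetings, key=lambda m: m.start):
        if prev_end is None or m.start > prev_end:
            run_start = m.start  # a gap (or the first meeting) starts a new run
        prev_end = m.end
        if prev_end - run_start >= max_block:
            suggestions.append((prev_end, prev_end + break_len))
            run_start = prev_end  # don't re-suggest for the same run
    return suggestions
```

For three back-to-back one-hour meetings from 9 am to noon, this would propose a single five-minute break at 11 am — the kind of nudge Amy describes Citrix surfacing as a well-being notification.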

Gardner: Amy, building on Ray’s point of enhancing well-being, if we begin using technology to allow employees to be productive, in their flow, but also gain inference information to support them in new ways — how does that change the relationship between the organization and the employee? And how do you see technology becoming part of the solution to support well-being?

Trust enhances well-being

Haworth: There’s so much interesting data coming out over the last year about how the contract between employees and the organization is changing. There has been, in many cases, a greater level of trust. 

According to the research, many employees have trusted what their organizations have been telling them about the pandemic — more than they trusted state and local governments or even national governments. I think that’s something we need to pay attention to.

Trust is that very-hard-to-quantify organizational benefit that fuels everything else. When we think about attraction, retention, engagement, and commitment — some in HR believe that higher organizational commitment is the real driver to discretionary effort, loyalty, and tenure.


As we think about the role of the organization when it comes to well-being and how we build on trust where it’s healthy — how can we uphold that with high regard? How can we better bridge that into a different employer-employee relationship — perhaps one that’s better than we’ve ever seen before?

If we stand up and say, “Our talent is truly the human capital that will be front-and-center to helping organizations achieve their goals,” then we need to think about this. What is our new role? According to Maslow’s hierarchy of needs, it’s hard to think about being a high-performing employee if things are falling apart on the home front, and if we’re not able to cope.

For our organization, at Citrix, we are thinking about not only our existing programs and bolstering those, but we’re also looking for other partners who are truly experts in the well-being space. We can perhaps bring that new information into the organization in a way that integrates with and intersects into an employee’s day.

For us at Citrix, that is done through Citrix Workspace, and in many cases with the support of a managerial capability. That’s because we know so much of the trust relationship is between the employee and the manager, and it is human first and foremost.

Then we also need to think about how we continue to evolve and learn as we go. So much of this is uncharted. We want to make sure we’re open to learning. We’re continuing to invest. We’re measuring how things are working. And we’re inviting that employee voice in — to help co-create.

Gardner: Carolina, from what we just heard from Amy, it sounds like there’s a richer, more intertwined relationship between the talent pool and the organization. And that they are connected at the hip, so to speak, digitally. It sounds like there’s a great opportunity for new companies and a solutions ecosystem to develop around this employee well-being category.

Do you see this as a growth opportunity for new companies and for organizations within existing enterprises? It strikes me that there’s a huge new opportunity.

Tech and the human touch

Milanesi: I do think there’s a huge opportunity. And that’s good and bad in my view because obviously, when there’s a lot of opportunity, there also tends to be fragmentation.

Many different things are going to be tried. And not everybody has the expertise to help. There needs to be an approach from the organization’s perspective so that these solutions are vetted.

But what is exciting is the role that companies like Citrix are taking on to become a platform for that. So there might be a start-up that has a great idea and then leverages the Citrix Workspace platform to deliver that idea.

Then you have the opportunity to use the expertise that Citrix brings to the table. They have been focused on workflows and employee empowerment for many years. What I’m excited to see is organizations that come out and offer that platform to make the emerging ecosystem even richer.


I also love what Amy said about human trust being first and foremost. That’s my caution to people: don’t make it all about the technology. Technology should not be a crutch that comes in to make you suffer less but still doesn’t solve the problem. And technology should not be the only solution you adopt.

I might have a technological check-in that tells me that I’m taking on too many meetings or that I should take a break, but there is nothing better than a manager giving you a call or sending you an email to let you know you are seen as a human, that your work is seen by other humans.

I love what you were saying earlier about the difference between the task and the talent. That’s another part where — if we have more technology that helps us with the mundane stuff and we can focus on what we enjoy doing — that also helps us showcase the value that we bring as an employee and then the value of the task, not just the output.

A lot of times, some of these technology solutions that are delivered are about making me more productive. I don’t know about you guys, but I don’t wake up in the morning and say, “I want to be more productive today.” I wake up and want to get through the day. I want to enjoy myself; I want to make a contribution and to feel that I make a difference for the company I’m working for.

And that’s what technology should be able to do: Come in and take away the mundane, take away the repetitive, and help me focus on what makes a difference — and what makes me feel like I’m contributing to the success within my company.

Gardner: Ray, I would like to visit the idea of consequences of the remote-work era. Just as people can work from anywhere, that also means they can work for just about anyone.

If you’re working for a company that doesn’t seem to have your well-being as a priority and doesn’t seem to be interested in your talents as much as your tasks, you can probably find a few other employers quite easily from the very same spot that you’re in.

How has the competitive landscape shifted here? Do companies do this because it’s a nice thing to do? Or will they perhaps find themselves lacking the talent if the talent wants to work for someone who better supports them?

Employees choose work support

Wolf: Dana, that ultimately is the consequence. Once we get through this immediate situation from the pandemic, and digest the new learning about working remote, we will have choices.

Employers are paying attention to this in a number of ways. For example, I was just on the phone with a CHRO from a Fortune 50 company. They have added a range of well-being applications that help take care of their employees.

But there are also some cultural changes that need to occur. This CHRO was explaining to me that even though they have all these benefits — including 12 hours off a month, or more, for so-called mental health days – they are struggling with some of the managers. They are having trouble getting managers, some of whom may be later in their careers, to actually model these new behaviors and give employees permission to take advantage of these well-being applications.

The ones who evolve culturally, and who pay attention to this change, are ultimately going to be the winners. It may be another 6 or 18 months, but we’ll get there.

So we have a way to go. But the ones who evolve culturally, and who pay attention to this change, are ultimately going to be the winners. It may be another 6 or 18 months, but we’ll definitely get there. In the interim, though, workers can do something for themselves.

There are a lot of ways to stay in tune with how you’re feeling, give yourself a break, and schedule your time better. I know we would like technology that forces that into the schedule, but you can do that for yourself now as an interim step. And I think there are a lot of possibilities here — and more not that far in the future.

There are things that could be done immediately to bring a little bit of relief, help people see what’s possible, and then encourage them to continue working down this path of the intersection of well-being and employee workflow.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Citrix.



How HPE Pointnext Tech Care changes the game for delivering enhanced IT solutions and support

The next BriefingsDirect Voice of Innovation discussion explores how services and support for enterprise IT have entered a new era.

For IT technology providers, the timing of the news couldn’t be better. That’s because those now consuming tech support are demanding higher-order value — such as improved worker productivity from hybrid services delivered across many remote locations.

At the same time, the underlying technologies and intelligence to enhance traditional help desk support are blossoming to deliver proactive — and even consultative — enhancements.

Stay with us here to examine how Hewlett Packard Enterprise (HPE) Pointnext Services has developed new solutions to satisfy those enhanced expectations for the next era of IT support. HPE’s new generation of readily-at-hand IT expertise, augmented remote services, and ongoing product-use guidance together propel businesses to exploit their digital domains — better than ever.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to share the Pointnext vision for the future of advanced IT operational services are Gerry Nolan, Director of Operational Services Portfolio at HPE Pointnext Services, and Rob Brothers, Program Vice President, Datacenter and Support Services, at IDC. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. 

Here are some excerpts:

Gardner: Rob, what are enterprise IT leaders and their consumers demanding of tech support in early 2021? How are their expectations different from just a year or two ago?

Brothers: It’s a great question, Dana. I want to jump back a little bit further than just a year or so ago. That’s because support has really evolved so much over the past five, six, or seven years.

If you think about product support, and support in general back in the day, it was just that. It was an add-on. It was great for break-fix services. It was about being able to place a phone call to get something fixed.

But that evolved over the past few years due to the fact that we have more intelligent devices and customers are looking for more proactive, predictive capabilities, with direct access to experts and technicians. And now that all has taken a fast-track trajectory during the pandemic as we talk about digital transformation.

During COVID-19, customers need new ways to work with tech-support organizations. They need even more technical assistance. So, we see that a plethora of secure, remote-support capabilities have come out. We see more connected devices. We see that customers look for expertise over the phone — as well as via chat or via augmented reality. Whatever the channel, we see a trajectory and growth that has spurred on a lot of innovation — and not just the innovation itself, but the consumption of that innovation.

Those are a couple of the big differences I’ve seen in just the past couple of years. It’s about the need for newer support models, and a different way of receiving support. It’s also about using a lot of the new, proactive, and predictive capabilities built inside of these newer systems — and really getting connected back to the vendor.

Those enterprises that connect back to their vendors are getting that improved experience and can then therefore pass that better experience to their customers. That’s the important part of the whole equation.

Those enterprises that connect back to their vendors are getting that improved experience and can then therefore pass that better experience to their customers. That’s the important part of the whole equation — making sure that better IT experiences translate to those enterprise customers. It’s a very interesting time.

Gardner: I sense this is also about more collective knowledge. When we can gather and share how IT systems are operating, it just builds on itself. And now we have the tools in place to connect and collaborate better. So this is an auspicious time — just as the demand for these services has skyrocketed.

Brothers: Yes, without a doubt. I find the increased use of augmented reality (AR) to deliver support extremely interesting, too, and a great use case during a pandemic.

If you can’t send an engineer to a facility in-person, maybe you can give that engineer access to the IT department using Google Glass or some other remote-access technology. Maybe you can walk them through something that they may not have been able to do otherwise. With all of the data and information the vendor collects, they can more easily walk them through more issues. So that’s just one really cool use case during this pandemic.

Gardner: Gerry, do you agree that there’s been auspicious timing when it comes to the need for these innovative support services and the capability to deliver them technically?

Pandemic accelerates remote services

Nolan: Yes, there’s no question. I totally agree with Rob. We saw a massive spike with the pandemic in terms of driving to remote access. We already had significant remote capabilities, but many of our customers all of a sudden have a huge remote workforce that they have to deal with.

They have to keep their IT running with minimal on-site presence, and so you have to start quickly innovating and delivering things such as AR and virtual reality (VR), which is what we did. We already have that solution.

But it’s amazing how something like a pandemic can elevate that use to our thousands and thousands of technical engineers around the world who are now using that technology and solution to virtually join customer sites and help them triage, diagnose, and even do installations. It’s allowing them to keep their systems and their businesses running during a very tough period.

Another insight is that we’ve seen customers struggling, even before the pandemic, with having enough technical personnel bandwidth. They need more people, resources, and skills as new technologies hit the streets.

To Rob’s point, it’s difficult for customers to keep pace with the speed of change in IT. There’s more hunger for partners who can go deep on expertise across a wide range of technologies. So, there’s a variety of new support activities going on.

Brothers: Yes, around those technical capabilities, one of the biggest things I hear from enterprises is just trying to find that talent pool. You need to get employees to do some of the technical pieces of the equation on a lot of these new IT assets. And they’re just not out there, right?

They need programmers and big data scientists. Getting folks to come in and assist at that level is more and more difficult. Hence, working with the vendor for a lot of these needs and that technical expertise really comes in handy now.

Gardner: Right, when you can outsource — people do outsource. That’s been a trend for 10 or 15 years now.

What challenges do enterprises — and the IT vendors and providers — have in closing that skills gap? 

DX demands collaboration

Brothers: I actually did a big study around digital transformation. One of the big issues I’ve seen within enterprises is a lot of siloed structures. The networking team is not talking to the storage team, or not talking to the server team, and protecting their turf.

As an alternative, you can have the vendor come in and say, “Look, we can do this for you in a simpler fashion. We can do it a little bit faster, too, and we can keep downtime out of your environment.”

But trying to get the enterprise convinced [on the outsourcing] can sometimes be tricky and difficult. So I see that as one of the inhibitors to getting some of these great tech services that the vendors have into these environments.

A lot of these legacy systems are mixed in with the newer systems. This is where you see a struggle within enterprises. It’s still the stovepipe silos in enterprises that can make transitions very difficult. 

A second big challenge I see is around the big, legacy IT environments. This goes back to that connectedness piece I talked about. A lot of these legacy systems are mixed in with the newer systems. This is where you see a struggle within enterprises. They are asking, “Okay, well, how do I support this older equipment and still migrate to this new platform that I want to do a lot of cloud-based computing with and become more operationally efficient?” The vendors can assist with that, but it’s still the stovepipe silos you sometimes see in enterprises that can make transitions very difficult.

Gardner: Right. The fact is we have hybrid everything, and now we have to adjust our support and services to that as well.

Gerry, around these challenges, it seems we also have some older thinking around how you buy these tech services. Perhaps it has been through a warranty or a bolt-on support plan. Do we need to change the way we think about acquiring these services?

Customer experience choice 

Nolan: Yes, customers are all about experiences these days. Think about pretty much every part of your life — whether you’re going to the bank, booking a vacation, or even buying an electric car. They’ve totally transformed the experience in each of those areas.

IT is no different. Customers are trying to move beyond, as Rob was saying, that legacy IT thinking. Even if it’s contacting a support provider for a break-fix issue, they want the solution to come with an end-to-end experience that’s compelling, engaging, and in a way that they don’t need to think about all the various bits and pieces. The fewer decisions a customer has to make and the more they can just aim for a particular outcome, the more successful we’re going to be.

Brothers: Yes, when a customer invested $1 million in a solution set, the old mindset was that after three or four years it would be retired and they would buy a new one — but that’s completely changed.

Now, you’re looking at this technology for a longer term within your environment. You want to make sure you’re getting all the value out of it, so that support experience becomes extremely important. What does the system look like from a performance perspective? Did I get the full dollar value out of it?


That kind of experience is not just between the vendor and with my own internal IT department, but also in how that experience correlates out to my end-user customer. It becomes about bringing that whole experience circle around. It’s really about the experience for everybody in the environment — not just for the vendor and not just for the enterprise. But it’s for the enterprise’s customers. 

Gardner: Rob, I think it behooves the seller of the IT goods if they’ve moved from a CapEx to an OpEx model so that they can make those services as valuable as possible and therefore also apply the right and best level of support over time. It locks the customer in on a value basis, rather than a physical basis.

Brothers: Yes, that’s one great mindset change I’ve seen over the past five years. I did a study about six years ago, and I asked customers how they bought support. Overwhelmingly they said they just bought a blanket support contract. It was the same contract for all of the assets within the environment.

But just recently, in the past couple of years, that’s completely changed. They are now looking at the workloads. They’re looking at the systems that run those workloads and making better decisions as to the best type of support contract on that system. Now they can buy that in an OpEx- or CapEx-type manner, versus that blanket contract they used to put on it.

It’s really great to see how customers have evolved to look at their environments and say, “I need different types of support on the different assets I have, and which provide me different experiences.” That’s been a major change in just the past couple of years.

Nolan: We’re also seeing customers seek the capability to evolve and move from one support model to another. You might have a customer environment where they have some legacy products where they need help. And they’re implementing some new technologies and new solutions, and they’re developing new apps.

It’s really helpful for that customer if they can work with a single vendor — even if they have multiple, different IT models. That way they can get support for their legacy, deploy new on-premises technologies, and integrate that together with their legacy. And then, of course, having that consumption-as-a-service model that Rob just talked about, they also have a nice easy way of transitioning workloads over to hybrid models where appropriate.

I think that’s a big benefit, and it’s what the customers seem to be looking for more and more these days.

Gardner: Gerry, what’s the vision now behind HPE to deliver on that? What’s Pointnext Services doing to provide a new generation of tech support that accommodates these new and often hybrid environments?

Tech Care’s five steps toward support

Nolan: We’re very excited to launch a new support experience called HPE Pointnext  Tech Care. It’s all about delivering on much of what’s just been said in terms of moving beyond a product break-fix experience to helping customers get the most out of that product — all the way from purchasing through its lifecycle to end-of-life.

Our main goal for HPE Pointnext Tech Care is to help customers maximize and expose all the value from that product. We’re going to do that with HPE Pointnext Tech Care through five key elements.

The first is to make it a very simple experience. Today, we have four different choices when you’re buying a product as to which experience you want to go with. Now with HPE Pointnext, products are going to be sold embedded with a support experience called HPE Pointnext Tech Care. It’s a very simple experience. It has some choices on the service-level-agreement (SLA) side, but it’s going to dramatically simplify the buying and owning experience for our HPE customers.

The second aspect is the digital-transformation component that we see everywhere in life. That means we’re embedding a lot of data telemetry into the products. We have a product called HPE InfoSight that’s now embedded in our technology being deployed.

InfoSight collects all that data and sends it back to the mother ship, which allows our support experts to gain all of those insights and provide help with the customer in mitigating, predicting, planning capacity, and helping to keep that system running and optimized at all times. So, that’s one element of the digital component.

The other aspect is a very rich support portal, a customer engagement platform. We’ve already redone our support center, and customers will see it’s completely changed. It has a new look and feel. Over the coming quarters, more and more new capabilities and functionality will be added. Customers will be able to see dashboards, personalized views of their environments, and their products. They’ll get omni-channel access to our experts, which is the third element we are providing.

We have all this great expertise. Traditionally, you would connect with them over the telephone. But going forward, you’re going to have the capability, as Rob mentioned, for customers to do chat. They may also want to watch videos of the experts. They may want to talk to their peers. So we have a moderated forum area where customers can communicate with each other and with our experts. There’s also a whole plethora of white papers and Tech Tip videos. It’s a very rich environment.

Then the fourth HPE Pointnext Tech Care element touches on a key trend that Rob mentioned, which goes beyond break-fix. With HPE Pointnext Tech Care, you’ll have the capability to communicate with experts beyond just talking about a broken part of your system. This will allow you to contact us and talk about things such as using the product, or capacity planning, or configuration information that you may have questions about. This general tech guidance feature of HPE Pointnext Tech Care, we believe, is going to be very exciting for customers, and they’re going to really benefit from it. 

And lastly, the fifth component is about a broader spectrum of full lifecycle help that our customers want. They don’t just want a support experience around buying the product, they want it all the way through its lifetime. The customer may need help with migration, for example, or they may need help with performance, training their people, security, and maybe even retiring or sanitizing that asset. 

With HPE Pointnext Tech Care, customers will have a nice, easy mechanism: a very robust, warm-blanket type of support that comes with the product and can easily be augmented with other menu choices. We’re very excited about the launch of HPE Pointnext Tech Care and its five key elements. It’s going to transform the support experience and help customers get the most from their HPE products.

Gardner: Rob, how much of a departure do you sense the HPE Pointnext Tech Care approach is from earlier HPE offerings, such as HPE Foundation Care? Is this a sea change or a moderate change? How big of a deal is this?

Proactive, predictive capabilities

Brothers: In my opinion, it’s a pretty significant change. You’re going to get proactive, predictive capabilities at the base level of the HPE Pointnext Tech Care service that a lot of other vendors charge a real premium for.

I can’t stress enough how important it is for those proactive, predictive capabilities to come with environments. A survey I conducted not long ago fed into a cost-of-downtime study. In that study, customers saw approximately 700 hours of downtime per year across their environments, covering servers, storage, networking, and security, and taking human error into account. Customers who enabled proactive, predictive capabilities avoided approximately 200 of those hours. That’s because those proactive, predictive capabilities at the base layer allow you to do the one big thing that prevents downtime: patch management and patch planning.
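
As an illustrative back-of-the-envelope calculation, the downtime figures above can be turned into a savings estimate. The hours are the survey numbers Rob cites; the cost-per-hour value is a hypothetical placeholder, not a number from the study:

```python
# Back-of-the-envelope estimate of the value of proactive, predictive support,
# using the survey figures cited above. COST_PER_DOWNTIME_HOUR is a
# hypothetical assumption, not a figure from the study.

BASELINE_DOWNTIME_HOURS = 700    # approximate annual downtime across the environment
SAVED_DOWNTIME_HOURS = 200       # downtime avoided with proactive, predictive support
COST_PER_DOWNTIME_HOUR = 10_000  # hypothetical cost of one hour of downtime (USD)

def downtime_savings(baseline_hours, saved_hours, cost_per_hour):
    """Return (remaining downtime hours, estimated annual savings)."""
    remaining = baseline_hours - saved_hours
    savings = saved_hours * cost_per_hour
    return remaining, savings

remaining, savings = downtime_savings(
    BASELINE_DOWNTIME_HOURS, SAVED_DOWNTIME_HOURS, COST_PER_DOWNTIME_HOUR
)
print(f"Remaining downtime: {remaining} hours/year")
print(f"Estimated savings:  ${savings:,.0f}/year")
```

Even with a conservative cost-per-hour assumption, the 200 avoided hours translate into substantial annual savings, which is the point Rob is making about getting these capabilities at the base support level.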

Now, those technical experts that Gerry talked about can access all of this beautiful, feature-rich information and data. They can feed it back to the customer and say, “Look, here’s how your environment looks. Here’s where we see some areas that you can make improvements, and here’s a patch plan that you can put in place.”

Then all of the data comes back from enterprises saying, “If I do a better job of patching and patch planning, that saves a copious amount of planned and unplanned downtime across my environment.” That’s precious information and data.

That’s the big fundamental change. They’re showing the real value to the customer so they don’t have to buy some of those premium levels. They can get that kind of value in the base level, which is extremely important and provides that higher-order experience to end-user customers. So I do think that’s a huge fundamental shift, and definitely a new value for the customers.

Gardner: Rob, correct me if I’m wrong, but having this level of proactive, baked-in-from-the-start support comes at an auspicious time, too, because people are also trying to do more automation with their security operations. It seems to me that we’re dovetailing the right approaches for patching and proactive maintenance along with what’s needed for security. So, there’s a security benefit here as well?

Brothers: Oh, massive. Especially when you look at the many security breaches we’ve had just over the past year due to new remote access to so many systems. Yes, it definitely plays a major factor in how enterprises should be thinking about patching and patch planning.

Gardner: Gerry, just to pull on that thread again about data and knowledge sharing, the more you get the relationship that you’re describing with HPE Pointnext Tech Care — the more back and forth of the data and learning what the systems are doing — and you have a virtuous cycle. Tell us how the machine learning (ML) and data gathering works in aggregate and why that’s an auspicious virtuous cycle.

Nolan: That’s an excellent question and, of course, you’re spot-on. The combination is of the telemetry built into the actual products through HPE InfoSight, our back-end experts, and the detailed knowledge management processes. We also have our experts who are watching, listening, and talking to customers as they deal with issues.

That means you have two things going on. You have the software learning over time and we have rules being built in there so that when it spots an issue it can go and look for all the other similar environments and then help those customers mitigate and predict ahead of time.

That means that customers will immediately get the benefit of all of this knowledge. It might be a Tech Tip video. It might be a white paper. It might be an item or an article in a moderated forum. There’s this rich back-and-forth between what’s available in the portal and what’s available in the knowledge that the software will build over time. And all of this just comes to bear in a richer experience for the customer, where they can help either self-solve or self-serve. But if they want to engage with our experts, they’re available in multiple different channels and in multiple different ways.
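
To make that idea concrete, here is a minimal sketch of rule-based telemetry matching: an issue diagnosed in one environment becomes a rule that is checked against the rest of the fleet so similar customers can be warned ahead of time. All names, metrics, and thresholds here are hypothetical illustrations, not a reflection of how HPE InfoSight is actually implemented:

```python
# Hypothetical sketch: a rule learned from one customer's incident is applied
# to telemetry from all other monitored environments, flagging those at risk
# before the same issue occurs. Illustrative only; not InfoSight's actual design.

from dataclasses import dataclass, field

@dataclass
class Rule:
    name: str
    remediation: str
    # Signature: telemetry keys and the threshold values that indicate risk.
    thresholds: dict = field(default_factory=dict)

    def matches(self, telemetry: dict) -> bool:
        """True if every thresholded metric is at or above its risk level."""
        return all(telemetry.get(k, 0) >= v for k, v in self.thresholds.items())

def environments_at_risk(rule: Rule, fleet: dict) -> list:
    """Return the names of environments whose telemetry matches the rule."""
    return [name for name, telemetry in fleet.items() if rule.matches(telemetry)]

# A rule learned from one customer's incident...
firmware_bug = Rule(
    name="fw-1234-drive-timeout",
    remediation="Apply firmware patch during the next quiet window",
    thresholds={"drive_timeouts_per_day": 5, "firmware_version_age_days": 180},
)

# ...checked against telemetry from the rest of the fleet.
fleet = {
    "customer-a": {"drive_timeouts_per_day": 7, "firmware_version_age_days": 200},
    "customer-b": {"drive_timeouts_per_day": 1, "firmware_version_age_days": 30},
}
print(environments_at_risk(firmware_bug, fleet))  # customer-a is flagged
```

The flagged environment could then be routed the matching remediation content, whether that is a Tech Tip video, a white paper, or a proactive contact from a support expert.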

Gardner: Rob, another area where 2+2=5 is when we can take those ML and data-driven insights that Gerry described across a larger addressable market of installed devices. And then, we can augment that with MyRoom-type technologies and the VR and AR capabilities that you described earlier.

What’s the new sum value when we can combine these insights with the capability to then deliver the knowledge remotely and richly? 

Autonomous IT reduces human error 

Brothers: That’s a really great point. The whole idea is to attain what we call autonomous IT. That means to have IT systems that are more on the self-repair side, and that have product pieces shipped prior to things going wrong. 

One of the biggest and most-costly pieces of downtime is from human error. If we can pull the human touch and human interaction out of the IT environment, we save each company hundreds of thousands of dollars a year. That’s what all this data and information will provide to the IT vendors. They can then say, “Look, let’s take the human interactions out of it. We know that’s one of the most-costly sides of the equation.”

If we can do that in an autonomous fashion — where we’re optimizing systems on a regular basis, equipment is being shipped to the facility prior to anything breaking, we can schedule any downtime during quiet times, and make sure that workloads are moved properly — then that’s the endgame. It gets to the point where the human factor gets more removed and we’re relying more on the intelligence of the systems to do more.

That’s definitely the direction we’re moving in, and what we’re seeing here is definitely heading in that direction.

Gardner: Yes, and in that case, you’re not necessarily buying IT support, you’re buying IT insurance.

Brothers: Yes, exactly. That gets back to the consumption models. HPE is one of the leaders in that space with HPE GreenLake. They were one of the pioneers to come up with a solution such as that, which takes the whole IT burden off of IT’s plate and puts it back on the vendor.

Nolan: We have a term for that concept that one of my colleagues uses. They call it invisible IT. That’s really what a lot of customers are looking for. As Rob said, we’re still some ways from that. But it’s a noble goal, and we’re all in to try and achieve it.

Gardner: So we know what the end-goal is, but we’re still in the progression to it. But in the meantime, it’s important to demonstrate to people value and return on investment (ROI).

Do we have any HPE Pointnext Tech Care examples, Gerry? Rob already mentioned a few of his studies that show dramatic improvements. But do we have use cases and/or early-adoption patterns? How do we measure when you do this well, and what do you get?

Benefits already abound

Nolan: There are a ton of benefits. For example, we already have extensive Tech Tip video libraries. We have chat implemented. We have the moderated forums up and running. We have lots of different elements of the experience already live in certain product areas, especially in storage.

Of course, many HPE products are already connected through HPE InfoSight or other tools, which means those systems are being monitored on a 24 x 7 basis. The software already monitors, predicts, and mitigates issues before they occur, as well as provides all sorts of insights and recommendations. This allows both the customer and our support experts to engage and take remediation action before anything bad happens. 

Customers seem to love this richer experience. Yes, there’s a lot more data and a lot more insights. But having those experts on-hand, able to build an action plan from all of that data, is really important.

Now, in terms of some of the benefits that we’re seeing in the storage space, those customers that are connected are seeing 73 percent fewer trouble tickets and 69 percent faster time-to-resolution. To date, since InfoSight was first deployed in that storage environment alone, we’ve measured about 1.5 million hours of saved productivity time.
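
Those two figures compound. As a quick sanity check using only the percentages quoted above, and assuming the two improvements apply independently (my assumption, not a claim from the source), the combined reduction in total time spent resolving tickets works out to roughly 92 percent:

```python
# Combined effect of the two quoted improvements: 73% fewer trouble tickets
# and 69% faster time-to-resolution. Assuming the two effects are independent,
# total resolution time drops by about 92%.

FEWER_TICKETS = 0.73
FASTER_RESOLUTION = 0.69

remaining_fraction = (1 - FEWER_TICKETS) * (1 - FASTER_RESOLUTION)
total_reduction = 1 - remaining_fraction
print(f"Total resolution-time reduction: {total_reduction:.1%}")  # about 91.6%
```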

So there are real benefits when you combine being connected with ML tools such as InfoSight. When the rich value available in HPE Pointnext Tech Care comes together, it further reduces downtime, improves performance, and helps reach the end-goal that Rob talked about: autonomous IT, or invisible IT.

Gardner: Rob, we started our conversation about what’s changed in tech support. What’s changed when it comes to the key performance indicators (KPIs) for evaluating tech support and services?

Brothers: The big, new KPIs that we’re seeing don’t just evaluate the experience the enterprise has with its IT vendors. Although that’s obviously extremely important, it’s also about how that experience correlates to the experiences my end-users are receiving.

You’re beginning to see those measurements come to the fore. An enterprise has its own SLAs and KPIs with its end-users. How do those match the KPIs and SLAs I have back to my IT vendors? You’re beginning to see those merge and come together. You’re beginning to see new metrics put in place where you can evaluate the vendor through how well you’re delivering user experiences to your own end-users.

It takes a bit of time and energy to align that because it’s a fairly complex measurement to put in place. But we’re beginning to see enterprises seek that level of value from the vendors. And the vendors are stepping up, right? They’re beginning to show dashboards back to the enterprise that say, “Hey, here’s the SLA, here are the KPIs, here are the performance metrics that we’re collecting, and they should correlate fairly well to what you’re providing to your end-user customers.”

Gardner: Gerry, if we properly align these values, it better fits with digital transformation because people have to perceive the underlying digital technologies as an enabler, not as a hurdle. Is HPE Pointnext Tech Care an essential part of digital transformation when we think about that change of perception?

Incident management transforms

Nolan: It totally is. One of our early Pointnext customers is a large, US retailer. They’ve gone through a situation where they had a bunch of technology. Each one had its own individual support contract. And they’ve moved to a more centralized and simpler approach where they have one support experience, which we actually deliver across each of their different products — and they’re seeing huge benefits.

They’ve gone from firefighting, with their small IT team predominantly focused on dealing with issues and support calls about hardware- and update-type problems, to measuring themselves on incidents — how many incidents — and trying to keep that at a manageable level.

Well, now, they’ve totally changed. The incidents have almost disappeared — and now they’re focused on innovation. How fast can they get new applications to their business? How fast can they get new projects to market in support of the business?

They’re just one customer who has gone through this transformation where they’re using all of the things we just talked about and it’s delivering significant benefits to them and to their IT group. And the IT group, in turn, are now heroes to their business partners around the US.

Gardner: I want to close with some insights into how organization should prepare themselves. Rob, if you want to gain this new level of capability across your IT organization, you want the consumers of IT in your enterprise to look to IT for solutions and innovation, what should you be thinking about now? What should you put in place to take advantage of the offerings that organizations such as HPE are providing with HPE Pointnext Tech Care?

Evaluating vendor experiences

Brothers: It all starts with the deployment process. When you’re looking and evaluating vendors, it’s not just, “Hey, how is the product? Is the product going to perform and do its task?” 

Some 99 percent of the time, the stand-alone IT system you’re procuring is going to solve the issue you’re looking to solve. The key is how well that vendor is going to get that system up and running in your environment, connected to everything it needs to be connected to, and then support and optimize it for the long run.

It’s really more about that life cycle experience. So, as an enterprise, you need to think differently on how you want to engage with your IT vendor. You need to think about all the different performance KPIs, and match that back to your end-user customer.

The thought process of evaluating vendors, in my opinion, is shifting. It’s more about the type of experience I get with this vendor versus whether the product does its job. That’s one of the big transitional phases I’m seeing right now. Enterprises are thinking more about the experience they have with their partners than about whether the product is doing the job.

Gardner: Gerry, what do you recommend people do in order to get prepared to take advantage of such offerings as HPE Pointnext Tech Care?

Nolan: Following on from what Rob said, customers can already decide what experience they would like. HPE Pointnext Tech Care will be the embedded support experience that comes with their HPE products. It’s going to be very easy to buy because it’s going to be right there embedded with the product when the product is being configured and when the quote is being put together. 

HPE Pointnext Tech Care is a very simple, easy, and fully integrated experience. Customers are buying a full product experience, not a product plus a support experience chosen separately on the side. If they want something broader than a product experience — what I call the warm blanket around their whole enterprise environment — we have another experience called Datacenter Care that provides that.

We also have other experiences. We can, for example, manage the environment for them using our management capabilities. And then, of course, we have our HPE GreenLake as-a-service on-premises experience. We’ve designed each of these experiences so they can totally live together and work together. You can also move and evolve from one to the other. You can buy products that come with HPE Pointnext Tech Care and then easily move to a broader Datacenter Care to cover the whole environment.

We can take on and manage some of that environment and then we can transition workloads to the as-a-service model. We’re trying to make it as easy and as fast as possible for customers to onboard through any and all of these experiences.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise Pointnext Services.



Work from anywhere unlocks once-hidden productivity and creativity talent gems

Now that hybrid work models have been the norm for a year, what’s the long-term impact on worker productivity? Will the pandemic-induced shift to work from anywhere agility translate into increased employee benefits — and better business outcomes — over time?

The next BriefingsDirect workspace innovation discussion explores how a bellwether UK accounting services firm has shown how consistent, secure, and efficient digital work experiences lead to heightened team collaboration and creative new workflows.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about the ways that distributed work models fuel innovation, please welcome our guests, Chris Madden, Director of IT and Operations for Kreston Reeves, LLP in the UK, and Tim Minahan, Executive Vice President of Business Strategy and Chief Marketing Officer at Citrix. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tim, we’ve been in a work-from-anywhere mode for a year. Is this panning out as so productive and creative that people are considering making it a permanent feature of their businesses?

Minahan: Dana, if there’s one small iota of a silver lining in this global crisis we’ve all been going through together it’s that it has shone a light on the importance of flexible and remote work models.

Companies are now rethinking their workforce strategies and work models — as well as the role the office will play in this new world of work. And employees are, too. They’re voting with their feet, moving out of high-cost, high-rent districts like San Francisco and New York because they realize they can not only do their work effectively remotely, but they can also be more productive and have a better work life.

A few data points that are important: This isn’t a temporary shift. The pandemic has opened folks’ eyes to what’s possible with remote work. In fact, in a recent Gartner study, 82 percent of executives surveyed plan to make remote work and flexible work a more permanent part of their workforce and cost-management strategies — and it’s for very good business reasons.

As the pandemic has proven, this distributed work model can significantly lower real estate and IT costs. But more importantly, the companies that we talk to, the most forward-looking ones, are realizing that flexible work models make them more attractive as an employer. And that prompts them to rethink their staffing strategies because they have access to new pools of talent and in-demand skills of workers that live well beyond commuting distance to one of their work hubs.

Businesses work from anywhere

Such flexible work models can also advance other key corporate initiatives like sustainability and diversity, which are increasingly becoming board-level priorities at most companies. Those companies that remain laggards — that are still somewhat reluctant to embrace remote work or flexible work as a more permanent part of their strategies — may soon be forced to change as their employees look for more flexible work approaches.

We’ve heard about the mass exodus from some of those large metropolitan areas to more suburban — and even rural locales. At Citrix, our own research of thousands of workers and IT and business executives finds that more than three-quarters of workers now prefer to shift to a more remote and flexible work model — even if it means taking a pay cut. And 80 percent of workers say that flexible work arrangements will be a top selection criterion when evaluating employers in the future.

Gardner: Chris, based on your experience at Kreston Reeves, do you agree that these changes to a more flexible and hybrid work location model are here to stay?

Learn How Digital Workspaces Help

Companies Support Hybrid Work Models

Madden: I would. At Kreston Reeves, we are expecting to move permanently to two or three days a week in an office, with the remaining time working from home and away from the office. That’s for many of the reasons already covered, such as reduced commuting time, reduced commuting cost, more time at home with family, better work-life balance, and it’s a lot better for the environment as well, because people are travelling less and all those greenhouse gases aren’t going up into the atmosphere.

Gardner: We certainly hear how there are benefits to the organization. But how about the end users, the customers? Have your experiences at Kreston Reeves led you to believe that you can maintain the quality of service to your customers and consumers?

Madden: It’s probably ultimately going to be a balance. I don’t think it will shift totally one way or go back to how it was. I think for our customers and clients, there are distinct advantages, depending on the type of work. There isn’t always a need to go and have a face-to-face meeting that can take a lot of time for people, time that they could spend elsewhere in their business.

Depending on the nature of the interactions, quite a lot will shift to video calling, which has become the norm over the last year even as in the past people may have thought it impersonal. So I think that will become a lot more accepted, and face-to-face meetings will be then kept for those meetings that really require everybody to sit down together.

Gardner: It sounds like we’re into a more fit-for-purpose approach. If it’s really necessary, that’s fine, we can do it. But if it’s not necessary, there are benefits to alleviating the pressure on people.

Tell us, Chris, about how your organization operates and how you reacted to the pandemic.

Madden: Yes, we began the best part of 10 years ago, when we moved on to Citrix as the platform to distribute computer services to our users. Over the years, we have upgraded that and added the remote-access solutions. And so, when it came to early 2020 and the pandemic, we were ready to take off. We could see where we were heading in terms of lockdowns, so we closed two or three of our offices — just to see how the system coped.

It was designed to do that, but would it really work when we actually closed the offices and everybody worked from home? Well, it worked brilliantly, and was very easy to deal with. And then a few days after that, the UK government announced the first national lockdown and everybody had to work from home within a day.

From our point of view, it worked really well. The only wrinkles in the whole process were to get everybody the appropriate apps on their phones to make sure they could have remote access using multifactor authentication. But otherwise, it was very seamless; the system was designed to cope with everybody working from anywhere — and it did.

Gardner: Chris, we often hear that there is a three-legged stool when it comes to supporting business process — as in people, technology, and process. Did you find that any of those three was primary? What led you to succeed in making such a rapid transition across those three pillars?

A new world of flexible work

Madden: I think it’s all three of those things. The technology is the enabler, but the people need to be taken with you, and the processes have to adapt for new ways of working. I don’t think any one of those three would lead. You have to do all three together.

Gardner: Tim, how does Citrix enable organizations to keep all three of those plates in the air spinning, if you will, especially on that point about the right applications on the right device at the right time?

Learn to Deliver Superior

Employee Work Environments

Minahan: What’s clear in our research — and what we’re seeing from our customers — is that we’re accelerating to a new world of work. And it’s a more hybrid and flexible world where the employee experience becomes a key differentiator.

To the point Chris was making, success is going to go to those organizations that can deliver a consistent and secure work experience across any and all work channels — all the Slacks, all of the apps, all the Teams, and in any work location.

Whether work needs to be done in the office, on the road, or at home, delivering that consistent and secure work experience — so employees have secure and reliable access to all their work resources — needs to come together to service end customers regardless of where they’re at.

Kreston Reeves is not alone in what they have experienced. We’re seeing this across every industry. In addition to the change in work models, we are also seeing a rapid acceleration of digitization efforts, whether in the financial services sector or in areas such as retail and healthcare. Companies may have had plans to digitize their businesses, but over the past year they have had to do so out of necessity.

For example, there’s the healthcare provider in your neck of the woods, up in the Boston area, Dana, that has seen a 27-times increase in monthly telemedicine visits. During the COVID crisis, they went from 9,000 virtual visits per month to over 250,000 per month — and they don’t think they’re ever going to go back.

In the financial services sector, we consistently hear of customers hiring thousands of new advisors and loan officers to handle the demand — all in a remote and digital environment. What’s so exciting, as I said earlier, is that as companies begin to use these approaches as key enablers, it liberates them to rethink their workforce strategies and reach for new skills and talent well beyond commuting distance to one of their work hubs.

It’s not just about, “Should Sam or Suzy come back and work in the office full time?” That’s a component of the equation. It’s not even about, “Do Sam and Suzy perform at their best even when they’re working at home?” It’s about, “Hey, what should our workforce look like? Can we now reach skills and talent that were previously inaccessible to us because we can empower them with a consistent work experience through a digital workspace strategy?”

Gardner: How about that, Chris? Have you been simply repaving work-in-the-office paths with a different type of work from home? Or are you reinventing and exploring new business processes and workflows as a result of the flexibility?

Remote work retains trust, security

Madden: There is much more willingness amongst businesses and the people working in businesses to move quickly with technology. We’re past being cautious. With the pandemic, and the pressure that that brings, people are more willing to move faster — and be less concerned about understanding everything that they may want to know before embracing technology.

The other thing is relationships with clients. There is a balance to strike, so as not to go as far as some industries have, where they never see their clients any longer because everything is done remotely and automated through apps and technology.


And the correct balance, which we will be mindful of as we embrace remote working and have more virtual meetings with clients, is that we still need to maintain the relationship of being a trusted advisor to the client — rather than commoditizing our product.

Gardner: I suppose one of the benefits to the way the technology is designed is that you can turn the knobs. You can experiment with those relationships. Perhaps one client will require a certain value toward in-person and face-to-face engagements. Another might not. But the fact is the technology can accommodate that dynamic shift. It gives us, I think, some powerful tools.

Madden: Absolutely. The key is that for those clients who really want to embrace the modern world and do everything digitally, there is a solution. If a client would still like to be very traditional and have lots of invoices and things on paper and send those into their accountant, that, too, can be accommodated.

But it is about moving the industry forward over time. And so, gradually I can see that technology will become a bigger contributor to the overall service that we provide and will probably do the basic accountancy work, producing an end result that a human then looks at and uses to provide the answer back to the client.

Gardner: Now, of course, the materials that you’re dealing with are often quite sensitive and there are business regulations. How did the reaction of your workforce and your customer base come down on the issues of privacy, control, and security?

Madden: The clients trust that we will get it right and therefore look to us to provide the secure solution for them. So, for example, there are clients who have an awful lot of information to send us and cannot come into an office to hand over whatever that is.

We can give them new technologies that they haven’t used in the past, such as Citrix ShareFile, to share those documents with us securely and efficiently, and in a way that allows us to bring those documents into our systems and into the software we need to use to produce the accounts and the audits for the clients.

Gardner: Tim, you mentioned earlier that sometimes when people are forced into a shift in behavior, it’s liberating. Has that been the case with people’s perceptions around privacy and security as well?


Minahan: If you’re going to provide a consistent and secure work experience, the other thing folks are beginning to see as they embrace hybrid and more distributed work models is that their security posture needs to evolve, too. People aren’t all coming into the office every day to sit at their desks on the corporate network, which had a much better-defined perimeter and arguably was easier to secure.

Now, in a truly distributed work environment, you need to not only provide a digital workspace that gives employees access to all the work resources they need — and that is not just their virtual desktops, but all of their software-as-a-service (SaaS) apps, web apps, and mobile apps — it needs to be all in one unified experience that’s accessible from any location.


It also needs to be secure. It needs to be wrapped in a holistic and contextual security model that fosters not just zero trust access into that workspace, but ongoing monitoring and app protection to detect and proactively remediate any access anomalies, whether initiated by a user, a bot, or another app.

And so, that is another dynamic we’re seeing. Companies are accelerating their embrace of new more contextual zero trust access security models as they look forward to preparing themselves for how they’re going to operate in a post-pandemic world.

Gardner: Chris, I suppose another challenge has been the heterogeneity of the various apps and data across the platforms and sources that you’re managing. How has working with a digital workspace environment helped you provide a singular view for your employees and end customers? How do workspace environments help mitigate what had been a long-term integration issue for IT consumption?

Madden: For us, whether we are working from home remotely or are in an office, we are consuming the same desktop with the same software and apps as if we were sitting in an office. It’s really exactly the same. From a colleague’s point of view, whether they are working from home in a pandemic or sitting in their office in Central London, they are getting exactly the same experience with exactly the same tools.

And so for them, it’s been a very easy transition. They’re not having to learn the technology and different ways to access things. They can focus instead on doing the client work and making sure that their home arrangement is sorted out.

Gardner: Tim, regardless of whether it’s a SaaS app, cloud app, on-premises data — as long as that workspace is mobile and flexible — the complexity is hidden?

Workspace unifies and simplifies tasks

Minahan: Well, there is another challenge that the pandemic has shone a light on, which is this dirty little secret of the business world. And that is our work environment is too complex. For the past 30 years, we’ve been giving employees access to new applications and devices. And more recently, chat and collaboration tools — all with the intent to help get work done.

While each of these tools adds value and efficiency on an independent basis, collectively they’ve created a highly fragmented and complex work environment that oftentimes interrupts, distracts, and stresses out employees. It can even keep them from getting their actual work done.

Just to give you a sense, with some real statistics: On any given workday, the typical employee uses more than 30 critical apps to get their work done, oftentimes needing to navigate four or more just to complete a single business process. They spend more than 20 percent of their time searching across all of these apps and all of these collaboration channels to find the information they need to make decisions to do their jobs.


To make matters worse, these apps and communication and collaboration channels are all vying for our attention throughout the day, shouting at us about things we need to get done and oftentimes distracting us from our core work. By some estimates, all of these notifications, chats, texts, and other disruptions interrupt us from our core work about every two minutes. That means the typical employee gets interrupted and forced to switch context between apps, emails, and chat channels more than 350 times each day. Not surprisingly, what we are seeing is a huge productivity gap — and it is turning our top talent into task rabbits.

As companies think through this next phase of work, how do they provide a consistent and secure work experience and a digital workspace environment for employees no matter where they’re working? It not only needs to be unified — giving them access to everything they need — and secure, ensuring that corporate information, applications, and networks remain protected no matter where employees are doing the work; it also needs to be intelligent.


Leveraging intelligent capabilities such as machine learning (ML), artificial intelligence (AI) assistance, bots, and micro apps personalizes and simplifies work execution. It’s what I call adding an experience layer between an employee and their work resources. This simplifies their interactions and work execution across all of the enterprise apps, content, and other resources so employees are not overwhelmed and can perform at their best no matter where work needs to get done.

Gardner: Chris, are you interested in elevating people from task rabbits to a higher order of value to the business and their end users and customers? And is the digital environment and workplace a part of that?

Madden: Absolutely. There are lots of processes, across many firms and multiple campuses, that have grown up over the years and have always been done that way. This is a perfect time to reappraise how we do those things smarter digitally, using robotic process automation (RPA) tools and AI to take out a lot of the rework of moving data from one system into another to produce the end result for the client.


There is a lot of that on our radar for the coming year or two. We want to free our people up to do more value-added work — and it would be more interesting work for those people. It will give a better quality of role for people, which will help us to attract better talent. And given the fact that people now have a taste of a different work-life balance, there will be a lot of pressure on new recruits to our business to continue with that.

Gardner: Chris, now that your organization has been at this for a year — really thrust into much more remote flexible work habits — were there any unexpected and positive spins? Things that you didn’t anticipate, but you could only find out with 20–20 hindsight?

Virtual increases overall efficiency

Madden: Yes. One is the speed at which our clients were happy to switch to video meetings and virtual audits. Previously, on audits, we would send a team of people to a client’s premises and they would look through the paperwork, look at the stock in a warehouse, et cetera, and perform the audit physically. We were able to move quickly to doing that virtually.

For example, if we’re looking in a warehouse to check that a certain amount of stock is actually present, we can now do that by a video call and walk around the warehouse and explain what we’re looking for and see that on the screen and say, “Yes, okay, we know that that stock is actually available.” It was a really big shift in mindset for our regulators, for ourselves, and for our clients, which is a great positive because it means that we can become much more efficient going forward.

The other one that sticks out in my mind is the efficiency of our people. When you’re at home, focusing on the work and without the distractions of an office, the noise, and the conversations, people are generally more efficient. There is still the need for a balance because we don’t want everybody just sitting at home in silence staring at a screen. We miss out on some of the richness of business relationships and conversations with colleagues, but it was interesting how productivity generally increased during the lockdown.

Gardner: Tim, is that what you’re finding more generally around the globe among the Citrix installed base, that productivity has been on the uptick even after a 20- or 30-year period where, in many respects and measurements, productivity has been flat?

Minahan: Yes. Despite the introduction of more technology, employee productivity had, ironically, trended flat for decades, right up until the pandemic. Our conversations with employees and executives, and our own research, show that more than 80 percent of employees feel they are as, if not more, productive when working from home — for a lot of the reasons that Chris mentions. What they’ve seen at Kreston Reeves has continued to be sustained elsewhere.


It’s introduced the need for more collaborative work management tools in the work environment in order to foster and facilitate that higher level of engagement and that more efficient execution that we mentioned earlier. But overall, whether it’s the capability to avoid the lengthy commute or the ability to avoid distractions, employees are indeed seeing themselves as more productive.

In fact, we’re seeing a lot of customers now talk about how they need to rethink the very role of the office. Where it’s not just a place where people come to punch their virtual time cards, but is a place that’s more purpose-built for when you need to get together with a client or with other teammates to foster collaboration. You still keep the flexibility to work remotely to focus on innovation, creativity, and work execution that oftentimes, as Chris indicated, can be distracting or difficult to achieve strictly in an office environment.

Gardner: Chris, what’s interesting to me about your business is that you’re in a relationship with so many client companies. And you were forced to go digital very rapidly — but so were they. Is there a digital transformation accelerant at work here? Because they all had to go digital at the same time, is there a network effect?

Because your customers have gone digital, Chris, could you then be better digital providers in your relationships together?

Collaborative communication

Madden: To an extent. It depends on the type of client industry that they’re in. In the UK, certain industries have been shut for a long time and therefore, they are not moving digitally. They are just stuck waiting until they are able to reopen. In the meantime, there’s probably very little going on in those businesses.

Those businesses that are open and working are very much embracing modern technology. So, one of the things that we’ve done for our audit clients, particularly, is providing different ways in which they can communicate with us. Previously, we probably had a straightforward, one-way approach. Now, we are giving clients three or four different ways they can communicate and collaborate with us, which helps everybody and moves things along a lot more quickly.

It is going to be interesting post-pandemic. Will people intrinsically go back to what they were always doing? Will what drove us forward keep us creating and becoming more digital or will the instinct be to go back to how it was because that’s how people are more comfortable?

Gardner: Yes, it will be interesting to see if there’s an advantage for those who embrace digital methods more and whether that causes a competitive advantage that the other organizations will have to react to. So we’re in for an interesting ride for a few more years yet.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Citrix.



How to gain advanced cyber resilience and recovery across the IT ops and SecOps divide

Cyber attacks are on the rise, harming brands and supply chains while fomenting consumer and employee distrust — as well as leading to costly interruptions and service blackouts.

At the same time, more remote workers and extended-enterprise processes due to the pandemic demand higher levels of security across all kinds of business workflows.

Stay with us now as the next BriefingsDirect discussion explores why comprehensive cloud security solutions need to go beyond on-premises threat detection and remediation to significantly strengthen extended digital business workflows.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about ways to shrink the attack surface and dynamically isolate process security breaches, please join Karl Klaessig, Director of Product Marketing for Security Operations at ServiceNow, and E.G. Pearson, Security Architect at Unisys. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Karl, why are digital workflows so essential now for modern enterprises, and why are better security solutions needed to strengthen digital businesses?

Klaessig: Dana, you touched on cyber attacks being on the rise. It’s a really scary time if you think about MGM Resorts and some of the really big attacks in 2020 that took us all by surprise. And 23 percent of consumers have had their email or social media accounts hacked, taken over, or used. These are all huge threats to our everyday life as businesses and consumers.

And when we look at so many of us now working from home, this huge new attack surface is going to continue to grow. In a recent Gartner chief financial officer (CFO) survey, 74 percent of companies said they intend to shift some employees to work from home (WFH) permanently.

These are huge numbers, indicating a mad dash to build and scale remote-worker infrastructures. At the end of the day, the teams that E.G. and I represent, as vendors, strive hard to support these businesses as they seek to scale and address the explosive impact on cyber resilience and cyber operations in their enterprises.

Gardner: E.G., we have these new, rapidly evolving adoption patterns around extended digital businesses and workflows. Do the IT and security personnel, who perhaps cut their teeth in legacy security requirements, need to think differently? Do they have different security requirements now?

IT security requirements rise

Pearson: As someone who did cut their teeth in the legacy parts, I say, “Yes,” because things are new. Things are different.

The legacy IT world was all about protecting what they know about, and it’s hard to change. The new world is all about automation, right? It impacts everything we want to do and everything that we can do. Why wouldn’t we try to make our jobs as simple and easy as possible?

When I first got into IT, one of my friends told me that the easiest thing you can do is script everything that you possibly can, just to make your life simpler. Nowadays, with the way digital workflows are going, it’s not just about automating the simple things — now we’re able to easily automate the complex ones, too. We’re making it so anybody can jump in and get this automation going as quickly as possible.

Gardner: Karl, now that we’re dealing with extended digital workflows and expanded workplaces, how has the security challenge changed? What are we up against?

Klaessig: The security challenge has changed dramatically. What’s the impact of the Internet of Things (IoT) and edge computing? We’ve essentially created a much larger attack surface area, right?

What’s changed in a very positive way is that this expanded surface has driven automation and the capability to not only secure workflows but to collaborate on those workflows.

We have to have the capability to quickly detect, respond, and remediate. Let’s be honest, we need automated security for all of the remote solutions now being utilized – virtually overnight – by hundreds of thousands of people. Automation is going to be the driver. It’s what really rises to the top to help in this.

Gardner: E.G., one of the good things with the modern IT landscape is that we can do remote access for security in ways that we couldn’t before. So, for IoT, as Karl mentioned, we’re talking about branch offices — not just sensors or machines.

We increasingly have a very distributed environment, and we can get in there with our security teams in a virtual sense. We have automation, but we also have the virtual capability to reach just about everywhere. 

Pearson: Nowadays, IoT is huge. Operational technology (OT) is huge. Data is huge. Take your pick, it’s all massive in scope nowadays. Branch offices? Nowadays, all of us are our own branch office sitting at our homes.

Now, everybody is a field employee. The world changed overnight. And the biggest concern is how do we protect every branch office and every individual who’s out there? It used to be simpler: you created a site-to-site virtual private network (VPN), or you had communications that could be easily taken care of.


Now the communication is open to everybody because your kids want to watch Disney in the living room while you’re trying to work in your office while your wife is doing work for her job three rooms down. The world is different.

The networks that we have to work through are different. Now, instead of trying to protect an all-encompassing environment, it’s about moving to more individual or granular levels of security, of protecting individual endpoints or systems.

I now have smart thermostats and a smart doorbell. I don’t want anybody attaching to those. I don’t need somebody talking to my kids through those things. In the same vein, I don’t need somebody attaching to my company’s OT environment and doing something silly inside of there. So, in my opinion, it’s less about the overarching IT environment, and more about how to protect the individuals.

Gardner: To protect all of those vulnerable individuals, then, what are the new solutions? How are Unisys Stealth and the ServiceNow Platform coming together to help solve these issues?

Collaborate to protect individuals

Klaessig: Well, there are a couple of areas I’ll touch on. One is that Unisys has an uncanny capability to do isolation and initially contain a breach or threat. That is absolutely paramount for our customers. We need to get a very quick handle on how to investigate and respond. Our teams are all struggling to scale faster and faster with higher volume. So, every minute bought is a huge minute gained. Right out of the gate, between Unisys and ServiceNow, that buys us time — and every second counts. It’s invaluable.

Another thing that’s driving our solutions is the better tie between IT and security; there’s much more collaboration. For a long time, they tended to be in separate towers, so to speak. But the codependencies and collaborative drivers between Unisys and ServiceNow mean that those groups are so much more effective. The IT and security teams collaborate thanks to the things we do in the workflows and the automation between both of our solutions. It becomes extremely efficient and effective.

Gardner: E.G., why is your technology, Unisys Stealth for Dynamic Isolation, a good fit with ServiceNow? Why is that a powerful part of this automation drive?

Pearson: The nice part about dynamic isolation is it’s just a piece of what we can do as a whole with Unisys Stealth. Our Stealth core product is doing identity-based microsegmentation. And, by nature, it flows into software-defined networking, and it’s based on a zero trust model.

The reason that’s important is, in software-defined networking, we’re gathering tons of information about what’s happening across your network. So, in addition to what’s happening at the perimeter with firewalls, you are able to get really good, granular information about what’s happening inside of your environment, too.

We’re able to gather that and send all of that fantastic information over the ServiceNow Platform to your source, whatever it may be. ServiceNow is a fantastic jumping point for us to be able to get all that information into what would have been separate systems. Now they can all talk together through the ServiceNow Platform. 
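The identity-based microsegmentation E.G. describes can be pictured as a simple policy check: traffic is allowed only between identities that share a community, and every decision becomes an event that a platform could ingest. The sketch below is purely illustrative; the community names, flow fields, and `audit()` hook are invented for this example and are not the actual Unisys Stealth or ServiceNow APIs.

```python
# Illustrative sketch of identity-based microsegmentation (hypothetical names,
# not a real Unisys Stealth or ServiceNow API).
from dataclasses import dataclass

# Communities of interest: identities that are allowed to talk to each other.
COMMUNITIES = {
    "finance": {"alice", "erp-server"},
    "engineering": {"bob", "build-server"},
}

ALLOWED_PORTS = {443, 8443}  # example policy: only TLS traffic inside a community

@dataclass(frozen=True)
class Flow:
    src_identity: str
    dst_identity: str
    port: int

def evaluate(flow: Flow) -> bool:
    """Allow a flow only if both identities share a community and the port is permitted."""
    same_community = any(
        {flow.src_identity, flow.dst_identity} <= members
        for members in COMMUNITIES.values()
    )
    return same_community and flow.port in ALLOWED_PORTS

def audit(flow: Flow) -> dict:
    """Every decision, allowed or denied, becomes an event a platform could ingest."""
    return {"flow": flow, "allowed": evaluate(flow)}
```

Because decisions are keyed to identities rather than IP addresses, the same granular information the speakers mention falls out of the audit trail for free.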

Klaessig: To add to that, this partnership solves the issues around security data volume so you can prioritize accurately because you’re not inundated. E.G. just described the perfect scenario, which is that the right data gets into the right solution to enable effective assessment and understanding to make prioritizations on threat responses and threat actions based on business impact.

That huge but managed amount of data that comes in is invaluable. It’s what drives everything to get to prioritizing the right incidents. 

Gardner: The way you’re describing how the solutions work together, it sounds like the IT people can get better awareness about security priorities. And the security people can perhaps get insights into making sure that the business-wide processes remain safe.

Critical care for large communities

Klaessig: You’re absolutely right because the continuous threat prioritization and breach protection means that the protective measures have to go through both IT and security. That collaboration and automation enables not just the operational resilience that IT is driving for, but also the cyber resilience that the security teams want. It is a handshake.

That shared data and workloads are part of security but they reflect actual IT processes, and vice versa. It makes both more effective. 

Gardner: E.G., anything more to offer on this idea of roles, automation, and how your products come together?

Pearson: I wholeheartedly agree with Karl. IT and security can’t be siloed anymore. They can’t be separate organizations.

IT relies on what security operations puts in play, and security operations can’t do anything unless IT mitigates what security finds. So they can’t act individually any more. Otherwise, it’s like telling a football player to lace up their ice skates and go score a couple of goals.


Gardner: As we use microsegmentation and zero trust to attend to individual devices and users, can we provide a safer environment for sets of users or applications?

Pearson: Yes, we have to do this in smaller and smaller groups. It’s about being able to understand what those communities need and how to dynamically protect them. 

As we adjust to the pandemic and the humungous security breaches like we found at the end of 2020, protecting large communities can’t be done as easily. It’s so much easier to break those down into smaller chunks that can be best protected.

We can group things out based on use and the impact to the business. And again, this all contributes to the prioritization and the response when we coordinate between the two solutions, Unisys and ServiceNow.

Gardner: So it’s an identity-driven model but on steroids. It’s not just individual people. It’s critical groups.

Klaessig: Well said.

Pearson: Yes.

Gardner: How can people consume this, whether you’re in IT, security personnel, or even an end user? If you’re trying to protect yourself, how do you avail yourself of what ServiceNow and Unisys have put together?

Speed for bad-to-worse scenarios

Klaessig: The key is we target enterprises. That’s where we work together and that’s where ServiceNow workflows go. But to your point, nowadays I’m essentially a lone, solo office person, right? With that in mind, we need to remember those new best practices.

The appropriate workflows and processes within our collective solutions must reflect the actual individual users and processes. It goes back to our comments a couple of minutes ago, which is what do you use most? How often do you use it? When do you use it, and how critical is it? Also, who else is involved?

That’s something we haven’t touched on up until now — who else will be impacted? At the end of the day, what is the impact? In other words, if someone just had a credential stolen, I need the quick isolation from Unisys based on the areas of IT impacted. I can do that in ServiceNow, and then the appropriate response puts a workflow out and it’s automated into IT and security. That’s critical. And that’s the starting point for the other processes and workflows.

Gardner: We now need to consider what happens when you inevitably face some security issues. How does the ServiceNow Security Incident Response Platform and Unisys Stealth come together to help isolate, reduce, and stifle a threat rapidly?

Pearson: The reason such speed is important is that many of you have already been impacted by ransomware. How many of you have actually seen what ransomware will do if left unchecked for even just 30 minutes inside of a network? It’s horrible. That, to me, is your biggest need.

Whether it is just a regular end-user or if it’s a full-scale, enterprise-level-type workflow, speed is a huge reason that we need a solution to work and to work well. You have to be fast to keep bad things from going really, really wrong.

One of the biggest reasons we have come together with Stealth doing microsegmentation and building small communities and protecting them is to watch the flow of what happens with whom across ports and protocols because it is identity based. Who’s trying to access certain systems? We’re able to watch those things.

As we’re seeing that information, we’re able to say if something bad is happening on a specific system. We’re able to show that weird or bad traffic flow is occurring, send that to ServiceNow, and allow the automated operations to protect an endpoint or a server.

Because the process is automated, it brings the response down to less than 10 seconds, using automated workflows within ServiceNow. With dynamic isolation, we’re able to isolate that specific system and cut it off from doing anything else bad within a larger network.

That’s huge. That gives us the capability to quickly take on something that could bring down an entire system. I have seen ransomware go 30 minutes unchecked, and it completely ravaged an entire file server, which brought down an entire company for three days until everything could be restored from backups. Nobody has time for that. Nobody has time for 30 minutes of unchecked damage that costs you three days of extra work, not to mention what else may come from it.

With our combined capabilities, Unisys Stealth provides the information we’re able to send to the ServiceNow platform to have protection put in place to isolate and start to remediate within 10 seconds. That’s best for everybody, because 10 seconds’ worth of damage is a whole lot easier to mitigate than 30 minutes’ worth.
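The detect, create-incident, isolate loop described above can be sketched as a tiny automated playbook: a detected anomaly opens an incident, and the automated handler isolates the endpoint immediately rather than waiting on a human. Everything here (the function names, the in-memory incident queue, the isolation set) is a hypothetical stand-in for the real products, not their APIs.

```python
# Minimal sketch of an automated detect -> incident -> isolate playbook.
# All names are invented stand-ins for the real Unisys/ServiceNow integration.
import time

ISOLATED: set[str] = set()   # endpoints currently cut off from the network
INCIDENTS: list[dict] = []   # stand-in for a security incident queue

def report_anomaly(endpoint: str, detail: str) -> dict:
    """Detection side: a bad traffic flow observed on an endpoint opens an incident."""
    incident = {"endpoint": endpoint, "detail": detail, "opened_at": time.time()}
    INCIDENTS.append(incident)
    return incident

def isolate(endpoint: str) -> None:
    """Response side: dynamically isolate the endpoint so it cannot spread further."""
    ISOLATED.add(endpoint)

def handle(incident: dict) -> float:
    """Automated playbook: isolate immediately, then report response time in seconds."""
    isolate(incident["endpoint"])
    return time.time() - incident["opened_at"]

# Usage: the automated path responds in well under the 10-second window mentioned above.
incident = report_anomaly("file-server-01", "unusual SMB write burst")
elapsed = handle(incident)
```

The point of the sketch is the shape of the loop: because no human sits between detection and isolation, the response time is bounded by machine latency, not analyst availability.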

Klaessig: Really well-said, E.G.

Gardner: I can see why 2+2=6 when it comes to putting your solutions together. ServiceNow gets the information from Stealth that something is wrong, but then you could put the best of what you do together to work.

Resolve to scale with automation

Klaessig: We do. And this leads us to do even more automation. How can you get to that discovery point faster, and what does that mean to resolve the problem?

And there’s another angle to this. Our listeners and readers are probably saying, “I know we need to respond quickly, and, yes, you’re enabling me to do so. And, yes, you’re enabling me to isolate and do some orchestration that ties things up to buy me time. But how do I scale the teams that are already buried beyond belief today to go ahead and address that?”

That’s a bit overwhelming. And here’s another added wrinkle. E.G. mentioned ransomware, and the scary part is that in 2020 ransom demands were paid 50 percent of the time, versus one-third of the time in 2019. Even putting aside the pandemic and natural disasters, this is what our teams are facing.

It again goes back to what you heard E.G. and I touch on, which is automation of security and IT is what’s critical here. Not only can you respond consistently quicker, but you’ll be able to scale your teams and skills — and that’s where the automation further kicks in.


In other words, businesses can’t take on this type of volume around security management with the teams they have in place today. That’s why automation is so critical. Comprehensive tooling increases detection on the Unisys side, and that gives them not only more time to respond but allows them to be more effective as well. As attacks escalate, they can’t just go ahead and add more people in time, right? This is where they need that automation to be able to scale with what they have.

It really pays off. We’ve seen customers benefit from a dollars-and-cents perspective, where they saw a 74 percent improvement in time-to-identify. And now 46 percent of their incidents are handled by automation, saving more than 8,700 hours annually for their teams. Just wrap your head around that. I mean, that’s just a huge advantage from putting these pieces together and automating and orchestrating like E.G. has been talking about.

Gardner: Is it too soon, Karl, to talk about bots and more automation where the automation is a bit more proactive? What’s going to happen when the data and the speed get even more useful, but more compressed when it comes to the response time? How smart are these systems going to get?

Get people to do the right thing

Klaessig: The reality is, we’re already going there. When you think of machine learning (ML) and artificial intelligence (AI), we’re already doing a certain amount of that in the products.

As we leverage more of the great data from Unisys, it drives who can resolve those vulnerabilities because they have a predetermined history of dealing with those types of vulnerabilities. That’s just an example of being able to use ML to align the right people to the right resolution. Because, at the end of the day, it still comes down to certain people doing certain things and it always will. But we can use that ML and AI to put those together very quickly, very accurately, and very efficiently. So, again, it takes that time to respond down to seconds, as E.G. mentioned.
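The routing idea Klaessig describes — assigning a vulnerability to whoever has a history of resolving that class — can be shown with a minimal sketch. A real system would use an ML model over richer features; a simple frequency table over past resolutions illustrates the principle. All names here are hypothetical.

```python
from collections import Counter, defaultdict

class ResolutionRouter:
    """Routes an incoming vulnerability to the engineer who has most
    often resolved that vulnerability class before; falls back to a
    manual triage queue when there is no history."""

    def __init__(self):
        # vuln_class -> Counter of engineers who resolved it
        self.history = defaultdict(Counter)

    def record_resolution(self, vuln_class: str, engineer: str) -> None:
        self.history[vuln_class][engineer] += 1

    def route(self, vuln_class: str) -> str:
        if not self.history[vuln_class]:
            return "triage-queue"  # no history yet: keep a human in the loop
        return self.history[vuln_class].most_common(1)[0][0]

router = ResolutionRouter()
router.record_resolution("sql-injection", "dana")
router.record_resolution("sql-injection", "dana")
router.record_resolution("sql-injection", "alex")
print(router.route("sql-injection"))  # dana
print(router.route("xss"))            # triage-queue
```

The point, as in the transcript, is that people still do the resolving — the automation only puts the right work in front of the right person faster.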

Gardner: Are we going to get to a point where we simply say, “J.A.R.V.I.S., clean up the network”?

Pearson: I hope so! Going back to my old days of being an admin, I was an extremely lazy admin. If I could have just said, “J.A.R.V.I.S., remediate my servers,” I would have been all over it.

I don’t think there’s any way we can’t move toward more automation and ML. I just don’t want us to get to the point where Skynet refuses to delete the virus, saying, “I am the virus.” We don’t need that.

But being able to automate helps overcome the mundane, such as resetting somebody’s password or pulling a system offline that’s experiencing some sort of weird behavior, whatever it may be. Automating those types of things helps everybody go faster through their day because if you’re working a helpdesk, you’ve already got 19 people with their hair on fire begging for your attention.

If you could cut off five of those people by automating and very easily allowing some AI to do the work for you, why wouldn’t you? I think their time is more valuable than the few dollars it’s going to cost to automate those processes.
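The "cut off five of those people" idea amounts to a playbook dispatcher: routine request types map to scripted actions, and anything non-routine escalates to a human. A minimal sketch, with all playbook names and actions being hypothetical placeholders:

```python
# Hypothetical automation playbooks for the mundane tickets mentioned
# above; each maps a request type to a scripted action so the helpdesk
# queue shrinks before a human ever looks at it.

def reset_password(user: str) -> str:
    # Stand-in for an identity-management API call
    return f"temporary credentials issued to {user}"

def quarantine_endpoint(host: str) -> str:
    # Stand-in for a network-isolation call
    return f"{host} pulled offline pending investigation"

PLAYBOOKS = {
    "password_reset": reset_password,
    "suspicious_endpoint": quarantine_endpoint,
}

def triage(ticket: dict) -> str:
    action = PLAYBOOKS.get(ticket["type"])
    if action is None:
        # Anything without a playbook stays with a human analyst
        return "escalated to human analyst"
    return action(ticket["subject"])

print(triage({"type": "password_reset", "subject": "jsmith"}))
print(triage({"type": "ransomware_outbreak", "subject": "fileserver-01"}))
```

The dictionary-of-playbooks shape makes the automation boundary explicit: adding a new automated task is one entry, and everything else defaults to escalation rather than guessing.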

Klaessig: That’s going to be the secret to success in 2021 and going forward. You can scale, and the way you’re going to scale is to take out those mundane tasks and automate all of those different things that can be automated.

As I mentioned, 46 percent of the security incidents became automated for our customer. That’s a huge advantage. And at the end of the day, putting J.A.R.V.I.S. aside, the more ML we can get into it, the better and more repeatable the processes and the workflows will be — and that much faster. That’s ultimately what we’re driving toward as well.

Gardner: Now that we understand the context of the problem, the challenges organizations face, and how these solutions come together, I’m curious how this actually gets embedded into organizations. Is this something that the security people do, that the IT people do, that the helpdesk people do? Is it all of the above?

Everybody has a role to reap benefits

Pearson: The way we usually get this going is there needs to be buy-in from everybody because it’s going to touch a lot of folks. I’m willing to bet Karl’s going to say similar things. It’s nice to have everybody involved and to have everybody’s buy-in on this.

It usually starts for us at Unisys with what we’re doing with microsegmentation, and with the networking and security groups. They need to talk to each other to be able to get this rolled out. We also need the general IT folks because they’re going to have to install and roll this out to endpoints. And we need the server admins involved as well.

At the end of the day, this goes back to being a collaborative opportunity … for IT and security to join together. These solutions benefit both teams and can piggyback on investments they have already made elsewhere.

When it comes down to it, everybody’s going to have to be involved a little bit. But it generally starts with the security folks and the networking folks, saying, “How can I protect my environment just a little bit more than I was before?” And then it rolls from there.

And that’s a big advantage as well. Going forward, I strongly believe — and I’ve seen the results of this — that it’s a driver toward greater collaboration. It is that type of deployment, and it should be done in that manner. And then, quite frankly, both organizations reap the benefits.

Pearson: Wholeheartedly.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Unisys and ServiceNow.



How HPE Pointnext ‘Moments’ provide a proven critical approach to digital business transformation

The next edition of the BriefingsDirect Voice of Innovation video podcast series explores new and innovative paths for businesses to attain digital transformation.

Even as a vast majority of companies profess to be seeking digital business transformation, few proven standards or broadly accepted methods stand out as the best paths to take.

And now, the COVID-19 pandemic has accelerated the need for bold initiatives to make customer engagement and experience optimization an increasingly data-driven and wholly digital affair.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. View the video.

Stay with us here to welcome a panel of experts as they detail a multi-step series of “Moments” that guide organizations on their transformations. Here to share the Hewlett Packard Enterprise (HPE) view on helping businesses effectively innovate for a new era of pervasive digital business are:

The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Craig, while some 80 percent of CEOs say that digital transformation initiatives are under way, and they’re actively involved, how much actual standardization — or proven methods — are available to them? Is everyone going at this completely differently? Or is there some way that we can help people attain a more consistent level of success?

Partridge: A few things have emerged that are becoming commonly agreed upon, if not commonly executed upon. So, let’s look at those things that have been commonly agreed-upon and that we see consistently in most of our customers’ digital transformation agendas.

The first principle would be — and no shock here — focusing on data and moving toward being a data-driven organization to gain insights and intelligence. That leads to being able to act upon those insights for differentiation and innovation.

It’s true to say that data is the currency of the digital economy. Such a hyper-focus on data implies all sorts of things, not least of all making sure you’re trusted to handle that data securely, with cybersecurity for all of the good things that come out of that data.

Another thing we’re seeing now as common in the way people think about digital transformation is that it’s a lot more about being at the edge. It’s about using technology to create an exchange value as they transact value from business-to-business (B2B) or business-to-consumer (B2C) activities in a variety of different environments. Sometimes those environments can be digitized themselves, the idea of physical digitization and using technology to address people and personalities as well. So edge-centric thinking is another common ingredient.

These may not form an exact science, in terms of a standardized method or industry standard benchmark, but we are seeing these common themes now iterate as customers go through digital transformation.

Gardner: It certainly seems that if you want to scale digital transformation across organizations that there needs to be consistency, structure, and common understanding. On the other hand, if everyone does it the same way, you don’t necessarily generate differentiation.

How do you best attain a balance between standardization and innovation?

Partridge: It’s a really good question because there are components of what I just described that can be much more standardized to deliver the desired outcomes from these three pillars. If you look, for example, at cloud-use-enablement, increasingly there are ways to become highly standardized and mobilized around a cloud agenda.

And that doesn’t vary much from industry to industry. Moving toward containerization, for example, and leveraging microservices or developing with an open API mindset — these principles are pervasive in almost every industry. IT has to bring its legacy environment to play in that discussion at high velocity and high agility. So there is standardization on that side of it.

The variation kicks in as you pivot toward the edge and in thinking about how to create differentiated digital products and services, as well as how you generate new digital revenue streams and how you use digital channels to reach your customers, citizens, and partners. That’s where we’re seeing a high degree of variability. A lot of that is driven by the industry. For example, if you’re in manufacturing you’re probably looking at how technology can help pinpoint pain or constraints in key performance indicators (KPIs), like overall equipment effectiveness, and in addressing technology use across the manufacturing floor.

If you’re in retail, however, you might be looking at how digital channels can accelerate and outpace the four-walled retail experiences that companies may have relied on pre-pandemic.

Gardner: Craig, before we drill down into the actual Moments, were there any visuals that you wanted to share to help us appreciate the bigger picture of a digital transformation journey?

Partridge: Yes, let me share a couple of observations. As a team, we engage in thousands of customer conversations around the world. And what we’re hearing is exactly what we saw from a recent McKinsey report.

There are a number of reasons why seven out of 10 respondents in this particular survey say they are stalled in attaining digital execution and gaining digital business value. Those reasons center around four key areas. First of all, communication. It sounds like such a simple problem statement, but it is sometimes so hard to communicate what is a quite complex agenda in a way that is simple enough for as many people as possible — key stakeholders — to rally behind and to make real inside the organization. Sometimes it’s a simple thing of, “How do I visualize and communicate my digital vision?” If you can’t communicate really clearly, then you can’t build that guiding coalition behind you to help execute.

A second barrier to progress centers on complexity — having a lot of suspended, spinning plates at the same time and trying to figure out the relationships and dependencies between all of the initiatives that are running. Can I de-duplicate or de-risk some of what I’m doing to get it done quicker? That tends to be a major barrier.

The third one you mentioned, Dana, which is, “Am I doing something different? Am I really trying to unlock the business models and value that are uniquely mine? Am I changing or reshaping my business and my market norms?” The differentiation challenge is really hard.

The fourth barrier is when you do have an idea or initiative agenda, then how to lay out the key building blocks in a way that’s going to get results quickly. That’s a prioritization question. Customers can get stuck in a paralysis-by-analysis mode. They’re not quite sure what to establish first in order to make progress and get to that minimum valuable product as quickly as possible. Those are the top four things we see.

To get over those things, you need a clear transformation strategy and clarity on what it is you’re trying to do. As I always say, to deliver those experiences and differentiation — everything from the edge, the business model, and how you engage with customers and clients, through to the technology assembly behind it — you have to have a distinctive transformation strategy. It leads to an acceleration capability, getting beyond the barriers, and planning the digital capabilities in the right sequence.

You asked, Dana, at the opening if there are emerging models to accomplish all of this. We have established at HPE something called Digital Next Advisory. That’s our joint customer engagement framework, through which we diagnose and pivot beyond the barriers that we commonly see in our customers’ digital ambitions. So that’s a high-level view of where we see things going, Dana.

Gardner: Why do you call your advisory service subsets “Moments,” and why have you ordered them the way you did?

Moments create momentum for digital

Partridge: We called them Moments because in our industry if you start calling things services then people believe, “Oh, well, that sounds like just a workshop that I’ll pay for.” It doesn’t sound very differentiated.

We also like the way it expresses co-innovation and co-engagement. A moment is something to be experienced with someone else. So there are two sides to that equation.

In terms of how we sequence them, actually they’re not sequenced. And that’s key. One of the things we do as a team across the world is to work out where the constraint points and barriers are. So think of it as a methodology.

And as with any good methodology, there are a lot of tools in the toolkit. The key for us as practitioners in the Digital Next Advisory service is to know what tool to bring at the right point to the customer.

Sometimes that’s going to mean a communication issue, so let’s go solve for that particular problem first. Or, in some cases, it’s needing a differentiated technology partner, like HPE, to come in and create a vision, or a value proposition, that’s going to be different and unique. And so we would engage more specifically around that differentiation agenda.

There’s no sequencing; the sequencing is unique to each customer. And the right Moment is to make sure that the customer understands it is bidirectional. This is a co-engagement framework between two parties.

Gardner: All right, very good. Let’s welcome back Yara.

Schuetz: To reiterate what Craig mentioned, when we engage with a customer in a complex phenomenon such as digital transformation, it’s important to find common ground where we can and then move forward in the digital transformation journey specific to each of our customers.

Common core beliefs drive outcomes

We have three core beliefs. One is being edge-centric, where we believe there are two business goals and outcomes that our customers are trying to achieve.

In the top left, we have the human edge-centric journey, which is all about redefining customer experiences. In this journey, for example, the corporate initiative could mean the experiences of two personas. It could be the customer or the employees.

These initiatives are designed to increase revenues and productivity via digital engagements such as new services, for example mobile apps. Complementing this human edge journey, we have the physical journey, or the physical edge, which is about gaining insight and control. It’s about using, for example, Internet of Things (IoT) technology in the environment the organization works in, operates in, or provides services in. So the business objective in this journey consists of improving efficiency by means of digitizing the edge.

Complementary to the edge-centric side, we also have the core belief that the enterprise of the future will be cloud-enabled. By being cloud-enabled, we again separate the cloud-enabled capabilities into two distinct journeys.

The bottom right-hand journey is about modernizing and optimization. In this journey, initiatives address how IT can modernize its legacy environment with, for example, multi-cloud agility. It also includes, for example, optimization and management of services delivery, where different workloads should be best hosted. We’re talking about on-premises as well as different cloud models to focus the IT journey. That also includes software development, especially accelerating development.

This journey also involves the development improvement around personas. The aim is to speed up time-to-value with cloud-native adoption. For example, calling out microservices or containerization to shift innovation quickly over to the edge, using certain platforms, cloud platforms, and APIs.

The third core belief that the enterprise of the future should strive for is the data-driven, intelligence journey, which is all about analyzing and using data to create intelligence to innovate and differentiate from competitors. As a result, they can better target, for example, business analytics and insights using machine learning (ML) or artificial intelligence (AI). Those initiatives generate or consume data from the other journeys.

And complementary to this aspect is bringing trust to all of the digital initiatives. It’s directly linked to the intelligence journey because the data generated or consumed by the four journeys needs to be dealt with in a connected organization, with resiliency and cybersecurity playing leading roles, resulting in trust for internal as well as external stakeholders.

At the center is the operating model. And that journey really builds the center of the framework because skills, metrics, practices, and governance models have to be reshaped, since they dictate the outcomes of all digital transformation efforts.

These components build the enabling considerations one must weigh when pursuing different business goals, such as driving revenues, building productivity, or modernizing existing environments via multi-cloud agility. And what many companies are really asking for right now is to put all of that in the context of everything-as-a-service.

Everything-as-a-service does not just belong to, for example, the cloud-enabled side. It’s not only about how you’re consuming technology. It also applies to the edge side for our customers, and in how they deliver, create, and monetize their services to their customers.

Gardner: Yara, please tell us how organizations are using all of this in practice. What are people actually doing?

Communicate clearly with Activate

Schuetz: One of the core challenges we’ve experienced together with customers is that they have trouble framing and communicating their transformation efforts in an easily understandable way across their entire organizations. That’s not an easy task for them.

Communication tension points tend to be, for example, how to really describe digital transformation. Is there any definition that really suits my business? And how can I visualize, easily communicate, and articulate that to my entire organization? How does what I’m trying to do with technology make sense in a broader context within my company?

So within the Activate Moment, we familiarize them with the digital journey map. This captures their digital ambition and communicates a clear transformation and execution strategy. The digital journey map is used as a model throughout the conversations. This tends to improve how an abstract and complex phenomenon like digital transformation can be delivered as something visual and simple to communicate.

Besides simplification, the digital journey map in the Activate Moment also helps describe an overview and gives a structure of various influencing categories and variables, as well as their relationship with each other in the context of digital transformation. It provides our customers guidance on certain considerations, and, of course, all the various possibilities of the application of technology in their business.

For example, at the edge, when we bring the digital journey map into the customer conversation in our Activate Moment, we don’t just talk about the edge generally. We refer to specific customer needs and what their edge might be.

In the financial industry, for example, we talk about branch offices as their edge. In manufacturing, we’re talking about production lines as their edges. In retail, where you have public customers, we talk about the venues as the edge and how — in times like this and the new normal — they can redefine experience and drive value there for their customers.

Of course, this also serves as inspiration for internal stakeholders. They might say, “Okay, if I link these initiatives, or if I’m talking about this topic in the intelligence space, [how does that impact] the digitization of research and development? What does that mean in that context? And what else do I need to consider?”

Such inspiration means they can tie all of that together into a holistic and effective digital transformation strategy. The Activate Moment engages more innovation on the customer-centric side, too, by bringing insights into the different and various personas at a customer’s edge. They can have different digital ambitions and different digital aspirations that they want to prosper from and bring into the conversation.

Gardner: Thanks again, Yara. On the thinking around personas and the people, how does the issue of defining a new digital corporate culture fit into the Activate Moment?

Schuetz: It fits in pretty well because we are addressing various personas with our Activate Moment. For the chief digital officer (CDO), for example, the impact of the digital initiatives on the digital backbone are really key. She might ask, “Okay, what data will be captured and processed? And which insights will we drive? And how do we make these initiatives trusted?”

Gardner: We’re going to move on now to the next Moment, Align, and orchestrating initiatives with Aviviere. Tell us more about the orchestrating initiatives and the Align Moment, please.

Align with the new normal and beyond

Telang: The Align Moment is designed to help organizations orchestrate their broad catalog of digital transformation initiatives. These are the core initiatives that drive the digital agenda. Over the last few years, as we’ve engaged with customers in various industries, we have found that one of the most common challenges they encounter in this transformation journey is a lack of coordination and alignment between their most critical digital initiatives.

And, frankly, that slows their time-to-market and reduces the value realized from their transformation efforts. Especially now, with the new normal that we find ourselves in, organizations are rapidly scaling up and broadening out their digital agendas.

As these organizations rapidly pivot to launching new digital experiences and business models, they need to rapidly coordinate their transformation agenda against an ever-increasing set of stakeholders — who sometimes have competing priorities. These stakeholders can be the various technology teams sitting in an IT or digital office, or perhaps the business units responsible for delivering these new experience models to market. Or they can be the internal functions that support the internal operations and supply chains of the organization.

We have found that these groups are not always well-aligned to the digital agenda. They are not operating as a well-oiled machine in their pursuit of that singular digital vision. In this new normal, speed is critical. Organizations have to get aligned to the conversation and execute on all of the digital agenda quickly. That’s where the Align Moment comes in. It is designed to generate deep insights that help organizations evaluate a catalog of digital initiatives across organizational silos and to identify an execution strategy that speeds up their time-to-market.

So what does that actually look like? During the Align Moment, we bring together a diverse set of stakeholders that own or contribute to the digital agenda. Some of the stakeholders may sit in the business units, some may sit in internal functions, or maybe even on the digital office. But we bring them together to jointly capture and evaluate the most critical initiatives that drive the core of the digital agenda.

The objective is to jointly blend our own expertise and experience with that of our customers to jointly investigate and uncover the prerequisites and interdependencies that so often exist between these complex sets of enterprise-scale digital initiatives.

During the Align Moment, you might realize that the business units need to quickly recalibrate their business processes in order to meet the data security requirements coming in from the business unit or the digital team. For example, one of our customers found out during their own Align Moment that before they got too far down the path of developing their next generation of digital product, they needed to first build in data transparency and accessibility as a core design principle in their global data hub.

The methodology in the Align Moment significantly reduces execution risk as organizations embark on their multi-year transformation agendas. Quite frankly, these agendas are constantly evolving because the speed of the market today is so fast.
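The prerequisite mapping described above can be treated as a dependency graph, and a topological sort then yields an execution order in which no initiative starts before the ones it depends on. This is an illustrative sketch of that sequencing idea — the initiative names are hypothetical and this is not an HPE tool:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical initiative catalog: each entry lists the initiatives
# that must land first — the prerequisites uncovered in an Align-style
# dependency review.
dependencies = {
    "next-gen digital product": {"global data hub transparency"},
    "global data hub transparency": set(),
    "customer mobile app": {"open API layer"},
    "open API layer": {"global data hub transparency"},
}

# static_order() yields an order consistent with every prerequisite,
# surfacing which foundational work unblocks the most initiatives.
order = list(TopologicalSorter(dependencies).static_order())
print(order)
```

Here the sort makes the customer's lesson from the transcript concrete: the data-hub work surfaces as the first item precisely because everything else depends on it.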

Our goal here is to drive a faster time-to-value for the entire digital agenda by coordinating the digital execution strategy across the organization. That’s what the Align Moment helps our customers with. That value has been brought to different stakeholders that we’ve engaged with.

The Align Moment has brought tremendous value to the CDO, for example. The CDO now has the ability to quickly make sense and — even in some cases — coordinate the complex web of digital initiatives running across their organizations, regardless of which silos they may be owned within. They can identify a path to execution that speeds up the realization of the entire digital agenda. I think of it as giving the CDO a dashboard through which they can now see their entire transformation on a singular framework.

We’ve also found that the Align Moment delivers a lot of value for digital initiative owners, because we jointly work across silos to de-risk the execution path that implements each initiative — whether it’s a technology risk, process risk, or governance risk. That helps to highlight the dependencies between these competing initiatives and competing priorities. And then, sequencing the work streams and efforts minimizes the risk of delays, or of mismatched deliverables or outputs, between teams.

And then there is the chief information officer (CIO). This is a great tool for the CIO to take IT to the next level. They can elevate the impact of IT in the business, and in the various functions in the organization, by establishing agile, cross-functional work streams that can speed up the execution of the digital initiatives.

That’s in a nutshell what the Align Moment is about, helping our customers rapidly generate deep insights to help them orchestrate their digital agenda across silos, or break down silos, with the goal to speed up execution of their agendas.

Advance to the next big thing

Gardner: We’re now moving on to our next Moment, around stimulating differentiation, among other things. We now welcome back Christian to tell us about the Advance Moment.

Reichenbach: The train of thought here is that digital transformation is not only about optimizing businesses by using technology. We also want to emphasize that technology is used to transform the businesses themselves.

That means that we are using technology to differentiate the value propositions of our customers. And differentiation means, for example, new experiences for the customers of our customers, as well as new interactions with digital technology.

Further, it’s about establishing new digital business models, gaining new revenue streams, and expanding the ecosystem in a much broader sense. We want to leverage technology to differentiate the value propositions of our customers — and differentiation means you can’t get there by just copycatting, looking to your peers, and replicating what others are doing. That will not differentiate the value proposition.

Therefore, we specifically designed the Advance Moment, where we co-innovate and co-ideate together with our customers to find their next big thing and drive technology toward a much more differentiated value proposition.

Gardner: Christian, tell us more about the discrete steps that people need to take in order to stimulate that differentiation.

Reichenbach: Differentiation comes from having new ideas and doing something different than in the past. That’s why we designed the Advance Moment to help our customers differentiate their unique value proposition.

The Advance Moment is designed as a thinking exercise that we do together with our customers across their diverse teams, meaning product owners, technology designers, engineers, and the CDO. This is a diverse team thinking about a specific problem they want to solve, but they shouldn’t think about it in isolation. They should think about what they do differently in the future to establish new revenue streams with maybe a new digital ecosystem to generate the new digital business models that we see all over the place in the annual reports from our customers.

Everyone is in the race to find the next big thing. We want to help them because we have the technology capabilities and experience to explain and discuss with our customers what is possible today with such leading technology as from HPE.

We can prove that we’ve done that. For example, we sat down with Continental, the second largest automotive parts supplier in the world, and ideated about how we can redefine the experience of a driver driving along the road. We came up with a data exchange platform that helps car manufacturers exchange data with each other, so that the driver sitting in the car gets new entertainment services that were not possible without a data exchange platform.

Our ideation and our Advance Moment are focused on redefining the experience and stimulating new ideas that are groundbreaking — and are not just copycatting what their peers are doing. And that, of course, will differentiate the value propositions from our customers in a unique way so that they can create new experiences and ultimately new revenue streams.

We’re addressing particular personas within our customer’s organization. That’s because today we see that the product owners in a company are powerful and are always asking themselves, “How can I bring my product to the next level? How can I differentiate my product so that it is not easily comparable with my peers?”

And, of course, the CDOs in the customer organizations are looking to orchestrate these initiatives, support the product owners and engineers, and build up the innovation engine with the right initiatives and the right ideas. And, of course, when we’re talking about digital business transformation, we end up in the IT department because it all has to operate somewhere.

So we bring in the experts from the IT department, as well as the CIO, to turn ideas quickly into reality. Turning ideas quickly into something meaningful for our customers is what we designed the Accelerate Moment for.

Gardner: We will move on next to Amos to learn about the Accelerate Moment and moving toward the larger digital transformation value.

Accelerate from ideas into value

Ferrari: When it comes to realizing digital transformation, let me ask you a question, Dana. What do you think is the key problem our customers have?

Gardner: Probably finding ways to get started and then finding realization of value and benefits so that they can prove their initiative is worthwhile.

Ferrari: Yes. Absolutely. It’s a problem of prioritization of investment. They know that they need to invest, they need to do something, and they ask, “Where should I invest first? Should I invest in the big infrastructure first?”

But these decisions can slow things down. Yet time-to-market and speed are the keys today. We all know that this is what is driving the behavior of people in their transformations. And so the key thing is the Accelerate Moment. It’s the Moment where we engage with our customers via workshops.

We enable them to extrapolate from their digital ambition and identify what will enable them to move into the realization of their digital transformation. “Where should I start? What is my journey’s path? What is my path to value?” These are the main questions that the Accelerate Moment answers.


As you can see, this is part of the entire HPE Digital Next Advisory service, and it enables the customer to move quickly to the realization of benefits. In this engagement, you start with decisions about the use cases and the technology. There are a number of key elements and decisions that the customer is making, and this is where we help them with the Accelerate Moment.

To deliver an Accelerate Moment, we use a number of steps. First, we frame the initiative by having a good discussion about their KPIs. How are you going to measure them? What are the benefits? Because the business is what is driving this. We know that. And we understand how the technology links to the business use case. So we frame the initiative, understand the use cases, and scope out the use cases that advance the KPIs essential for the customer. That is a key step in the Moment.

Another important thing to understand is that in a digital transformation, a customer is not alone. No customer succeeds without thinking holistically about their digital ecosystem. A customer is successful when they think about the complete ecosystem, including not only the key internal stakeholders but also the other stakeholders surrounding them. Together they can build new digital value and enable customer differentiation.

The next step is understanding the breadth of technology across our digital journey map. The digital journey map helps customers see beyond just one angle. They may have started only from the IT point of view, only from the developer point of view, or just the end-user point of view. The reality is that IT is now becoming the value creator. But to be the value creator, they need to consider the entire technology estate of the company.

They need to consider edge-to-cloud, and data, as a full picture. This is where we can help them, through a discussion of the full set of technology that supports the value. How can you bring value to your full digital transformation?

The last step that we consider in the Accelerate Moment is to identify the elements surrounding your digital transformation that are the key building blocks and that will enable you to execute immediately. Those building blocks are key because they create what we call the minimum value product.

They should build up a minimum value product and surround it with the execution to realize the value immediately. They should do that without thinking, “Oh, maybe I need two or three years before I realize that value.” They need to change to asking, “How can I do this in a very short time by creating something simple and straightforward, by putting the key building blocks in place?”

This shows how everything is linked and how we need to best link it together. How? We link everything together with stories. And the stories are what help the key stakeholders realize what they need to create. The stories are about the different stakeholders and how they see themselves in the future of the digital transformation. This is the way we show them how it is going to be realized.

The end result is that we deliver a number of stories that assemble the key building blocks. We create a narrative that enables them to see how the applied technology helps them create value for their company and achieve key growth. This is the Accelerate Moment.

Gardner: Craig, as we’ve been discussing differentiation for your customers, what differentiates HPE Pointnext Services? Why are these four Moments the best way to obtain digital transformation?


Partridge: Differentiation is key for us, as well as for our customers, across a complex and congested landscape of partners that customers can choose from. Some of that differentiation we’ve touched on here. There is no one else in the market, as far as I’m aware, that has the edge-to-cloud digital journey map, which is HPE’s fundamental model. It allows us to holistically paint the story of digital transformation and digital ambition, and it also shows how to do that at the initiative level and how to plug in those building blocks.

I’m not aware of anybody else who, with the maturity of an edge-to-cloud model, can bring digital ambition to life: visualize it through the Activate Moment, orchestrate it through the Align Moment, create differentiation through the Advance Moment, and then get to quicker value with the Accelerate Moment.

Gardner: Craig, for those organizations interested in learning more, how do they get started? Where can they go for resources to gain the ability to innovate and be differentiated?

Partridge: If anybody viewing this has seen something that they want to grab on to, that they think can accelerate their own digital ambition, then simply pick up the phone and call HPE and your sales rep. We have sales organizations ranging from dedicated enterprise managers at some of the biggest customers around the world through to an inside-sales organization for small- to medium-sized businesses. Call your HPE sales rep and say the magic words, “I want to engage with a digital adviser and I’m interested in Digital Next Advisory.” That should be the flag that triggers a conversation with one of our digital advisers around the world.

Finally, there’s an email address. If worse comes to worst, send an email to that address and we’ll get straight back to you. We make it as easy as possible, so just reach out to an HPE digital adviser.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. View the video. Sponsor: Hewlett Packard Enterprise.



How global data availability accelerates collaboration and delivers business insights

The next BriefingsDirect data strategy insights discussion explores the payoffs when enterprises overcome the hurdles of disjointed storage to obtain global data access.

By leveraging the latest in container and storage server technologies, the holy grail of inclusive, comprehensive, and actionable storage can be obtained. And such access extends across all deployment models – from hybrid cloud, to software-as-a-service (SaaS), to distributed data centers, and edge.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Stay with us here to examine the role that comprehensive data storage plays in delivering the rapid insights businesses need for digital business transformation with our guest, Denis Kennelly, General Manager, IBM Storage. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Denis, in our earlier discussions in this three-part series we learned about IBM’s vision for global consistent data, as well as the newest systems forming the foundation for these advances.

But let’s now explore the many value streams gained from obtaining global data access. We hear a lot about the rise of artificial intelligence (AI) adoption needed to support digital businesses. So what role does a modern storage capability — particularly with a global access function and value — play in that AI growth? 

Kennelly: As enterprises become increasingly digitally transformed, the amount of data they are generating is enormous. IDC predicts that something like 42 billion Internet of Things (IoT) devices will be sold by 2025, so the role of storage is no longer confined to centralized data centers. Storage needs to be distributed across this entire hybrid cloud environment.

Discover and share AI data

For actionable AI, you want to build models on all of the data that’s been generated across this environment. Being able to discover and understand that data is critical, and that’s why it’s a key part of our storage capabilities. You need to run that storage on all of these highly distributed environments in a seamless fashion. You could be running anywhere — the data center, the public cloud, and at edge locations. But you want to have the same software and capabilities for all of these locations to allow for that essential seamless access.

That’s critical to enabling an AI journey because AI doesn’t just operate on the data sitting in a public cloud or data center. It needs to operate on all of the data if you want to get the best insights. You must get to the data from all of these locations and bring it together in a seamless manner.

Gardner: When we’re able to attain such global availability of data — particularly in a consistent context – how does that accelerate AI adoption? Are there particular use cases, perhaps around DevOps? How do people change their behavior when it comes to AI adoption, thanks to what the storage and data consistency can do for them?

Kennelly: First it’s about knowing where the data is and doing basic discovery. And that’s a non-trivial task because data is being generated across the enterprise. We are increasingly collaborating remotely and that generates a lot of extended data. Being able to access and share that data across environments is a critical requirement. It’s something that’s very important to us. 

Then, as you discover and share the data, you can also bring that data together for use by AI models. You can use it to generate better AI models across the various tiers of storage. But you don’t want to just end up saying, “Okay, I discovered all of the data. I’m going to move it to this certain location and then I’m going to run my analytics on it.”

Part 1 in the IBM Storage innovation series

Part 2 in the series 

Instead, you want to do the analytics in real time and in a distributed fashion. And that’s what’s critical about the next level of storage.

Coming back to what’s hindering AI adoption, number one is that data discovery, because enterprises spend a huge amount of time just discovering the data. And when you get access, you need to have seamless access. And then, of course, as you build your AI models, you need to infuse those analytics into the applications and capabilities that you’re developing.

And that leads to your question about DevOps: being able to integrate the process of generating and building AI models into the application development process, so that application developers can leverage those insights in the applications they are building.

Gardner: For many organizations, moving to hybrid cloud has been about application portability. But when it comes to the additional data mobility we gain from consistent global data access, there’s a potential greater value. Is there a second shoe to fall, if you will, Denis, when we can apply such data mobility in a hybrid cloud environment?

Access data across hybrid cloud 

Kennelly: Yes, and that second shoe is about to fall. The first part of our collective cloud journey was all about moving to the public cloud, moving everything to public clouds, and building applications with cloud-based data.

What we discovered in doing that is that life is not so simple, and we’re really now in a hybrid cloud world for many reasons. That’s why we now need the hybrid cloud approach.

The need for more cloud portability has led to technologies like containers to get portability across all of the environments — from data centers to clouds. As we roll out containers into production, however, the whole question of data becomes even more critical.

You can now build an application that runs in a certain environment, and containers allow you to move that application to other environments very quickly. But if the data doesn’t follow — if the data access doesn’t follow that application seamlessly — then you face some serious challenges and problems.

And that is the next shoe to drop, and it’s dropping right now. As we roll out these sophisticated applications into production, being able to copy data or get access to data across this hybrid cloud environment is the biggest challenge the industry is facing.

Gardner: When we envision such expansive data mobility, we often think about location, but it also impacts the type of data, be it file, block, or object storage, for example. Why must there be global access geographically, but also in terms of the storage type and across the underlying technology platforms?

Kennelly: We really have to hide that layer of complexity, the storage type and platform, from the application developer. At the end of the day, the application developer is looking for a consistent API through which to access the data services, whether that’s file, block, or object. They shouldn’t have to care about that level of detail.


It’s important that there’s a focus on consistent access via APIs for the developer. And then the storage subsystem has to take care of the federated, global access to the data. Also, as we generate data, the storage subsystem should scale horizontally.

These are the design principles we have put into the IBM Storage platform. Number one, you get seamless and consistent access, be it file, object, or block storage. And we can scale horizontally as you generate data across that hybrid cloud environment.
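As a rough illustration of that design principle, a unified data-access layer can expose one interface to the developer while different backends supply it. The class names below are invented for this sketch and are not an actual IBM Storage API:

```python
# Hypothetical sketch: one consistent interface for the developer, with the
# file/block/object distinction hidden behind it. Not an actual IBM API.
from abc import ABC, abstractmethod


class StorageBackend(ABC):
    """The one API the application developer codes against."""

    @abstractmethod
    def read(self, key: str) -> bytes: ...

    @abstractmethod
    def write(self, key: str, data: bytes) -> None: ...


class ObjectBackend(StorageBackend):
    """Stands in for an object store."""
    def __init__(self) -> None:
        self._bucket: dict[str, bytes] = {}

    def read(self, key: str) -> bytes:
        return self._bucket[key]

    def write(self, key: str, data: bytes) -> None:
        self._bucket[key] = data


class FileBackend(StorageBackend):
    """Stands in for a POSIX file-system mount."""
    def __init__(self) -> None:
        self._files: dict[str, bytes] = {}

    def read(self, key: str) -> bytes:
        return self._files[key]

    def write(self, key: str, data: bytes) -> None:
        self._files[key] = data


def open_store(kind: str) -> StorageBackend:
    """The subsystem picks the backend; the caller sees one interface."""
    return {"object": ObjectBackend, "file": FileBackend}[kind]()
```

An application written against `StorageBackend` keeps working when its data moves from a file mount to an object store, which is the point of the consistent-API design principle.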

Gardner: The good news is that global data access enablement can now be done with greater ease. The bad news is the global access enablement can be done anywhere, anytime, and with ease.

And so we have to also worry about access, security, permissions, and regulatory compliance issues. How do you open the floodgates, in a sense, for common access to distributed data, but at the same time put in the guardrails that allow for the management of that access in a responsible way?

Global data access opens doors

Kennelly: That’s a great question. As we introduce simplicity and ease of data access, we can’t just open it up to everybody. We have to make sure we have good authentication as part of the design, using things like two-factor authentication on the data-access APIs.
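A common form of that second factor on an API is a time-based one-time password (TOTP, RFC 6238). A minimal, stdlib-only sketch, purely illustrative; a real deployment would use a vetted library and protected secret storage:

```python
# Hedged sketch of a TOTP (RFC 6238) check of the kind that might guard a
# data-access API. Illustrative only.
import hashlib
import hmac
import struct
import time


def totp(secret: bytes, at: float, step: int = 30, digits: int = 6) -> str:
    """Derive the one-time code for the 30-second window containing `at`."""
    counter = int(at // step)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


def verify(secret: bytes, code: str, now: float = None) -> bool:
    """Accept the current window plus one either side to absorb clock skew."""
    now = time.time() if now is None else now
    return any(hmac.compare_digest(totp(secret, now + drift), code)
               for drift in (-30, 0, 30))
```

Here `verify` would be called by the API gateway only after the primary credential check succeeds.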

But that’s only half of the problem. In the security world, the unfortunate acceptance is that you probably are going to get breached. It’s in how you respond that really differentiates you and determines how quickly you can get the business back on its feet.

And so, when something bad happens, the third critical role for the storage subsystem to play is in the access control to the persistent storage. At the end of the day, that data is what attackers are after. Being able to understand the typical behavior of those storage systems, and how data is usually stored, forms a baseline against which you can detect when something out of the ordinary is happening.


Clearly, if you’re under a malware or CryptoLocker attack, you see a very different input/output (I/O) pattern than you would normally see. We can detect that in real time, understand when it happens, and make sure you have protected copies of the data so you can quickly access them, get back to business, and get back online quickly.

Why is all of that important? Because we live in a world where it’s not a case of if it will happen, it’s really when it will happen. How we can respond is critical.
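The baseline idea described above can be sketched in a few lines: learn the normal write rate, then flag samples that fall far outside it. All numbers and thresholds below are invented for illustration; real detection would use richer signals (rename rates, entropy of written blocks, and so on):

```python
# Toy sketch of baseline-versus-anomaly detection on storage write rates.
from statistics import mean, stdev


def build_baseline(write_rates):
    """Learn typical write throughput (ops/sec) from historical samples."""
    return mean(write_rates), stdev(write_rates)


def is_anomalous(sample, baseline, z_threshold=4.0):
    """Flag a sample whose z-score against the baseline is extreme."""
    mu, sigma = baseline
    if sigma == 0:
        return sample != mu
    return abs(sample - mu) / sigma > z_threshold


# Normal operation hovers around 100 write ops/sec...
history = [98, 102, 97, 105, 99, 101, 103, 96, 100, 104]
baseline = build_baseline(history)

# ...then an encryption sweep rewrites everything at once.
assert not is_anomalous(101, baseline)   # ordinary sample
assert is_anomalous(5000, baseline)      # ransomware-like write burst
```

The detector fires on the burst because an encryption sweep rewrites far more data per second than the learned baseline ever sees.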

Gardner: Denis, throughout our three-part series we’ve been discussing what we can do, but we haven’t necessarily delved into specific use cases. I know you can’t always name businesses and reference customers, but how can we better understand the benefits of a global data access capability in the context of use cases?

In practice, when the rubber hits the road, how does global data storage access enable business transformation? Is there a key metric you look for to show how well your storage systems support business outcomes? 

Global data storage success

Kennelly: We’re at a point right now when customers are looking to drive new business models and to move much more quickly in their hybrid cloud environments.

There are enabling technologies right now facilitating that. There’s a lot of talk about edge with the advent of 5G networks, which enable a lot of this to happen. When you talk about seamless access and the capability to distribute data across these environments, you need the underlying network infrastructure to make that happen.

As we do that, we’re looking at a number of key business measures and metrics. We have done some independent surveys and analysis looking at the business value that we drive for our clients with a hybrid cloud platform and things like portability, agility, and seamless data access.

In terms of business value, we have four or five measures. For example, we can drive roughly 2.5 times more business value for our clients — everything from top-line growth to operational savings. And that’s something that we have tested with many clients independently.

One example that’s very relevant in the world we live in today is a cloud provider that needed more federated access to their global data. But they also wanted to distribute that through edge nodes in a consistent manner. That’s an example of this in action.

Gardner: You know, some of the major consumers of analytics in businesses these days are data scientists, and they don’t always want to know what’s going on underneath the covers. On the other hand, what goes on underneath the covers can greatly impact how well they can do their jobs, which are often essential to digital business transformation.


For you to address a data scientist specifically about why global access for data and storage modernization is key, what would you tell them? How do you describe the value that you’re providing to someone like a data scientist who plays such a key role in analytics?

Kennelly: Well, data scientists talk a lot about data sets. They want access to data sets so they can test their hypothesis very quickly. In a nutshell, we surface data sets quicker and faster than anybody else at a price performance that leads the industry — and that’s what we do every day to enable data scientists.

Gardner: Throughout our series of three storage strategy discussions, we’ve talked about how we got here and what we’re doing. But we haven’t yet talked about what comes next.

These enabling technologies not only satisfy business imperatives and requirements now but set up organizations to be even more intelligent over time. Let’s look to the future for the expanding values when you do data access globally and across hybrid clouds well. 

Insight-filled future drives growth

Kennelly: Yes, you get to look critically at current and new business models. At the end of the day, this is about driving business growth. As you start to look at these environments, and we’ve talked a lot about analytics and data, it becomes about gaining competitive advantage through real-time insights into what’s going on in your environments.

You become able to better understand your supply chain, what’s happening in certain products, and in certain manufacturing lines. You’re able to respond accordingly. There’s a big operational benefit in terms of savings. You don’t have to have excess capacity in the environment.


Also, in seeking new business opportunities, you will detect the patterns needed to gain insights you hadn’t had before, by applying analytics and machine learning to what’s critical in your systems and markets. If you move your IT environment and centralize everything in one cloud, for example, that really hinders this progress.

By being able to do that with all of the data as it’s generated in real time, you get very unique insights that provide competitive advantage.

Gardner: And lastly, why IBM? What sets you apart from the competition in the storage market for obtaining these larger goals of distributed analytics, intelligence, and competitiveness?

Kennelly: We have shown over the years that we have been at the forefront of many transformations of businesses and industries. Going back to the electronic typewriter, if we want to go back far enough, or now to our business-to-business (B2B) or business-to-employee (B2E) models in the hybrid cloud — IBM has helped businesses make these transformations. That includes everything from storage to data and AI through to hybrid cloud platforms, with Red Hat Enterprise Linux, and right out to our business service consulting.

IBM has the end-to-end capabilities to make that all happen. It positions us as an ideal partner who can do so much.

I love to talk about storage and the value of storage, and I spend a lot of time talking with people in our business consulting group to understand the business transformations that clients are trying to drive and the role that storage has in that. Likewise, with our data science and data analytics teams that are enabling those technologies.

The combination of all of those capabilities is a unique differentiator for us in the industry. And it’s why we are developing the leading-edge capabilities, products, and technology to enable the next digital transformations.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: IBM Storage.



How consistent storage services across all tiers and platforms attain data simplicity, compatibility, and lower cost

This BriefingsDirect Data Strategies Insights discussion series, Part 2, explores the latest technologies and products delivering common data services across today’s hybrid cloud, distributed data centers, and burgeoning edge landscapes.

New advances in storage technologies, standards, and methods have changed the game when it comes to overcoming the obstacles businesses too often face when seeking pervasive analytics across their systems and services. 

Stay with us now as we examine how IBM Storage is leveraging containers and the latest storage advances to deliver inclusive, comprehensive, and actionable storage.  

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To learn more about the future of storage strategies that accelerate digital transformation, please welcome Denis Kennelly, General Manager, IBM Storage. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: In our earlier discussion we learned about the business needs and IBM’s large-scale vision for global, consistent data. Let’s now delve beneath the covers into what enables this new era of data-driven business transformation. 

In our last discussion, we also talked about containers — how they had been typically relegated to application development. What should businesses know about the value of containers more broadly within the storage arena as well as across other elements of IT?

Containers for ease, efficiency

Kennelly: Sometimes we talk about containers as being unique to application development, but I think the real business value of containers is in the operational simplicity and cost savings. 

When you build applications on containers, they are container-aware. When you look at Kubernetes and the controls you have there as an operations IT person, you can scale up and scale down your applications seamlessly. 

As we think about that, and about storage, we have to include storage under that umbrella. Traditionally, storage did a lot of its work independently. Now we are in a much more integrated environment where you have cloud-like behaviors. And you want to deliver those cloud-like behaviors end-to-end, be it for the applications, the data, the storage, or even the network, right across the board. That way you have a much more seamless, easier, and operationally efficient way of running your environment.
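The scale-up and scale-down behavior mentioned above follows, in Kubernetes’ Horizontal Pod Autoscaler, a simple proportional rule documented by the project: desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric). A sketch of just that decision logic, with illustrative target and bounds:

```python
# Hedged sketch of the HPA proportional scaling rule. Decision logic only,
# not a call into a live cluster; target and bounds are made up.
import math


def desired_replicas(current: int, cpu_utilization: float,
                     target: float = 0.6, min_r: int = 1,
                     max_r: int = 10) -> int:
    """How many replicas to run given observed average CPU utilization."""
    wanted = math.ceil(current * cpu_utilization / target)
    return max(min_r, min(max_r, wanted))
```

Running at twice the target utilization doubles the replica count, while an idle service falls back to the floor, which is the "scale up and scale down seamlessly" behavior an operations person gets from the platform.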

Containers are much more than just an application development tool; they are a key enabler to operational improvement across the board.

Gardner: Because hybrid cloud and multi-cloud environments are essential for digital business transformation, what does this container value bring to bridging the hybrid gap? How do containers lead to a consistent and actionable environment, without integrations and complexity thwarting wider use of assets around the globe?

Kennelly: Let’s talk about what a hybrid cloud is. To me, a hybrid cloud is the ability to run workloads on a public cloud, on a private cloud or traditional data center, and even right out to edge locations in your enterprise where there are no IT people whatsoever.

Being able to do that consistently across that environment, that’s what containers bring. They allow a layer of abstraction above the target environment, be it a bare-metal server, a virtual machine (VM), or a cloud service, and you can do that seamlessly across those environments.

That’s what a hybrid cloud platform is and what enables that are containers and being able to have a seamless runtime across this entire environment.

And that’s core to digital transformation, because when we start to think about where we are today as an enterprise, we still have assets sitting in the data center. Typically, what you see out there are horizontal business processes, such as human resources or sales, and you might want to move those to a software-as-a-service (SaaS) capability while still retaining your core, differentiating business processes.

For compliance or regulatory reasons, you may need to keep those assets in the data center. Maybe you can move some pieces. But at the same time, you want to have the level of efficiency you gain from cloud-like economics. You want to be able to respond to business needs, to scale up and scale down the environment, and not design the environment for a worst-case scenario. 

That’s why a hybrid cloud platform is so critical. And underneath that, why containers are a key enabler. Then, if you think about the data in storage, you want to seamlessly integrate that into a hybrid environment as well.

Gardner: Of course, the hybrid cloud environment extends these days more broadly with the connected edge included. For many organizations the edge increasingly allows real-time analytics capabilities by taking advantage of having compute in so many more environments and closer to so many more devices.

What is it about the IBM hybrid storage vision that allows for more data to reside at the edge without having to move it into a cloud, analyze it there, and move it back? How are containers enabling more data to stay local and still be part of a coordinated whole greater than the sum of the parts?

Data and analytics at the edge

Kennelly: As an industry, we go from being centralized to decentralized — what I call a pendulum movement every number of years. If you think back, we were in the mainframe, where everything was very centralized. Then we went to distributed systems and decentralized everything.

With cloud, we began to recentralize everything again. And now we are moving our clouds back out to the edge for a lot of reasons, largely because of egress and ingress challenges and the inefficiency of moving more and more of that data.

When I think about edge, I am not necessarily thinking about Internet of things (IoT) devices or sensors, but in a lot of cases this is about branch and remote locations. That’s where a core part of the enterprise operates, but not necessarily with an IT team there. And that part of the enterprise is generating data from what’s happening in that facility, be it a manufacturing plant, a distribution center, or many others.

As you generate that data, you also want to generate the analytics that are key to understanding how the business is reacting and responding. Do you want to move all that data to a central cloud to run analytics, and then take the result back out to that distribution center? You can do that, but it’s highly inefficient — and very costly. 
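
The tradeoff Kennelly describes can be made concrete with a minimal Python sketch (this is not IBM code; the data and names are invented): analytics run locally at the edge site, and only a compact summary leaves for the central cloud.

```python
from statistics import mean

def summarize_readings(readings):
    """Reduce raw sensor readings at the edge site to a compact summary."""
    return {
        "count": len(readings),
        "mean": mean(readings),
        "min": min(readings),
        "max": max(readings),
    }

def bytes_saved(readings, summary_fields=4, bytes_per_value=8):
    """Rough estimate of the transfer avoided by shipping the summary
    instead of every raw reading."""
    return len(readings) * bytes_per_value - summary_fields * bytes_per_value

# A day of per-minute temperature readings from one plant floor.
readings = [20.0 + (i % 7) * 0.5 for i in range(1440)]
summary = summarize_readings(readings)  # only this dictionary leaves the site
```

Shipping four aggregate values instead of 1,440 raw readings avoids nearly all of the egress cost while still giving the central cloud what it needs for fleet-wide comparisons.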

What our clients are asking for is to keep the data out at these locations and to run the analytics locally. But, of course, with all of the analytics you still want to share some of that data with a central cloud.

So, what’s really important is that you can share across this entire environment, be it from a central data center or a central cloud out to an edge location and provide what we call seamless access across this environment. 

With our technology, with things like IBM Spectrum Scale, you gain that seamless access. We abstract the data access so the application behaves as if it is accessing the data locally — or it could be back in the cloud. The application really doesn’t care. That seamless access is core to what we are doing.
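
To make “seamless access” concrete, here is a toy Python sketch (purely illustrative, not how Spectrum Scale is implemented): applications open a logical path in one global namespace and never learn where the bytes physically live.

```python
import os
import tempfile

class GlobalNamespace:
    """Toy global file namespace: logical paths mapped to physical locations."""
    def __init__(self):
        self._placement = {}  # logical path -> physical path

    def place(self, logical, physical):
        self._placement[logical] = physical

    def open(self, logical, mode="r"):
        # The caller never sees where the bytes actually live.
        return open(self._placement[logical], mode)

# Two "tiers" simulated as two temporary directories.
local_tier = tempfile.mkdtemp(prefix="local-")
cloud_tier = tempfile.mkdtemp(prefix="cloud-")

ns = GlobalNamespace()
ns.place("/data/sales.csv", os.path.join(local_tier, "sales.csv"))
with ns.open("/data/sales.csv", "w") as f:
    f.write("region,amount\nEMEA,100\n")

# Migrate the file to the other tier; readers are unaffected.
os.replace(os.path.join(local_tier, "sales.csv"),
           os.path.join(cloud_tier, "sales.csv"))
ns.place("/data/sales.csv", os.path.join(cloud_tier, "sales.csv"))
with ns.open("/data/sales.csv") as f:
    first_line = f.readline().strip()
```

The migration between tiers happens entirely behind the namespace; the reading code is identical before and after the move.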

Gardner: The IBM Storage portfolio is broad and venerable. It includes flash, disk, and tape, which continues to have many viable use cases. So, let’s talk about the products and how they extend the consistency and commonality that we have talked about and how that portfolio then buttresses the larger hybrid storage vision.

Storage supports all environments 

Kennelly: One of the key design points of our portfolio, particularly our flash line, is being able to run in all environments. We have one software code base across our entire portfolio. That code runs on our disk subsystems and disk controllers, but it can also run on your platform of choice. We absolutely support all platforms across the board, and that’s one design principle.

Secondly, we embrace containers very heavily. And being able to run on containers and provide data services across those containers provides that seamless access that I talked about. That’s a second major design principle.

As we look at our storage portfolio, we also want to optimize both the storage and the customer’s spend through tiered storage and the ability to move data across those different tiers.

You mentioned tape storage. For example, at times you may want to move from fast, online, always-on, high-end storage to a lower tier of less expensive storage such as tape, maybe for data retention reasons, or because you need an air gap solution and want to move data to what we call cold storage on tape. We support that capability and we can manage your data across that environment.
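
A hypothetical sketch of such a tiering policy, in Python with invented names and thresholds: objects untouched for longer than a retention window are demoted from the flash tier to the tape tier.

```python
import time
from dataclasses import dataclass

@dataclass
class StoredObject:
    name: str
    last_access: float      # epoch seconds
    tier: str = "flash"

def apply_tiering(objects, now, cold_after_days=90):
    """Demote objects untouched for cold_after_days to the cold (tape) tier."""
    cutoff = now - cold_after_days * 86400
    for obj in objects:
        if obj.tier == "flash" and obj.last_access < cutoff:
            obj.tier = "tape"
    return objects

now = time.time()
objects = [
    StoredObject("q1-report.parquet", last_access=now - 200 * 86400),
    StoredObject("live-orders.db", last_access=now - 3600),
]
apply_tiering(objects, now)
# The stale report is demoted to tape; the hot database stays on flash.
```

A production policy engine would also consider compliance holds and recall costs, but the shape is the same: data moves to the tier whose economics match its access pattern.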

There are three core design principles to our IBM Storage portfolio. Number one, we can run seamlessly across these environments. Number two, we provide seamless access to the data across those environments. And number three, we support optimization of the storage for the use case at hand, such as being able to tier the storage to your economic and workload needs.

Gardner: Of course, what people are also interested in these days is the FlashSystem performance. Tell us about some of the latest and greatest when it comes to FlashSystem. You have the new 5200, the high-end 9200, and those also complement some of your other products like the ESS 3200.

Flash provides best performance

Kennelly: Yes, we continue to expand the portfolio. With the FlashSystems, and some of our recent launches, some things don’t change. We’re still able to run across these different environments.

But in terms of price-performance, especially with the work we have done around our flash technology, we have optimized our storage subsystems to use standard flash technologies. In terms of price for throughput, when we look at this against our competitors, we offer twice the performance for roughly half the price. And this has been proven as we look at our competitors’ technology. That’s due to leveraging our innovations around what we call the FlashCore Module, wherein we are able to use standard flash in those disk drives and enable compression on the fly. That’s driving the roadmap in terms of throughput and performance at a very, very competitive price point.

Gardner: Many of our readers and listeners, Denis, are focused on their digital business transformation. They might not be familiar with some of these underlying technological advances, particularly end-to-end Non-Volatile Memory Express (NVMe). So why are these systems doing things that just weren’t possible before?

Kennelly: A lot of it comes down to where the technology is today and the price points that we can get from flash from our vendors. And that’s why we are optimizing our flash roadmap and our flash drives within these systems. It’s really pushing the envelope in terms of performance and throughput across our flash platforms.

Gardner: The desired end-product for many organizations is better and pervasive analytics. And one of the great things about artificial intelligence (AI) and machine learning (ML) is it’s not only an output — it’s a feature of the process of enhancing storage and IT.

How are IT systems and storage using AI inside these devices and across these solutions? What is AI bringing to enable better storage performance at a lower price point?

Kennelly: We continue to optimize what we can do in our flash technology, as I said. But when you embark on an AI project, something like 70 to 80 percent of the spend is around discovery, gaining access to the data, and finding out where the data assets are. And we have capabilities like IBM Spectrum Discover that help catalog and understand where the data is and how to access that data. It’s a critical piece of our portfolio on that journey to AI.
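
As a rough illustration of what such a catalog buys you (this is not the Spectrum Discover API; every name below is invented), the sketch indexes data sets by metadata tags so they can be found without scanning the data itself.

```python
from dataclasses import dataclass, field

@dataclass
class CatalogEntry:
    path: str
    tags: dict = field(default_factory=dict)

class Catalog:
    """Toy metadata catalog: find data sets by tag, wherever they live."""
    def __init__(self):
        self.entries = []

    def ingest(self, path, **tags):
        self.entries.append(CatalogEntry(path, tags))

    def find(self, **criteria):
        return [e.path for e in self.entries
                if all(e.tags.get(k) == v for k, v in criteria.items())]

catalog = Catalog()
catalog.ingest("s3://lake/claims-2020.parquet", domain="claims", pii=True)
catalog.ingest("/gpfs/projects/telemetry.csv", domain="telemetry", pii=False)
catalog.ingest("s3://lake/claims-2021.parquet", domain="claims", pii=True)

claims_files = catalog.find(domain="claims")  # found by metadata, not by scanning
```

The payoff is in the discovery phase Kennelly mentions: the AI team queries tags such as domain or PII status instead of hunting through file systems and buckets by hand.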

We also have integrations with AI services like Cloudera out of the box so that we can seamlessly integrate with those platforms and help those platforms differentiate using our Spectrum Scale technology.

But in terms of AI, we have some really key enablers to help accelerate AI projects through discovery and integration with some of the big AI platforms.

Gardner: And these new storage platforms are knocking off some impressive numbers around high availability and low latency. We are also seeing a great deal of consolidation around storage arrays and managing storage as a single pool. 

On the economics of the IBM FlashSystem approach, these performance attributes are also being enhanced by reducing operational costs and moving from CapEx to OpEx purchasing.

Storage-as-a-service delivers

Kennelly: Yes, there is no question we are moving toward an OpEx model. When I talked about cloud economics and cloud-like flexibility behavior at a technology level, that’s only one side of the equation. 

On the business side, IT is demanding cloud consumption models, OpEx-type models, and pay-as-you-go. It’s not just a pure financial equation, it’s also how you consume the technology. And storage is no different. This is why we are doing a lot of innovation around storage-as-a-service. But what does that really mean? 

It means you ask for a service. “I need a certain type of storage with this type of availability, this type of performance, and this type of throughput.” Then we as a storage vendor take care of all the details behind that. We get the actual devices on the floor that meet those requirements and manage that. 
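
That request-and-fulfill model can be sketched as follows. This is a hypothetical illustration, not an IBM interface: the consumer states a service level, and the provider matches it to whatever hardware satisfies it.

```python
from dataclasses import dataclass

@dataclass
class ServiceRequest:
    capacity_tb: int
    min_iops: int
    availability: float     # e.g., 0.9999

@dataclass
class Backend:
    name: str
    free_tb: int
    max_iops: int
    availability: float

def place(request, backends):
    """Return the first backend meeting the requested service level.
    The consumer never picks hardware directly."""
    for b in backends:
        if (b.free_tb >= request.capacity_tb
                and b.max_iops >= request.min_iops
                and b.availability >= request.availability):
            return b.name
    return None

fleet = [
    Backend("archive-pool", free_tb=500, max_iops=2_000, availability=0.999),
    Backend("flash-pool", free_tb=100, max_iops=250_000, availability=0.9999),
]
chosen = place(ServiceRequest(capacity_tb=20, min_iops=50_000,
                              availability=0.9999), fleet)
```

Because the contract is expressed as a service level rather than a device model, the provider is free to swap or upgrade the hardware behind it without the consumer noticing.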

As those assets depreciate over a number of years, we replace and update those assets in a seamless manner to the client.

As the storage sits in the data center, maybe the customer says, “I want to move some of that data to a cloud instance.” We also offer a seamless capability to move the data over to the cloud and run that service on the cloud. 

We already have all the technology to do that and the platform support for all of those environments. What we are working on now is making sure we have a seamless consumption model and the business processes of delivering that storage-as-a-service, and how to replace and upgrade that storage over time — while making it all seamless to the client. 

I see storage moving quickly to this new storage consumption model, a pure OpEx model. That’s where we as an industry will go over the next few years.

Gardner: Another big element of reducing your total cost of ownership over time is in how well systems can be managed. When you have a common pool approach, a comprehensive portfolio approach, you also gain visibility, a single pane of glass when it comes to managing these systems.

Intelligent insights via storage

Kennelly: That’s an area we continue to invest in heavily. Our IBM Storage Insights platform provides tremendous insights in how the storage subsystems are running operationally. It also provides insights within the storage in terms of where you have space constraints or where you may need to expand. 

But that’s not just a manual dashboard that we present to an operator. We are also infusing AI quite heavily into that platform and using AIOps to integrate with Storage Insights to run storage operations at much lower costs and with more automation.
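
One simple flavor of that analysis, sketched here with invented numbers rather than anything from Storage Insights, is flagging a metric sample that deviates sharply from its recent baseline.

```python
from statistics import mean, stdev

def anomalies(samples, window=5, threshold=3.0):
    """Indexes of samples that deviate sharply from the trailing baseline."""
    flagged = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(samples[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# Daily used-capacity figures (TB); day 8 jumps far outside the trend.
used_tb = [40.0, 40.2, 40.4, 40.5, 40.7, 40.9, 41.0, 41.2, 55.0, 41.5]
suspect_days = anomalies(used_tb)  # [8]
```

A real AIOps pipeline would learn seasonal patterns and correlate across metrics, but the principle is the same: automation watches the telemetry so an operator does not have to.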

And we can do that in a consistent manner right across the environments, whether it’s a flash storage array, mainframe attached, or a tape device. It’s all seamless across the environment. You can see those tiers and storage as one platform and so are able to respond quickly to events and understand events as they are happening.

Gardner: As we close out, Denis, for many organizations hybrid cloud means that they don’t always know what’s coming and lack control over predicting their IT requirements. Deciding in advance how things get deployed isn’t always an option.

How do the IBM FlashSystems, and your recent announcements in February 2021, provide a path to a crawl-walk-run adoption approach? How do people begin this journey regardless of the type of organization and the size of the organization?

Kennelly: We are introducing an update to our FlashSystem 5200 platform, which is our entry point platform. That consistent software platform runs our storage software, IBM Spectrum Virtualize. It’s the same software as in our high-end arrays at the very top of our pyramid of capabilities.

As part of that announcement, we are also supporting other public cloud vendors. So you can run the software on our arrays, or you can move it out to run on a public cloud. You have tremendous flexibility and choice due to the consistent software platform.

And, as I said, it’s our entry point, so the price is very, very competitive. This is a part of the market where we see tremendous growth. You can experience the best of the IBM Storage platform at a low-cost entry point, but also gain tremendous flexibility. You can scale up that environment within your data center and extend the same capabilities right out across the hybrid cloud.

There has been tremendous innovation by the IBM team to make sure that our software supports this myriad of platforms, but also at a price point that is the sweet spot of what customers are asking for now.

Gardner: It strikes me that we are on the vanguard of some major new advances in storage, but they are not just relegated to the largest enterprises. Even the smallest enterprises can take advantage and exploit these great technologies and storage benefits.

Kennelly: Absolutely. When we look at the storage market, the fastest growing part is at that lower price point, where unit costs are below $50,000 to $100,000. That’s where we see tremendous growth in the market, and we are serving it very well and very efficiently with our platforms. And, of course, as people want to scale and grow, they can do that in a consistent and predictable manner.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: IBM Storage.

How storage advances help businesses digitally transform across a hybrid cloud world

The next BriefingsDirect data strategies insights discussion explores how consistent and global storage models can best propel pervasive analytics and support digital business transformation.

Decades of disparate and uncoordinated storage solutions have hindered enterprises’ ability to gain common data services across today’s hybrid cloud, distributed data centers, and burgeoning edge landscapes.

Yet only a comprehensive data storage model that includes all platforms, data types, and deployment architectures will deliver the rapid insights that businesses need.

Stay with us to examine how IBM Storage is leveraging containers and the latest storage advances to deliver the holy grail of inclusive, comprehensive, and actionable storage.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about the future promise of the storage strategies that accelerate digital transformation, please welcome Denis Kennelly, General Manager, IBM Storage. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Clearly the world is transforming digitally. And hybrid cloud is helping in that transition. But what role specifically does storage play in allowing hybrid cloud to function in a way that bolsters and even accelerates digital transformation?

Kennelly: As you said, the world is undergoing a digital transformation, and that is accelerating in the current climate of a COVID-19 world. And, really, it comes down to having an IT infrastructure that is flexible, agile, has cloud-like attributes, is open, and delivers the economic value that we all need.

That is why we at IBM have a common hybrid cloud strategy. A hybrid cloud approach, we now know, is 2.5 times more economical than a public cloud-only strategy. And why is that? Because as customers transform — and transform their existing systems — the data and systems sit on-premises for a long time. As you move to the public cloud, the cost of transformation has to overcome other constraints such as data sovereignty and compliance. This is why hybrid cloud is a key enabler.

Hybrid cloud for transformation

Now, underpinning that, the core building block of the hybrid cloud platform is containers and Kubernetes, using our OpenShift technology. That’s the key enabler of the hybrid cloud architecture and how we move applications and data within that environment.

As the customer starts to transform and looks at those applications and workloads as they move to this new world, being able to access the data is critical and being able to keep that access is a really important step in that journey. Integrating storage into that world of containers is therefore a key building block on which we are very focused today.

Storage is where you capture all that state, where all the data is stored. When you think about cloud, hybrid cloud, and containers — you think stateless. You think about cloud-like economics as you scale up and scale down. Our focus is bridging those two worlds and making sure that they come together seamlessly. To that end, we provide an end-to-end hybrid cloud architecture to help those customers in their transformation journeys.

Gardner: So often in this business, we’re standing on the shoulders of the giants of the past 30 years; the legacy. But sometimes legacy can lead to complexity and becomes a hindrance. What is it about the way storage has evolved up until now that people need to rethink? Why do we need something like containers, which seem like a fairly radical departure?

Kennelly: It comes back to the existing systems. Storage, at the end of the day, was all about the applications and the workloads that we ran; it was never storage for storage’s sake. We designed applications, we ran applications and servers, and we architected them in a certain fashion.

And, of course, they generated data and we wanted access to that data. That’s just how the world happened. When you get to a hybrid cloud world — I mean, we talk about cloud-like behavior, cloud-like economics — it manifests itself in the ability to respond.

If you’re in a digitally transformed business, you can respond to needs in your supply chain rapidly, maybe to a surge in demand based on certain events. Your infrastructure needs to respond to those needs versus having the maximum throughput capacity that would ever be needed. That’s the benefit cloud has brought to the industry, and why it’s so critically important.

Now, traditionally storage was designed for the worst-case scenario. In this new world, we have to be able to scale up and scale down elastically, just as these workloads do in a cloud-like fashion. That’s what has fundamentally changed and what we need to change in those legacy infrastructures. Then we can deliver more of an as-a-service, consumption-type model to meet the needs of the business.

Gardner: And on that economic front, digitally transformed organizations need data very rapidly, and in greater volumes — with that scalability to easily go up and down. How will the hybrid cloud model supported by containers provide faster data in greater volumes, and with a managed and forecastable economic burden?

Disparate data delivers insights

Kennelly: In a digitally transformed world, data is the raw material of competitive advantage. Access to data is critical. Based on that data, we can derive insights and unique competitive advantages using artificial intelligence (AI) and other tools. But therein lies the question, right?

When we look at things like AI, a lot of our time and effort is spent on getting access to the data and being able to assemble that data and move it to where it is needed to gain those insights.

Being able to do that rapidly and at a low cost is critical to the storage world. And so that’s what we are very focused on, being able to provide those data services — to discover and access the data seamlessly. And, as required, we can then move the data very rapidly to build on those insights and deliver competitive advantage to a digitally transformed enterprise.

Gardner: Denis, in order to have comprehensive data access and rapidly deliver analytics at an affordable cost, the storage needs to run consistently across a wide variety of different environments — bare-metal, virtual machines (VMs), containers — and then to and from both public and private clouds, as well as the edge.

What is it about the way that IBM is advancing storage that affords this common view, even across that great disparity of environments?

Kennelly: That’s a key design principle for our storage platform, what we call global access or a global file system. We’re going right back to our roots of IBM Research, decades ago where we invented a lot of that technology. And that’s the core of what we’re still talking about today — to be able to have seamless access across disparate environments.

Access is one issue, right? You can get read-access to the data, but you need to do that at high performance and at scale. At the same time, we are generating data at a phenomenal rate, so you need to scale out the storage infrastructure seamlessly. That’s another critical piece of it. We do that with products or capabilities we have today in things like IBM Spectrum Scale.

But another key design principle in our storage platforms is being able to run in all of those environments — bare-metal servers, to VMs, to containers, and right out to the edge footprints. So we are making sure our storage platform is designed and capable of supporting all of those platforms. It has to run on them as well as support the data services — the access services, the mobility services, and the like — seamlessly across those environments. That’s what enables the hybrid cloud platform at the core of our transformation strategy.

Gardner: In addition to the focus on the data in production environments, we also should consider the development environment. What does your data vision include across a full life-cycle approach to data, if you will?

Be upfront with data in DevOps

Kennelly: It’s a great point because the business requirements drive the digital transformation strategy. But a lot of these efforts run into inertia when you have to change. The development teams within the organization have traditionally done things in a certain way. Now, all of a sudden, they’re building applications for a very different target environment — this hybrid cloud environment, from the public cloud, to the data center, and right out to the edge.

The economics we’re trying to drive require flexible platforms across the DevOps tool chain so you can innovate very quickly. That’s because digital transformation is all about how quickly you can innovate via such new services. The next question is about the data.

As you develop and build these transformed applications in a modern, DevOps cloud-like development process, you have to integrate your data assets early and make sure you know the data is available — both in that development cycle as well as when you move to production. It’s essential to use things like copy-data-management services to integrate that access into your tool chain in a seamless manner. If you build those applications and ignore the data, then it becomes a shock as you roll it into production.
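
The copy-data-management idea can be sketched in a few lines of Python. This is a toy illustration, not a product interface: the pipeline gets a writable, masked clone of production data, and production itself is never touched.

```python
import copy

PRODUCTION = {"customers": [{"id": 1, "name": "Ada Lovelace", "balance": 120}]}

def clone_for_test(source, mask_fields=("name",)):
    """Hand a test pipeline a writable clone with sensitive fields masked."""
    snapshot = copy.deepcopy(source)
    for row in snapshot["customers"]:
        for field_name in mask_fields:
            row[field_name] = "***"
    return snapshot

test_db = clone_for_test(PRODUCTION)
test_db["customers"][0]["balance"] += 10  # mutate freely during the test run
# PRODUCTION is unchanged, and real names never entered the test environment.
```

In practice the "clone" is a storage-level snapshot rather than an in-memory copy, which is what makes it cheap enough to hand every pipeline run its own data set.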

This is the key issue. A lot of times we can get an application running in one scenario and it looks good, but as you start to extend those services across more environments — and haven’t thought through the data architecture — a lot of the cracks appear. A lot of the problems happen.

You have to design in the data access upfront in your development process and into your tool chains to make sure that’s part of your core development process.

Gardner: Denis, over the past several years we’ve learned that containers appear to be the gift that keeps on giving. One of the nice things about this storage transition, as you’ve described, is that containers were at first a facet of the development environment.

Developers leveraged containers first to solve many problems for runtimes. So it’s also important to understand the limits that containers had. Stateful and persistent storage hadn’t been part of the earlier container attributes.

How technically have we overcome some of the earlier limits of containers?

Containers create scalable benefits

Kennelly: You’re right, containers have roots in the open-source world. Developers picked up on containers to gain a layer of abstraction. In an operational context, it gives tremendous power because of that abstraction layer. You can quickly scale up and scale down pods and clusters, and you gain cloud-like behaviors very quickly. Even within IBM, we have containerized software and enabled traditional products to have cloud-like behaviors.

We were able to move to a scalable, cloud-like platform very quickly using container technology, which is a tremendous benefit as a developer. We then moved containers to operations to respond to business needs, such as when there’s a spike in demand and you need to scale up the environment. Containers are amazing in how quick and simple that is.

Now, with all of that power and the capability to scale up and scale down workloads, you also have a storage system sitting at the back end that has to respond accordingly. That’s because as you scale up more containers, you generate more input/output (IO) demands. How does the storage system respond?

Well, we have managed to integrate containers into the storage ecosystem. But, as an industry, we have some work to do. The integration of storage with containers is not just the simple IO channel to the storage. It also needs to be able to scale out accordingly, and to be managed. It’s an area we at IBM are focused on working closely with our friends at Red Hat to make sure that’s a very seamless integration and gives you consistent, global behavior.
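
To see why, a back-of-the-envelope model helps (the numbers are illustrative only): aggregate IO demand grows with the pod count, and at some point the scale-out storage layer has to add nodes to keep up.

```python
def storage_nodes_needed(pods, iops_per_pod, iops_per_node):
    """Scale-out storage nodes implied by the current container count."""
    demand = pods * iops_per_pod
    return -(-demand // iops_per_node)  # ceiling division

# Scaling an app from 10 to 200 pods multiplies IO demand twentyfold.
before = storage_nodes_needed(10, iops_per_pod=500, iops_per_node=50_000)
after = storage_nodes_needed(200, iops_per_pod=500, iops_per_node=50_000)
```

The point is that orchestration can change the pod count in seconds, so the storage layer needs the same kind of elastic, managed scaling rather than a fixed worst-case design.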

Gardner: With security and cyber-attacks being so prominent in people’s minds in early 2021, what impacts do we get with a comprehensive data strategy when it comes to security? In the past, we had disparate silos of data. Sometimes, bad things could happen between the cracks.

So as we adopt containers consistently is there an overarching security benefit when it comes to having a common data strategy across all of your data and storage types?

Prevent angles of attack

Kennelly: Yes. It goes back to the hybrid cloud platform and having potentially multiple public clouds, data center workloads, edge workloads, and all of the combinations thereof. The new core is containers, but with applications running across that hybrid environment, we’ve expanded the attack surface beyond the data center.

By expanding the attack surface, unfortunately, we’ve created more opportunities for people to do nefarious things, such as interrupt the applications and get access to the data. But when people attack a system, the cybercriminals are really after the data. Those are the crown jewels of any organization. That’s why this is so critical.

Data protection then requires understanding when somebody is tampering with the data or gaining access to data and doing something nefarious with that data. As we look at our data protection technologies, and as we protect our backups, we can detect if something is out of the ordinary. Integrating that capability into our backups and data protection processes is critical because that’s when we see at a very granular level what’s happening with the data. We can detect if behavioral attributes have changed from incremental backups or over time.
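
A greatly simplified sketch of that behavioral check, with invented thresholds: normal incremental backups touch a few percent of blocks, while mass encryption by ransomware rewrites nearly all of them at once.

```python
def change_ratio(previous, current):
    """Fraction of blocks that changed between two incremental backups."""
    changed = sum(1 for a, b in zip(previous, current) if a != b)
    return changed / len(previous)

def looks_tampered(history, latest_ratio, factor=10.0):
    """Flag a backup whose churn is far above the historical norm."""
    baseline = sum(history) / len(history)
    return latest_ratio > baseline * factor

daily_ratios = [0.02, 0.03, 0.02, 0.04]               # normal daily churn
latest = change_ratio(list(range(1000)), [0] * 1000)  # almost every block rewritten
alarm = looks_tampered(daily_ratios, latest)
```

Real products would also look at entropy and access patterns, but the core signal is the same: the backup process already sees every changed block, so it is a natural place to spot mass encryption.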

We can also integrate that into the business process because, unfortunately, we have to plan for somebody attacking us. It’s really about how quickly we can detect and respond to get the systems back online. You have to plan for the worst-case scenario.

That’s why we have such a big focus on making sure we can detect in real time when something is happening, as the blocks are literally being written to the disk. We can then also unwind to a known good copy. That’s a huge focus for us right now.

Gardner: When you have a comprehensive data infrastructure, and can go global and access data across all of these different environments, it seems to me that you have set yourself up for a pervasive analytics capability, which is the gorilla in the room when it comes to digital business transformation. Denis, how does the IBM Storage vision help bring more pervasive and powerful analytics to better drive a digital business?

Climb the AI Ladder

Kennelly: At the end of the day, that’s what this is all about. It’s about transforming businesses, to drive analytics, and provide unique insights that help grow your business and respond to the needs of the marketplace.

It’s all about enabling top-line growth. And that’s only possible when you can have seamless access to the data very quickly to generate insights literally in real time so you can respond accordingly to your customer needs and improve customer satisfaction.

This platform is all about discovering that data to drive the analytics. We have a phrase within IBM, we call it “The AI Ladder.” The first rung on that AI ladder is about discovering and accessing the data, and then being able to generate models from those analytics that you can use to respond in your business.

We’re all in a world based on data. And we’re using it to not only look for new business opportunities but for optimizing and automating what we already have today. AI has a major role to play where we can look at business processes and understand how they are operating and then, based on analytics and AI, drive greater automation. That’s a huge focus for us as well: Not only looking at the new business opportunities but optimizing and automating existing business processes.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: IBM Storage.

The future of work is happening now thanks to Digital Workplace Services

Businesses, schools, and governments have all had to rethink the proper balance between in-person and remote work. And because that balance is a shifting variable — and may well continue to be for years after the pandemic — it remains essential that the underlying technology be especially agile.

The next BriefingsDirect worker strategies discussion explores how a partnership behind a digital workplace services solution delivers a sliding scale for blended work scenarios. We’ll learn how Unisys, Dell, and their partners provide the time-proof means to secure applications intelligently — regardless of location.

We’ll also hear how an increasingly powerful automation capability makes the digital workplace easier to attain and support.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about the latest in cloud-delivered desktop modernization, please welcome Weston Morris, Global Strategy, Digital Workplace Services, Enterprise Services, at Unisys, and Araceli Lewis, Global Alliance Lead for Unisys at Dell Technologies. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Weston, what are the trends, catalysts, and requirements transforming how desktops and apps are delivered these days?

Morris: We’ve all lived through the hype of virtual desktop infrastructure (VDI). Every year for the last eight or nine years has supposedly been the year of VDI. And this is the year it’s going to happen, right? It had been a slow burn. And VDI has certainly been an important part of the “bag of tricks” that IT brings to bear to provide workers with what they need to be productive.

COVID sends enterprises to cloud

But since the beginning of 2020, the COVID-19 pandemic has brought VDI to the forefront as an alternative way of delivering a digital workplace to workers. This has been especially important in environments where enterprises had not invested in mobility or the cloud, or had not thought about making it possible for user data to reside outside of their desktop PCs.

Those enterprises had a very difficult time moving to a work-from-home (WFH) model — and they struggled with that. Their first instinct was, “Oh, I need to buy a bunch of laptops.” Well, everybody wanted laptops at the beginning of the pandemic, and secondly, they were being made in China mostly — and those factories were shut down. It was impossible to buy a laptop unless you had the foresight to do that ahead of time.

And that’s when the “aha” moment came for a lot of enterprises. They said, “Hey, cloud-based virtual desktops — that sounds like the answer, that’s the solution.” And it really is. They could set that up very quickly by spinning up essentially the digital workplace in the cloud and then having their apps and data stream down securely from the cloud to their end users anywhere. That’s been the big “aha” moment that we’ve had as we look at our customer base and enterprises across the world. We’ve done it for our own internal use.

Gardner: Araceli, it sounds like some verticals and certain organizations may have waited too long to get into the VDI mindset. But when the pandemic hit, they had to move quickly.

What is it about the digital workplace services solution that you have put together that makes it something that can be done quickly?

Lewis: It’s absolutely true that the pandemic elevated digital workplace technology from being a nice-to-have, or a luxury, to being an absolute must-have. We realized after the pandemic struck that public sector, education, and more parts of everyday work needed new and secure ways of working remotely. And it had to become instantaneously available for everyone.

You had every C-level executive across every industry in the United States shifting to the remote model within two weeks to 30 days, and it was also needed globally. Who better than Dell on laptops and these other endpoint devices to partner with Unisys globally to securely deliver digital workspaces to our joint customers? Unisys provided the security capabilities and wrapped those services around the delivery, whereas we at Dell have the end-user devices.


What we’ve seen is that the digitalization of it all can be done from the comfort of everyone’s home. You’re seeing a doctor looking at x-rays, or a nurse looking into someone’s throat via telemedicine, for example. These remote users are also able to troubleshoot something that might be across the world using embedded virtual reality (VR) and wearables.

We merged and blended all of those technologies into this workspaces environment with the best alliance partners to deliver what the C-level executives wanted immediately.

Gardner: The pandemic has certainly been an accelerant, but many people anticipated more virtual delivery of desktops and apps as inevitable. That’s because when you do it, you get other timely benefits, such as flexible work habits. Millennials tend to prefer location-independence, for example, and there are other benefits during corporate mergers and acquisitions and for dynamic business environments.

So, Weston, what are some of the other drivers that reward people when they make the leap to virtual delivery of apps and desktops?

Take the virtual leap, reap rewards

Morris: I’m thinking back to a conversation I had with you, Araceli, back in March. You were excited and energized around the topic of business continuity, which obviously started with the pandemic.

But, Dana, there are other forces at work that preceded the pandemic and that we know will continue after the pandemic. And mergers and acquisitions are a very big one. We see a tremendous amount of activity there in the healthcare space, for example, which was affected in multiple ways by the pandemic. In pharmaceuticals and life sciences as well, there are multiple merger activities going on.

One of the big challenges in a merger or acquisition is how to get the acquired employees working as first-class citizens as quickly as possible. That’s always been difficult. You either give them two laptops, or two desktops, and say, “Here’s how you do the work in the new company, and here’s how you do the work in the old company.” Or you just pull the plug and say, “Now, you have to figure out how to do everything in a new way in web time, including human resources and all of those procedures in a new environment — and hopefully you will figure it all out.”

But with a cloud-based, virtual desktop capability — especially with cloud-bursting — you can quickly spin up as much capacity as you need and build upon the on-premises capabilities you already have, such as on Dell EMC VxRail, and then explode that into the cloud as needed using VMware Horizon to the Microsoft Azure cloud.

That’s an example of providing a virtual desktop for all of the newly acquired employees to do their new corporate-citizen work while they keep their existing environment and continue to be productive doing the job you hired them to do when you made the acquisition. That’s a very big use case that we’re going to continue to see going forward.
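The cloud-bursting approach Morris describes amounts to a simple placement policy: satisfy demand from the on-premises VxRail footprint first, and overflow the remainder into cloud-hosted desktops. A minimal sketch in Python, with all capacities and names invented for illustration (this is not a Horizon or Azure API):

```python
# Sketch of a cloud-bursting placement policy for virtual desktops:
# fill the fixed on-premises pool first, then overflow into the cloud.
# All capacities and names are hypothetical, for illustration only.

def place_desktops(requested: int, on_prem_capacity: int) -> dict:
    """Split a desktop request between on-prem and cloud-burst capacity."""
    on_prem = min(requested, on_prem_capacity)  # use owned hardware first
    burst = requested - on_prem                 # remainder bursts to cloud
    return {"on_prem": on_prem, "cloud_burst": burst}

# Normal load fits entirely on the on-premises HCI cluster:
steady_state = place_desktops(800, on_prem_capacity=1000)

# A merger or pandemic-style surge overflows into the cloud:
surge = place_desktops(2500, on_prem_capacity=1000)
```

The same policy runs in reverse as demand falls: burst capacity is released first, so the fixed on-premises investment stays fully utilized.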

Gardner: Now, there were a number of hurdles historically to everyone adopting VDI. One of the major drivers was, of course, security and being able to control content by having it centrally located on your servers or in your cloud — rather than stored out on every device. Is that still a driving consideration, Weston? Are people still looking for that added level of security, or has that become passé?

Morris: Security has become even more important throughout the pandemic. In the past, to a large extent, the corporate firewall-as-secure-the-perimeter model has worked fairly well. And we’ve been punching holes in the firewall for several years now.

But with the pandemic — with almost everyone working from home — your office network just exploded. It now extends everywhere. Now you have to worry about how well secured any one person’s home network is. Have they changed the default password on their home router? Have they updated its firmware? A lot of these things are beyond what the average worker can be expected to think about.

But if we separate out the workload and put it into the cloud — so that you have the digital workplace sitting in the cloud — that is much more secure than a device sitting on somebody’s desk connected to a very questionable home network environment.

Gardner: Another challenge in working toward more modern desktop delivery has been cost, because it’s usually been capital-intensive and required upfront investment. But when you modernize via the cloud that can shift.

Araceli, what are some of the challenges that we’re now able to overcome when it comes to the economics of virtual desktop delivery?

Cost benefits of partnering

Lewis: The beautiful thing here is that in our partnership with Unisys and Dell Financial Services (DFS), we’re able to utilize different utility models when it comes to how we consume the technology.

We don’t have to have upfront capital expenditures. We basically look at different ways that we can do server and platform infrastructure. Then we can consume the technology in the most efficient manner, and that works with the books and how we’re going to depreciate. So, that’s extremely flexible.


And by partnering with Unisys, they secure those VDI solutions across all three core components: the VDI portion within the data center, the endpoint devices, and, of course, the software. Through our alliance ecosystem, we get the best of DFS, Dell Technologies, VMware software, and Unisys security capabilities.

Gardner: Weston, another issue that’s dogged VDI adoption is complexity for the IT department. When we think about VDI, we can’t only think about end users. What has changed for how the IT department deploys infrastructure, especially for a hybrid approach where VDI is delivered both from on-premises data centers as well as the cloud?

Intelligent virtual agents assist IT

Morris: Araceli and I have had several conversations about this. It’s an interesting topic. There has always been a lot of work to stand up VDI. If you’re starting from scratch, you’re thinking about storage, IOPS, and network capacity. Where are my apps? What’s the connectivity? How are we going to run it at optimal performance? After all, are the end users happy with the experience they’re getting? And how can I even know what their experience is?

And now, all of that has changed thanks to evolving technology. One change is the advent of artificial intelligence (AI) and the use of personal intelligent virtual assistants. At home, we’re used to that, right? We ask Alexa, Siri, or Cortana what’s going on with the weather, or what’s happening in the news. We ask our virtual assistants all of these things and expect instant answers and help. Why is that not available in the enterprise for IT? Well, the answer is that it now is.

As you can imagine on the provisioning side, wouldn’t it be great if you were able to talk to a virtual assistant that understood the provisioning process? You simply answer questions posed by the assistant. What is it you need to provision? What is your load that you’re looking at? Do you have engineers that need to access virtual desktops? What types of apps might they need? What is the type of security?

Then the virtual assistant understands the business and IT processes to provision the infrastructure needed virtually in the cloud to make that all happen or to cloud-burst from your on-premises Dell VxRail into the cloud.

That is a very important game changer. The other aspect of the intelligent virtual agent is that it now resides on the virtual desktop as well. I, as an at-home worker, may never have seen a virtual desktop before. And now, the virtual assistant pops up and guides the home worker through the process of connecting, explains how their apps work, and says, “I’m always here. I’m ready to give you help whenever possible.” But I think I’ll defer to the expert here.

Araceli, do you want to talk about the power of the hybrid environment and how that simplifies the infrastructure?

Multiple workloads managed

Lewis: Sure, absolutely. At Dell EMC, we are proud of the fact that Gartner rates us number one, as a leader in the category for pretty much all of the products that we’ve included in this VDI solution. When Unisys and my alliances team get the technology, it’s already been tested from a hyper-converged infrastructure (HCI) perspective. VxRail has been tested, tried-and-true as an automated system in which we combine servers, storage, network, and the software.

That way, Weston and I don’t have to worry about what size we’re going to use. We already have T-shirt sizes worked out for the number of VDI users needed, with the graphics-intensive portion thought out as well. We can deploy quickly and then put the workloads on them, spinning them up or down, or adding more, as needed.

We can adjust on the fly. That’s a true testament to our HCI being the backbone of the solution. And we don’t have to get into all of the testing, regression testing, automation, and self-healing of it, because a lot of that management would otherwise have to be done by enterprise IT or a managed services provider; instead, it is handled by the lifecycle management of the Dell EMC VxRail HCI solution.

That is a huge benefit: the fact that we deliver a solution from the value line and the hypervisor on up. We can then focus on the end users’ services, and we don’t have to be swapping out components or troubleshooting, because of all the refinement Dell has put into that technology.

Morris: Araceli, the first time you and your team showed me the cloud-bursting capability, it just blew me away. I know from the past how hard it was to expand any infrastructure. You showed me that every industry and every enterprise has a core base of demand it can assume. So, why not put that under Dell VxRail?

Then, as you need to expand, cloud-burst into, in this case, Horizon running on Azure. And that can all be done now through a single dashboard. I don’t have to be thinking, “Okay, this workload is in the cloud, and this other workload is on my on-premises cloud with VxRail.” It’s all done through one single dashboard that can be automated on the back end through a virtual agent, which is pretty cool.

Gardner: It sure seems in hindsight that the timing here was auspicious. Just as the virus was forcing people to rapidly find a virtual desktop solution, you had put together the intelligence and automation along with software-defined infrastructure like HCI. And then you also gained the ease in hybrid by bursting to the cloud.

And so, it seems that the way that you get to a solution like this has never been easier, just when it was needed to be easy for organizations such as small- to medium-sized businesses (SMBs) and verticals like public sector and education. So, was the alliance and partnering, in fact, a positive confluence of timing?

Greater than sum of parts

Morris: Yes. The perfect storm analogy certainly applies. It was great when I got the phone call from Araceli, saying, “Hey, we have this business continuity capability.” We at Unisys had been thinking about business continuity as well.

We looked at the different components that we each brought. Unisys brought its security around Stealth, plus the capability to proactively monitor infrastructure and desktops, see what’s going on, and automatically fix issues via the intelligent virtual agent and automation. And we realized that this was a great solution — much better than the individual parts.

We could not make this happen without all of the cool stuff that Dell brings in terms of the HCI, the clients, and, of course, the very powerful VMware-based virtual desktops. And we added to that some things that we have become very good at in our digital workplace transformation. The result is something that can make a real difference for enterprises. You mentioned the public sector and education. Those are great examples of industries that really can benefit from this.

Gardner: Araceli, anything more to offer on how your solution came together, the partners and the constituent parts?

Lewis: Consistent infrastructure and operations, plus the help of our partner, Unisys, delivering the services to end users globally. This was a partnership that had to come together.


We at Dell couldn’t do it alone. We needed those data center spaces. We needed the capabilities of their architects and teams to deliver for us. We were getting so many requests early during the pandemic, an overwhelming amount of demand from every C-level suite across the country, and from every vertical and industry. We had to rely on Unisys as our trusted partner not only in the public sector but in healthcare and banking. But we knew if we partnered with them, we could give our community what they needed to get through the pandemic.

Gardner: And among those constituent parts, how important a part is VMware Horizon? Why is it so important?

Lewis: VMware Horizon is the glue. It streamlines desktop and app delivery in various ways. The first would be by cloud-bursting. It actually gives us the capability to do that in a very simple fashion.

Secondly, it’s a single pane of glass. It delivers all of the business-critical apps to any device, anywhere on a single screen. So that makes it simple and comprehensive for the IT staff.

We can also deliver non-persistent virtual desktops. The advantage here is that it makes software patching and distribution a whole lot easier, without all the complexity. If there were ever a security concern or issue, we simply blow away that non-persistent virtual desktop and start over from square one, where otherwise we would have to spend countless hours on backups and restores to get back to where we are safe again. So, Horizon pulls everything together: end users get a seamless experience, the IT staff avoid the complexity, and we get the best of both worlds as we move out to the cloud.
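The non-persistent model Lewis describes can be sketched as a clone-and-discard lifecycle: every login starts from a fresh copy of a golden image, and nothing written during a session survives logoff. A hypothetical illustration in Python, not the actual Horizon instant-clone API:

```python
# Sketch of the non-persistent desktop model: every session is a fresh
# clone of a golden image, and nothing written during the session
# survives logoff. Names are illustrative, not a real Horizon API.

GOLDEN_IMAGE = {"os": "patched", "apps": ["browser", "office"]}

def start_session() -> dict:
    """Clone the golden image for a new user session."""
    return {"os": GOLDEN_IMAGE["os"], "apps": list(GOLDEN_IMAGE["apps"])}

def end_session(session: dict) -> dict:
    """Discard the clone; the next login starts from the golden image."""
    return start_session()

desktop = start_session()
desktop["apps"].append("malware")  # simulated compromise mid-session
desktop = end_session(desktop)     # blown away, back to square one
```

Patching works the same way: update the golden image once, and every subsequent session clones the patched state instead of being patched individually.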

Gardner: Weston, on the intelligent agents and bots, do you have an example of how it works in practice? It’s really fascinating to me that you’re using AI-enabled robotic process automation (RPA) tools to help the IT department set this up. And you’re also using it to help end users onboard themselves, get going, and then get ongoing support.

Amelia AI ascertains answers

Morris: It’s an investment we began almost 24 months ago, branded as the Unisys InteliServe platform, which initially was intended to bring AI, automation, and analytics to the service desk. It was designed to improve the service desk experience and make it easier to use, make it scalable, and to learn over time what kinds of problems people needed help solving.

But we realized once we had it in place, “Wow, this intelligent virtual agent can almost be an enterprise personal assistant where it can be trained on anything, on any business process.” So, we’ve been training it on fixing common IT problems … password resets, can’t log in, can’t get to the virtual private network (VPN), Outlook crashes, those types of things. And it does very well at those sorts of activities.

But the core technology is also perfectly suited to be trained for IT processes as well as business processes inside of the enterprise. For example, for this particular scenario of supporting virtual desktops. If a customer has a specific process for provisioning virtual desktops, they may have specific pools of types of virtual desktops, certain capacities, and those can be created ahead of time, ready to go.

Then it’s just a matter of communicating with the intelligent virtual assistant to say, “I need to add more users to this pool,” or, “We need to remove users,” or, “We need to add a whole new pool.” The agent is branded as Amelia. It has a female voice, though it doesn’t have to be; in most cases, it is.

When we speak with Amelia, she’s able to ask questions that guide the user through the process. They don’t have to know what the process is. They don’t do this very often, right? But she can be trained to be an expert on it.

Amelia collects the information needed, submits it to the RPA that communicates with Horizon, Azure, and the VxRail platforms to provision the virtual desktops as needed. And this can happen very quickly. Whereas in the past, it may have taken days or weeks to spin up a new environment for a new project, or for a merger and acquisition, or in this case, reacting to the pandemic, and getting people able to work from home.

By the same token, when the end users open up their virtual desktops, they connect to the Horizon workspace, and there is Amelia. She’s there ready to respond to totally different types of questions: “How do I use this?” “Where are my apps?” “This is new to me, what do I do? How do I connect?” “What about working from home?” “Is my VPN connection working, and how do I get it connected properly?” “What about security issues?” There, she’s able to help with the standard end-user type issues as well.
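The hand-off Morris outlines, where the assistant gathers provisioning parameters through dialog and then submits them to back-end automation, can be sketched roughly as follows. The question set and the provisioning step are invented placeholders, not the real Amelia or InteliServe interfaces:

```python
# Sketch of the assistant-to-automation hand-off: a dialog collects the
# provisioning parameters, then a back-end job provisions the pool.
# Question names and the provisioning step are hypothetical.

QUESTIONS = {
    "pool_type": "What type of virtual desktops do you need?",
    "count": "How many users should the pool support?",
    "apps": "Which apps does this group need?",
}

def collect_request(answers: dict) -> dict:
    """Validate that the dialog gathered every required parameter."""
    missing = [k for k in QUESTIONS if k not in answers]
    if missing:
        raise ValueError(f"still need answers for: {missing}")
    return answers

def provision(request: dict) -> str:
    """Stand-in for the RPA job that talks to Horizon, Azure, and VxRail."""
    return f"provisioned {request['count']} {request['pool_type']} desktops"

ticket = collect_request({"pool_type": "engineering", "count": 50,
                          "apps": ["CAD", "browser"]})
result = provision(ticket)
```

The point of the dialog layer is that the requester never needs to know the underlying process; the assistant keeps asking until the request is complete, then the automation takes over.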

Gardner: Araceli, any examples of where this intelligent process automation has played out in the workplace? Do we have some ways of measuring the impact?

Simplify, then measure the impact

Lewis: We do. It’s given us, in certain use cases, the predictability and the benefit of a pay-as-you-grow linear scale, rather than the pay-by-the-seat type of solution. In the past, if we had a state or a government agency where they need, for example, 10,000 seats, we would measure them by the seat. If there’s a situation like a pandemic, or any other type of environment where we have to adjust quickly, how could we deliver 10,000 instances in the past?

Now, using Dell EMC ready-architectures with the technologies we’ve discussed — and with Unisys’ capabilities — we can provide such a rapid and large deployment in a pay-as-you-grow linear scale. We can predict what the pricing is going to be as they need to use it for these public sector agencies and financial firms. In the past, there was a lot of capital expenditures (CapEx). There was a lot of process, a lot of change, and there were just too many unknowns.
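The contrast Lewis draws can be put in arithmetic terms: under per-seat CapEx you pay for the full provisioned capacity up front, while pay-as-you-grow cost scales linearly with the seats actually in use. A sketch with invented dollar figures (none of these prices come from Dell or Unisys):

```python
# Sketch of per-seat CapEx versus pay-as-you-grow consumption pricing.
# All dollar figures are invented for illustration only.

def capex_cost(seats_purchased: int, cost_per_seat: float) -> float:
    """Traditional model: pay for full capacity up front, used or not."""
    return seats_purchased * cost_per_seat

def pay_as_you_grow(active_seats: int, rate_per_seat: float) -> float:
    """Consumption model: cost tracks actual usage each period."""
    return active_seats * rate_per_seat

# 10,000 seats provisioned, but only 6,000 active this period:
up_front = capex_cost(10_000, 600.0)        # paid regardless of usage
this_period = pay_as_you_grow(6_000, 30.0)  # billed only for active seats
```

The linearity is what makes the consumption model predictable: doubling the active seats exactly doubles the period cost, with no step-function capital outlay.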

These modern platforms have simplified the management of the backends of the software and the delivery of it to create a true platform that we can quantify and measure — not only just financially, but from a time-to-delivery perspective as well.

Morris: I have an example of a particular customer where they had a manual process for onboarding. Such onboarding includes multiple steps, one of which is, “Give me my digital workplace.”

But there are other things, too. The training around gaining access to email, for example. That was taking almost 40 hours. Can you imagine a person starting their job, and 40 hours later they finally get the stuff they need to be productive? That’s a lot of downtime.

After using our automation, that transition was down to a little over eight hours. What that means is a person starts filling out their paperwork with HR on day one, gets oriented, and then the next day they have everything they need to be productive. What a big difference. And in the offboarding — it’s even more interesting. What happens when a person leaves the company? Maybe under unfavorable circumstances, we might say.

In the past, the manual processes for this customer took almost 24 hours before everything was turned off. What does that mean? That means that an unhappy, disgruntled employee has 24 hours. They can come in, download content, get access to materials or perhaps be disruptive, or even destructive, with the corporate intellectual property, which is very bad.

Through automation, this offboarding process is now down to six minutes. I mean, that person hasn’t even walked out of the room and they’ve been locked out completely from the IT environment. And that can be done even more quickly in a virtual desktop environment, in which the switch can be thrown immediately and completely. Access is completely and instantly removed from the virtual environment.
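The offboarding automation Morris describes boils down to revoking every entitlement in a single automated pass rather than working through a day-long manual checklist. A simplified sketch; the entitlement names are hypothetical, and a real flow would call identity, VPN, and VDI APIs:

```python
# Sketch of automated offboarding: revoke every entitlement in one pass
# instead of a day-long manual checklist. Entitlement names are
# hypothetical placeholders for real identity, VPN, and VDI systems.

ENTITLEMENTS = ["directory_account", "vpn", "email", "virtual_desktop"]

def offboard(user: dict) -> dict:
    """Disable all access in one pass; for VDI this is near-instant."""
    for item in ENTITLEMENTS:
        user["access"][item] = False
    return user

employee = {"name": "departing-user",
            "access": {e: True for e in ENTITLEMENTS}}
employee = offboard(employee)  # locked out everywhere at once
```

Because the entitlement list is data rather than institutional memory, nothing gets forgotten, which is the gap that manual checklists leave open for 24 hours.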

Gardner: Araceli, is there a best-of-breed, thin-client hardware approach that you’re using? What about use cases such as graphics-intense or computer-aided design (CAD) applications? What’s the end-point approach for some of these more intense applications?

Viable, virtual, and versatile solutions

Lewis: Being Dell Technologies, that’s a perfect question for us, Dana. We understand the personas of the end users. As we roll out this technology, let’s say it’s for an engineering team that does CAD drawings. If you look at the persona, and we partner with Unisys to look at what each end user needs, you can determine if they need more memory, more processing power, or a more graphics-intensive device. We can do that; our Wyse thin clients can do that, the Wyse 3000s and the 5000s.

But I don’t want to pinpoint one specific type of device per user, because we could be talking about a doctor, or a nurse in an intensive care unit who is going to need something more mobile. We can also provide end-user devices that are ruggedized, maybe for an oil field or a construction site. So, from an engineering perspective, we can adapt the end-user device to the persona and their needs, and we can meet all of those requirements. It’s not a problem.

Gardner: Weston, anything from your vantage point on the diversity and agility of those endpoint devices and why this solution is so versatile?

Morris: There is diversity at both ends. Araceli, you talked about being able, on the back end, to provision and scale up and down the capacity and capability of a virtual desktop to meet the persona’s needs.


And then on the end-user side, Dana, you mentioned Millennials. They may want choice in how they connect. Am I connecting in through my own personal laptop at home? Do I want access to a thin client when I go back to the office? Do I want to come in through a mobile device? Maybe I want to do all three in the same day, without losing work in between. That is all entirely possible with this infrastructure.

Gardner: Let’s look to the future. We’ve been talking about what’s possible now. But it seems to me that we’ve focused on the very definition of agility: It scales, it’s fast, and it’s automated. It’s applicable across the globe.

What comes next? What can you do with this technology now that you have it in place? It seems to me that we have an opportunity to do even more.

Morris: We’re not backing down from AI and automation. That is here to stay, and it’s going to continue to expand. People have finally realized the power of cloud-based VDI. That is now a very important tool for IT to have in their bag of tricks. They can respond to very specific use cases in a very fast, scalable, and effective way.

In the future we will see that AI continues to provide guidance, not only in the provisioning that we’ve talked about, not only in startup and use on the end-user side — but in providing analytics as to how the entire ecosystem is working. That’s not just the virtual desktops, but the apps that are in the cloud as well and the identity protection. There’s a whole security component that AI has to play a role in. It almost sounds like a pipe dream, but it’s just going to make life better. AI absolutely will do that when it’s used appropriately.

Lewis: I’m looking to the future on how we’re going to live and work in the next five to 10 years. It’s going to be tough to go back to what we were used to. And I’m thinking forward to the Internet of Things (IoT). There’s going to be an explosion of edge devices, of wearables, and how we incorporate all of those technologies will be a part of a persona.

Typically, we’re going to be carrying our work everywhere we go. So, how are we going to integrate all of the wearables? How are we going to make voice recognition more adaptable? VR, AI, robotics, drones — how are we going to tie all of that together?

Nowadays, we tie our home systems and our cooling and heating to all of the things around us to interoperate. I think that’s going to go ahead and continue to grow exponentially. I’m really excited that we’ve partnered with Unisys because we wouldn’t want to do something like this without a partner who is just so deeply entrenched in the solutions. I’m looking forward to that.

Gardner: What advice would give to an organization that hasn’t bitten off the virtual desktop from the cloud and hybrid environment yet? What’s the best way to get started?

Morris: It’s really important to understand your users, your personas. What are they consuming? How do they want to consume it? What is their connectivity like? You need to understand that, if you’re going to make sure that you can deliver the right digital workplace to them and give them an experience that matters.

Lewis: At Dell Technologies, we know how important it is to retain our top and best talent. And because we’ve been one of the top places to work for the past few years, it’s extremely important to make sure that technology and access to technology help to enable our workforce.

I truly feel that any of our customers or end users who haven’t yet looked at VDI haven’t realized the benefits: cost savings, a competitive advantage in this fast-paced world, and the ability to retain talent. To retain that talent, they need to give their employees the best tools and the best capabilities to be their very best. They have to look at VDI in some way, shape, or form. As soon as we bring it to them, whether technically, financially, or for competitive reasons, it really makes sense. It’s not a tough sell at all, Dana.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Unisys and Dell Technologies.


Customer experience management has never been more important or impactful

The next BriefingsDirect digital business innovation discussion explores how companies need to better understand and respond to their markets one subscriber at a time. By better listening inside of their products, businesses can remove the daylight between their digital deliverables and their customers’ impressions.

Stay with us now as we hear from a customer experience (CX) management expert at SAP on the latest ways that discerning customers’ preferences informs digital business imperatives.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about the business of best fulfilling customer wants and needs, please welcome Lisa Bianco, Global Vice President, Experience Management and Advocacy at SAP Procurement Solutions. The interview is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What was the catalyst about five years ago that led you there at SAP Procurement to invest in a team devoted specifically to CX innovation?

Bianco: As a business-to-business (B2B) organization, we recognized that B2B was changing and starting to look and feel more like business-to-consumer (B2C). The days of leaders dictating the solutions and products their end users would be leveraging for day-to-day business tasks, like procurement or finance, were over. We found we were competing with the experience end users have with the products and applications they use in their personal lives.


We all know this; we’ve all been there. We would go to work to use the tools, and there used to be those times we would use the printer for our kids’ flyers for their birthday because it was a much better tool than what we had at home. And that had shifted.

But then business leaders were competing with rogue employees using other tools instead of SAP Ariba’s solution for procurement to buy things for their businesses. And so with that maverick spend, companies weren’t having the same insights that they needed to make decisions. So, we knew that we had to ensure that that end-user experience at work replicated what they might feel at home. It reflected that shift in persona from a decision-maker to that of a user.

Gardner: Whether it’s B2B or B2C, there tends to be a group of people out there who are really good at productivity and will find ways to improve things if you only take the chance to listen and follow their lead, right?

Bianco: That’s exactly right.

Gardner: And what was it about B2B in the business environment that was plowing new ground when it came to listening rather than just coming up with a list of requirements, baking it into the software, and throwing it over the wall?

Leaders listen to customer experience

Bianco: The truth is, better listening in B2B resulted in a fundamental shift for leaders. All of a sudden, a chief procurement officer (CPO) who made a decision on a procurement solution, or a chief information officer (CIO) who made a decision on an enterprise resource planning (ERP) solution, began to get flak from cross-functional leaders who were end users and couldn’t actually do their functions.

In B2B we found that we had to start understanding the feelings of employees and the feelings of our customers. And that’s not really what you do in B2B, right? Marketing and branding at SAP now said that the future of business has feelings. And that’s a shock. I can’t tell you how many times I have talked to leaders who say, “I want to switch out the word empathy in our mission statement because that’s not strong leadership in B2B.”

But the truth is we had to shift. Society was shifting to that place and understanding that feelings allow us to understand the experiences because the experiences were that of people. We can only make so many decisions based on our operational data, right? You really have to understand the why.

We did have to carve out a new path, and it’s something we still do to this day. Many B2B companies haven’t evolved to an experience management program, because it’s tough. It’s really hard.

Gardner: If we can’t just follow the clicks, and we can’t discern feelings from the raw data, we need to do something more. What do we do? How do we understand why people feel good or bad about what they are doing?

Bianco: We get over that hurdle by having a corporate strategy that puts the customer at the center of all we do. I like to think of it as having a customer-centric decision-making platform. That’s not to say it’s a product. It’s really a shift in mindset that says, “We believe we will be a successful company if our customers’ feelings are positive, if their experiences are great.”

If you look at the disruptors such as Airbnb or Amazon, they prioritize CX over their own objectives as a business and their own business success, things like net-new software sales or renewal targets. They focus on the experiences that their customers have throughout their lifecycle.

That’s a big shift for corporate America because we are so ingrained in producing for the board and we are so ingrained in producing for the investors that oftentimes putting that customer first is secondary. It’s a systemic shift in culture and thinking that tends to be what we see in the emerging companies today as they grab such huge market share. It’s because they shifted that thinking.

Gardner: Right. And when you shift the thinking in the age of social media — and people can share what their impressions are — that becomes a channel and a marketing opportunity in itself. People aren’t in a bubble. They are able to say and even demonstrate in real time what their likes are, what their dislikes are, and that’s obvious to many other people around them.

Customer feedback ecosystem

Bianco: Dana, you are pointing out risk. And it’s so true. And this year, the disrupter that COVID-19 has created is a tectonic shift in our digitalization of customer feedback. And now, via social media and Twitter, if you are not at the forefront of understanding what your customers’ feelings are — and what they may or may not say — and you are not doing that in a proactive way, you run the risk of it playing out socially in a public forum. And the longer that goes unattended to, you start to lose trust.

When you start to lose trust, it is so much harder to fix than understanding in the lifecycle of a customer the problems that they face, fixing those and making that a priority.

Gardner: Why is this specifically important in procurement? Is there something about procurement, supply chain, and buying that this experience focus is important? Or does it cut across all functions in business?

Bianco: It’s across all functions in business. However, if you look at procurement in the world today, it incorporates a vast ecosystem. It’s one of those functions in business that includes buyers and suppliers. It includes logistics, and it’s complex. It is one of the core areas of a business. When that is disrupted it can have drastic effects on your business.

We saw that in spades this year. It affects your supply chain, where you can have alternative opportunities to regain your momentum after a disruption. It affects your workforce and all of the tools and materials necessary for your company to function when it shifts and moves home. And so with that, we look from SAP’s perspective at these personas that navigate through a multitude of products in your organization. And in procurement, because that ecosystem is there for our customers, understanding the experience of all of those parties allows for customers to make better decisions.

A really good example is one of the world’s largest consulting firms. They took 500,000 employees in offices around the world and found that they had to immediately put them in their homes. They had to make sure they had the products they needed, like computers, green screens, or leisure wear.

They learned what looks good enough on a virtual Zoom meeting. Procurement had to understand what their employees needed within a week’s time so that they didn’t lose revenue deploying the services that their customers had purchased and relied on them for.

Understanding that lifecycle really helps companies, especially now. Seeing the recent disruption made them able to understand exactly what they need to do and quickly make decisions to make experiences better to get their business back on track.

Gardner: Well, this is also the year or era of moving toward automation and using data and analytics more, even employing bots and robotic process automation (RPA). Is there something about that tack in our industry now that can be brought to CX management? Is there a synergy between not just doing this manually, but looking to automation and finding new insights using new tools?

Automate customer journeys

Bianco: It’s a really great insight into the future of understanding the experiences of a customer. A couple of things come to mind. As you look at operational data, we have all recognized the importance of having operational data; so usage data, seeing where the clicks are throughout your product. Really documenting customer journey maps.

But if you automate the way you get feedback you don’t just have operational data; you need to get the feelings to come through with experience data. And that experience data can help drive where automation needs to happen. You can then embed that kind of feedback-loop-process in typical survey-type tools or embed them right into your systems.

And so that helps you understand some areas where we can remove steps from in the process, especially as many companies look to procurement to create automation. And so the more we can understand where we have those repetitive flows and we can automate, the better.
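To make that concrete, here is a minimal, hypothetical Python sketch of the idea: combine operational data (how often a step is repeated) with experience data (how users feel about it) to rank where automation would pay off first. The step names, volumes, and scores are all invented for illustration, not taken from SAP's products.

```python
# Hypothetical example of combining operational data with experience
# data to rank automation candidates. High-volume, low-satisfaction
# steps should rank first. All names and numbers are invented.

# Operational data: how many times each step runs per month
operational = {"invoice_entry": 12000, "approval_routing": 8000,
               "supplier_lookup": 3000}

# Experience data: average satisfaction (1-5) users report per step
experience = {"invoice_entry": 2.1, "approval_routing": 3.8,
              "supplier_lookup": 4.5}

def automation_priority(volume, satisfaction):
    """Rank steps by volume weighted by dissatisfaction (5 - score)."""
    scores = {step: volume[step] * (5 - satisfaction[step]) for step in volume}
    return sorted(scores, key=scores.get, reverse=True)

print(automation_priority(operational, experience))
# invoice_entry ranks first: it is both highly repetitive and poorly rated
```

The weighting here is the simplest possible choice; in practice the experience signal would come from embedded feedback moments rather than a flat satisfaction score.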

Gardner: Is that what you mean by listening inside of the product or does that include other things, too?

Bianco: It includes other things. As you may know, SAP purchased a company called Qualtrics. They are experts in experience management, and we have been able to move from and evolve from traditional net promoter score (NPS) surveys into looking at micro moments to get customer feedback as they are doing a function. We have embedded certain moments inside of our product that allow us to capture feedback in real time.
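For readers unfamiliar with the mechanics, the NPS arithmetic behind surveys like this is simple: respondents rating 9 or 10 are promoters, 0 through 6 are detractors, and the score is the percentage of promoters minus the percentage of detractors. A minimal sketch, with made-up ratings:

```python
def net_promoter_score(ratings):
    """Compute NPS from 0-10 survey ratings.

    Promoters rate 9-10, detractors 0-6; passives (7-8) count only
    in the denominator. The result ranges from -100 to +100.
    """
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

print(net_promoter_score([10, 9, 8, 7, 6, 3, 10, 10]))
# 4 promoters, 2 detractors out of 8 responses: prints 25.0
```

The micro-moment approach Bianco describes changes where and when the rating is collected, not this underlying formula.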

Gardner: Lisa, a little earlier you alluded that there are elements of what happens in the B2C world as individual consumers and what we can then learn and take into the B2B world. Is there anything top of mind for you that you have experienced as a consumer that you said, “Aha, I want to be able to do that or bring that type of experience and insight to my B2B world?”

Customer service is king in B2B

Bianco: Yes, you know what happened to me just this week as a matter of fact? There is a show on TV right now about chess. With all of us being at home, many of us are consuming copious amounts of content. And I went and ordered a chess set, it came, it was beautiful, it was from Wayfair, and one of the pieces was broken.

I snapped a little picture of the piece that had broken and they had an amazing app that allowed me to say, “Look, I don’t need you to replace the whole thing, it’s just this one little piece, and if you can just send me that, that would be great.”

And they are like, “You know what? Don’t worry about sending it back. We are just going to send you a whole new set.” It was like a $100 set. So I now have two sets because they were gracious enough to see that I didn’t have a great experience. They didn’t want me to deal with sending it back. They immediately sent me the product that I wanted.

I am, like, where is that in B2B? Where is that in the complex area of procurement that I find myself? How can we get that same experience for our customers when something goes wrong?

When I began this program, we would try to figure out what our version of that chess set is. Other organizations use garlic knots, like at pizza restaurants. While you and your kids wait 25 minutes for the pizza to be made, a lot of pizza shops offer garlic knots to make you happy so the wait doesn’t seem so long. What is that equivalent for B2B?

It’s hard. What we learned early on, and I am so grateful for, is that in B2B many end users and customers know how difficult it is to make some of their experiences better, because it’s complex. They have a lot of empathy for companies trying to go down such a path, in this case, for procurement.

But with that, what their garlic knot is, what their free product or chess set is, is when we tell them that their voice matters. It’s when we receive their feedback, understand their experience against our operational data, and let them know that we have the resources and budget to take action on their feedback and to make it better.

Either we show them that we have made it better or we tell them, “We hear what you are saying, but that doesn’t fit into our future.” You have to be able to have that complete feedback loop, otherwise you alienate your customer. They don’t want to feel like you are asking for their feedback but not doing anything with it.

And so that’s one of the most important things we learned here. That’s the thing that I witnessed from a B2C perspective and tried to replicate in B2B.

Gardner: Lisa, I’m sensing that there is an opportunity for the CX management function to become very important for overall digital business transformation. The way that Wayfair was able to help you with the chess set required integration, cooperation, and coordination between what were probably previously siloed parts of their organization.

That means the helpdesk, the ordering and delivering, exception management capabilities, and getting sign-off on doing this sort of thing. It had to mean breaking down those silos — both in process, data, and function. And that integration is often part of an all-important digital transformation journey.

So are you finding that people like yourself, who are spearheading the experience management for your customers, are in a catbird seat of identifying where silos, breakdowns, and gaps exist in the B2B supplier organizations?

Feedback fuels cross-training

Bianco: Absolutely. Here is what I have learned: I am going to focus on cloud, especially in companies that are either cloud companies or had been an on-premises company and are migrating to being a cloud company. SAP Ariba did this over the last 20 years. It has migrated from on-premises to cloud, so we have a great DNA understanding of that. SAP is out doing the same thing; many companies are.

And what’s important to realize, at least from my perspective — it was an “Aha” moment — is that there is a tendency in the B2C world leadership to say, “Look, I am looking at all this data and feedback around customers. Can’t we just go fix this particular customer issue, and they are going to be happy?”

What we found in the B2B data was that most of the issues our customers were facing were systemic. It was broad strokes of consistent feedback about something that wasn’t working. We had to recognize that these systemic issues needed to be solved by a cross-functional group of people.

That’s really hard because so many folks have their own budgets, and they lead only a particular function. To think about how they might fix something more broadly took our organization quite a bit of time to wrap our heads around. Because now you need a center of excellence, a governance model that says that CX is at the forefront, and that you are going to have accountability in the business to act on that feedback and those actions. And you are going to compose a cross-functional, multilevel team to get it done.

It was funny early on, when we received feedback that customer support was a problem. Support was the problem. The support function was awful. I remember the head of support was like, “Oh, my gosh. I am going to get fired. I just hate my job. I don’t know what to do.”

When you look at the root cause you find that quality is a root-cause issue, but quality wasn’t just in one or another product — it was across many products. That broader quality issue led to how we enabled our support teams to understand how to better support those products. That quality issue also impacted how we went to market and showed the features and functions of the product.

We developed a team called the Top X Organization that aggregated cross-functional folks, held them accountable to a standard of a better outcome experience for our customers, and then led a program to hit certain milestones to transform that experience. But all that is a heavy lift for many companies.

Gardner: That’s fascinating. So, your CX advocates — by having that cross-functional perspective by nature — became advocates for better processes and higher quality at the organization level. They are not just advocating for the customer; they are actually advocating for the betterment of the business. Are you finding that and where do you find the people that can best do that?

Responsibility of active listening

Bianco: It’s not an easy task; the people who can do it are few and far between. Again, it takes a corporate strategy. Dana, when you asked me the question earlier on, “What was the catalyst that brought you here?” I oftentimes chuckle. There isn’t a leader on the planet who isn’t going to have someone come to them, like I did at the time, and say, “Hey, I think we should listen to our customers.” Who wouldn’t want to do that? Everyone wants to do that. It sounds like a really good idea.

But, Dana, it’s about active listening. If you watch movies, there is often a scene where there is a husband and wife getting therapy. And the therapist says, “Hey, did you hear what she said?” or, “Did you hear what he said?” And the therapist has them repeat it back. Your marriage or a struggle you have with relationships is never going to get better just by going and sitting on the couch and talking to the therapist. It requires each of you to decide internally that you want this to be better, and that you are going to make the changes necessary to move that relationship forward.

It’s not dissimilar to the desire to have a CX organization, right? Everyone thinks it’s a great idea to show in their org chart that they have a leader of CX. But the truth is you have to really understand the responsibility of listening. And that responsibility sometimes devolves into just taking a survey. I’m all for sending a survey out to our customers, let’s do it. But that is the smallest part of a CX organization.

It’s really wrapped up in what the corporate strategy is going to be: A customer-centric, decision-making model. If we do that, are we prepared to have a governance structure that says we are going to fund and resource making experiences better? Are we going to acknowledge the feedback and act on it and make that a priority in business or not?

Oftentimes leaders get caught up in, “I just want to show I have a CX team and I am going to run a survey.” But they don’t realize the responsibility that gives them when now they have on paper all the things that they know they have an opportunity to make better for their customers.

Gardner: You have now had five years to make these changes. In theory this sounds very advantageous on a lot of levels and solves some larger strategic problems that you would have a hard time addressing otherwise.

So where’s the proof? Do you have qualitative, quantitative indicators? Maybe it’s one of those things that’s really hard to prove. But how do you rate customer advocacy and CX role? What does it get you when you do it well?

Feelings matter at all levels

Bianco: Really good point. We just came off of our five-year anniversary this week. We just had an NPS survey and we got some amazing trends. In five years, we have seen an even greater improvement in the last 18 months — an 11-point increase in our customer feedback. And that not only translates into the survey, as I mentioned, but it also translates with influencers and analysts.

Gartner has noted the increase in our ability to address CX issues and make them better. We can see that in terms of the 11-point increase. We can see that in terms of our reputation within our analyst community.

And we also see it in the data. Customers are saying, “Look, you are much more responsive to me.” We see a 35-percent decrease in customers complaining in their open text fields about support. We see customers mentioning less the challenges they have seen in the area of integration, which is so incredibly important.

And we also hear less from our own SAP leaders who felt like NPS just exposed the fact that they might not be doing their jobs well, which was initially the reaction we got from leaders who were like, “Oh my gosh. I don’t want you to talk about anything that makes it look like I am not doing my job.” We created a culture where we have been more open to feedback. We now relish that insight, versus feeling defensive.

And that’s a culture shift that took us five years to get to. Now you have leaders chomping at the bit to get those insights, get that data, and make the changes because we have proof. And that proof did start with an organizational change right in the beginning. It started with new leadership in certain areas like support. Those things translated into the success we have today. But now we have to evolve beyond that. What’s the next step for us?

Gardner: Before we talk about your next steps, for those organizations that are intrigued by this — that want to be more customer-centric and to understand why it’s important — what lessons have you learned? What advice do you have for organizations that are maybe just beginning on the CX path?

Bianco: How long is this show?

Gardner: Ten more minutes, tops.

Bianco: Just kidding. I mean gosh, I have learned a lot. If I look back — and I know some of my colleagues at IBM had a similar experience — the feedback is this. We started by deploying NPS. We just went out there and said we are going to do these NPS surveys and that’s going to shake the business into understanding how our customers are feeling.

We grew to understand that our customers came to SAP because of our products. And so I think I might have spent more time listening inside of the products. What does that mean? It certainly means embedding micro-moments of aggregating feedback in the product to help us understand, and to allow our developers to understand, what they need to do. But that needs to be done in a very strategic way.

It’s also about making sure that any time anyone in the company wants to listen to customers, you ensure that you have the budget and the resources necessary to make that change — because otherwise you will alienate your customers.

Another area is you have to have executive leadership. It has to be at the root of your corporate objectives. Anything less than that and you will struggle. It doesn’t mean you won’t have some success, but when you are looking at the root of making experience better, it’s about action. That action needs to be taken by the folks responsible for your products or services. Those folks have to be incented, or they have to be looped in and committed to the program. There has to be a governance model that measures the experience of the customer based on how the customer interprets it — not how you interpret it.

If, as a company, you interpret success as net-new software sales, you have to shift that mindset. That’s not how your customers view their own success.

Gardner: That’s very important and powerful. Before we sign off, five years in, where do you go now? Is there an acceleration benefit, a virtuous adoption pattern of sorts when you do this? How do you take what you have done and bring it to a step-change improvement or to an even more strategic level?

Turn feedback into action

Bianco: The next step for us is to embed the experience program in every phase of the customer’s journey. That includes every phase of our engagement journey inside of our organization.

So from start to finish, what are the teams providing that experience, whether it’s a service or product? That would be one. And, again, that requires the governance that I mentioned. Because action is where it’s at — regardless of the feedback you are getting and how many places you listen. Action is the most important piece to making their experience better.

Another is to move beyond just NPS surveys. Again, it’s not that this is a new concept, but as I watched the impact of COVID-19 on accelerating digital feedback, social forums, and public forums, we measured that advocacy. It’s not just the, “Will you recommend this product to a friend or colleague?” In addition it’s about, “Will you promote this company or not?”

That is going to be more important than ever, because we are going to continue in a virtual environment next year. As much as we can help frame what that feedback might be — and be proactive — is where I see success for SAP in the future.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: SAP Ariba.

How to industrialize data science to attain mastery of repeatable intelligence delivery

Businesses these days are quick to declare their intention to become data-driven, yet the deployment of analytics and the use of data science remains spotty, isolated, and often uncoordinated.

To fully reach their digital business transformation potential, businesses large and small need to make data science more of a repeatable assembly line — an industrialization, if you will — of end-to-end data exploitation.

The next BriefingsDirect Voice of Analytics Innovation discussion explores the latest methods, tools, and thinking around making data science an integral core function that both responds to business needs and scales to improve every aspect of productivity.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about the ways that data and analytics behave more like a factory — and less like an Ivory Tower — please welcome Doug Cackett, EMEA Field Chief Technology Officer at Hewlett Packard Enterprise. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Doug, why is there a lingering gap — and really a gaping gap — between the amount of data available and the analytics that should be taking advantage of it?

Cackett: That’s such a big question to start with, Dana, to be honest. We probably need to accept that we’re not doing things the right way at the moment. Actually, Forrester suggests that something like 40 zettabytes of data are going to be under management by the end of this year, which is quite enormous.

And, significantly, more of that data is being generated at the edge through applications, Internet of Things (IoT), and all sorts of other things. This is where the customer meets your business. This is where you’re going to have to start making decisions as well.

So, the gap is two things. It’s the gap between the amount of data that’s being generated and the amount you can actually comprehend and create value from. In order to leverage that data from a business point of view, you need to make decisions at the edge.

You will need to operationalize those decisions and move that capability to the edge where your business meets your customer. That’s the challenge we’re all looking for machine learning (ML) — and the operationalization of all of those ML models into applications — to make the difference.

Gardner: Why does HPE think that moving more toward a factory model, industrializing data science, is part of the solution to compressing and removing this gap?

Data’s potential at the edge

Cackett: It’s a math problem, really, if you think about it. If there is exponential growth in data within your business, if you’re trying to optimize every step in every business process you have, then you’ll want to operationalize those insights by making your applications as smart as they can possibly be. You’ll want to embed ML into those applications.

Because, correspondingly, there’s exponential growth in the demand for analytics in your business, right? And yet, the number of data scientists you have in your organization — I mean, growing them exponentially isn’t really an option, is it? And, of course, budgets are also pretty much flat or declining.

So, it’s a math problem because we need to somehow square away that equation. We somehow have to generate exponentially more models for more data, getting to the edge, but doing that with fewer data scientists and lower levels of budget.

Industrialization, we think, is the only way of doing that. Through industrialization, we can remove waste from the system and improve the quality and control of those models. All of those things are going to be key going forward.
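As a rough illustration (the growth rates and headcounts below are invented, not figures from the interview), the shape of that equation can be sketched in a few lines of Python:

```python
# Toy model of the gap Cackett describes: demand for ML models grows
# exponentially while the data-science headcount stays flat, so the
# required output per person grows exponentially too. All figures
# are illustrative, not real forecasts.
models_needed = 100.0    # hypothetical models the business needs today
growth_rate = 1.5        # hypothetical 50% annual growth in demand
data_scientists = 10     # headcount, assumed flat

for year in range(1, 6):
    models_needed *= growth_rate
    per_person = models_needed / data_scientists
    print(f"Year {year}: {models_needed:.0f} models, "
          f"{per_person:.1f} per data scientist")
```

With headcount and budget fixed, per-person throughput is the only variable left, which is exactly what industrialization targets.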

Gardner: When we’re thinking about such industrialization, we shouldn’t necessarily be thinking about an assembly line of 50 years ago — where there are a lot of warm bodies lined up. I’m thinking about the Lucille Ball assembly line, where all that candy was coming down and she couldn’t keep up with it.

Perhaps we need more of an ultra-modern assembly line, where it’s a series of robots and with a few very capable people involved. Is that a fair analogy?

Industrialization of data science

Cackett: I think that’s right. Industrialization is about manufacturing where we replace manual labor with mechanical mass production. We are not talking about that. Because we’re not talking about replacing the data scientist. The data scientist is key to this. But we want to look more like a modern car plant, yes. We want to make sure that the data scientist is maximizing the value from the data science, if you like.

We don’t want to go hunting around for the right tools to use. We don’t want to wait for the production line to play catch up, or for the supply chain to catch up. In our case, of course, that’s mostly data or waiting for infrastructure or waiting for permission to do something. All of those things are a complete waste of their time.

As you look at the amount of productive time data scientists spend creating value, that can be pretty small compared to their non-productive time — and that’s a concern. Part of the non-productive time, of course, has been with those data scientists having to discover a model and optimize it. Then they would do the steps to operationalize it.

But maybe doing the data and operations engineering things to operationalize the model can be much more efficiently done with another team of people who have the skills to do that. We’re talking about specialization here, really.
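One way to picture that specialization, purely as a hypothetical sketch rather than anything prescribed in the interview, is a hand-off in which the data scientist produces a serialized model artifact and a separate operations team loads and serves it without touching the training code:

```python
# Hypothetical hand-off between teams: the data-science side trains a
# model and serializes it as a versioned artifact; the ops side loads
# that artifact and serves predictions. The model and numbers are toys.
import pickle

class ChurnModel:
    """Toy stand-in for a trained model; the weights are illustrative."""
    def __init__(self, weights, bias):
        self.weights, self.bias = weights, bias

    def predict(self, features):
        score = sum(w * f for w, f in zip(self.weights, features)) + self.bias
        return 1 if score > 0 else 0

# Data-science side: "train" the model and emit a versioned artifact.
model = ChurnModel(weights=[0.8, -0.3], bias=-0.2)
artifact = pickle.dumps(model)  # in practice, written to a versioned file

# Ops side: load the artifact and serve it; no training code involved.
served = pickle.loads(artifact)
print(served.predict([1.0, 0.5]))  # score 0.45 > 0, so prints 1
```

The artifact boundary is what lets each team specialize: the data scientist never builds serving infrastructure, and the ops team never re-runs training.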

But there are some other learnings as well. I recently wrote a blog about it. In it, I looked at the modern Toyota production system and started to ask questions around what we could learn about what they have learned, if you like, over the last 70 years or so.

It was not just about automation, but also how they went about doing research and development, how they approached tooling, and how they did continuous improvement. We have a lot to learn in those areas.

For an awful lot of organizations that I deal with, they haven’t had a lot of experience around such operationalization problems. They haven’t built that part of their assembly line yet. Automating supply chains and mistake-proofing things, what Toyota calls jidoka, are also really important. It’s a really interesting area to be involved with.

Gardner: Right, this is what US manufacturing, in the bricks-and-mortar sense, went through back in the 1980s when they moved to business process reengineering, adopted kaizen principles, and did what Deming and a greater emphasis on quality had done for the Japanese auto companies.

And so, back then there was a revolution, if you will, in physical manufacturing. And now it sounds like we’re at a watershed moment in how data and analytics are processed.

Cackett: Yes, that’s exactly right. To extend that analogy a little further, I recently saw a documentary about Morgan cars in the UK. They’re a hand-built kind of car company. Quite expensive, very hand-built, and very specialized.

And I ended up by almost throwing things at the TV because they were talking about the skills of this one individual. They only had one guy who could actually bend the metal to create the bonnet, the hood, of the car in the way that it needed to be done. And it took two or three years to train this guy, and I’m thinking, “Well, if you just automated the process, and the robot built it, you wouldn’t need to have that variability.” I mean, it’s just so annoying, right?

In the same way, with data science we're talking about laying bricks — not Michelangelo hammering out the figure of David. What I'm really trying to say is that a lot of the data science in our customers' organizations is fairly mundane. To get that through the door, get it done and dusted, and give them time to do the other bits of finesse using more skills — that's what we're trying to achieve. Both the basics and the finesse are necessary, and they can all be done on the same production line.

Gardner: Doug, if we are going to reinvent and increase the productivity generally of data science, it sounds like technology is going to be a big part of the solution. But technology can also be part of the problem.

What is it about the way that organizations are deploying technology now that needs to shift? How is HPE helping them adjust to the technology that supports a better data science approach?

Define and refine

Cackett: We can probably all agree that most of the tooling around MLOps is relatively young. The two types of company we see are those that haven't yet gotten to the stage where they're trying to operationalize more models (in other words, they don't really understand what the problem is yet) and those that have hit the problem but haven't yet refined a repeatable process.

Forrester research suggests that only 14 percent of organizations that they surveyed said they had a robust and repeatable operationalization process. It’s clear that the other 86 percent of organizations just haven’t refined what they’re doing yet. And that’s often because it’s quite difficult.

Many of these organizations have only just linked their data science to their big data instances or their data lakes. And they're using those both for the workloads and to develop the models. And therein lies the problem. Often they get stuck with simple things, like trying to have everyone use a uniform environment. All of your data scientists are sharing the data and sharing the compute environment as well.

And data scientists can often be very destructive in what they’re doing. Maybe overwriting data, for example. To avoid that, you end up replicating the data. And if you’re going to replicate terabytes of data, that can take a long period of time. That also means you need new resources, maybe new more compute power and that means approvals, and it might mean new hardware, too.

Often the biggest challenge is in provisioning the environment for data scientists to work on, the data that they want, and the tools they want. That can all often lead to huge delays in the process. And, as we talked about, this is often a time-sensitive problem. You want to get through more tasks and so every delayed minute, hour, or day that you have becomes a real challenge.

The other thing that is key is that data science is very peaky. You’ll find that data scientists may need no resources or tools on Monday and Tuesday, but then they may burn every GPU you have in the building on Wednesday, Thursday, and Friday. So, managing that as a business is also really important. If you’re going to get the most out of the budget you have, and the infrastructure you have, you need to think differently about all of these things. Does that make sense, Dana?

Gardner: Yes. Doug, how is HPE Ezmeral designed to give data scientists more of what they need, how they need it, and to help close the gap between the ad hoc approach and the right kind of assembly-line approach?

Two assembly lines to start

Cackett: Look at it as two assembly lines, at the very minimum. That’s the way we want to look at it. And the first thing the data scientists are doing is the discovery.

The second is the MLOps processes. There will be a range of people operationalizing the models. Imagine that you’re a data scientist, Dana, and I’ve just given you a task. Let’s say there’s a high defection or churn rate from our business, and you need to investigate why.

First you want to find out more about the problem because you might have to break that problem down into a number of steps. And then, in order to do something with the data, you’re going to want an environment to work in. So, in the first step, you may simply want to define the project, determine how long you have, and develop a cost center.

You may next define the environment: Maybe you need CPUs or GPUs. Maybe you need them highly available and maybe not. So you’d select the appropriate-sized environment. You then might next go and open the tools catalog. We’re not forcing you to use a specific tool; we have a range of tools available. You select the tools you want. Maybe you’re going to use Python. I know you’re hardcore, so you’re going to code using Jupyter and Python.

And the next step, you then want to find the right data, maybe through the data catalog. So you locate the data that you want to use and you just want to push a button and get provisioned for that lot. You don’t want to have to wait months for that data. That should be provisioned straight away, right?

You can do your work, save it all away into a virtual repository, and save the data so it's reproducible. You can then check things like model drift and data drift. You can also save away the code and the model parameters. And then you can put that on the backlog for the MLOps team.
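
The drift checks described above can be done with simple distribution statistics. Here is a minimal sketch in Python; the population stability index (PSI) calculation and the 0.2 alert threshold are common illustrative conventions, not part of any specific HPE tooling:

```python
import math

def population_stability_index(expected, actual, bins=10):
    """Compare two numeric samples by binning them on the expected
    sample's range and summing (a - e) * ln(a / e) over the bins."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0
    def frac(sample):
        counts = [0] * bins
        for x in sample:
            i = min(int((x - lo) / width), bins - 1)
            counts[max(i, 0)] += 1
        # Floor the fractions so the logarithm is always defined.
        return [max(c / len(sample), 1e-6) for c in counts]
    e, a = frac(expected), frac(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Illustrative threshold: a PSI above 0.2 is commonly treated as
# significant drift worth flagging to the MLOps backlog.
training_sample = [0.1 * i for i in range(100)]
live_sample = [0.1 * i + 3.0 for i in range(100)]  # shifted distribution
psi = population_stability_index(training_sample, live_sample)
print(f"PSI = {psi:.2f}, drift = {psi > 0.2}")
```

A real pipeline would run a check like this on each feature and on the model's score distribution, and raise a ticket when the threshold is crossed.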

Then the MLOps team picks it up and goes through a similar process. They want to create their own production line now, right? And so, they're going to select a different set of tools. This time, they need continuous integration and continuous delivery (CICD), plus a whole bunch of data tooling to operationalize your model. They're going to define the way that the model is going to be deployed. Let's say we're going to use Kubeflow for that. They might decide on, say, an A/B testing process. So they're going to configure that, do the rest of the work, and press the button again, right?
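
The A/B testing step can be sketched as a deterministic traffic splitter; in practice a serving layer such as Kubeflow would manage the routing, and the 10 percent candidate share and model names below are assumptions for illustration:

```python
import hashlib

def route(request_id: str, candidate_share: float = 0.10) -> str:
    """Deterministically assign a request to the control or candidate
    model, so the same request always hits the same variant."""
    digest = hashlib.sha256(request_id.encode()).digest()
    bucket = digest[0] / 255.0  # roughly uniform value in [0, 1]
    return "model-candidate" if bucket < candidate_share else "model-control"

# Roughly candidate_share of requests land on the candidate model.
assignments = [route(f"req-{i}") for i in range(10_000)]
share = assignments.count("model-candidate") / len(assignments)
print(f"candidate share = {share:.2%}")
```

Hashing the request ID, rather than picking randomly per call, keeps each user on one variant, which is what makes the comparison between the two models meaningful.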

Clearly, this is an ongoing process. It fundamentally requires workflow and automatic provisioning of the environment to eliminate time wasted waiting for resources to be available. That is what we're doing in our MLOps product.

But in the wider sense, we also have consulting teams helping customers get up to speed, define these processes, and build the skills around the tools. We can also do this as-a-service via our HPE GreenLake proposition as well. Those are the kinds of things that we’re helping customers with.

Gardner: Doug, what you’re describing as needed in data science operations is a lot like what was needed for application development with the advent of DevOps several years ago. Is there commonality between what we’re doing with the flow and nature of the process for data and analytics and what was done not too long ago with application development? Isn’t that also akin to more of a cattle approach than a pet approach?

Operationalize with agility

Cackett: Yes, I completely agree. That's exactly what this is about for an MLOps process. It's analogous to the CICD, DevOps part of the IT business. But a lot of that tool chain is being taken care of by things like Kubeflow and MLflow, some of these newer, open-source technologies.

I should say that this is all very new, both the ancillary tooling that wraps around the CICD and the CICD set of tools themselves. What we're attempting to do is allow you, as a business, to bring in these new tools and onboard them so you can evaluate them and see how they might impact what you're doing as your process settles down.

The way we’re doing MLOps and data science is progressing extremely quickly. So you don’t want to lock yourself into a corner where you’re trapped in a particular workflow. You want to have agility. It’s analogous to the DevOps movement.

The idea is to put them in a wrapper and make them available so we get a more dynamic feel to this. The way we’re doing MLOps and data science generally is progressing extremely quickly at the moment. So you don’t want to lock yourself into a corner where you’re trapped into a particular workflow. You want to be able to have agility. Yes, it’s very analogous to the DevOps movement as we seek to operationalize the ML model.

The other thing to pay attention to is the changes that need to happen to your operational applications. You're going to have to change those so they can call the ML model at the appropriate place, get the result back, and then render that result in whatever way is appropriate. So changes to the operational apps are also important.
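
That application-side change can be sketched as follows; the churn-scoring features, the 0.3 threshold, and the offer names are hypothetical, and a real application would call a deployed model endpoint rather than a local stand-in:

```python
def score_churn(customer: dict) -> float:
    """Stand-in for a call to a deployed model; a real app would POST
    the features to the model-serving layer instead."""
    # Hypothetical linear score on two illustrative features.
    return min(1.0, 0.02 * customer["support_tickets"]
                    + 0.5 * customer["months_inactive"] / 12)

def render_offer(customer: dict) -> str:
    """The operational app calls the model at the appropriate place,
    renders the result, and degrades gracefully if scoring fails."""
    try:
        risk = score_churn(customer)
    except Exception:
        return "standard-offer"  # don't block the app if the model is down
    return "retention-offer" if risk > 0.3 else "standard-offer"

print(render_offer({"support_tickets": 8, "months_inactive": 6}))  # retention-offer
```

The key design point is the fallback path: the application keeps working on its default behavior even when the scoring call is unavailable.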

Gardner: You really couldn’t operationalize ML as a process if you’re only a tools provider. You couldn’t really do it if you’re a cloud services provider alone. You couldn’t just do this if you were a professional services provider.

It seems to me that HPE is actually in a very advantageous place to allow a best-of-breed tools approach where it's most impactful, but also to start to put some standard glue around this: the industrialization. How is HPE in an advantageous place to have a meaningful impact on this difficult problem?

Cackett: Hopefully, we’re in an advantageous place. As you say, it’s not just a tool, is it? Think about the breadth of decisions that you need to make in your organization, and how many of those could be optimized using some kind of ML model.

You’d understand that it’s very unlikely that it’s going to be a tool. It’s going to be a range of tools, and that range of tools is going to be changing almost constantly over the next 10 and 20 years.

This is much more to do with a platform approach, because this area is relatively new. Like any other technology, when it's new it almost inevitably tends to be very technical in implementation. So using the early tools can be very difficult. Over time, the tools mature, with a mature UI and a well-defined process, and they become simple to use.

But at the moment, we’re way up at the other end. And so I think this is about platforms. And what we’re providing at HPE is the platform through which you can plug in these tools and integrate them together. You have the freedom to use whatever tools you want. But at the same time, you’re inheriting the back-end system. So, that’s Active Directory and Lightweight Directory Access Protocol (LDAP) integrations, and that’s linkage back to the data, your most precious asset in your business. Whether that be in a data lake or a data warehouse, in data marts or even streaming applications.

This is the melting pot of the business at the moment. And HPE has had a lot of experience helping our customers deliver value through information technology investments over many years. And that's certainly what we're trying to do right now.

Gardner: It seems that HPE Ezmeral is moving toward industrialization of data science, as well as other essential functions. But is that where you should start, with operationalizing data science? Or is there a certain order by which this becomes more fruitful? Where do you start?

Machine learning leads change

Cackett: This is such a hard question to answer, Dana. It’s so dependent on where you are as a business and what you’re trying to achieve. Typically, to be honest, we find that the engagement is normally with some element of change in our customers. That’s often, for example, where there’s a new digital transformation initiative going on. And you’ll find that the digital transformation is being held back by an inability to do the data science that’s required.

There is another Forrester report that I'm sure you'll find interesting. It suggests that 98 percent of business leaders feel that ML is key to their competitive advantage. It's hardly surprising, then, that ML is so closely related to digital transformation, right? Because that's the stage on which organizations are competing, after all.

So we often find that that’s the starting point, yes. Why can’t we develop these models and get them into production in time to meet our digital transformation initiative? And then it becomes, “Well, what bits do we have to change? How do we transform our MLOps capability to be able to do this and do this at scale?”

Often this shift is led by an individual in an organization, and a momentum develops in the organization to make these changes. But the changes can be really small at the start, of course. You might start off with just a single ML problem related to digital transformation.

We acquired MapR some time ago, which is now our HPE Ezmeral Data Fabric. And it underpins a lot of the work that we're doing. And so, we will often start with the data, to be honest with you, because a lot of the challenges in many of these organizations have to do with the data. And as businesses become more real-time and want to connect more closely to the edge, that's where the strengths of the data fabric approach come into play.

So another starting point might be the data. A new application at the edge, for example, has new, very stringent requirements for data and so we start there with building these data systems using our data fabric. And that leads to a requirement to do the analytics and brings us obviously nicely to the HPE Ezmeral MLOps, the data science proposition that we have.

Gardner: Doug, is the COVID-19 pandemic prompting people to bite the bullet and operationalize data science because they need to be fleet and agile and to do things in new ways that they couldn’t have anticipated?

Cackett: Yes, I’m sure it is. We know it’s happening; we’ve seen all the research. McKinsey has pointed out that the pandemic has accelerated a digital transformation journey. And inevitably that means more data science going forward because, as we talked about already with that Forrester research, some 98 percent think that it’s about competitive advantage. And it is, frankly. The research goes back a long way to people like Tom Davenport, of course, in his famous Harvard Business Review article. We know that customers who do more with analytics, or better analytics, outperform their peers on any measure. And ML is the next incarnation of that journey.

Gardner: Do you have any use cases of organizations that have taken the industrialization approach to data science? What has it done for them?

Financial services benefits

Cackett: I’m afraid names are going to have to be left out. But a good example is in financial services. They have a problem in the form of many regulatory requirements.

When HPE acquired BlueData it gained an underlying technology, which we’ve transformed into our MLOps and container platform. BlueData had a long history of containerizing very difficult, problematic workloads. In this case, this particular financial services organization had a real challenge. They wanted to bring on new data scientists. But the problem is, every time they wanted to bring a new data scientist on, they had to go and acquire a bunch of new hardware, because their process required them to replicate the data and completely isolate the new data scientist from the other ones. This was their process. That’s what they had to do.

So as a result, it took them almost six months to do anything. And there's no way that was sustainable. It was a well-defined process, but it still involved a six-month wait each time.

So instead we containerized their Cloudera implementation and separated the compute and storage as well. That means we could create environments on the fly, effectively within minutes. It also means we can take read-only snapshots of data. A read-only snapshot is just a set of pointers, so it's instantaneous.
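
The pointer-based snapshot idea can be illustrated with a toy copy-on-write volume in Python; this is a conceptual sketch of why snapshots are instantaneous regardless of data volume, not the actual HPE Ezmeral Data Fabric implementation:

```python
class Volume:
    """Toy copy-on-write volume: a snapshot copies only the block map
    (pointers), never the data blocks themselves."""
    def __init__(self):
        self.blocks = {}      # shared block store: block id -> bytes
        self.block_map = {}   # this volume's view: name -> block id
        self._next = [0]      # boxed counter so snapshots share block ids

    def write(self, name, data: bytes):
        block_id = self._next[0]
        self._next[0] += 1
        self.blocks[block_id] = data  # new block; old blocks untouched
        self.block_map[name] = block_id

    def snapshot(self):
        snap = Volume()
        snap.blocks = self.blocks                # blocks shared, not copied
        snap.block_map = dict(self.block_map)    # only pointers are copied
        snap._next = self._next
        return snap

    def read(self, name) -> bytes:
        return self.blocks[self.block_map[name]]

vol = Volume()
vol.write("train.csv", b"v1")
snap = vol.snapshot()           # instantaneous: copies pointers only
vol.write("train.csv", b"v2")   # a "destructive" rewrite by a data scientist
print(vol.read("train.csv"), snap.read("train.csv"))  # b'v2' b'v1'
```

Because a rewrite allocates a new block rather than touching the old one, the snapshot keeps seeing the original data, which is what protects other data scientists from destructive changes without replicating terabytes.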

They were able to scale out their data science without scaling up their costs or the number of people required. Interestingly, they've recently moved that on further as well, and are now doing all of that in a hybrid cloud environment. And they only have to change two lines of code to push workloads into AWS, for example, which is pretty magical, right? And that's where they're doing the data science.

Another good example that I can name is GM Financial, a fantastic example of how, having started in one area of the business — all about risk and compliance — they've been able to extend the value to things like credit risk.

But doing credit risk and risk in terms of insurance also means that they can look at policy pricing based on dynamic risk. For example, for auto insurance based on the way you’re driving. How about you, Dana? I drive like a complete idiot. So I couldn’t possibly afford that, right? But you, I’m sure you drive very safely.

But in this use case, because they have the data science in place, they can know how a car is being driven. They are able to look at the value of the car at the end of the lease period and create more value from it.

These are types of detailed business outcomes we’re talking about. This is about giving our customers the means to do more data science. And because the data science becomes better, you’re able to do even more data science and create momentum in the organization, which means you can do increasingly more data science. It’s really a very compelling proposition.

Gardner: Doug, if I were to come to you in three years and ask similarly, “Give me the example of a company that has done this right and has really reshaped itself.” Describe what you think a correctly analytically driven company will be able to do. What is the end state?

A data-science driven future

Cackett: I can answer that in two ways. One relates to talking to an ex-colleague who worked at Facebook. And I’m so taken with what they were doing there. Basically, he said, what originally happened at Facebook, in his very words, is that to create a new product in Facebook they had an engineer and a product owner. They sat together and they created a new product.

Sometime later, they would ask a data scientist to get involved, too. That person would look at the data and tell them the results.

Then they completely changed that around. What they now do is first find the data scientist and bring him or her on board as they’re creating a product. So they’re instrumenting up what they’re doing in a way that best serves the data scientist, which is really interesting.

The data science is built-in from the start. If you ask me what’s going to happen in three years’ time, as we move to this democratization of ML, that’s exactly what’s going to happen. I think we’ll end up genuinely being information-driven as an organization.

That will build the data science into the products and the applications from the start, not tack it on at the end.

Gardner: And when you do that, it seems to me the payoffs are expansive — and perhaps accelerating.

Cackett: Yes. That’s the competitive advantage and differentiation we started off talking about. But the technology has to underpin that. You can’t deliver the ML without the technology; you won’t get the competitive advantage in your business, and so your digital transformation will also fail.

This is about getting the right technology with the right people in place to deliver these kinds of results.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

How remote work promises to deliver new levels of engagement, productivity, and innovation

The way people work has changed more in 2020 than the previous 10 years combined — and that’s saying a lot. Even more than the major technological impacts of cloud, mobile, and big data, the COVID-19 pandemic has greatly accelerated and deepened global behavioral shifts.

The ways that people think about where and how to work may never be the same, and new technology alone could not have made such a rapid impact.

So now is the time to take advantage of a perhaps once-in-a-lifetime disruption for the better. Steps can be taken to make sure that such a sea change comes less with a price and more with a broad boon — to both workers and businesses.

The next BriefingsDirect work strategies panel discussion explores research into the future of work and how unprecedented innovation could very well mean a doubling of overall productivity in the coming years.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

We’re joined by a panel to hear insights on how a remote-first strategy leads to a reinvention of work expectations and payoffs. Please welcome our guests, Jeff Vincent, Chief Executive Officer at Lucid Technology Services; Ray Wolf, Chief Executive Officer at A2K Partners; and Tim Minahan, Executive Vice President of Business Strategy and Chief Marketing Officer at Citrix. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tim, you’ve done some new research at Citrix. You’ve looked into what’s going on with the nature of work and a shift from what seems to be from chaos to opportunity. Tell us about the research and why it fosters such optimism.

Minahan: Most of the world has been focused on the here-and-now, with how to get employees home safely, maintain business continuity, and keep employees engaged and productive in a prolonged work-from-home model. Yet we spent the bulk of the last year partnering with Oxford Analytica and Coleman Parkes to survey thousands of business and IT executives and to conduct qualitative interviews with C-level executives, academia, and futurists on what work is going to look like 15 years from now — in 2035 — and predict the role that technology will play.

Certainly, we’re already seeing an acceleration of the findings from the report. And if there’s any iota of a silver lining in this global crisis we’re all living through, it’s that it has caused many organizations to rethink their operating models, business models, and their work models and workforce strategies.

Work has no-doubt forever changed. We’re seeing an acceleration of companies embracing new workforce strategies, reaching to pools of talent in remote locales using technology, and opening up access to skill sets that were previously too costly near their office and work hubs.

Now they can access talent anywhere, enabling and elevating the skill sets of all employees by leveraging artificial intelligence (AI) and machine learning (ML) to help them perform as their best employees. They are ensuring that they can embrace entirely new work models, possibly even the Uber-fication of work by tapping into recent retirees, work-from-home parents, and caregivers who had opted-out of the workforce — not because they didn’t have the skills or expertise that folks needed — but because traditional work models didn’t support their home environment.

We’re seeing an acceleration of companies liberated by the fact that they realize work can happen outside of the office. Many executives across every industry have begun to rethink what the future of work is going to look like when we come out of this pandemic.

Gardner: Tim, one of the things that jumped out at me from your research was a majority feel that technology will make workers at least twice as productive by 2035. Why such a newfound opportunity for higher productivity, which had been fairly flat for quite a while? What has changed in behavior and technology that seems to be breaking us out of the doldrums when it comes to productivity?

Work 2035: Citrix Research Reveals a More Intelligent Future

Minahan: Certainly, the doubling of employee productivity is a factor of a couple things. Number one, new more flexible work models allow employees to work wherever they can do their best work. But more importantly, it is the emergence of the augmented worker, using AI and ML to help not just offer up the right information at the right time, but help employees make more informed decisions and speed up the decision-making process, as well as automating menial tasks so employees can focus on the strategic aspects of driving creativity and innovation for the business. This is one of the areas we think is the most exciting as we look forward to the future.

Gardner: We’re going to dig into that research more in our discussion. But let’s go to Jeff at Lucid Technology Services. Tell us about Lucid, Jeff, and why a remote-first strategy has been a good fit for you.

Remote services keep SMBs safe

Vincent: Lucid Technology Services delivers what amounts to a fractional chief information officer (CIO) service. Small- to medium-sized businesses (SMBs) need CIOs but don’t generally have the working capital to afford a full-time, always-on, and always-there CIO or chief technology officer (CTO). That’s where we fill the gap.

We bring essentially an IT department to SMBs, everything from budgeting to documentation — and all points in between. And one of the big things that teaches us to look forward is looking backward. In 1908, Henry Ford gave us the modern assembly line, which promptly gave us the Model T. And so horse-drawn buggy whip factories and buggy accessories suddenly became obsolete.

Something similar happened in the early 1990s. It was a fad called the Internet, and it revolutionized work in ways that could not have been foreseen up to that point in time. We firmly believe that we're on the precipice of another revolution of work, just like then. The technology is mature at this point. We can move forward with it, using things like Citrix.

Gardner: Bringing a CIO-caliber function to SMBs sounds like it would be difficult to scale, if you had to do it in-person. So, by nature, you have been a pioneer in a remote-first strategy. Is it effective? Some people think you can’t be remote and be effective.

Vincent: Well, that’s not what we’ve been finding. This has been an evolution in my business for 20 years now. And the field has grown as the need has grown. Fortunately, the technology has kept pace with it. So, yes, I think we’re very effective.

Previously, let's say you had a CPA firm of 15 providers, or a medical practice of three or four doctors with another 10 or so administrative and support staff on site all of the time. They held privileged information and data under regulation that needs safeguarding.

Well, if you are Arthur Andersen, a large, national firm, or Kaiser Permanente, or some really large corporation that has an entire team of IT staff on-site, then that isn’t really a problem. But when you’re under 25 to 50 employees, that’s a real problem because even if you were compromised, you wouldn’t necessarily know it.

We leverage monitoring technology, such as next-generation firewalls, and a team of people looking after that network operation center (NOC) and help desk to head those problems off at the pass. If problems do develop, we can catch them when they’re still small. And with such a light, agile team that’s heavy on tech and the infrastructure behind it, a very few people can do a lot of work for a lot of people. That is the secret sauce of our success.

Gardner: Jeff, from your experience, how often is it the CIO who is driving the remote work strategy?

Vincent: I don't think remote work prior to the pandemic could have been driven from any other seat than the CIO/CTO. It's his or her job. It's their entire ethos to keep a finger on the pulse of technology, where it's going, and what it's currently capable of doing.

In my experience, anybody else on the C-suite team has so much else going on. Everybody is wearing multiple hats and doing double-duty. So, the CTO is where that would have been driven.

But now, what I’ve seen in my own business, is that since the pandemic, as the CTO, I’m not generally leading the discussion — I’m answering the questions. That’s been very exciting and one of the silver linings I’ve seen through this very trying time. We’re not forcing the conversation anymore. We are responding to the questions. I certainly didn’t envision a pandemic shutting down businesses. But clearly, the possibility was there, and it’s been a lot easier conversation [about remote work] to have over the past several months.

The nomadic way of work

Gardner: Ray, tell us about A2K Partners. What do you have in common with Jeff Vincent at Lucid about the perceived value of a remote-first strategy?

Wolf: A2K Partners is a digital transformation company. Our secret sauce is we translate technology into the business applications, outcomes, and impacts that people care about.

Our company was founded by individuals who were previously in C-level business positions, running global organizations. We were the consumers of technology. And honestly, we didn’t want to spend a lot of time configuring the technologies. We wanted to speed things up, drive efficiency, and drive revenue and growth. So we essentially built the company around that.

We focus on work redesign, work orchestration, and employee engagement. We leverage platforms like Citrix for the future of work and for bringing in productivity enhancements to the actual processes of doing work. We ask, what’s the current state? What’s the future state? That’s where we spend a lot of our time.

As for a remote-first strategy, I want to highlight that our company is a nomadic company. We recruit people who want to live and work from anywhere. We think there’s a different mindset there. They are more apt to accept and embrace change. So untethered work is really key.

What we have been seeing with our clients — and in the conversations that we're having today — is that leaders of every organization, at every level, are trying to figure out how to come out of this pandemic better than when they went in. Some actually feel like victims, and we're encouraging them to see this as an opportunity.

Some statistics from the last three economic downturns: One very interesting finding is that some companies that entered the downturn in the bottom 20 percent emerged in the top 20 percent after the downturn. And you ask yourself, "How does a mediocre company all of a sudden rise to the top through a crisis?" This is where we've been spending time, figuring out what plays they are running and how to better help them execute on them.

As Work Goes Virtual, Citrix Research Shows Companies Need to Follow Talent Fleeing Cities

The companies that have decided to use this as a period to change the business model, change the services and products they’re offering, are doing it in stealth mode. They’re not noisy. There are no press releases. But I will tell you that next March, June, or September, what will come from them will create an Amazon-like experience for their customers and their employees.

Gardner: Tim, in listening to Jeff and Ray, it strikes me that they look at remote work not as the destination — but the starting point. Is that what you’re starting to see? Have people reconciled themselves with the notion that a significant portion of their workforce will probably be remote? And how do we use that as a starting point — and to what?

Minahan: As Jeff said, companies are rethinking their work models in ways they haven’t since Henry Ford. We just did a OnePoll survey of thousands of US-based knowledge workers. Some 47 percent have either relocated out of big metropolitan areas or are in the process of doing so right now. They can do this primarily because they’ve proven to themselves that they can be productive without necessarily being in the office.

Similarly, some 80 percent of companies are now looking at making remote work a more permanent part of their workforce strategy. And why is that? It’s not merely a question of whether Sam or Sally should work in the office or at home. No, they’re fundamentally rethinking the role of work, the workforce, and the office — and what role the physical office should play.

And they’re seeing an opportunity, not just from real estate cost-reduction, but more so from access to talent. If we remember back nine months ago to before the great pandemic, we were having a different discussion. That discussion was the fact that there was a global talent shortage, according to McKinsey, of 95 million medium- to high-skilled workers.

That hasn’t changed. It was exacerbated at that time because we were organized around traditional work-hub models — where you build an office, build a call center, and you try like heck to hire people from around that area. Of course, if you happen to build in a metropolitan area right down the street from one of your top competitors — you can see the challenge.

In addition, there was a challenge around attaining the right skillsets to modernize and digitize your businesses. We’re also seeing an acceleration in the need for those skills because, candidly, very few businesses can continue to maintain their physical operations in light of the pandemic. They have had to go digital.

And so, as companies are rethinking all of this, they’re reviewing how to use technology to embrace a much more flexible work model, one that gives access to talent anywhere, just as Ray indicated. I like the nomadic work concept.

Now, how do I use technology to further raise the skillsets of all of my employees so they perform like the very best? This is where the interesting angle of AI and ML comes in: being able to offer up the right insights to guide employees to the right next step in a very simple way. At the same time, that approach removes the noise from their day and helps them focus on the tasks they need to get done to be productive. It gives them the space to be creative and innovative and to drive that next level of growth for their company.

Gardner: Jeff, it sounds like the remote work and the future of work that Tim is describing sets us up for a force-multiplier when it comes to addressable markets. And not just addressable markets in terms of your customers, who can be anywhere, but also that your workers can be anywhere. Is that one of the things that will lead to a doubling of productivity?

Workers and customers anywhere