Securing APIs demands tracing and machine learning to analyze behaviors and head off attacks

The use of application programming interfaces (APIs) across cloud-native computing and digital business ecosystems has accelerated rapidly due to the COVID-19 pandemic.

Enterprises have had to scramble to develop and procure across new digital supply chains and via unproven business-to-business processes. Companies have also extended their business perimeters to include home workers as well as to reach more purely online end-users and customers.

In doing so, they may have given short shrift to protecting against the cybersecurity vulnerabilities inherent in the expanding use of APIs. The cascading digitization of business and commerce has unfortunately led to an increase in cyber fraud and data manipulation.

Stay with us for Part 2 in our series, where BriefingsDirect explores how APIs, microservices, and cloud-native computing require new levels of defense and resiliency.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about the latest innovations for making APIs more understood, trusted, and robust, we welcome Jyoti Bansal, Chief Executive Officer and Co-Founder at Traceable.ai. The interview is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Jyoti, in our last discussion, we learned how the exploding use of cloud-native apps and APIs has opened a new threat vector. As a serial start-up founder in Silicon Valley, as well as a tech visionary, what are your insights and experience telling you about the need for identifying and mitigating API risks? How is protecting APIs different from past cybersecurity threats?

Bansal: Protecting APIs is different in one fundamental way — it’s all about software and developers. APIs are created so that you can innovate faster. You want to empower your developers to move fast using DevOps and CI/CD, as well as microservices and serverless.

You want developers to break the code into smaller parts, and then connect those smaller pieces to APIs – internally, externally, or via third parties. That’s the future of how software innovation will be done.

Now, the way you secure these APIs is not by slowing down the developers. That’s the whole point of APIs. You want to unleash the next level of developer innovation and velocity. Securing them must be done differently. You must do it without hurting developers and by involving them in the API security process. 

Gardner: How has the pandemic affected the software development process? Is the shift left happening through a distributed workforce? How has the development function adjusted in the past year or so?

Software engineers at home

Bansal: The software development function in the past year has become almost completely work-from-home (WFH) and distributed. The world of software engineering was already on that path, but software engineering teams have become even more distributed and global. The pandemic has forced that to become the de facto way to do things.

Now, everything that software engineers and developers do will have to be done completely from home, across all their processes. Most times they don’t even use VPNs anymore. Everything is in the cloud. You have your source code, build systems, and CI/CD processes all in the cloud. The infrastructure you are deploying to is also in a cloud. You don’t really go through VPNs nor use the traditional ways of doing things anymore. It’s become a very open, connect-from-everywhere software development process.

Gardner: Given these new realities, Jyoti, what can software engineers and solutions architects do to make APIs safer? How are we going to bring developers more of the insights and information they need to think about security in new ways?

Bansal: The most important thing is to have the insights. The fundamental problem is that people don’t even know what APIs are being used and which APIs have a potential security risk, or which APIs could be used by attackers in bad ways.

And so, you want to create transparency around this. I call it turning on the lights. In many ways, developers are operating in the dark – and yet they’re building all these APIs.

Normally, these days a single software development team has maybe five to 10 engineers, but an organization building this way, with many APIs, might have 200 or 500 engineers in total. They're all working on their own pieces, which are normally one or two microservices, and they're all exposing them through APIs.

It’s very hard for them to understand what’s going on. Not only with their own stuff, but the bigger picture across all the engineering teams in the company and all the APIs and microservices that they’re building and using. They really have no idea.

For me, the first thing you must do is turn on the lights so that everyone knows what's going on — so they're not operating in the dark. They can then know which APIs are theirs and which APIs talk to other APIs. What are the different microservices? What has changed? How does the data flow between them? They can have a real-time view of all of this. That is the number one thing to begin with.

We like to call it a Google Maps kind of view, where you can see how all the traffic is flowing, where the red lights are, and how everything connects. It shows the different highways of data going from one place to another. You need to start with that. It then becomes the foundation for developers to be much more aware and conscious about how to design the APIs in a more secure way.

Gardner: If developers benefit from such essential information, why don’t the older solutions like web application firewalls (WAFs) or legacy security approaches fit the bill? Why do developers need something different?

Bansal: They need something that’s designed to understand and secure APIs. If you look at a WAF, it was designed to protect systems against attacks on legacy web apps, like a SQL injection.

Normally a WAF will just look at whether you have a form field on your website where someone can type in a SQL query and use it to steal some data. WAFs will do that, but that's not how attackers steal data from APIs. They are completely different kinds of attacks.

Most WAFs work to protect against legacy attacks but they have had challenges of how to scale, and how to make them easy and simple to use.

But when it comes to APIs, WAFs really don’t have any kind of solution to secure APIs.
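To make the contrast concrete, here is a toy sketch (a hypothetical rule and made-up values, not any vendor's actual WAF logic) of why a signature rule that catches classic SQL injection says nothing about a well-formed API call that simply asks for far too much data:

```python
import re

# A classic WAF-style signature rule: block obvious SQL injection payloads.
SQLI_PATTERN = re.compile(r"('|--|;|\bUNION\b|\bDROP\b)", re.IGNORECASE)

def waf_allows(param_value: str) -> bool:
    """Rule-based check: reject values that look like SQL injection."""
    return not SQLI_PATTERN.search(param_value)

# A legacy attack is caught by the signature...
print(waf_allows("1' OR '1'='1"))   # False: blocked by the rule

# ...but an API abuse attempt looks perfectly clean to the rule engine.
# The attacker simply asks the API for 100,000 records instead of 1.
print(waf_allows("100000"))         # True: nothing looks malicious

# Catching the second case requires knowing what normal usage of this
# API looks like (behavioral baselining), not string matching.
```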

Gardner: In our last discussion, Jyoti, you mentioned how the burden for API security falls typically on the application security folks. They are probably most often looking at point solutions or patches and updates.

But it sounds to me like the insights Traceable.ai provides are more of a horizontal or one-size-fits-all solution approach. How does that approach work? And how is it better than spot application security measures?

End-to-end app security

Bansal: At Traceable.ai we take a platform approach to application security. We think application security starts with securing two parts of your app. 

One is the APIs your apps are exposing, and those APIs could be internal, external, and third-party APIs.

The second part is the clients that you yourselves build using those APIs. They could be web application clients or mobile clients that you’re building. You must secure those as well because they are fundamentally built on top of the same APIs that you’re exposing elsewhere for other kind of clients.

When we look at securing all of that, we think of it in a classic way. We think security starts with understanding and taking inventory of everything: what are all of the things that are there? Then, once you have an inventory, you look at protecting those things. Thirdly, you look to do it more proactively: instead of just protecting the apps and services, can you go in and fix things where and when the problem was created?

Our solution is designed as an end-to-end, comprehensive platform for application security that can do all three of these things. All three must be done in very different ways. Compared to legacy web application firewalls or legacy Runtime Application Self Protection (RASP) solutions that security teams use; we take a very different approach. RASPs also have weaknesses that can introduce their own vulnerabilities.

Our fundamental approach builds a layer of tracing and instrumentation, and we make these tracing and instrumentation capabilities extremely easy to use, thanks to the lightweight agents we provide. We have agents that run in different programming environments, like Java, .NET, PHP, Node.js, and Python. These agents can also be put in application proxies or Kubernetes clusters. In just a few minutes, you can install these agents and not have to do any work.

We then begin instrumenting your runtime application code automatically and assess everything that is happening. First thing, in just a minute or two, based on your real-time traffic, we draw a picture of everything: the APIs in your system, all the external APIs, your internal microservices, and all the internal API endpoints on each of the microservices.

This is how we assess the data flows from one microservice to a second and to a third. We begin to help you understand questions such as: What are the third-party APIs you're invoking? What are the third-party systems you are invoking? And we'll draw all of that in a Google Maps-style traffic picture in just a matter of minutes. It shows you how everything flows in your system.
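As a schematic illustration of that idea (a toy in-process tracer with made-up service names, not Traceable.ai's actual agent or data model), recording which endpoint or service each traced call was made from is enough to assemble a simple service graph:

```python
from collections import defaultdict
from contextlib import contextmanager

# Toy in-process "agent": records caller -> callee edges for every traced call.
service_graph = defaultdict(set)
_call_stack = ["client"]  # whoever initiated the request

@contextmanager
def traced(endpoint: str):
    """Record that the current caller invoked `endpoint`, then make it the caller."""
    caller = _call_stack[-1]
    service_graph[caller].add(endpoint)
    _call_stack.append(endpoint)
    try:
        yield
    finally:
        _call_stack.pop()

# Simulated request flow: an external API fans out to internal and third-party services.
with traced("GET /api/orders"):
    with traced("orders-service"):
        with traced("payments-service"):
            pass
        with traced("GET https://third-party.example/rates"):
            pass

for caller, callees in service_graph.items():
    print(f"{caller} -> {sorted(callees)}")
```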

The ability to understand and embrace all of that is the first part of the Traceable.ai solution, which is very different from any kind of legacy RASP app security approach out there.

Once we understand that, the second part starts: our system creates a behavioral learning model around the actual use of your APIs and applications to help you answer questions such as: Which users are accessing which APIs? What data are those users passing in? What is the normal sequence of API calls or clicks in the web application? What internal microservices are invoked by every API? What pieces of data are being transferred? What volume of data is being transferred?

All of that comes together into a very powerful machine learning (ML) model. Once that model is built, we learn the n-dimensional behavior around everything that is happening. There is often so much traffic, that it doesn’t take us long to build out a pretty accurate model.

Now, every single call that happens after that, we then compare it against the normal behavior model that we built. So, for example, normally when people call an API, they ask for data for one user. But if suddenly a call to the same API asks for data for 100,000 users, we will flag that — there is something anomalous about that behavior.

Next, we develop a scoring mechanism whereby we can figure out what kind of attack someone might be trying to do. Are they trying to steal data? And then we can create a remediation mechanism, such as blocking that specific user or blocking that particular way of invoking that API. Maybe we alert your engineering team to fix the bug there that allows this in the first place. 
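A stripped-down illustration of that comparison (made-up baseline numbers and thresholds, nothing like the actual n-dimensional model) is to keep a per-API baseline of how many records a call normally returns and score each new call by how far it deviates:

```python
import statistics

# Learned baseline: records returned per call for one API, from observed traffic.
baseline = [1, 1, 2, 1, 3, 1, 1, 2, 1, 1]
mean = statistics.mean(baseline)
stdev = statistics.pstdev(baseline) or 1.0  # avoid division by zero

def anomaly_score(records_requested: int) -> float:
    """How many standard deviations above normal this call is."""
    return (records_requested - mean) / stdev

def decide(records_requested: int) -> str:
    score = anomaly_score(records_requested)
    if score > 10:                 # hypothetical threshold
        return "block and alert"   # likely data exfiltration
    if score > 3:
        return "flag for review"
    return "allow"

print(decide(2))        # allow: looks like normal usage
print(decide(100_000))  # block and alert: wildly outside learned behavior
```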

Gardner: It sounds like a very powerful platform — with a lot of potential applications. 

Jyoti, as a serial startup founder you have been involved with AppDynamics and Harness. We talked about that in our first podcast. But one of the things I've heard you talk about as a business person is the need to think big. You've said, "We want to protect every line of code in the world," and that's certainly thinking big.

How do we take what you just described as your solution platform, and extrapolate that to protecting every line of code in the world? Why is your model powerful enough to do that?

Think big, save the world’s code

Bansal: It’s a great question. When we began Traceable.ai, that was the mission we started with. We have to think big because this is a big problem.

If I fast-forward to 10 years from now, the whole world will be running on software. Everything we do will be through interconnected software systems everywhere. We have to make sure that every line of code is secure, and the way we can ensure that is by doing a few fundamental things. They are hard to do, but in concept they are simple.

Can we watch every line of code when it runs in a runtime environment? If an engineer wrote a thousand lines of code, and it’s out there and running, can we watch the code as it is running? That’s where the instrumentation and tracing part comes in. We can find where that code is running and watch how it is run. That’s the first part.

The second part is, can we learn the normal behavior of how that code was supposed to run? What did the developer intend when they wrote the code? If we can learn that, that's the second part.

And the third component is, if you see anything abnormal, you flag it or block it, or do something about it. Even if the world has trillions and trillions of lines of code, that’s how we operate.

Every single line of code in the world should have a safety net built around it. Someone should be watching how the code is used and learn what is the normal developer intent of that code. And if some attacker, hacker, or a malicious person is trying to use the code in an unintended way, you just stop it.

That to me is a no-brainer — if we can make it possible and feasible from a technology perspective. That's the mission we are on at Traceable.ai – to make it possible and feasible.

Gardner: Jyoti, one of the things that’s implied in what we’ve been talking about that we haven’t necessarily addressed is the volume and speed of the data. It also requires being able to analyze it fast to stop a breach or a vulnerability before it does much damage.

You can't do this with spreadsheets and sticky notes on a whiteboard. Are we so far into artificial intelligence (AI) and ML that we can take it for granted that this is going to be feasible? Isn't a high level of automation also central to having the capability to manage and secure software in this fashion?

Let machines do what they do 

Bansal: I'm with you 100 percent. In some ways, we have to use machines to protect against these threats. However, the amount of data and the volume of things is very high. You can't have a human, like a security operations center (SOC) person, sitting at a screen trying to figure out what is wrong.

That's where the challenge is. The legacy security approaches don't use the right kind of ML and AI — it's still all about the rules. That generates numerous false positives. Every application security, bot security, RASP, and legacy app security approach defines rule sets to decide if certain variables are bad, and that approach creates so many false positives and junk alerts that they drown the humans monitoring them. It's just not possible for humans to go through it all. You must build a very powerful layer of learning and intelligence to figure it out.

The great thing is that it is possible now. ML and AI are at a point where you can build the right algorithms to learn the behavior of how applications and APIs are used and how data flows through them. You can use that to figure out the normal usage behaviors and stop them if they veer off – that’s the approach we are bringing to the market.

Gardner: Let’s think about the human side of this. If humans can’t necessarily get into the weeds and deal with the complexity and scale, what is the role for people? How do you oversee such a platform and the horizontal capabilities that you’re describing?

Do we need a new class of security data scientist, or does this fit into a more traditional security management persona?

Bansal: I don’t think you need data scientists for APIs. That’s the job of products like Traceable.ai. We do the data science and convert it into actionable things. The technology behind Traceable.ai itself could be the data scientist inside.

But what is needed from the people side is the right model of organizing your teams. You hear about DevSecOps, and I do think that that kind of model is really needed. The core of DevSecOps is that you have your traditional SecOps teams, but they have become much more developer, code, and API aware, and they understand it. Your developer teams have become more security-aware than they have been in the past.

Both sides have to come together and bridge the gap. Unfortunately, what we’ve had in the past are developers who don’t care about security, and security people who don’t care about code and APIs. They care about networks, infrastructures, and servers, because that’s where they spend most of their time trying to secure things. From an organization and people perspective, we need to bridge that from both sides.

We can help, however, by creating a high level of transparency and visibility by understanding what code and APIs are there, which ones have security challenges, and which ones do not. You then give that data to developers to go and fix. And you give that data to your operations and security teams to manage risk and compliance. That helps bridge the gap as well.

Gardner: We’ve traditionally had cultural silos. A developer silo and a security silo. They haven’t always spoken the same language, never mind work hand-in-hand. How does the data and analytics generated from Traceable.ai help bind these cultures together?

Bridge the SecOps divide

Bansal: I will give you an example. There’s this new pattern of exposing data through GraphQL. It’s like an API technology. It’s very powerful because you can expose your data into GraphQL where different consumers can write API queries directly to GraphQL.

Many developers who write these GraphQL APIs don't understand the security implications. They write the API, and they don't understand that if they don't put in the right kind of checks, someone can go and attack them. The challenge is that most SecOps people don't understand how GraphQL APIs work, or even that they exist.

So now we have a fundamental gap on both sides, right? A product like Traceable.ai helps bridge that gap by identifying your APIs and showing that there are GraphQL APIs with security vulnerabilities where sensitive data can potentially be stolen.
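To illustrate the kind of gap being described (a schematic query and resolver with made-up data, not real Traceable.ai or customer code), a GraphQL field that was meant for an internal dashboard but has no authorization check lets any caller enumerate sensitive data in one query:

```python
# Illustrative only: a GraphQL-style query and a naive resolver behind it.
USERS = [
    {"id": "1", "name": "Alice", "email": "alice@example.com"},
    {"id": "2", "name": "Bob",   "email": "bob@example.com"},
]

# A consumer can write queries directly against the schema, e.g.:
QUERY = """
{
  allUsers {        # bulk field exposed for an internal dashboard...
    name
    email           # ...but nothing stops an outside caller requesting PII
  }
}
"""

def resolve_all_users(current_user_role: str) -> list:
    # BAD: no authorization check, so any caller who can reach the endpoint
    # can enumerate every user's email through one query.
    return USERS

def resolve_all_users_safe(current_user_role: str) -> list:
    if current_user_role != "admin":                 # the missing check
        raise PermissionError("forbidden")
    return [{"name": u["name"]} for u in USERS]      # and no PII by default
```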

And we will also tell you if there is an attack happening. We will tell you that someone is trying to steal data. Once developers see that data, they become much more security-conscious, because they see in a dashboard that the GraphQL APIs they built have 10 security vulnerabilities, with alerts that two attacks are happening.

And the SecOps team sees the same dashboard. They know which APIs were crafted and, from these patterns, which attackers and hackers are trying to exploit them. Having that common, shared data in a shared dashboard between the developers and the SecOps team creates the visibility and the shared language on both sides, for sure.

Gardner: I’d like to address the timing of the Traceable.ai solution and entry into the market.

It seems to me we have a level of trust when it comes to the use of APIs. But with the vulnerabilities you’ve described that trust could be eroded, which could be very serious. Is there a race to put in the solutions that keep APIs trustworthy before that trust gets eroded?

A devoted API security solution

Bansal: We are in the middle of the API explosion. Unfortunately, when people adopt a new technology, they think about its operational elements first, and about security, performance, and scalability after that. Once they start running into those problems, they start addressing them.

We are at a point of time where people are seeing the challenges that come with API security and the threat vectors that are being opened. I think the timing is right. People, the market, and the security teams understand the need and feel the pain.

We already have had some very high-profile attacks in the industry where attackers have stolen data through improperly secured APIs. So, it's a good time to bring a solution into the market that can address these challenges. I also think that CI/CD in DevOps is being adopted at such a rapid pace that API security and securing cloud-native microservices architectures are becoming a major bottleneck.

In our last discussion, we talked about Harness, another company that I have founded, which provides the leading CI/CD platform for developers. When we talk to our customers at Harness and ask, “What is the blocker in your adoption of CI/CD? What is the blocker in your adoption of public cloud, or using two or more microservices, or more serverless architectures?”

They say that they are hesitant due to their concerns around application security, securing these cloud-native applications, and securing the APIs that they’re exposing. That’s a big part of the blocker.

Yet this resistance to change and modernization is having a big business impact. It’s beginning to reduce their ability to move fast. It’s impacting the very velocity they seek, right? So, it’s kind of strange. They should want to secure the APIs – secure everything – so that they can gain risk mitigation, protect their data, and prevent all the things that can burn your users.

But there is another timing aspect to it. If they can’t soon figure out the security, the businesses really don’t have any option other than to slow down their velocity and slow down adoption of cloud-native architectures, DevOps, and microservices, all of which will have a huge business and financial impact.

 So, you really must solve this problem. There’s no other solution or way out.

Gardner: I’d like to revisit the concept of Traceable.ai as a horizontal platform capability.

Once you’ve established the ML-driven models and you’re using all that data, constantly refining the analytics, what are the best early use cases for Traceable.ai? Then, where do you see these horizontal analytics of code generation and apps production going next?

Inventory, protection, proactivity

Bansal: There's a logical progression to it. The low-hanging fruit is the risky APIs with improper authentication that can expose personally identifiable information (PII) and data. The API doesn't have the right authorization control inside of it, for example. That becomes the first level. Once you put Traceable.ai in your environment, we can look at the traffic, and the learning models will tell you very quickly if you have these challenges. We make it very simple for a developer to fix that.

The second level is, once you protect against those issues, you next want to look for things you may not be able to fix. These might be very sophisticated business logic abuses that a hacker is trying to insert. Once our models are built, and you’re able to compare how people are using the services, we also create a very simple model for flagging and attributing any bad behaviors to a specific user.

This is what we call a threat actor. It could be a bot, a particular authenticated user, or a non-authenticated user trying to do something that is not normal behavior. We see the patterns of such abuses around data theft or something that is happening around the data. We can alert you and we can block the threat actor. So that becomes the second part of the value progression.

The third part then becomes, “How do we become even more proactive?” Let’s say you have something in your API that someone is trying to abuse through a sophisticated business logic approach. It could be fraud, for example. Someone could create a fraudulent transaction because the business logic in the APIs allows for that. This is a very sophisticated hacker.

Once we can figure that abuse out, we can block it, but the long-term solution is for the developers to go and fix the code logic. That then becomes the more proactive approach. By bringing in that level of learning, knowing that a particular API has been abused, Traceable.ai can identify the underlying root cause and show it to a developer so that they can fix it. That is the more progressive element of our solution.

Eventually you want to put this into a continuous loop. As part of your CI/CD process, you’re finding things, and then in production, you are also finding things when you detect an attack or something abnormal. We can give it all back to the developers to fix, and then it goes through the CI/CD process again. And that’s how we see the progression of how our platform can be used.

Gardner: As the next decade unfolds, and organizations are even more digital in more ways, it strikes me that you’re not just out to protect every line of code. You’re out there to protect every process of the business.

Where do the use cases progress to when it comes to things like business processes and even performance optimization? Is the platform something that moves from a code benefit to a business benefit? 

Understanding your APIs

Bansal: Yes, definitely. We think that the underlying model we are building will understand every line of code and how it is being used. We will understand every single interaction between different pieces of code in the APIs, and we will understand the developer intent around those. How did the developers intend for these APIs and that piece of code to work? Then we can figure out anything that is abnormal about it.

So, yes, we are using the platform to secure the APIs and pieces of code. But we can also use that knowledge to figure out if these APIs are not performing in the right kinds of ways. Are there bottlenecks around performance and scalability? We can help you with that.

What if the APIs are not achieving the business outcomes they are supposed to achieve? For example, you may build different pieces of code and have them interact with different APIs. In the end, you want a business process, such as someone applying for a credit card. But if the business process is not giving you the right outcome, you want to know why not? It may be because it’s not accurate enough, or not fast enough, or not achieving the right business outcome. We can understand that as well, and we can help you diagnose and figure out the root cause of that as well.

So, definitely, we think eventually, in the long-term, that Traceable.ai is a platform that understands every single line of code in your application. It understands the intent and normal behaviors of every single line of code, and it understands every time there is something anomalous, wrong, or different about it. You then use that knowledge to give you a full understanding around these different use cases over time.

Gardner: The lesson here, of course, is to know yourself by letting the machines do what they do best. It sounds like the horizontal capability of analyzing and creating models is something you should be doing sooner rather than later.

It’s the gift that keeps giving. There are ever-more opportunities to use those insights, for even larger levels of value. It certainly seems to lead to a virtuous adoption cycle for digital business.

Bansal: Definitely. I agree. It unlocks and removes the fear of moving fast by giving developers freedom to break things into smaller components of microservices and expose them through APIs. If you have such a security safety net and the insights that go beyond security to performance and business insights, it reduces the fear because you now understand what will happen.

When people start thinking of serverless, functions, or similar technologies, the idea is that you take those 200 microservices and break them into 2,000 micro-functions. And those functions all interact with each other. You can ship them independently, and every function is just a few hundred lines of code at most.

So now, how do you start to understand the 2,000 moving parts? There is a massive advantage of velocity, and reusability, but you will be challenged in managing it all. If you have a layer that understands and reduces that fear, it just unlocks so much innovation. It creates a huge advantage for any software engineering organization. 

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Traceable.ai.

Rise of reliance on APIs brings new security vector — and need for novel defenses

Thinking of IT security as a fortress or a moat around your compute assets has given way to a more realistic and pervasive posture.

Such a cybersecurity perimeter, it turns out, was only an illusion. A far more effective extended-enterprise strategy protects business assets and processes wherever they are — and wherever they might reach.

As businesses align to new approaches such as zero trust and behavior-modeling to secure their data, applications, infrastructure, and networks, there’s a new, rapidly expanding digital domain that needs such pervasive and innovative protection.

The next BriefingsDirect security trends discussion explores how application programming interfaces (APIs), microservices, and cloud-native computing form a new frontier for cybersecurity vulnerabilities — as well as opportunities for innovative defenses and resilience.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about why your expanding use of APIs may be the new weak link in your digital business ecosystem, please welcome Jyoti Bansal, Chief Executive Officer and Co-Founder at Traceable.ai. The interview is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Jyoti, has the global explosion of cloud-native apps and services set us up for a new variety of security vulnerability? How serious is this new threat?

Bansal: Well, it's definitely new and it's quite serious. Every time we go through a change in IT architectures, we get a new set of security challenges. The adoption of cloud-native architectures brings challenges in a few areas.

One, you have a lot of APIs and these APIs are doors and entryways into your systems and your apps. If those are not secured properly, you have more opportunities for attackers to steal data. You want to open the APIs so that you can expose data, but attackers will try to exploit that. We are seeing more examples of that happening.

The second major challenge with cloud-native apps is around the software development model. Development now is more high-velocity, more Agile. People are using DevOps and continuous integration and continuous delivery (CI/CD). That creates the velocity. You’re changing things once every hour, sometimes even more often.

That creates new kinds of opportunities for developers to make mistakes in their apps and in their APIs, and in how they design a microservice; or in how different microservices or APIs interact with each other. That often creates a lot more opportunity for attackers to exploit.

Gardner: Companies, of course, are under a lot of pressure to do things quickly and to react to very dynamic business environments. At the same time, you have to always cover your backside with better security. How do companies face the tension between speed and safety?

Speed and safety for cloud-native apps

Bansal: That’s the biggest tension, in many ways. You are forced to move fast. The speed is important. The pandemic has been even more of a challenge for a lot of companies. They had to move to more of a digital experience much faster than they imagined. So speed has become way more prominent.

But that speed creates a challenge around safety, right? Speed creates two main problems. One is that you have more opportunity to make mistakes. If you ask people to do something very fast because there's so much business and consumer pressure, sometimes you cut corners and make mistakes.

Not deliberately. It's just that software engineers can never write completely bug-free code. But if you have more bugs in your code because you are moving very, very fast, it creates a greater challenge.

So how do you create safety around it? By catching these security bugs and issues much earlier in your software development life cycle (SDLC). If a developer creates a new API and that API could be exploited by a hacker — because there is a bug in that API's security authentication check — you have to try to find it in your test cycle and your SDLC.
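One way to push that kind of check into the test cycle (a hypothetical pytest-style sketch against a made-up staging endpoint, not any specific product's tests) is to assert in CI that a new API rejects unauthenticated and cross-user requests before it ever reaches production:

```python
# Hypothetical CI tests for a new endpoint, using the `requests` library.
# BASE_URL, the endpoint path, and the tokens are placeholders; a staging
# deployment of the service is assumed to be reachable from the pipeline.
import requests

BASE_URL = "https://staging.example.com"

def test_rejects_unauthenticated_requests():
    resp = requests.get(f"{BASE_URL}/api/v1/accounts/42")
    assert resp.status_code == 401  # no token, so the API must not return data

def test_rejects_other_users_data():
    headers = {"Authorization": "Bearer token-for-user-1"}
    resp = requests.get(f"{BASE_URL}/api/v1/accounts/99", headers=headers)
    assert resp.status_code in (403, 404)  # user 1 must not read account 99
```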

The second way to gain security is by creating a safety net. Even if you find things earlier in your SDLC, it’s impossible to catch everything. In the most ideal world, you’d like to ship software that has zero vulnerabilities and zero gaps of any kind when it comes to security. But that doesn’t happen, right?

You have to create a safety net so that if vulnerabilities slip through because of the business pressure to move fast, that safety net can still block the attack and stop those trying to do things you didn't intend with your APIs and applications.

Gardner: And not only do you have to be thinking about APIs you’re generating internally, but there are a lot of third-party APIs out there, along with microservices, when doing extended-enterprise processes. It’s a bit of a Wild West environment when it comes to these third-party APIs.

Bansal: Definitely. The APIs you are building and using internally through your microservices may also have an external consumer calling those APIs. Other microservices may also be calling them — and so there is exposure around that.

Third-party APIs manifest in two different ways. One is that you might be using a third-party API or library in your microservice. There might be a security gap there.

The second way comes when you're calling third-party APIs. And now almost everything is exposed as APIs – such as if you want to check for some data somewhere, or call some other software as a service (SaaS) service or cloud service, or a payment service. Everything is an API, and those APIs are not always called properly. Not all of those APIs are secure, and so your system fundamentally can become more insecure.

It is getting close to a wild, Wild West with APIs. I think we have to take API security quite seriously at this point.

Gardner: We’ve been talking about API security as a function of growing pains, that you’re moving fast, and this isn’t a process that you might be used to.

But there's also malice out there. We've seen advanced, persistent threats in such things as zero-day exploits and with Microsoft Exchange Servers recently. We've certainly seen with the SolarWinds exploits how a supply chain can be made vulnerable.

Have we seen people take advantage of APIs, too, or is that something that we should expect?

API attacks a global threat

Bansal: Well, we should definitely expect that. We are seeing people take advantage of these APIs. If you look at data from Gartner, they stated that by 2022, API abuses will move from an infrequent to the most frequent attack vector. That will result in more data breaches in enterprises and web applications. That is the new direction because of how applications are consumed with APIs.

The API has naturally become a more frequent form of attack vector now.

Gardner: Do you expect, Jyoti, that this is going to become mission-critical? We’re only part way into the “software eats the world” thing. As we expect software to become more critical to the world, APIs are becoming more part of that. Could API vulnerabilities become a massive, global threat vector?

Bansal: Yes, definitely. We are, as you said, only partially into the software-eats-the-world trend. We are still not fully there. We are only 30 to 40 percent there. But as we see more and more APIs, those will create a new kind of attack vector.

It’s a matter of now taking these threats seriously. For a long time, people didn’t think about APIs. People only thought about APIs as internal APIs; that you will put internal APIs between your code and different internal services. The external APIs were very few. Most of your users were coming through a web application or a mobile application, and so you were not exposing your APIs as much to external applications.

If you look at banking, for example, most of the bank services software was about online banking. End users came through a bank web site, and then users came through mobile apps. They didn’t have to worry too much about APIs to do their business.

Now, that's no longer the case. For any bank, APIs are a major source of how other systems integrate with them. Banks used to expose their systems only through the apps that they built, but now a lot of third-party apps are written on top of those APIs — from wallet apps, to different kinds of payment systems, to all sorts of things that are out there — because that's what consumers are looking for. So now, as you start doing that, the amount of traffic coming through those APIs is not just from the web or mobile front-ends directly. It's significantly increasing.

The general use of internal APIs is increasing. With the adoption of cloud-native and microservices architectures, the internal-to-external boundary is starting to blur. An internal API could become external at any point, because the same microservice that your engineering team wrote is now being used by your other internal microservices inside of your company. But it is also being exposed to your partners or other third-party systems to do something, right?

More and more APIs are being exposed out there. We will see this continued explosion of APIs because that’s the nature of how modern software is built. APIs are the building block of modern software systems.

I think we have two options as an industry. Either we say, “Okay, APIs could be risky or someone could attack them, so let’s not use APIs.” But that to me is completely wrong because APIs are what’s driving the flexibility and fluidity of modern software systems and the velocity that we need. We have to just learn as an industry to instead secure APIs and be serious about securing them.

Gardner: Jyoti, your role there as CEO and co-founder at Traceable.ai is not your first rodeo. You’ve been a serial startup leader and a Silicon Valley tech visionary. Tell us about your other major companies, AppDynamics, in particular, and why that puts you in a position to recognize the API vulnerability — but also come up with novel ways of making APIs more secure.

History of troubleshooting

Bansal: When I started AppDynamics, we were starting to see a lot of service-oriented architectures (SOA). People were struggling when something was slow and users experienced slowdowns on their websites. How do you figure out where the slowdown is? How do you find the root cause?

That space eventually became what is called application performance management (APM). What we came up with was, "How about we instrument what's going on inside the code in production? How about we trace the flow of code from one service to another service, or to a third service or a database? Then we can figure out where the slowdowns and bottlenecks are."

By understanding what’s happening in these complex software systems, you can figure out where the performance bottleneck is. We were quite successful as a company. We were acquired by Cisco just a day before we were about to go IPO.

The approach we used there solves problems around performance – monitoring, diagnostics, and troubleshooting. The fundamental approach was about instrumenting and learning what was going on inside the systems.

That’s the same approach we at Traceable.ai apply to solving the problems around API security. We have all these challenges around APIs; they’re everywhere, and it’s the wild, Wild West of APIs.

So how do you get in control? You don’t want to ask developers to slow down and not do any APIs. You don’t want to reduce the velocity. The way you get control over it is fundamentally a very similar approach to what we used at AppDynamics for performance monitoring and troubleshooting. And that is by understanding everything that can be instrumented in the APIs’ environment.

That means for all external APIs, all internal APIs, and all the third-party APIs. It means learning how the data flows between these different APIs, which users call different APIs, what they are trying to achieve out of it, what APIs are changed by developers, and which APIs have sensitive data in them.

Once you automatically understand that — about all of the APIs – then you start to get in control of what is there. Once you are in control of what’s there, you can learn if some user is trying to use these APIs in a bad way. You know what seems like an attack, or if something wrong is happening. There might be a data breach or something. Then you can quickly go into prevention mode. You can then block that attack.

There are a lot of similarities from my experience at my last company, AppDynamics, in terms of how we solve challenges around API security. I also started a second company, Harness. It’s in a different space, targeting DevOps and software developers, and helping them with CI/CD. Harness is now one of the leading platforms for CI/CD or DevOps.

So I have a lot of experience from the vantage point of what modern software engineering organizations have to do from a CI/CD and DevOps perspective, and what security challenges they start to run into.

We talk to Harness customers doing modern CI/CD about application and API security. And it almost always comes up as one big challenge. They are worried about microservices, about cloud-native architectures, and about moving more to APIs. They need to get in control and to create a safety net around all of this.

Gardner: Does your approach of trace, monitor, and understand the behavior apply to what’s going on in operations as well as what goes on in development? Is this a one-size-fits-all solution? Or do you have to attack those problems separately?

One-size-fits-all advantages

Bansal: That’s the beauty of this approach. It is in many ways a one-size-fits-all approach. It’s about how you use the data that comes out of this trace-everything instrument. Fundamentally it works in all of these areas.

It works because the engineering teams put in what we call a lightweight agent. That agent goes inside the runtime of the code itself, running in different programming languages, such as Java, PHP, and Python. The agents could also run in your application proxies in your environment.

You put the same kinds of instruments, lightweight agents, in for your external APIs, your internal microservices APIs, as well as the third-party APIs that you’re calling. It’s all the same.

When you have such instrumentation tracing, you can take the same approach everywhere. Ideally, you put the same in a pre-production environment while you are going through the software testing lifecycle in a CI/CD system. And then, after some testing, staging, and load testing, you start putting the same instrumentation into production, too. You want the same kind of approach across all of that.

In the testing cycle, we will tell you — based on all the instrumentation and tracing, looking at all the calls made during your tests – which places are vulnerable, such as which APIs have gaps that could be exploited by someone.

Then, once you do the same approach in production, we tell you not only about the vulnerabilities but also where to block attacks that are happening. We say, “This is the place that is vulnerable, right now there is an attacker trying to attack this API and steal data, and this is how we can block them.” This happens in real-time, as they do it.

But it’s fundamentally the same approach that is being used across your full SDLC lifecycle.

Gardner: Let’s look at the people in these roles or personas, be it developer, operations, SecOps, and traditional security. Do you have any examples or metrics of where API vulnerabilities have cropped up? What vulnerabilities are these people already seeing?

Vulnerable endpoints to protect

Bansal: A lot of API vulnerabilities crop up around unauthenticated endpoints, such as exposing an API that doesn't have the right kind of authentication. Second is not using the right authorization: an API that is supposed to give you data for you as user 1 has an authorization flaw that can be exploited to take data not just for user 1 but for someone else, a user 2, or maybe even a large number of users. That's a common problem that happens too often with APIs.

There are also leaky APIs that give you more data than they should, such as it’s only supposed to give the name of someone, but it also includes more sensitive data.
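Both of those failure modes fit in a few lines (schematic handler code with made-up data, not drawn from any real incident): the handler trusts the ID in the URL instead of the authenticated identity, and serializes every stored field instead of only what the client needs:

```python
# Schematic REST handler, illustrative only.
USERS = {
    "1": {"name": "Alice", "email": "alice@example.com", "dob": "1990-01-01"},
    "2": {"name": "Bob",   "email": "bob@example.com",   "dob": "1985-06-15"},
}

def get_profile_vulnerable(path_user_id: str, authenticated_user_id: str) -> dict:
    # BAD: broken object-level authorization (any caller can pass any ID)
    # plus a leaky response that returns every stored field.
    return USERS[path_user_id]

def get_profile_safe(path_user_id: str, authenticated_user_id: str) -> dict:
    if path_user_id != authenticated_user_id:       # enforce object-level authz
        raise PermissionError("forbidden")
    record = USERS[path_user_id]
    return {"name": record["name"]}                 # expose only intended fields

print(get_profile_vulnerable("2", "1"))  # Bob's full record leaks to Alice
```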

In the world of application security, we have the OWASP Top Ten list that the app security teams and the security teams have followed for a long time. And normally you would have things like SQL injection and cross-site scripting, and those were always in that list.

Now there’s an additional list called the OWASP API Security Top Ten, which lists the top threats when it comes to APIs. Some of the threats I described are key parts of it. And there are a lot of examples of these API-involved attacks these days.

Just recently in 2020, we had a Starbucks vulnerability in API calls, which potentially exposed 100 million customer records. It was around an authentication vulnerability. In 2019, Capital One was a high-profile example. There was an Amazon Web Services (AWS) configuration API that wasn’t secured properly and an attacker got access to it. It exposed all the AWS resources that Capital One had.

There was a very high-profile attack on T-Mobile in 2018, where an API leaked more data than it was supposed to. Some 2.3 million customers' data was stolen. In another high-profile attack, at Venmo, a public API was not restricting data to the right users, so data from 200 million transactions was stolen from Venmo. As you can see from these examples, we are starting to see patterns emerge in the vulnerabilities attackers are exploiting in APIs.

Gardner: Now, these types of attacks and headlines are going to get the attention of the very top of any enterprise, especially now where we’re seeing GDPR and other regulations require disclosure of these sorts of breaches and exposures. This is not just nice to have. This sounds like potentially something that could make or break a company if it’s not remediated.

Bansal: Definitely. No one should take API security lightly these days. A lot of the traditional cybersecurity teams have put a lot of their focus and energy in securing the networks and infrastructure. And many of them are just starting to get serious about this next API threat vector. It’s a big mistake if companies are not getting to this faster. They are exposing themselves in a big way.

Gardner: The top lesson for security teams, as they have seen in other types of security vulnerabilities, is you have to know what’s there, protect it, and then be proactive. What is it about the way that you’re approaching these problems that set you up to be able to be proactive — rather than reactive — over time?

Know it, protect it, and go proactive

Bansal: Yes, the fundamentals of security are the same. You have to know what is there, you have to protect it, and then you become proactive about it. And that’s the approach we have taken in our solution at Traceable.ai.

Number one is all about API discovery and risk assessment. You put us there in your environment and very quickly we’ll tell you what all the APIs are. It’s all about discovery and inventory as the very first thing. These are all your external APIs. These are all your internal APIs. These are all the third-party APIs that you are invoking. So it starts with discovery. You have to know what is there. And you create an inventory of everything.

The second part, when you create that inventory, is to give a risk score. We give every API a risk score: internal API, external API, and third-party, all of them. The risk score is based on many dimensions, such as which APIs have sensitive data flowing through them, which APIs are exposed publicly versus not, which APIs have what kind of authentication to them, and what APIs are internally using your critical database systems and reading data from those. Based on all of these factors, we are creating a risk heat map of all of our APIs.

The most important part for API security is to do this continuously. Because you're living in the world of CI/CD, any kind of API discovery or assessment cannot be static, like doing it once a month, once a quarter, or even once a week. You have to do it dynamically, all the time, because code is changing. Developers are continuously putting new code out there, so the APIs are changing, with new microservices. All of the discovery and risk assessment has to happen continuously. So, that's really the first challenge we handle at Traceable.ai.
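As a simplified sketch of that idea (hypothetical inventory fields and weights, not Traceable.ai's actual scoring model), a risk score can be computed by weighting exactly those dimensions for every discovered API and re-running the calculation as the inventory changes:

```python
# Hypothetical API inventory entries produced by discovery.
apis = [
    {"name": "GET /internal/health", "public": False, "auth": "mtls",
     "sensitive_data": False, "touches_critical_db": False},
    {"name": "POST /v1/payments",    "public": True,  "auth": "api-key",
     "sensitive_data": True,  "touches_critical_db": True},
    {"name": "GET /v1/users/{id}",   "public": True,  "auth": "none",
     "sensitive_data": True,  "touches_critical_db": True},
]

def risk_score(api: dict) -> int:
    """Toy weighting of the dimensions described above (higher = riskier)."""
    score = 0
    score += 3 if api["public"] else 0
    score += 3 if api["sensitive_data"] else 0
    score += 2 if api["touches_critical_db"] else 0
    score += {"none": 2, "api-key": 1, "oauth": 0, "mtls": 0}[api["auth"]]
    return score

# A continuously refreshed heat map: highest-risk APIs first.
for api in sorted(apis, key=risk_score, reverse=True):
    print(f"{risk_score(api):>2}  {api['name']}")
```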

The second problem we handle is to build a learning model. That learning model is based on a very sophisticated machine learning (ML) approach to what the normal usage behavior of each of these APIs is. Which users are calling an API? In what sequence do the calls happen? What kind of data passes through them? What kinds of data are they fetching, and from where? And on and on.

We are learning all of that automatically. Once you learn that, you start comparing every new API request with the normal model of how your APIs are supposed to be used.

Now, if you have an attacker trying to use an API to extract much more data than what is normal for that API, you know that something is abnormal about it. You can flag it, and that's a key part of how we think of the second part, which is how you protect these APIs from bad behavior.
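To show the flavor of that comparison on just the call-sequence dimension (a toy model with made-up traffic, nothing like the production ML), one can learn which API call normally follows which and flag sessions containing transitions never seen during the learning phase:

```python
from collections import defaultdict

# Normal sessions observed during the learning phase (toy data).
normal_sessions = [
    ["login", "list_accounts", "get_account", "logout"],
    ["login", "list_accounts", "get_account", "get_account", "logout"],
]

# Learn which call-to-call transitions occur in normal behavior.
allowed_transitions = defaultdict(set)
for session in normal_sessions:
    for current_call, next_call in zip(session, session[1:]):
        allowed_transitions[current_call].add(next_call)

def is_abnormal(session: list) -> bool:
    """Flag a session containing a transition never seen in the learned model."""
    return any(nxt not in allowed_transitions[cur]
               for cur, nxt in zip(session, session[1:]))

print(is_abnormal(["login", "list_accounts", "get_account", "logout"]))  # False
print(is_abnormal(["login", "export_all_data", "logout"]))               # True
```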

That cannot be done with the traditional web application firewall (WAF) and runtime application self-protection (RASP) kinds of approaches. Those are very rule-based, static approaches. For APIs, you have to build a behavioral, learning-based system. That's what our solution is about. That's how we get to a very high degree of protection for these APIs.

The third element of the solution is the proactive part. After a lot of this learning, we also examine the behavior of these APIs and the potential vulnerabilities, based on the models. The right way to proactively use our system is to feed that into your testing and development cycle. That brings the issues back to the developers to fix the vulnerabilities. We can help find them earlier in the lifecycle, so you can integrate that into your application security testing processes. It closes the loop on all of this – only proactively now.

Gardner: Jyoti, what should businesses do to prepare themselves at an early stage for API security? Who should be tasked with kicking this off?

Build your app security team

Bansal: API security falls under the umbrella of app security. In many businesses, app security teams are now tasked to secure the APIs in addition to the traditional web applications.

The first thing every business has to do is to create responsibility around securing APIs.

In many places, we are also seeing businesses create teams around what they call product security. If you are a company with FinTech products, your product is an API because your product is primarily exposed through APIs. Then people start building out product security teams who are tasked with securing all of these APIs. In some cases, we see the software engineering team directly responsible for securing APIs.

The problem is they don’t even know what all of their APIs are. They may have 500 or 2,000 developers in the company. They are building all of these APIs, and can’t even track them. So most businesses have to get an understanding and some kind of control over the APIs that are there. Then you can start securing and getting a better security posture around those.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Traceable.ai.

Creating business advantage with technology-enabled flexible work

As businesses plan for a future where more of their workforce can be located just about anywhere, how should they rethink hiring, training, and talent optimization? This major theme for 2021 and beyond poses major adjustments for both workers and savvy business leaders.

The next BriefingsDirect modern workplace strategies discussion explores how a global business process outsourcing leader has shown how distributed employees working from a “Cloud Campus” are improving productivity and their end users’ experience. 

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about best practices and advantageous outcomes from a broadly dispersed digital workforce, we are now joined by José Güereque, Executive Vice President of Infrastructure and Nearshore Chief Information Officer at Teleperformance SE in Monterrey, Mexico; Lance Brown, Executive Vice President Global Network, Telecom, and Architecture at Teleperformance, and Tim Minahan, Executive Vice President of Business Strategy and Chief Marketing Officer at Citrix. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tim, when it comes to flexible and hybrid work models we often focus on how to bring the work to the at-home workforce. But this new level of flexibility also means that we can find and attract workers from a much broader potential pool of talent.

Are companies fully taking advantage of this decentralized talent pool yet? And what benefits are those who are not yet expanding their workforce horizons missing out on?

Pick your talent anywhere

Minahan: We are at a very interesting inflection point right now. If there is any iota of a silver lining in this global pandemic, it's that it has opened people's minds both to accelerating the digitization of their business and to new ways of work. It has now been proven that work can indeed occur outside the office. Smart companies like Teleperformance are beginning to look at their entire workforce strategies — their work models — in different ways.

It’s not about should Sam or Susie work in the office or work at home. It’s, “Gee, now that I can enable everyone with the work resources they need, and in a secure workspace environment to do their best work wherever it is, does that allow me to do new things, such as tap into new talent pools that may not be within commuting distance of my work hubs?”

This now allows me to even advance sustainability initiatives or, in some cases, we have companies now saying, “Hey, now I can also reach workers that allow me to bring more diversity into my workforce. I can enable people to work from inner cities or other locations — rural locations — that I couldn’t reach before.”

This is the thought process that a lot of forward-thinking companies are going through right now. 

Gardner: It seems that a remote, hybrid, flexible work capability is the gift that keeps giving. In many cases we have seen projections of shortages of skilled workers and gaps between labor demand and supply. Are we in just the early innings of what we can expect from the benefits of remote work? 

Minahan: Yes. If you think way back in history, about a year ago, that’s exactly what the world was grappling with. There was a global shortage of skilled workers. In fact, McKinsey estimated that there was a global shortage of 95 million medium- to high-skilled workers. So managers were trying to hire amid all that. 

But, in addition, there was a shortage of the actual modern skills that a lot of companies need to advance their business, to digitize their business. And the third part is a lot of employees were challenged and frustrated with the complexity of their work environment.

Now, more flexible work models enabled by a digital workspace that ensures employees have access to all the work resources they need, wherever work needs to get done, begins to address each of those issues. Now you can reach into new areas to find new talent. You can reach skills that you couldn’t before because you were competing in a very competitive market.

Now you can enable your employees to work where and how they want in new ways that don't limit them. They no longer have a long commute adding stress to their lives. In fact, our research found that 80 percent of workers feel they are being as, if not more, productive working remotely than they could be in the office.

Gardner: Let’s find out from an organization that’s been doing this. José, at Teleperformance, tell us the types of challenges you faced in terms of the right fit between your workforce and your demands for work. How have you been able to use technology to help solve that?

Güereque: Our business was mostly built around brick-and-mortar operations. When COVID struck, we realized we faced the challenge of not being able to move people to and from the work centers. So, we rushed to move our people, as much as possible, to work from home (WFH).


Technically, the first challenge was to restructure our network, services, and all kinds of resources to move the workforce to WFH. As you can imagine, that went hand in hand with security measures. Security is one of the most important things we need to address and have in place.

But while there were big challenges, big opportunities also arose for us. The new model allows us to be more flexible in how we look for new talent. We can now find that talent in places we didn’t search before.

Our team has helped expedite this work-at-home model for us. It had not been embraced before in the massive way it is right now.

Gardner: Lance, tell us about Teleperformance, your workforce, your reach, and your markets.

Remote work: Simpler, faster, safer

Brown: Teleperformance is a global customer experience company based in France. We have more than 383,000 employees worldwide in 83 countries serving over 170 markets. So it’s a very large corporation. We have a number of agents who support many Fortune 500 companies all over the world, and our associates obviously have to be able to connect and talk [in over 265 languages and dialects] to customers. 

We sent more than 220,000 of these associates home in a very quick time frame at the onset of the pandemic.

Our company is all about being simpler, faster, and safer — and working with Citrix allowed us to meet all of our transition goals. Remote work is now a simpler, faster process — and it’s a safer process. All of our security that Citrix provides is on the back end. We don’t have to worry as much with the security on our endpoint as we would in other traditional models. 

Gardner: As José mentioned, you had to snap to it and solve some major challenges from the crisis. Now that you have been adjusting to this, do you agree that it’s the gift that keeps giving? Is flexible work here to stay from your perspective?


Brown: Yes, from Teleperformance's perspective, we are fully working with our clients to keep a large percentage of the workforce at WFH. We don't ever see the days of going back to 100 percent brick and mortar, or even mostly brick and mortar. We were at 90 percent on-site before the pandemic. Now, at the end of the day, that will become between 50 and 65 percent work at home.

Gardner: Tim, because they have 390,000 people, there is going to be a great diversity of how people will react to this. One of the nice things about remote work and digital workspaces is you can be dynamic. You can adjust, change, and innovate.

How are organizations such as Teleperformance breaking new ground? Are they finding innovation that goes beyond what they may have expected from flexible work at the outset? 

Minahan: Yes, absolutely. This isn't just about whether we can tap into new talent in remote locations or among disenfranchised parts of the workforce. It's about creating an agile workforce model. Teleperformance is on the frontlines of enabling that for its own workforce. But Teleperformance is also part of the solution, through its business process outsourcing (BPO) solutions and how it serves its clients. You begin to rethink the workforce.

We did a study as part of our Work 2035 Project, in which we went out over the past year-and-a-half and interviewed tens of thousands of employees, thousands of senior executives, and probed into what the world of work will look like in 2035. A lot of things we are talking about here have been accelerated by the pandemic.

One of those things is moving to a more agile workforce model, where you begin to rethink your workforce strategies, and maybe where you augment full-time employees with contractors or gig workers, so you have that agility to dial up your workforce. 

Maybe it's due to seasonality, and you need a call center or other services to be able to dial up or back down. Or work locations shift due to certain needs or in response to certain catastrophes. And like I said, that's what a lot of forward-thinking companies are doing.

What’s so exciting about Teleperformance is they are not only doing it for their own organization — but they are also providing the solution for their own clients.

Gardner: José, please describe for us your Cloud Campus concept. Why did you call it Cloud Campus and what does it do? 

Cloud Campus engages worldwide

Güereque: Enabling people to WFH is only part of what you need. You also need to guarantee that the processes in place perform as well as they used to in a brick-and-mortar environment. So our cloud solution pushes subsets of those processes and enables us to maintain control of the operational procedures at a level where our clients feel confident in how we are managing their operations.

In the past, you needed to do a lot of things if you were an agent in our company. You needed to physically go to a central office to fulfill processes, and then you’d be commuting. Today, the Cloud Campus digitalizes these processes. Now a new employee, in many different countries, can be hired, trained, and coached — everything — on a remote basis.

We use video technology to do virtual face-to-face interactions, which we believe is important to be successful. We still are a very human-centric company. If we don’t have this face-to-face contact, we won’t succeed. So, the Cloud Campus, which is maintained by a really small team, guarantees the needed processes so people can WFH on a permanent basis. 

Gardner: Lance, it’s impressive to think about you dealing face-to-face virtually with your clients in 83 different countries and across many cultures and different ways of doing business. How have you been able to use the same technology across such a diversity of business environments? 

Brown: That’s an excellent question. As José said, the Teleperformance Cloud Campus gives us the flexibility and availability to do just that. For our employees, it just becomes a one-on-one human interaction. Our employees are getting the same coaching, counseling, and support from all aspects of the business – just as they were when they were in the brick-and-mortar office.


We are leveraging, like José said, video technology and other technologies to deliver the same user experience for our associates, which is key. Once we deliver that, then that translates out to our clients, too, because once we have a good associate experience, that experience is the same for all of the clients that the associate is handling. 

Gardner: Lance, when you are in a brick-and-mortar environment, a physical environment, you don’t always have the capability to gather, measure, and digitize these interactions. But when you go to a digital workspace, you get an audit trail of data.

Is that something you have been able to utilize, or how do you expect that to help you in the future? 

Digital workspaces offer data insights 

Brown: Another really good question. We continue to gather data, especially as the world is all digitized. And, like you said, we provide many digital solutions for our clients. Now we are taking those same solutions and leveraging them internally for our employees.

We continue to see a large amount of data that we can work with for our process improvements, for our technology, analysis, and process excellence (T.A.P.) teams, and for the transformation work our agents do for our clients every day.

Gardner: Tim, when it comes to translating the value through the workforce to the end user, are there ways we can measure that productivity benefit?

Minahan: One of the key things that came up early on in the pandemic was a huge spike in worker productivity. Companies settled into a hybrid work model, and that phase was about unifying work and providing reliable access for employees in a remote environment to all the resources they needed.

The second part was, as José said, ensuring that all employees can safely access applications and information — that our corporate information remains secure.


Now we have moved into the simplify-and-optimize phase. A lot of companies are asking, “Gee, what are the tools I need to introduce to remove the noise from my employees’ day? How do I guide them to the right information and the right decisions? How do I support more collaboration or collaborative work execution, even in a distributed environment?”

If you have a foundation of a solid digital workspace environment that delivers all the work resources, that secures all the work resources, and then leverages things like machine learning (ML), virtual assistants, and new collaborative work management tools that we are introducing — it provides an environment where employees can perform at their best and can collaborate from the most remote locations.

Gardner: José, most businesses nowadays want to measure everything. With things like Net Promoter Scores (NPS) from your agents and employees, when it comes to looking for the metrics of whether your return on investment (ROI) or return on innovation is working, what have you found? Have you been able to verify what we have been talking about? Does this move beyond theory into practice, and can it be measured well?

Güereque: Yes, that’s very important. As I mentioned, being able to create a Cloud Campus concept, which has all the processes and metrics in place, allows us to compare apples with apples in a way that we can understand the behavior and the performance of an agent at home — same as in brick-and-mortar. We can compare across those models and understand exactly how they are performing. 

We found that a lot of our agents live in cities, which have a lot of traffic. The commuting time for them, believe it or not, was around one-and-a-half hours – as much as two hours for some of them — just going to and from work. Now, all that commuting time is eliminated when they WFH.


People started to place a lot of value on those things because they can spend their time smarter — or have more family time. So in terms of customer, client, and employee satisfaction, those employees are more motivated — and they're performing great. Their scores are similar to – and in some cases better than — before.

So, again, if you are able to measure everything through the digitalization of the processes, you can understand the small things you need to tweak in order to maintain better satisfaction and improve all scores across both clients and employees.


Gardner: Lance, over the past 30 years in IT, we’ve been very fortunate that we can often do more with less. Whether it’s the speed of the processor, or the size of the disk drive. I’m wondering if that’s translating into this new work environment.

Are you able to look at cost savings when it comes to the type of client devices for your users? Are your networks more efficient? Is there a similar benefit of doing more with less when we get to remote work and digital workspaces?

Cost savings accumulate via BYOD

Brown: Yes, especially for the endpoint device costs. It becomes an interesting conversation when you’re leveraging technology like Citrix. For that [thin client] endpoint, all of the compute is back in the data center or in the cloud.

Your overall total cost of ownership continues to go down because you’re not spending as much money on your endpoint, as you had in the past. The other thing is the technology allows us to take an existing PC and make it a thin client, too. That gives you a longer life of that endpoint, which, overall, reduces your cost.

It's also much, much safer. I can't stress the security benefits enough, especially in this current environment. It just makes you so much safer because your target environment and exposed landscape are reduced. Your data center is housing all the proprietary information. And your endpoint is just a dumb endpoint, for lack of a better word. It doesn't present a large attack surface. So you really reduce your attack surface by leveraging Citrix and putting more IT infrastructure in your data center and in your cloud.

Güereque: There is another really important factor, which is making bring your own device (BYOD) a reality. With the pandemic, the equipment manufacturers, the PC makers and everything, have had longer delivery times.

What used to take two to three weeks to deliver now takes up to 10 weeks. Sometimes the only way to be on time is to leverage the employees' own equipment and enable its use in a secure way. So this is not just about the economics of avoiding the investment in the end device; it is an opportunity to enable people to work faster rather than waiting on the delivery of new equipment.


Minahan: At Citrix, we’re seeing other clients do that, too. I was recently talking with the CIO of a financial services company. For them, as the world moved through the pandemic, they saw the demand for their digital banking services quadruple or more. They needed to hire thousands of new financial guidance agents to support that.

And, to José’s point, they couldn’t be bothered with sending each one a new laptop. So BYOD allowed them to gain a distributed digital workspace and to onboard these folks very quickly. They attained the resources they needed to service their end banking clients much faster.

Güereque: Just following on Tim's comments, I want to give you an example. Two weeks ago we were contacted by a client who needed to have 1,200 people up and running within a week. At the beginning, we were challenged to put 1,200 new employees with equipment in place, but our team came back with a plan. I can tell you that last week they were all in production. So, without this flexibility, and enablers like Citrix, we wouldn't have been able to do it in such a short time frame.

Gardner: Lance, as we seek work-from-home solutions, we’re using words like “life” and “work balance.” We’re talking about employee behaviors and cultures. It sounds like IT is closer to human resources (HR) than ever.

Has the move to remote work using Citrix helped bond major parts of your organization — your IT capability and your HR capability, for example?

IT enables business innovation

Brown: Yes, now they’re seeing IT as an enabler. We are the enabler to allow those types of successes, from a work-life balance and human standpoint. We’re in constant contact with our operations team, our HR team, and our recruiting team. We are the enabler to help them deliver everything that we need to deliver to our clients.


In the old days, IT wasn’t viewed as an enabler. Now we’re viewed as an enabler, and José and I are at the table for every conversation that’s happening in the company. We come up with innovative solutions to enable the business to meet those business needs.

Gardner: Tim, I’m going to guess that this is a nice way of looking at the glass as half full. IT enabling such business innovation is going to continue. How do you expect in the future that we’re going to continue the trend of IT as an enabler? What’s in the pipeline, if you will, that’s going to help foster that?

Minahan: With the backdrop of the continued global shortage of skills, particularly the modern skills that are needed, companies such as Teleperformance are looking at what it means for their workforce strategies. What does it mean for their customer success strategies? Employee experience is certainly becoming a top priority to recruit the best talent, but also to ensure that they can perform at their best and deliver the best services to clients.

In fact, if you look at what employees are looking for going forward, there’s the salary thing and there’s the emergence of purpose. Is this company doing something that I believe in that’s contributing to the world, the environment?


But right behind that is, “What are the tools and resources? How effectively are they delivering them to me so I can perform at my best?” And so IT, to Lance’s point, is a critical pillar, a key enabler, of ensuring that every company can work on making employee experience a competitive advantage.

Gardner: José, for other companies trying to make the most of a difficult situation and transitioning to more flexible work models, what would you recommend to them now that you’ve been through this at such a large, global scale? What did you learn in the process that you think they should be mindful of?

Change, challenge, partner up

Güereque: First of all, be willing to change and to challenge yourself. We can do much more than we sometimes believe. It's easy to be skeptical because of the legacy we have been working through over many years. Today, we have been challenged to reinvent ourselves.

The second one is that there is tons of public information we can leverage to find successful use cases and learn from them. And the third one is to approach a consultant or partner that has experience putting all these things in place. Because, as I mentioned, it is not just a matter of enabling people to WFH; it's a matter of putting the whole security environment in place, along with all of the tools required to perform as a team so you can deliver the results.


Brown: I'll add one thing to that. It was about a year ago that I was visiting with Tim as the pandemic was starting to unfold. It had started overseas and was rapidly moving toward the US and other parts of the world.

I met with Tim at Citrix and I said, “I’m not sure exactly what’s going to happen. I don’t know if this is going to be 100 people that go home or 300,000 people. But I know we need a partner to work with, and I know we have to partner through this process.”

So the big thing is that Citrix was that partner for us. You have to rely on your partners to do this because you simply can't do it by yourself.

Gardner: Tim, it sounds like an IT organization within Teleperformance is much more of an enabler to the rest of the organization, but you, at Citrix, are the enabler to the IT department at Teleperformance.

Minahan: Dana, to borrow a phrase, “It takes an ecosystem.” You move up that chain. We certainly partner with Teleperformance to enable their vision for a more agile workforce.

But, again, I’ll repeat that they’re doing that for their clients, allowing them to dial up and dial down resources as they need, to work-shift around the globe. So it is a true kind of agile workforce value chain that we’re creating together.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Citrix.


Disaster Recovery to Cyber Recovery – The New Best Future State

The clear and present danger facing businesses and governments from cybersecurity threats has only grown more clear and ever-present in 2021. As the threats from runaway ransomware attacks and state-sponsored backdoor access to networks deepen, too many businesses have a false sense of quick recovery using traditional business continuity and backup measures.

That’s because the criminals are increasingly compromising vulnerable backup systems and data first — before they attack. As a result, visions of flipping a switch to get back to a ready state may be a dangerous illusion that keeps leaders under a false sense of business as usual. 

The next BriefingsDirect security strategies discussion explores new ways of protecting backups first and foremost so that cyber recovery becomes an indispensable tool in any IT and business security arsenal. We will now uncover how Unisys and Dell Technologies are elevating what was data redundancy to protect against natural disasters into something much more resilient and powerful.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about the latest in rapid cyber recovery strategies and technologies, please welcome Andrew Peters, Director of Global Business Development for Security at Unisys, and David Finley, Director of Information Assurance and Security in the Data Protection Division at Dell Technologies. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: David, what’s happened during the last few years — and now especially with the FireEye and SolarWinds attacks — that makes cyber recovery as opposed to disaster recovery (DR) so critical?

Best defense is good offense

Finley: I have been asked that question a few times just in the last few weeks, as you might imagine. And there are a couple of things to note with these attacks, SolarWinds and FireEye.

One, especially with FireEye, demonstrated to the entire world something that we didn't really have our eyes on, so to speak: folks that have really good security sit back, and the chief information security officer (CISO) and the security team say, "We have really good security, we spent a lot of money, we have done a lot of things, we feel pretty good about what we have done." That's all great, but what was demonstrated with FireEye is that even the best can be compromised.

If you have a nation state-led attack or you are targeted by a cybercrime family, then all bets could be off. They can get in and they have demonstrated that with these latest attacks. 

The other thing is, they were able to steal tools. Nothing worse can happen than the bad guys having new toolsets that they can actually use. We believe that with the increased threat from the bad actors because of these things, we really, really need the notion of a cyber vault or the third copy, if you will. Think about the 3-2-1 rule — three copies, two different locations, one off-site or offline. This is really where we need to be.

Gardner: Andrew, it sounds like we have to assume that we are going to be or are already attacked. Just having a good defense isn’t enough. What’s the next level that we need to attain? 

Peters: A lot of times organizations think their security and their defenses are strong enough to mitigate virtually anything that happens to the organization. But what's been proven now is that the bad guys are clever and are finding ways in. With SolarWinds, they found a backdoor into organizations and are coming in as a trusted entity. Just because you have signed Security Assertion Markup Language (SAML) tokens and signed certificates that you trust, you are still letting them in. It's just been proven that you can't exactly trust them. And when they come inside an organization and they win, what do you do next? What do you do when you lose? The concept here is to plan to win, but at the same time prepare to lose.

Gardner: David, we have also seen an uptick in the success of ransomware payouts. How is that also changing the landscape for how we protect ourselves? 

Finley: I was recently thinking about that, and I saw something written on security; it might have been a Wall Street Journal article. They said CISOs in organizations have a decision to make after these kinds of attacks. The decision really becomes pretty simple. Do they pay the ransom or do they not pay the ransom?

We would all like to say, “Don’t pay the ransom.” The FBI says don’t pay the ransom, because of the obvious reasons. If you pay it, they may come back, they are going to want more, and it sets a bad precedent, all those things. But the reality is when this actually happens to a company, they have to sit down and make the hard decision: Do I pay or do I not pay? It’s based upon getting the business running again. 

We want to position ourselves together with Unisys to create a cyber vault that is secured in a way that our customers will never have to pay the ransom.


If we have a protected set of data that is the most important data to the firm – the stuff they have to have tomorrow morning to actually run the business – and it's in a protected vault secured by zero trust through Unisys Stealth software, then we can get it back out and put it back into play. That's the best answer.

So that means not paying the ransom and still having the data available to bring the business back into play the next day. A lot of these attacks, as we know, are not only stealing data, like they did recently with FireEye, but also encrypting, deleting, and destroying the data.

Gardner: Another threat vector these days is that more people are working remotely, so there are more networks involved and more vulnerable endpoints. People are having to be their own IT directors in their own homes, in many cases. How does the COVID-19 work-from-home (WFH) trend impact this, Andrew? 

Work from home opens doors 

Peters: There are far more points of entry. Whereas you might have had anywhere from 10 percent to 15 percent of your workforce remotely accessing the network, and that access was fairly controllable, now you have up to 100 percent of your knowledge workers working remotely and accessing the network. There are more points of entry. From a security perspective, more rules need to be addressed to control access into the network and into operations. 

One of the challenges an organization has is that once the bad guys are inside these big, flat networks, they can map the network. They learn the systems that are there, they learn the operations extremely well, and they manipulate them, taking advantage of zero-day vulnerabilities in the systems to operate within that environment without even being discovered. Once again, going back to SolarWinds, they were operating for about eight months before they were eventually discovered.


Gardner: And so, after going on 30 years of using wide area networks (WANs), are we still under a false sense of security? David, do we not understand the threats around us?

Finley: There is the notion within our organizations and within the public sector that we believe what we have done is good enough. And good enough can be our enemy. I can’t tell you the number of times I have spoken with folks during incident response or after incident response from a cyberattack where they said, “We thought we were secured. We didn’t know that this could happen to us, but it did happen to us.”

That false sense of security is very real, evidenced by these high-level attacks on firms that we never thought it would happen to. It’s not just FireEye and it’s not just SolarWinds. We have had attacks on COVID-19 clinical trial providers, we have had attacks on our own government entities. Some of these attacks have been successful. And a lot of these attacks don’t even get publicized.


Here is the most dangerous thing in this false sense of security we are talking about. I ask customers what percentage of the attacks they actually believe they have visibility into within their own region. And the honest answer is usually less than 20 percent.

Because I do this every day for a living, as does Andrew, we probably have visibility into maybe 50 percent, because a lot of times these attacks happen and get swept under the rug. They quietly get cleaned up, right? So we don't know what's happening. That also leads us to a false sense of security.

So again, I believe that we do everything we can upfront to secure our systems, but in the event that something does get through, we need to make sure that we have a secure offline copy of these backups and of our data.

Be prepared to resist ransom

Peters: An interesting dynamic I have noticed since the pandemic is that organizations, while they recognize it’s important to have that cyber recovery third copy to bring themselves back from the brink of extinction, say they can’t afford to do it right now. The pandemic has squeezed them so much. 

Well, we know that they are invested in backup. We know they are invested in DR, but they say, “Okay, we may table this one because it’s something that is a bit too expensive right now.”

However, on the other side, there are organizations that are picking up on this at this time, saying, “You know what? We see this is way more critical because we know the attacks are picking up.”


But for the organizations that are feeling so squeezed that they can't afford to invest in a solution like this, the question is: Can they afford not to invest, given all the exposure of their organizations to these threats? And we keep going back to SolarWinds, which is a big wake-up call.

But go back to other attacks that happened to organizations in the recent past, such as the WastedLocker backdoor, and the procedures the bad guys are using to get into organizations: to learn how they operate, to find additional backdoors, and even to learn to avoid the security technologies that were put in there specifically to detect such breaches. They can operate with impunity within that environment. Then they eventually learn that environment well enough to shut it down to the point that the company has two choices. That company can either pay the ransom or go out of business.

And if you are a bad guy, what would be your goal? Do you want to expose the company’s information and embarrass them? No, you want to make money. And if they are in the process of making money, how do they do it? You have to squeeze an organization as much as possible. And that’s what ransomware and these backdoors are designed to do — squeeze an organization enough to where they are forced to pay the ransom.

Gardner: So we need a better, fuller digital insurance policy. Yet many organizations have insurance in the form of DR designed for business continuity, but that might not be enough.

So what are we talking about when we make this shift from business continuity to cyber recovery, David? What are the fundamental challenges organizations need to overcome to make that transition? 

Cyber more likely than natural disaster

Finley: The number-one challenge I have seen over the past four or five years is that we need to realize that DR — and all the tenets of DR — will not cover us in the event of a cyber disaster. So those are two very different things, right? 

Oftentimes I challenge people with the notion of how they differ. And just to paint a picture, we have been doing DR basically the same way for many decades. The way it normally works is we have our key systems and their data connected to another site outside of a disaster radius, such as for earthquakes, floods, tornados, and hurricanes. We copy that data through a wide-open pipe to the other side on a regular basis. It’s an always-open circuit to the other side, and we have been doing it that way for 40 years.

What I often ask customers is based on that, how much do you spend every year to do DR? What does it really cost? Do you test? What are the real costs for DR for you? And there is usually a tangible answer.


With that in mind, the next question is, "If you look at the probability of something happening to you in the future, what do you think is more probable — a natural disaster event or a cyber disaster?" And the answer is unanimous; in recent years it has been 100 percent: it's going to be a cyber disaster. Of course, the next question is, "How do you deal with cyber recoveries, and is that a function of DR within your organization?" And the answer usually is, "Well, we don't deal with it very well."

So the IT infrastructure and security groups have in the last year been making cyber recovery part of DR planning — and it has taken a long time to get there. When you think about that, if the probability of cyber events is much higher than that of disaster events — and we spend $1 million a year on DR — how much do we spend on cyber recovery? Historically, the answer has been very little on true cyber recovery.

That’s what has to change. We have to change how we approach this. We have to bring the security and risk folks into those decisions on protecting data. We need to look at it through the lens of a cyber event destroying all of the data, just as a hurricane may destroy all of the data. 

Peters: You know, Dave, in talking to a lot of organizations on what exactly they are going to do if they have a ransomware meltdown, we ask, “How are you going to recover?” They say, “We are going to go to our DR.” 

Hmm, okay. But what if you discover in your recovery process those files are polluted? That’s going to be a bad situation. Then they may go find some tapes and stuff. I ask, “Okay, do you have a runbook for this?” They say, “No.” Then how will they know exactly what to do?


And then the corollary to that is, how long is this recovery going to take? How long can you sustain your operations? How long can you sustain your company, and what kinds of losses are you prepared to sustain? 

Wow, and you are going to figure this all out when you are going through the process of trying to bring your organization back after a meltdown? That’s usually the tipping point where you are going to say, like other organizations have said, “You know what? We are just going to have to pay the ransom.”

Finley: Yes, and that also raises a question that we often see folks miss. And that is, "Do you believe that your CEO and/or your board of directors — the folks who don't do IT as an everyday job, the folks who are running the business — understand the difference between DR and cyber recovery?"

If I were to ask people on the board of any organization if they were secure in their DR plans, most of them would say, “Yes, that’s what we pay our teams to do.”

If I were to ask them, “Well, do you believe that being able to recover from cyber disasters is included in that and done well?” The answer would also be, “Yes.” But oftentimes that is simply not the truth.

They don’t understand the difference between DR and cyber recovery. The data can all be gone from a cyber event just as easily as it can be gone from a hurricane or a flood. We have to approach it from that perspective and start thinking through these things.

We have to take that to our boards and have them understand, “You know what? We’ve spent a lot of money for 40 years on DR, but we really need to start spending money on cyber recovery.”

Yet we still get a lot of pushback from customers saying, “Well, yes, of course making a third copy and storing it somewhere secure in a way that we can always get it back — that’s a great idea — but that costs money.”

Well, you have been spending millions of dollars on DR, so make cyber recovery part of that effort.

Gardner: To what degree are the bad guys already targeting this discrepancy? Do they recognize a capability to go in and compromise the backups, the DR, in such a way that there is no insurance policy? How clever have the bad guys become at understanding this vulnerability?

Bad guys targeting backups

Peters: What would you do if you were the bad guy and you wanted to extort money from an organization? If you know they have any way of quickly recovering, then it’s going to be pretty hard to extort from them. It’s going to be hard to squeeze them. 

These guys are not broke; they are often professional organizations. There's a lot of focus on the GRU, Russia's military intelligence agency, and on Cozy Bear, and a number of these organizations are well-funded. They have very clever people there. They are able to obtain technologies, reverse engineer them, understand how the security technologies operate, and understand how to build tools to avoid them. They want to get inside of organizations and learn how the operation runs and learn specifically what's key and critical to an organization.


The second thing is that, while they want to take out the primary systems, they also want to make sure you are not able to restore them. This is not rocket science.

So, of course they are going to target backups. Are they going to pollute the files that you are actually going to put in your backups, so that if an organization tries to recover, they create a situation that is as bad as, if not worse than, it was previously? What would you do? You have to figure that this is exactly what the bad guys are doing in organizations — and they are getting better at it.

Finley: Andrew, they are getting better at it. We have been watching this pretty closely for the last year now. If you go out to any of the pundits or subscribe to folks like Bleeping Computer, Security Today, CIO.com, or CISO, you see the same thing. They talk about it getting worse. It's getting worse on a regular basis.

They are targeting backups. We are finding it actually written in the code. The first part of what they are going to do when they drop this on the network is seek out security tools and disable them. Then they are going to seek out shadow copies and backup catalogs and go after those.

And this is the one that a lot of people miss. I just read something recently from the FDIC, which they are publishing to their member banks. They said DR has been done well for a number of decades. You copy information from one bank to another, or from one banking location to another, and you are able to recover from disasters and spin up applications and data in a secondary location. That's all great.

But realize that if you have malware attacking you in your primary location, it very often will make its way to your DR location, too. The FDIC said this point-blank: "And you will get infected in both locations."

A lot of people don't think about that. I had a conversation last year with a CISO who said that if an attack gets to your production environment, the attackers can manage to move laterally and get to your DR site. And then the data is gone. And this particular CISO said, "You know, we call that an 'Oh, crap' moment because there is nothing we can do."

That’s what we now need to protect against. We have to have a third copy. I can’t stress it nearly enough.

Gardner: We have talked about this third copy concept quite a bit. Let’s hear more about the Dell-Unisys partnership. What’s the technology and strategy for getting in front of this so that cyber recovery becomes your main insurance policy, not your afterthought insurance policy?

Essential copy keeps data dynamic

Finley: We want everyone to understand the reality. The bad guys can get in, they can destroy DR data, we have seen it too many times. It is real. These backups can be encrypted, deleted, or exfiltrated. And that is the fact, so why not have that insurance policy of a third copy?

There's only one way to truly protect this information. If the bad guys can see it, get to the machines that hold it, and get to the data – whether the data is locked on disk or not – they can destroy it. It's a really simple proposition.


We identified many years ago that the only way to really, truly protect against that is to make a copy of the data and get it offline. That is evidenced today by the guidance being given to us by the US federal government, the Homeland Security agency, and the FBI. Everybody is giving us the same guidance. They are saying take the backups, the copies of your data, and store them somewhere away from the data that you are protecting – and ideally on the other side of an air gap and offline.

When we create this third copy with our Dell solution for cyber recovery, we take the data that we back up every day and move that key data to another site, across an air gap. The idea is that the connection between the two locations is dark until we run a job to actually move the data from production to a cyber recovery vault.

With that in mind, there is no way in until we bring up that connection. Now, that connection is secured through Unisys Stealth and through key exchanges and certificate exchanges to where the bad guys can’t get across that connection. They can’t get in. In other words, if you have a vault that’s going to hold all your important data, the bad guys can’t get in. They can’t get through the door. Even though we open a connection, they can’t use that connection to ride into our vault. 

And with that in mind we can take that third copy and store it in this cyber vault and keep it safe. Now, getting the data there and having the systems outside the vault communicate to the machines inside the vault – to make sure that all of that is secure – is something we partnered with Unisys on. I will let Andrew tell you about how that works.
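Before Andrew picks that up, here is a minimal sketch of the general pattern Finley describes: the path into the vault stays dark except during a short, scheduled replication window. It is not the Dell or Unisys implementation; the link-control functions, paths, and window length are placeholders.

```python
# Illustrative sketch only -- not the Dell/Unisys implementation. The link into
# the vault is brought up only for a brief replication window, the critical data
# is copied across, and the route is torn down again so the vault goes dark.
import subprocess
import time

SOURCE_DIR = "/backups/critical/"            # hypothetical pre-identified critical data
VAULT_TARGET = "vault-host:/vault/landing/"  # hypothetical destination behind the air gap
WINDOW_SECONDS = 15 * 60

def enable_vault_link():
    """Placeholder for whatever actually opens the secured tunnel or route."""
    print("vault link up")

def disable_vault_link():
    """Placeholder for dropping the route so the vault is unreachable again."""
    print("vault link down")

def replicate_to_vault():
    deadline = time.monotonic() + WINDOW_SECONDS
    enable_vault_link()
    try:
        subprocess.run(
            ["rsync", "-a", "--delete", SOURCE_DIR, VAULT_TARGET],
            check=True,
            timeout=max(1, deadline - time.monotonic()),
        )
    finally:
        disable_vault_link()  # always go dark, even if the copy fails

if __name__ == "__main__":
    replicate_to_vault()
```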

Secure data swiftly in cyber vault

Peters: Okay. First off, Dave, you are not talking about putting all of the data into the vault, right? Specifically people are looking at only the data that’s critical to an operation, right?

Finley: Yes. And a quick example of that, Andrew, is an unnamed company in the paint industry. They create paint around the world and one of their key assets is their color-matching databases. That’s the data they put into the cyber vault, because they have determined that if that proprietary data is gone, they can lose $1 million per day.


Another example is an investment firm we work with. This investment firm puts their trade databases inside of the cyber vault because they have discerned that if their trade databases are infected, affected, or deleted or encrypted – and they go down – then they lose multiple millions of dollars per hour. So, to your point, Andrew, it’s usually about the critical business systems and essential information, things like that. But we also have to be concerned with the critical IT materials on your networks, right?

Peters: That's right, other key assets like your Active Directory and your domain servers. If you are a bad guy, what are you going to attack? They want to cripple you so much that even if you had that essential data, you couldn't use it. They are going to try to stop you in your tracks.

From a security perspective, there are a few things that are important – and one is data efficacy. First is knowing what I am going to protect. Next, how best am I going to securely move that critical data to a cyber vault? There is going to be automation so I am not depending on somebody to do this. This should happen automatically. 

So, to be clear, I am going to move it into the secure vault, and I want that vault to be air gapped. I want it to be abstracted from the network and the environment so bad guys can’t find it. Even if they could find it, they can’t see anything, and they can’t talk to it. 

The second thing I want is to make sure that the data I’m moving has high efficacy. I want to know that it’s not been polluted because bad guys are going to want to pollute that data. Typically, the things you put into the backup – you don’t know, is it good, is it bad, has it been corrupted? So if it’s going to be moved into the vault, we want to know if it’s good or if it’s bad. That way, if we are going to be going into a recovery, I can select the files that I know are good and I can separate them from the bad.

This is really important. That’s one of the critical things when you’re going into any form of cyber recovery. Typically you aren’t going to know what’s good data unless you have a system designed to discern good from bad.
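For a sense of what such a good-versus-bad check might involve, here is a minimal sketch of two simple efficacy tests: comparing vaulted files against a known-good hash manifest, and flagging files whose byte entropy looks like encryption. It is not CyberSense or any other product; the paths and thresholds are assumptions.

```python
# Illustrative sketch only -- not CyberSense or any vendor's analytics. It flags
# files that changed against a known-good hash manifest or that now look
# encrypted (byte entropy close to 8 bits per byte).
import hashlib
import json
import math
import os

def sha256(path):
    h = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def entropy(path, sample_bytes=1 << 20):
    """Shannon entropy in bits per byte over a sample of the file."""
    with open(path, "rb") as fh:
        data = fh.read(sample_bytes)
    if not data:
        return 0.0
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    return -sum((c / len(data)) * math.log2(c / len(data)) for c in counts if c)

def audit(root, manifest_path, entropy_limit=7.5):
    """Return relative paths that are missing, altered, or suspiciously high-entropy."""
    with open(manifest_path) as fh:
        manifest = json.load(fh)  # {relative_path: expected_sha256}
    suspect = []
    for rel, expected in manifest.items():
        full = os.path.join(root, rel)
        if not os.path.exists(full) or sha256(full) != expected or entropy(full) > entropy_limit:
            suspect.append(rel)
    return suspect

if __name__ == "__main__":
    print(audit("/vault/landing", "/vault/manifest.json"))
```

Real analytics go far beyond this, but even a crude check like entropy helps separate obviously good copies from obviously bad ones before a recovery starts.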


You don’t want to be rebuilding your domain server and have the thing find out that it’s been polluted, that it’s locked, and that it has ransomware embedded in it. Bad guys are clever. You have to ask, “What would I do if I were a clever bad guy?” Sometimes it’s hard to think like that unless you put your bad guy hat on. 

There's another important element here, too: the element of time. How quickly am I going to get to this protected data? I have all of this data, these files and these applications, and they're in my protected vault. Now, how am I going to move them back into my production environment?

But my production environment actually might still be polluted. I might still have IT and security personnel trying to clean up that environment. At the same time, I have to get my services back up and running, but I have a compromised network. And what’s the problem? The problem is time.

Ultimately, all of this comes down to business continuity and time. How quickly can I continue my critical operations? How quickly am I going to be able to get them up and running – despite the fact that I still have a lot of issues with ransomware and with hackers inside my IT operations?

From a security and rapid recovery perspective, there are some unique things that we can do with a cyber recovery approach. A cyber recovery solution automates the movement of your critical data into a secure vault, then analyzes it for data efficacy to determine if the data has been compromised. It also provides you with a runbook so you know how you’re going to get that data back out and get those systems operating so you can get users back online.

So even with a zero-day attack, by using things like cryptography, cloaking, and basically hiding things from the rest of the network, I can use cryptographic micro-segmentation to restore the operations of critical services and get users back up on those services. Even if my network is compromised, I can start doing that very, very quickly.

When you put the whole cyber recovery solution that we have together – with automation, the security built in, to get to the critical data on a daily basis, move it into a vault, analyze it, and then obtain a runbook capability – you can quickly move it all back out and get those critical services back up and running. 

Manage, monitor, and restore data

Finley: One of the things that I hope everyone understands is that we can create a secure vault, put information in it, and do all of that securely. But as Andrew was saying, most folks also want the ability to monitor, manage, and update that secure vault from their security operations center (SOC) or from their network operations center (NOC).

When we first began our relationship with Unisys, around the Stealth software, I was very excited. For a couple years before that, we were working with folks to show them how to use firewalls to protect information going in and out of our cyber vault, or how to configure virtual private networks (VPNs) to make that happen.

But when we got together and I looked at the Unisys Stealth software a few years ago, from a zero trust networks perspective – instead of just agents on the machines – it becomes invisible.


When I first saw how secure those tunnels Unisys creates to our Dell vault are, I quickly realized that it not only allows us to have a new way to manage everything from outside – we can also monitor everything from outside. It allows us to take what we know is clean data inside the vault and restore it quickly through one of those secure Stealth tunnels back out to the outside. That is hugely important. We all know there are various ways to secure communications like this. Probably the least secure nowadays are VPNs, or remote access, if you will. The next most secure, quite frankly, is firewall access, or port access, and then the most secure is, I believe, zero trust software like we get with Unisys Stealth.

Peters: It's not that I want to beat up on firewalls, because firewalls and ancillary technologies are very effective in protecting organizations – but they're not 100 percent effective. If they were, we wouldn't be talking about ransomware at all. The reason we are is because breaches occur. The bad guys go after the low-hanging fruit, and they're going to hit those organizations first. Then they're going to get better at their craft, and they're going to go after more and more organizations.

Even when organizations have excellent security, you can’t always prevent against the things that people do. Or now, with SolarWinds, you can’t even trust the software that you’re supposed to trust. There are more avenues into an organization. There are more means to compromise. And the bad guys can monetize what they are doing through Bitcoin in these demands for ransoms.

So, at the end of the day, the threats to organizations are changing. They’re evolving, and even with the best defenses an organization has, you’re probably going to have to plan on being compromised. When the compromise happens, you have to ask, “What do we do now?”

Gardner: Are there any examples that you can point to and show how well recovery can work? Do we have use cases or actual customer stories that we can relate to show how zero trust cyber recovery works when it’s done properly?

Get educated on recovery processes

Finley: Sure, one happened not too long ago, at a school system in California, one of the biggest in that part of the state. That school system worked with us to procure the cyber recovery solution, created a cyber vault with the third copy, and secured all of it. We installed it, got it all up and running, and moved data into the vault on a Thursday. Then, over the weekend, that school system had a cyber event, right after they had gotten the vault up and running and copied all of the critical data into it.

The data in the vault was secure. They were able to recover it as soon as they forensically could, according to the FBI, because the data was secure. It saved a bunch of time and a lot of effort and money.

Now, I contrast that with a couple of other major attacks on other companies that happened in the last 120 days. In one, they had no cyber vault; the customer data was attacked in production and a lot of the DR was attacked as well. That particular set of events was carried out through a whole series of social engineering, but their systems were taken down and encrypted, and a lot of the data was destroyed.


It took them days, if not weeks, to begin the recovery process because of a lot of things that happen that we all need to be aware of. If you don't have data that you know is secured somewhere else and is clean, you're going to have to verify that it's clean before you can recover it. You're going to have to do test recoveries to systems and make sure you're not restoring malware. That's going to take a long period of time. You're not even going to be able to do that until law enforcement tells you that you can.

Also, when you’re in the middle of an incident response, regardless of who you are, the last thing you’re going to do is connect to the Internet. So if your data is stuck somewhere in a public cloud or clouds, you’re not going to be able to get it while you’re in the middle of an incident response.

The FBI characterizes your systems as a crime scene, right? They put up yellow tape around the crime scene, which is your network. They are not going to allow anybody in or out until they're satisfied they've gathered all the data to be able to figure out what happened. A lot of folks don't know that, but it is simply true.

So having your critical data accessible offline, on the other side of that crime scene, and having it scrubbed every day to make sure it is absolutely clean, is very important.

In the case of the second company, it took days, if not weeks, before they could recover their information.

There is a third example. The IT people there told me the cyber vault saved their company, and "saved our butts," they said. In this particular case, the data in all of their systems was encrypted. They were using backup software to write to a virtual client, and they were copying that data from the virtual clients into our cyber vault.

They also had our physical clients, called Data Domain from Dell, in production and writing into the cyber vault. They did not have our analytics software to scrub the data and make sure it was clean, because it was an older implementation. At the end of the day, everything in production was gone. But when they went to the vault, they found that the data there was all still good.

The bad guys couldn’t get there. They couldn’t see the cyber vault, didn’t know how to get there, and so there was no way they could get to that information. In this case, they were able to spin up and restore it rather quickly.

In another incident, the customer had our CyberSense software in the cyber vault; it does cyber analytics on the data being stored. We can verify the data is clean at a 99.7 percent effectiveness level and tell the customer the data is restorable and clean. In this case the FBI got involved.

The FBI actually used the information from our CyberSense software to help them ascertain the who, what, when, and where of what happened. Once they knew that, and knew the stored data was clean, we were able to do a more rapid recovery.

Plan ahead with precise processes

Peters: What's important, too, is knowing what to do. For example, what applications are you going to recover first? What do you need to do to get your operations running? Where are you going to find the needed files? Who's going to actually do the work? What systems are you going to recover them onto?

Have a plan of action versus, “Okay, we’re going to figure this out right now.” Have a pre-prescribed runbook that’s going to take you through the processes, procedures, and decisions that need to be made. Where is the data going to be recovered from? What’s going to be determined? How is it recovered? Who’s going to get access to it?

All of these things. There's a whole plan that goes into this. This is different than DR. This is different than backup; it's way different, it's its own animal. And this is another place where Dell expertise comes in: being able to do the consulting work with an organization to define the plan, or the runbook, so that they can recover fully.
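
To make the runbook idea concrete, here is a minimal, purely hypothetical sketch of a recovery runbook captured as structured data, so recovery order, owners, sources, and targets are decided before an incident rather than during one. The applications, teams, and verification steps below are invented for illustration and are not part of the Dell or Unisys offerings.

```python
# Illustrative only: a minimal, hypothetical cyber-recovery runbook captured as data,
# so recovery order, owners, sources, and targets are decided before an incident occurs.
from dataclasses import dataclass
from typing import List

@dataclass
class RunbookStep:
    application: str      # hypothetical application name
    recovery_order: int   # 1 = recover first
    recovery_owner: str   # who performs the restore
    source: str           # where the known-good copy lives (e.g., the cyber vault)
    target: str           # system the data is restored onto
    verification: str     # how "clean and working" is confirmed

RUNBOOK: List[RunbookStep] = [
    RunbookStep("Active Directory / DNS", 1, "Identity team", "cyber vault",
                "rebuilt domain controllers", "logons succeed; zone transfers verified"),
    RunbookStep("ERP", 2, "App team A", "cyber vault",
                "clean-room compute", "batch close runs end to end"),
    RunbookStep("Payroll", 3, "App team B", "cyber vault",
                "clean-room compute", "test pay run reconciles"),
]

def print_runbook(steps: List[RunbookStep]) -> None:
    """Print the recovery sequence in priority order."""
    for step in sorted(steps, key=lambda s: s.recovery_order):
        print(f"{step.recovery_order}. {step.application}: {step.source} -> {step.target} "
              f"(owner: {step.recovery_owner}; verify: {step.verification})")

if __name__ == "__main__":
    print_runbook(RUNBOOK)
```

Even a simple artifact like this forces the "who recovers what, from where, onto what, and in what order" decisions to be made ahead of time instead of in the middle of an incident.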

Finley: I also want to point out a consideration about ransomware payments. Making the payment is not always a clean option, because of the U.S. Treasury's Office of Foreign Assets Control rules. If an organization pays the ransom, and the recipients of that payoff are considered a threat to the United States, the organization may be breaking another law by paying them.

So that needs to be taken into consideration if an organization is breached for ransom. If they pay the ransom, they may be breaking a federal law.

Gardner: Do the Dell cyber recovery vault and Unisys Stealth technologies enable a crawl, walk, and run approach to cyber recovery? Can you identify those corporate jewels and intellectual property assets, and then broaden it from there? Is there a way to create a beachhead and then expand?

Build the beachhead first

Finley: Yes, we like to protect what we call critical rebuild materials first. Build the beachhead around those critical materials first, then get those materials, such as Active Directory and DNS zone tables, into the vault.

Next, put the settings for networks, security logs, and event logs into the vault — the things from your production environment that you could pull out of the vault to make everything work again.

If you have studied the Maersk attack in 2017, they didn't have any of that, and that was a very bad day. They finally found copies in Africa, but if they hadn't found them it would have been a very bad month, or year. Keep that kind of thing in mind; it has happened to many folks besides them, theirs just happened to be the most publicized case.

So with that in mind, get those materials into the vault as a beachhead, if you will. Let's build together the notion of this third location. Let's secure it with Unisys Stealth, and let's secure it with an air gap that's engulfed in Stealth, with all of the connections in and out of the vault protected by Stealth using zero trust. Let's take those critical materials and build that beachhead there. I've seen great success doing that, and then gathering a total of maybe three to five of the most critical business applications that a firm may have and concentrating on them first.
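
As a hedged illustration of that beachhead idea, the sketch below simply enumerates hypothetical critical rebuild materials and stages copies toward a vault path. It is not the actual Dell or Unisys tooling; in a real deployment the vault sits behind an air gap and Stealth-protected, zero-trust connections, and every path shown here is invented.

```python
# Illustrative sketch only: staging hypothetical "critical rebuild materials" toward a
# vault location. A real cyber vault uses vendor tooling, an air gap, and zero-trust
# tunnels; this only shows the prioritization idea.
import shutil
from pathlib import Path
from typing import Dict

# Hypothetical export paths; replace with whatever your backup tooling produces.
CRITICAL_MATERIALS: Dict[str, Path] = {
    "active_directory_backup": Path("/backups/ad/system_state.bak"),
    "dns_zone_tables":         Path("/backups/dns/zones.tar.gz"),
    "network_settings":        Path("/backups/network/configs.tar.gz"),
    "security_event_logs":     Path("/backups/logs/security_events.tar.gz"),
}

# Hypothetical staging mount that is only reachable during the copy window.
VAULT_STAGING = Path("/vault_staging")

def stage_to_vault(materials: Dict[str, Path], staging: Path) -> None:
    """Copy each critical rebuild artifact into the vault staging area, flagging missing files."""
    staging.mkdir(parents=True, exist_ok=True)
    for name, src in materials.items():
        if not src.exists():
            print(f"WARNING: {name} not found at {src}; fix this before relying on the vault")
            continue
        shutil.copy2(src, staging / src.name)
        print(f"staged {name} -> {staging / src.name}")

if __name__ == "__main__":
    stage_to_vault(CRITICAL_MATERIALS, VAULT_STAGING)
```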

Here’s what we don’t want to do. I see no success in sitting down and saying, “Okay, we’re going to go through 150 different applications, with all of their dependencies, and we’re going to decide which of those pieces go into the cyber vault.”

It can be done, it has been done, and we have consulting between Dell and Unisys that can help do that, but let's not start that way. Let's instead start like we did recently with a big, big company in the U.S. We started with the critical materials, we chose five major applications first, and for the first six months that's what we did.

We protected that environment and those five major applications. And as time goes on, we will move other key applications into that cyber vault. But we decided not to boil the ocean, not look at 2,000 different applications and put all that data into the vault.

I recently talked to a firm that does pharmaceuticals. Intellectual property is huge for them. Putting their intellectual property into the cyber vault is really key. It doesn’t mean all of their systems. It means they want intellectual property in the vault, those critical materials. So build the beachhead and then you can move any number of things into it over time.

Peters: We have a demonstration to show what this whole thing looks like. We can show what it looks like to make things disappear on your network through cloaking; to move data from a production environment into a vault and retention-lock it; to analyze the data and find out if something bad is on it; and to select the last known good copy of data and start rebuilding systems in your production environment.

If somehow you had an environment you're recovering and malware manages to slip inside of it, we can detect that and shut it down in about 10 to 15 seconds. For organizations interested in seeing this working in real time, we have a real, live demo.

Finley: That's a powerful, powerful demo for all of the folks who are listening. You can see this thing work from beginning to end, from how the pieces are put in place to how the data moves and how the scrubbing of the data makes sure it's clean. It was fascinating for me the first time I saw it. It was great.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Unisys and Dell Technologies.

Rethinking employee well-being means innovative new support for the digital work-life balance

The tumultuous shift over the past year to nearly all-digital and frequently at-home work has amounted to a rapid-fire experiment in human adaptability.

While there are many successful aspects to the home-exodus experiment, as with all disruption to human behavior, there are also some significant and highly personal downsides.

The next BriefingsDirect work-from-home strategies discussion explores the current state of employee well-being and examines how new pressures and complexity from distance working may need new forms of employer support, too.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about coping in an age of unprecedented change in blended work and home life, we're now joined by Carolina Milanesi, Principal Analyst at Creative Strategies and founder at The Heart of Tech; Amy Haworth, Senior Director, Employee Experience at Citrix; and Ray Wolf, Chief Executive Officer at A2K Partners. The panel discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Amy, how are predominantly digital work habits adding to employee pressures and complexities? And at this point, is it okay not to be okay with all of these issues and new ways of doing things?

Haworth: Thanks, Dana. It’s such an important question. What we have witnessed in the last 12 months is an unfolding of the humanness of a very powerful transformational experience in the world. It is absolutely okay not to be okay. To be able to come alongside those who are courageous enough to admit it is one of the most important roles that organizations are being called upon to play in the lives of our employees. 

Oftentimes, I think about what's happened in 2020 and 2021. It's as if the tide went out, exposing fissures in our connectedness and in the way organizations operate — even in the support systems we have in place for employees.

We’ve learned that unless employees are okay, our organizational health is at risk, too. Taking care of employees and enabling employees to take care of themselves shifts the conversation to new, innovative ways of doing that.

The last 12 months have shown us that we’ve never faced something like this before, so it’s only natural that we lacked a lot of the support systems and mechanisms to enable us to get through it.

There has been some amazing innovation to help close that gap. But it’s also been as if we’ve been flying the plane, while also figuring out how to do this all better. So, absolutely, yes, there are new challenges — but also a lot of growth. Being able to come alongside and being able to raise the white flag when needed makes it worth doing.

Gardner: Carolina, the idea for corporations of where their responsibility is has shifted a great deal. It used to be that employees would drive out of the parking lot — and they’d be off on their way, and there was no further connection. But when they’re working at home and facing new forms of fatigue or emotional turmoil, the company is part of that process, too. Do you see companies recognizing that?

Milanesi: Absolutely. To be honest with you, it’s been a long time in coming because although I might drive away from the parking lot — for a lot of employees — that’s not when the work stops.

Either because you’re working across different time zones or because you’re on call, if you’re a knowledge worker, chances are that your days are not a nine-to-five kind of experience. That had not been fully understood. The balance that people have to find in working and their private life has been under strain for quite some time.

Now that we’ve been at home, there’s no escape [from work]. That’s the realization companies have come to — that we are in this changed world and we are all at home. It’s not just that I decided to be a remote worker, and it’s just me. It’s me and whoever else is living with me — a partner, or maybe parents that I’m looking after, and children, all co-sharing apartments and all of that.

So, the stress is not just mine. It’s the stress of all of the people living with me. That is where more attentiveness needs to come in, to understand the personal situations that individuals are in — especially for under-represented groups.

For example, if you think about women and how they feel about talking — or not talking — about their children or caregiver responsibilities, they often shy away from talking about it. They may think it reflects badly on them.

All of those stresses were there before, but they became exacerbated during the pandemic. This has made organizations realize how much weight is on the shoulders of their employees, who are human beings after all.

Gardner: Ray at A2K Partners, you probably find yourself between the companies and their employees, helping with the technology that joins them and makes them productive. How are you seeing the reaction of both the employees and the businesses? Are they coming together around this — or are we just starting that process?

Wolf: I think we’re only in the second inning here, Dana. In our conversations with chief human resources officers (CHROs), they come to the conversation saying, “Ray, is there a better way? Do we really need to live with the way things are for our employees, particularly with the way they interface with technology and the applications that we give them to get their jobs done?”

We’re able to reassure them that, yes, there is a better way. The level of dissatisfaction and anxiety that employees have working with technology doesn’t have to be there anymore. What’s different now is that people are not accepting the status quo. They’re asking for a better way forward. The great news — and we’ll get into this a little bit later — is there are a lot of things that can be done.

The concept of work-life balance, right? It’s no longer two elements at the end of a see-saw that’s in balance. It looks more like a puzzle, where you’re shifting in and out — often in 15-minute or 30-minute intervals — between your personal life and your work life.

So how can technology better facilitate that? How can we put people into their flow state so they have a clear cognitive view of what they need to get done, set the priorities, and lead them into a good state when they need to return to their family activities and duties?

Gardner: Amy, what hasn’t changed is the fundamental components of this are people, process, and technology. The people part, the human resources (HR) part, perhaps needs to change because of what we’ve seen in the last year.

Do you see the role of HR changing? Is it even being elevated in importance within the organization?

Empowered employees blend life, work 

Haworth: The role of HR really has elevated. I see it as an amplification of employee voice. HR is the employee advocate and the employee’s voice into the organization.

It’s one thing to be the voice when no one’s listening. It’s much more interesting to be the voice when people are listening and to steer the organization in the direction that puts talent at the center, with talent first.

We’re having discussions and dialog about what’s needed to create the most powerful employee experience, one where employees are seen or heard and feel included in the path forward. One thing that’s so clear is we are shaping this all together, collectively. We are together shaping the future in which we will all live.

Being able to include that employee voice as we craft what it means to go to work or to do work in the years ahead means in many ways that it’s an open canvas. There are many ways to do hybrid work, which clearly seems to be the direction most organizations are going. Hybrid is quite possibly the future direction education is heading, too.

A lot of rethinking is happening. As we harness that collective voice, HR’s leadership is bringing that to the table, bringing it into decisions, and entering into a more experimental mindset. Where we are looking to in the future and how we find ways to innovate around hybrid work is increasingly important.

Gardner: Carolina, when we look at the role of technology in all of this, how should an HR organization such as Amy’s use technology to help — rather than contribute to the problem?

Milanesi: That’s the key question, right? Technology cannot come as another burden that I have to deal with when it comes to employees.

I love Ray’s analogy of the puzzle of the life we live. I stopped talking about work-and-life balance years ago and started talking instead about working-life-blend because if you blend there’s room to maneuver and change. You can compromise and put less stress on one area versus the other.

So, technology needs to come in to help us create that blend – and it has to be very personal. The most important thing for me is that one size doesn't fit all. We're all individuals, we're all different. And although we might share some commonalities, the way that my workflow is set up is very different from yours. It has to speak to me, because otherwise it becomes another burden.

So, one part is helping with that blend. Another part for technology to play is not making me feel that the tool I’m using is an overseer. There are a lot of concerns when it comes to remote working, that organizations are giving you tools to manage you — versus help you. That’s where the difference lies, right? For me, as an employee, I need to make sure that the tool is there to just help me do my work.

It doesn't have to be difficult. It has to be straightforward. It keeps me in the flow, and helps me with my blended life. I also think that the technology needs to be context-aware. For example, what I need in the office is different from what I need when I'm at home or at the airport — or wherever I might be doing work.

The idea that your task is dependent or is influenced by the context you’re in is important as well. But simplicity, security, and my privacy are all three components that are important to me and should be important to my organization.

Gardner: Ray, Carolina just mentioned a few key words: context, feelings, and the idea of an experience rather than fitting into what the coder had in mind. It wasn’t that long ago that applications pretty much forced people to behave in certain ways in order to fit set processes. 

What I’m hearing, though, is that we have to have more adaptable processes and technologies to account for a person’s experiences and feelings. Is that not possible? Or is it pie-in-the-sky to bring the human factor and the technology together?

Technology helps workers work better

Wolf: Dana, the great news is the technology is here today with the capability to do that. The sad part is the benchmark is still pretty low. The fact is, when it comes to providing technology to enable workers to get their jobs done, there is really very little forethought as to how it's architected and orchestrated.

People are often simply given login information to the multiple applications that they need to use to get things done during the day. The most that we do in terms of consideration for these employees is create a single sign-on. So, for the first five minutes of your day, we have a streamlined, productive, and secure way to login — but then it’s a free for all. Processes are standard across employee types. There’s no consideration for how the individual employee wants to get work done, of what works best for them.

In addition, we subject very highly talented and creative people to a lot of low-value, repetitive tasks. One of the things that CHROs bring up to me all the time is, “How can I get my employees working at the top of their skills range, as opposed to the bottom of their skills range?”

Today there are platforms such as Citrix Workspace that allow you to automate out those mundane tasks, take into consideration where the employees should be spending their time, and allowing them to contribute more to the critical business needs of an organization.
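
As a generic, hypothetical sketch of the kind of low-value task that can be automated out of an employee's day, the example below auto-routes routine expense reports and only surfaces the exceptions that need human judgment. It is not Citrix Workspace micro-app code; the policy threshold and categories are invented.

```python
# Hypothetical example of automating a mundane approval task so people only see exceptions.
# This is a generic sketch, not Citrix Workspace micro-app code; the policy values are invented.
from dataclasses import dataclass

@dataclass
class ExpenseReport:
    employee: str
    amount: float
    category: str
    has_receipts: bool

AUTO_APPROVE_LIMIT = 75.00          # hypothetical policy threshold
AUTO_APPROVE_CATEGORIES = {"meals", "mileage", "office supplies"}

def triage(report: ExpenseReport) -> str:
    """Return 'auto-approved' for routine reports, otherwise 'needs review'."""
    if (report.has_receipts
            and report.amount <= AUTO_APPROVE_LIMIT
            and report.category in AUTO_APPROVE_CATEGORIES):
        return "auto-approved"
    return "needs review"

if __name__ == "__main__":
    reports = [
        ExpenseReport("Avery", 42.10, "meals", True),
        ExpenseReport("Sam", 480.00, "travel", True),
    ]
    for r in reports:
        print(f"{r.employee}: {triage(r)}")
```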

Gardner: Amy, to that point of the way employees perceive of their work value, are you seeing people mired in doing task-based work? Or are you seeing the opportunity for people to move past that and for the organization to support them so that they can do the work they feel most empowered by? How are organizations helping them move past task to talent?

Haworth: Great question, and I love how you phrase that move from task to talent. A couple of things come to mind. Number one, organizations are looking to take friction out of the workday. That friction is energy, and that energy could be better spent by an employee doing something they love to do, something that is their core skill set, or why they were hired into that organization to start with.

A recent statistic I heard was that average workflow tends to involve at least four different stops along an application’s path. Think about what it takes to submit an expense report.

As much as possible, we’re looking for ways that take friction out of those interactions so employees get a sense of progress at the end of the day. The energy they’re expending in their jobs and roles should feel like it’s coming back threefold.

Ray touched on the idea of flow, but the conversation in 2021, based on the data we’ve seen, shows that employees feel fatigued because of the workload. What emerged from a lot of the survey work across multiple research firms last year was this sense of fatigue. You know, “My workload doesn’t match the hours that I have in the day.”

So, in HR circles, we’re beginning to think about, “Well, what do we do about that?” Is this a conversation more about energy and energetic spend? Initially [in the pandemic] there was a lot of energy spent just transforming how things were done. And now we get to think about when things are done. When do I have the most energy to do that hard thing? And then, “How is the technology helping me to do it? And is it telling me when it’s probably time to take a break?”

At Citrix, we've recently introduced some really interesting notifications to help with this idea of well-being, so that the integration of technology into the workday helps employees manage their energy – prompting, for example, a five-minute meditation break after they have been working solid for three hours. That might be a better idea than another cup of coffee.

So we're starting to see the helpfulness of technology combined in a way that's invited by employees. Carolina makes a great point about the privacy concerns, which is exactly why it has to be invited. That ultimately enables a state of flow, a feeling of progress, and good use of the talent that each employee brings into the organization.

Gardner: Carolina, when we think about technology of 10 or more years ago, oftentimes developers would generate a set of requirements, create the application, and throw it over the wall. People would then use it. 

But what I just heard from Amy was much more about the employee having agency in how they use the technology, maybe even designing the flow and processes based on what works for them.

Have we gotten to the point where the technology is adaptive and people have a role in how services — maybe micro-services — are assembled? Are people becoming more like developers, rather than waiting for developers to give them the technology to use?

Optimize app workflows

Milanesi: Absolutely. Not everybody is in that kind of no-code environment yet to create their own applications from scratch, but certainly a lot of people are using micro-apps that come together into a workflow in both their private and work lives. 

Smartphone growth marked the first time that each of us started to be more in control of the applications that create workflows in our private lives. Bringing your own device into the enterprise also meant bringing your own applications into the enterprise.

As you know, it was a bit of the Wild West for a while, and then we harnessed that. Organizations that are most successful are the ones that stopped fighting this change and actually embraced it. To Amy’s point, there are ways to diminish and lower the friction that we feel as employees when we want to work in a certain way and to use all of the applications and tools, even ones that an IT department may not want us to. 

There is more friction and time lost in someone trying to work around that problem and creating back doors that bypass IT than in IT empowering me to do that work, as long as my assets and data are secure. As long as it's secure, I should have a list of applications and tools that I can choose from to create my own best workflows.
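
A minimal sketch of that vetted-list idea, under the assumption that IT publishes an approved catalog and an employee's chosen workflow is checked against it rather than blocked outright; the catalog entries and the sample workflow below are hypothetical.

```python
# Hypothetical sketch: validate an employee-chosen workflow against an IT-approved catalog,
# so personal workflows are enabled rather than pushed toward back doors. Not product code.
from typing import Dict, List, Set

APPROVED_APPS: Dict[str, Set[str]] = {
    "notes":   {"OneNote", "Notion"},
    "chat":    {"Teams", "Slack"},
    "storage": {"OneDrive", "SharePoint"},
}

def check_workflow(workflow: Dict[str, str]) -> List[str]:
    """Return the tools in the workflow that are not on the approved catalog."""
    issues = []
    for purpose, tool in workflow.items():
        if tool not in APPROVED_APPS.get(purpose, set()):
            issues.append(f"{tool} (for {purpose}) is not on the approved list")
    return issues

if __name__ == "__main__":
    my_workflow = {"notes": "Notion", "chat": "Slack", "storage": "Dropbox"}
    problems = check_workflow(my_workflow)
    print("workflow approved" if not problems else "\n".join(problems))
```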

Gardner: Ray, how do you see that balance between employee agency and agility and what the IT department allows? How do we keep the creativity flowing from the user, but at the same time put in the necessary guardrails?

Wolf: You can achieve both. This is not employee workflow at the sacrifice of security. That’s the state of technology today. Just in terms of where to get started with the idea of employees designing their workflows, this is exactly how we’re going about it with many customers today.

I mean, what an ironic thought: To actually ask the people involved in the day-to-day work what’s working for them and what’s not. What’s causing you frustration and is high-value to the company? So you can easily identify five places to go get started to automate and streamline.

And the beautiful thing about it is that when you ask the workers where that frustration is, and you solve it, two things happen. One, they have ownership, and adoption is very high, as opposed to leadership-driven decisions. We see this happening every day. It's kind of the "smart guy in the room" syndrome, where the people who don't actually have to do the work tell everybody else how to get things done, rather than how the workers actually want to get things done. It doesn't work that way.

The second is, once employees see — with as little as two to three changes in their daily workflow — what’s possible, their minds open up in terms of all the automation capabilities, all the streamlining that can occur, and they feel invigorated and energized. They become a much more productive and engaged member of the team. And we’re seeing this happen. It’s really an amazing process overall.

We used to think of work as 9 am to 5 pm — eight of your waking hours. Today, work occurs across every waking hour. This is something that remote workers have known for a long time. But now some 45 percent to 50 percent of the workforce is remote, and it's coming to light. Many more people feel like they need to do something about it.

So we need to sense what’s going on with those employees. Some of the technology that we’re working on is evaluating and looking at someone’s schedule. How many back-to-back meetings have they had? And then enforcing a cognitive break in their schedules so people can take a breather — maybe take care of something in their personal lives.

And then, even beyond that — with today’s technology such as smart watches — we could look at things such as blood pressure and heart rates and decide if the anxiety level is too high or if an employee is in their proper flow. Again, we can then make adjustments to schedules, block out times on their calendars — or, you know, even schedule some well-being visits with someone who could help them through the stresses in their lives.
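
As a hedged illustration of that scheduling idea, and not actual A2K or Citrix product logic, the sketch below scans a day's meetings and proposes a cognitive break once a run of back-to-back meetings passes a threshold. The threshold, break length, and calendar data are all hypothetical.

```python
# Illustrative sketch: detect long runs of back-to-back meetings and propose a cognitive break.
# Not actual product logic; the threshold, break length, and calendar data are hypothetical.
from datetime import datetime, timedelta
from typing import List, Tuple

MAX_BACK_TO_BACK = timedelta(hours=2)   # hypothetical policy: break after 2 hours without a gap
BREAK_LENGTH = timedelta(minutes=15)

def propose_breaks(meetings: List[Tuple[datetime, datetime]]) -> List[datetime]:
    """Given (start, end) meetings sorted by start time, return times at which to block a break."""
    breaks: List[datetime] = []
    run_start = None   # start of the current unbroken run of meetings
    prev_end = None
    for start, end in meetings:
        if prev_end is None or start > prev_end:   # a gap in the calendar resets the run
            run_start = start
        if end - run_start >= MAX_BACK_TO_BACK:
            breaks.append(end)                     # block a break right after this meeting
            run_start = end + BREAK_LENGTH
        prev_end = end if prev_end is None else max(prev_end, end)
    return breaks

if __name__ == "__main__":
    day = datetime(2021, 5, 3)
    meetings = [
        (day.replace(hour=9),  day.replace(hour=10)),
        (day.replace(hour=10), day.replace(hour=11, minute=30)),
        (day.replace(hour=13), day.replace(hour=14)),
    ]
    for b in propose_breaks(meetings):
        print("suggest a 15-minute break at", b.time())
```

The same pattern could, in principle, take other signals into account, but anything touching biometrics would need the privacy and consent considerations raised earlier in this discussion.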

Gardner: Amy, building on Ray’s point of enhancing well-being, if we begin using technology to allow employees to be productive, in their flow, but also gain inference information to support them in new ways — how does that change the relationship between the organization and the employee? And how do you see technology becoming part of the solution to support well-being?

Trust enhances well-being

Haworth: There’s so much interesting data coming out over the last year about how the contract between employees and the organization is changing. There has been, in many cases, a greater level of trust. 

According to the research, many employees have trusted what their organizations have been telling them about the pandemic — more than they trusted state and local governments or even national governments. I think that’s something we need to pay attention to.

Trust is that very-hard-to-quantify organizational benefit that fuels everything else. When we think about attraction, retention, engagement, and commitment — some in HR believe that higher organizational commitment is the real driver to discretionary effort, loyalty, and tenure.

As we think about the role of the organization when it comes to well-being and how we build on trust where it’s healthy — how can we uphold that with high regard? How can we better bridge that into a different employer-employee relationship — perhaps one that’s better than we’ve ever seen before?

If we stand up and say, “Our talent is truly the human capital that will be front-and-center to helping organizations achieve their goals,” then we need to think about this. What is our new role? According to Maslow’s hierarchy of needs, it’s hard to think about being a high-performing employee if things are falling apart on the home front, and if we’re not able to cope.

For our organization, at Citrix, we are thinking about not only our existing programs and bolstering those, but we’re also looking for other partners who are truly experts in the well-being space. We can perhaps bring that new information into the organization in a way that integrates with and intersects into an employee’s day.

For us at Citrix, that is done through Citrix Workspace, and in many cases through the rapport with a manager. That's because we know so much of the trust relationship is between the employee and the manager, and it is human first and foremost.

Then we also need to think about how we continue to evolve and learn as we go. So much of this is uncharted. We want to make sure we’re open to learning. We’re continuing to invest. We’re measuring how things are working. And we’re inviting that employee voice in — to help co-create.

Gardner: Carolina, from what we just heard from Amy, it sounds like there’s a richer, more intertwined relationship between the talent pool and the organization. And that they are connected at the hip, so to speak, digitally. It sounds like there’s a great opportunity for new companies and a solutions ecosystem to develop around this employee well-being category.

Do you see this as a growth opportunity for new companies and for organizations within existing enterprises? It strikes me that there’s a huge new opportunity.

Tech and the human touch

Milanesi: I do think there’s a huge opportunity. And that’s good and bad in my view because obviously, when there’s a lot of opportunity, there also tends to be fragmentation.

Many different things are going to be tried. And not everybody has the expertise to help. There needs to be an approach from the organization’s perspective so that these solutions are vetted.

But what is exciting is the role that companies like Citrix are taking on to become a platform for that. So there might be a start-up that has a great idea and then leverages the Citrix Workspace platform to deliver that idea.

Then you have the opportunity to use the expertise that Citrix brings to the table. They have been focused on workflows and employee empowerment for many years. What I’m excited to see is organizations that come out and offer that platform to make the emerging ecosystem even richer.

I also love what Amy said about human trust being first and foremost. That's where I caution people: don't make it all about the technology. Technology should not be a crutch, where it comes in to try to make you suffer less but still does not solve the problem. And technology should not be the only solution you adopt.

I might have a technological check-in that tells me that I’m taking on too many meetings or that I should take a break, but there is nothing better than a manager giving you a call or sending you an email to let you know you are seen as a human, that your work is seen by other humans.

I love what you were saying earlier about the difference between the task and the talent. That's another part where, if we have more technology that helps us with the mundane stuff and we can focus on what we enjoy doing, that also helps us showcase the value we bring as employees: the value of our talent, not just the output of the task.

A lot of times, some of these technology solutions that are delivered are about making me more productive. I don’t know about you guys, but I don’t wake up in the morning and say, “I want to be more productive today.” I wake up and want to get through the day. I want to enjoy myself; I want to make a contribution and to feel that I make a difference for the company I’m working for.

And that’s what technology should be able to do: Come in and take away the mundane, take away the repetitive, and help me focus on what makes a difference — and what makes me feel like I’m contributing to the success within my company.

Gardner: Ray, I would like to visit the idea of consequences of the remote-work era. Just as people can work from anywhere, that also means they can work for just about anyone.

If you’re working for a company that doesn’t seem to have your well-being as a priority and doesn’t seem to be interested in your talents as much as your tasks, you can probably find a few other employers quite easily from the very same spot that you’re in.

How has the competitive landscape shifted here? Do companies do this because it’s a nice thing to do? Or will they perhaps find themselves lacking the talent if the talent wants to work for someone who better supports them?

Employees choose work support

Wolf: Dana, that ultimately is the consequence. Once we get through this immediate situation from the pandemic, and digest the new learning about working remote, we will have choices.

Employers are paying attention to this in a number of ways. For example, I was just on the phone with a CHRO from a Fortune 50 company. They have added a range of well-being applications that help take care of their employees.

But there are also some cultural changes that need to occur. This CHRO was explaining to me that even though they have all these benefits – including 12 hours off a month, or more, for so-called mental health days – they are struggling with some of the managers. They are having trouble getting managers, some of whom may be later in their careers, to actually model these new behaviors and give the employees and workers permission to take advantage of the benefits from these well-being applications.

So we have a way to go. But the ones who evolve culturally, and who pay attention to this change, are ultimately going to be the winners. It may be another 6 or 18 months, but we’ll definitely get there. In the interim, though, workers can do something for themselves.

There are a lot of ways to stay in tune with how you're feeling, give yourself a break, and schedule your time better. I know we would like to have technology that forces that into the schedule, but you can do that for yourself now as an interim step. And I think there are a lot of possibilities here — and more not that far in the future.

There are things that could be done immediately to bring a little bit of relief, help people see what’s possible, and then encourage them to continue working down this path of the intersection of well-being and employee workflow.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Citrix.

How HPE Pointnext Tech Care changes the game for delivering enhanced IT solutions and support

The next BriefingsDirect Voice of Innovation discussion explores how services and support for enterprise IT have entered a new era.

For IT technology providers, the timing of the news couldn’t be better. That’s because those now consuming tech support are demanding higher-order value — such as improved worker productivity from hybrid services delivered across many remote locations.

At the same time, the underlying technologies and intelligence to enhance traditional help desk support are blossoming to deliver proactive — and even consultative — enhancements.

Stay with us here to examine how Hewlett Packard Enterprise (HPE) Pointnext Services has developed new solutions to satisfy those enhanced expectations for the next era of IT support. HPE’s new generation of readily-at-hand IT expertise, augmented remote services, and ongoing product-use guidance together propel businesses to exploit their digital domains — better than ever.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to share the Pointnext vision for the future of advanced IT operational services are Gerry Nolan, Director of Operational Services Portfolio at HPE Pointnext Services, and Rob Brothers, Program Vice President, Datacenter and Support Services, at IDC. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. 

Here are some excerpts:

Gardner: Rob, what are enterprise IT leaders and their consumers demanding of tech support in early 2021? How are their expectations different from just a year or two ago?

Brothers: It’s a great question, Dana. I want to jump back a little bit further than just a year or so ago. That’s because support has really evolved so much over the past five, six, or seven years.

If you think about product support and support in general back in the day, it was just that. It was an add-on. It was great for fix services. It was about being able to place a phone call to get something fixed.

But that evolved over the past few years due to the fact that we have more intelligent devices and customers are looking for more proactive, predictive capabilities, with direct access to experts and technicians. And now that all has taken a fast-track trajectory during the pandemic as we talk about digital transformation.

During COVID-19, customers need new ways to work with tech-support organizations. They need even more technical assistance. So, we see that a plethora of secure, remote-support capabilities have come out. We see more connected devices. We see that customers look for expertise over the phone — as well as via chat or via augmented reality. Whatever the channel, we see a trajectory and growth that has spurred on a lot of innovation — and not just the innovation itself, but the consumption of that innovation.

Those are a couple of the big differences I’ve seen in just the past couple of years. It’s about the need for newer support models, and a different way of receiving support. It’s also about using a lot of the new, proactive, and predictive capabilities built inside of these newer systems — and really getting connected back to the vendor.

Those enterprises that connect back to their vendors are getting that improved experience and can therefore pass that better experience on to their customers. That's the important part of the whole equation — making sure that better IT experiences translate to those enterprise customers. It's a very interesting time.

Gardner: I sense this is also about more collective knowledge. When we can gather and share how IT systems are operating, it just builds on itself. And now we have the tools in place to connect and collaborate better. So this is an auspicious time — just as the demand for these services has skyrocketed.

Brothers: Yes, without a doubt. I find the increased use of augmented reality (AR) to deliver support extremely interesting, too, and a great use case during a pandemic.

If you can’t send an engineer to a facility in-person, maybe you can give that engineer access to the IT department using Google Glass or some other remote-access technology. Maybe you can walk them through something that they may not have been able to do otherwise. With all of the data and information the vendor collects, they can more easily walk them through more issues. So that’s just one really cool use case during this pandemic.

Gardner: Gerry, do you agree that there’s been auspicious timing when it comes to the need for these innovative support services and the capability to deliver them technically?

Pandemic accelerates remote services

Nolan: Yes, there’s no question. I totally agree with Rob. We saw a massive spike with the pandemic in terms of driving to remote access. We already had significant remote capabilities, but many of our customers all of a sudden have a huge remote workforce that they have to deal with.

They have to keep their IT running with minimal on-site presence, and so you have to start quickly innovating and delivering things such as AR and virtual reality (VR), which is what we did. We already have that solution.

But it’s amazing how something like a pandemic can elevate that use to our thousands and thousands of technical engineers around the world who are now using that technology and solution to virtually join customer sites and help them triage, diagnose, and even do installations. It’s allowing them to keep their systems and their businesses running during a very tough period.

Another insight is that we've seen customers struggling, even before the pandemic, with having enough technical personnel bandwidth. They need more people, resources, and skills as more new technologies hit the streets.

To Rob's point, it's difficult for customers to keep pace with the speed of change in IT. There's more hunger for partners who can go deep on expertise across a wide range of technologies. So, there's a variety of new support activities going on.

Brothers: Yes, around those technical capabilities, one of the biggest things I hear from enterprises is just trying to find that talent pool. You need to get employees to do some of the technical pieces of the equation on a lot of these new IT assets. And they’re just not out there, right?

They need programmers and big data scientists. Getting folks to come in to assist on that level is more and more difficult. Hence, working with the vendor for a lot of these needs and that technical expertise really comes in handy now.

Gardner: Right, when you can outsource — people do outsource. That’s been a trend for 10 or 15 years now.

What challenges do enterprises — as well as the IT vendors and providers — have in closing that skills gap?

DX demands collaboration

Brothers: I actually did a big study around digital transformation. One of the big issues I've seen within enterprises is a lot of siloed structures. The networking team is not talking to the storage team or to the server team, and each is protecting its turf.

As an alternative, you can have the vendor come in and say, “Look, we can do this for you in a simpler fashion. We can do it a little bit faster, too, and we can keep downtime out of your environment.”

But trying to get the enterprise convinced [on the outsourcing] can sometimes be tricky and difficult. So I see that as one of the inhibitors to getting some of these great tech services that the vendors have into these environments.

A second big challenge I see is around the big, legacy IT environments. This goes back to that connectedness piece I talked about. A lot of these legacy systems are mixed in with the newer systems. This is where you see a struggle within enterprises. They are asking, “Okay, well, how do I support this older equipment and still migrate to this new platform that I want to do a lot of cloud-based computing with and become more operationally efficient?” The vendors can assist with that, but it’s still the stovepipe silos you sometimes see in enterprises that can make transitions very difficult.

Gardner: Right. The fact is we have hybrid everything, and now we have to adjust our support and services to that as well.

Gerry, around these challenges, it seems we also have some older thinking around how you buy these tech services. Perhaps it has been through a warranty or a bolt-on support plan. Do we need to change the way we think about acquiring these services?

Customer experience choice 

Nolan: Yes, customers are all about experiences these days. Think about pretty much every part of your life — whether you’re going to the bank, booking a vacation, or even buying an electric car. They’ve totally transformed the experience in each of those areas.

IT is no different. Customers are trying to move beyond, as Rob was saying, that legacy IT thinking. Even if it’s contacting a support provider for a break-fix issue, they want the solution to come with an end-to-end experience that’s compelling, engaging, and in a way that they don’t need to think about all the various bits and pieces. The fewer decisions a customer has to make and the more they can just aim for a particular outcome, the more successful we’re going to be.

Brothers: Yes, when a customer invested $1 million in a solution set, the old mindset was that after three or four years it would be retired and they would buy a new one — but that’s completely changed.

Now, you’re looking at this technology for a longer term within your environment. You want to make sure you’re getting all the value out of it, so that support experience becomes extremely important. What does the system look like from a performance perspective? Did I get the full dollar value out of it?

That kind of experience is not just between the vendor and my own internal IT department, but also in how that experience carries out to my end-user customers. It becomes about bringing that whole experience circle around. It's really about the experience for everybody in the environment — not just for the vendor and not just for the enterprise, but for the enterprise's customers.

Gardner: Rob, I think it behooves the seller of the IT goods if they’ve moved from a CapEx to an OpEx model so that they can make those services as valuable as possible and therefore also apply the right and best level of support over time. It locks the customer in on a value basis, rather than a physical basis.

Brothers: Yes, that’s one great mindset change I’ve seen over the past five years. I did a study about six years ago, and I asked customers how they bought support. Overwhelmingly they said they just bought a blanket support contract. It was the same contract for all of the assets within the environment.

But just recently, in the past couple of years, that’s completely changed. They are now looking at the workloads. They’re looking at the systems that run those workloads and making better decisions as to the best type of support contract on that system. Now they can buy that in an OpEx- or CapEx-type manner, versus that blanket contract they used to put on it.

It’s really great to see how customers have evolved to look at their environments and say, “I need different types of support on the different assets I have, and which provide me different experiences.” That’s been a major change in just the past couple of years.

Nolan: We’re also seeing customers seek the capability to evolve and move from one support model to another. You might have a customer environment where they have some legacy products where they need help. And they’re implementing some new technologies and new solutions, and they’re developing new apps.

It’s really helpful for that customer if they can work with a single vendor — even if they have multiple, different IT models. That way they can get support for their legacy, deploy new on-premises technologies, and integrate that together with their legacy. And then, of course, having that consumption-as-a-service model that Rob just talked about, they also have a nice easy way of transitioning workloads over to hybrid models where appropriate.

I think that’s a big benefit, and it’s what the customers seem to be looking for more and more these days.

Gardner: Gerry, what’s the vision now behind HPE to deliver on that? What’s Pointnext Services doing to provide a new generation of tech support that accommodates these new and often hybrid environments?

Tech Care’s five steps toward support

Nolan: We're very excited to launch a new support experience called HPE Pointnext Tech Care. It's all about delivering on much of what's just been said in terms of moving beyond a product break-fix experience to helping customers get the most out of that product — all the way from purchasing through its lifecycle to end-of-life.

Our main goal for HPE Pointnext Tech Care is to help customers maximize and expose all the value from that product. We’re going to do that with HPE Pointnext Tech Care through five key elements.

The first is to make it a very simple experience. Today, we have four different choices when you’re buying a product as to which experience you want to go with. Now with HPE Pointnext, products are going to be sold embedded with a support experience called HPE Pointnext Tech Care. It’s a very simple experience. It has some choices on the service-level-agreement (SLA) side, but it’s going to dramatically simplify the buying and owning experience for our HPE customers.

The second aspect is the digital-transformation component that we see everywhere in life. That means we’re embedding a lot of data telemetry into the products. We have a product called HPE InfoSight that’s now embedded in our technology being deployed.

InfoSight collects all that data and sends it back to the mother ship, which allows our support experts to gain all of those insights and help the customer with mitigating issues, predicting problems, planning capacity, and keeping that system running and optimized at all times. So, that's one element of the digital component.
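
The kind of proactive insight such telemetry enables can be illustrated with a deliberately simple, hypothetical rule: project when a storage system will run out of capacity from recent utilization samples. This is only an illustration of the idea, not HPE InfoSight's actual analytics, and the sample figures are invented.

```python
# Hypothetical illustration of a telemetry-driven, proactive insight: project capacity
# exhaustion from daily utilization samples. Not InfoSight code; figures are invented.
from typing import List, Optional

def days_until_full(daily_used_tb: List[float], capacity_tb: float) -> Optional[float]:
    """Fit a simple linear growth rate to the samples and project days until capacity is reached."""
    if len(daily_used_tb) < 2:
        return None
    growth_per_day = (daily_used_tb[-1] - daily_used_tb[0]) / (len(daily_used_tb) - 1)
    if growth_per_day <= 0:
        return None  # not growing, so no exhaustion projected
    return (capacity_tb - daily_used_tb[-1]) / growth_per_day

if __name__ == "__main__":
    samples = [41.0, 41.6, 42.1, 42.9, 43.4]   # hypothetical daily used terabytes
    remaining = days_until_full(samples, capacity_tb=50.0)
    if remaining is not None and remaining < 90:
        print(f"Proactive alert: array projected to fill in about {remaining:.0f} days; plan capacity now.")
```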

The other aspect is a very rich support portal, a customer engagement platform. We’ve already redone our support center on hpe.com and customers will see it’s completely changed. It has a new look and feel. Over the coming quarters, there will be more and more new capabilities and functionality added. Customers will be able to see dashboards, personalized views of their environments, and their products. They’ll get omni-channel access to our experts, which is the third element we are providing.

We have all this great expertise. Traditionally, you would connect with them over the telephone. But going forward, you’re going to have the capability, as Rob mentioned, for customers to do chat. They may also want to watch videos of the experts. They may want to talk to their peers. So we have a moderated forum area where customers can communicate with each other and with our experts. There’s also a whole plethora of white papers and Tech Tip videos. It’s a very rich environment.

Then the fourth HPE Pointnext Tech Care element touches on a key trend that Rob mentioned, which goes beyond break-fix. With HPE Pointnext Tech Care, you’ll have the capability to communicate with experts beyond just talking about a broken part of your system. This will allow you to contact us and talk about things such as using the product, or capacity planning, or configuration information that you may have questions about. This general tech guidance feature of HPE Pointnext Tech Care, we believe, is going to be very exciting for customers, and they’re going to really benefit from it. 

And lastly, the fifth component is about a broader spectrum of full lifecycle help that our customers want. They don’t just want a support experience around buying the product, they want it all the way through its lifetime. The customer may need help with migration, for example, or they may need help with performance, training their people, security, and maybe even retiring or sanitizing that asset. 

With HPE Pointnext Tech Care, customers will have a nice, easy mechanism: a very robust, warm-blanket type of support that comes with the product and can easily be augmented with other menu choices. We're very excited about the launch of HPE Pointnext Tech Care with those five key elements. It's going to transform the support experience and help customers get the most from their HPE products.

Gardner: Rob, how much of a departure do you sense the HPE Pointnext Tech Care approach is from earlier HPE offerings, such as HPE Foundation Care? Is this a sea change or a moderate change? How big of a deal is this?

Proactive, predictive capabilities

Brothers: In my opinion, it’s a pretty significant change. You’re going to get proactive, predictive capabilities at the base level of the HPE Pointnext Tech Care service that a lot of other vendors charge a real premium for.

I can't stress enough how important it is for those proactive, predictive capabilities to come with environments. A survey I did not long ago as part of a cost-of-downtime study found that customers saw approximately 700 hours of downtime per year across their environments (servers, storage, networking, and security), taking human error into account. Customers that enabled proactive, predictive capabilities saved approximately 200 hours of that downtime. That's because of what those corrective, predictive capabilities can do at that base layer. They allow you to do the one big thing that prevents downtime — and that's patch management and patch planning.

Now, those technical experts that Gerry talked about can access all of this beautiful, feature-rich information and data. They can feed it back to the customer and say, “Look, here’s how your environment looks. Here’s where we see some areas that you can make improvements, and here’s a patch plan that you can put in place.”

Then all of the data comes back from enterprises saying, in effect, "If I do a better job of that patching and patch planning, it saves a copious amount of planned and unplanned downtime in my environment." That's precious information and data.

That’s the big fundamental change. They’re showing the real value to the customer so they don’t have to buy some of those premium levels. They can get that kind of value in the base level, which is extremely important and provides that higher-order experience to end-user customers. So I do think that’s a huge fundamental shift, and definitely a new value for the customers.
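
To make the patch-planning point concrete, here is a hedged, generic sketch that ranks systems for the next patch window by missing-patch severity and workload criticality. It is not HPE tooling, and the system names and scores are hypothetical.

```python
# Hypothetical sketch: rank systems for the next patch window by missing-patch severity
# and workload criticality. Generic illustration only; not HPE tooling.
from dataclasses import dataclass
from typing import List

@dataclass
class SystemStatus:
    name: str
    missing_patch_severity: int   # 0 (none outstanding) .. 10 (critical security fix outstanding)
    workload_criticality: int     # 1 (dev/test) .. 5 (revenue-critical)

def patch_priority(s: SystemStatus) -> int:
    """Simple score: severe gaps on critical workloads get patched first."""
    return s.missing_patch_severity * s.workload_criticality

def build_patch_plan(systems: List[SystemStatus]) -> List[SystemStatus]:
    """Return the fleet ordered from highest to lowest patch priority."""
    return sorted(systems, key=patch_priority, reverse=True)

if __name__ == "__main__":
    fleet = [
        SystemStatus("erp-db-01", 9, 5),
        SystemStatus("test-web-07", 9, 1),
        SystemStatus("file-srv-02", 3, 4),
    ]
    for s in build_patch_plan(fleet):
        print(f"{s.name}: priority {patch_priority(s)}")
```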

Gardner: Rob, correct me if I’m wrong, but having this level of proactive, baked-in-from-the-start support comes at an auspicious time, too, because people are also trying to do more automation with their security operations. It seems to me that we’re dovetailing the right approaches for patching and proactive maintenance along with what’s needed for security. So, there’s a security benefit here as well?

Brothers: Oh, massive. Especially in this day and age, with a lot of the security breaches we’ve had over the past year due to new remote access into so many systems. Yes, it definitely plays a major factor in how enterprises should be thinking about their patching and patch planning.

Gardner: Gerry, just to pull on that thread again about data and knowledge sharing, the more you get the relationship that you’re describing with HPE Pointnext Tech Care — the more back and forth of the data and learning what the systems are doing — and you have a virtuous cycle. Tell us how the machine learning (ML) and data gathering works in aggregate and why that’s an auspicious virtuous cycle.

Nolan: That’s an excellent question and, of course, you’re spot-on. It’s the combination of the telemetry built into the actual products through HPE InfoSight, our back-end experts, and the detailed knowledge management processes. We also have our experts who are watching, listening, and talking to customers as they deal with issues.

That means you have two things going on. You have the software learning over time and we have rules being built in there so that when it spots an issue it can go and look for all the other similar environments and then help those customers mitigate and predict ahead of time.

That means that customers will immediately get the benefit of all of this knowledge. It might be a Tech Tip video. It might be a white paper. It might be an item or an article in a moderated forum. There’s this rich back-and-forth between what’s available in the portal and what’s available in the knowledge that the software will build over time. And all of this just comes to bear in a richer experience for the customer, where they can help either self-solve or self-serve. But if they want to engage with our experts, they’re available in multiple different channels and in multiple different ways.
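
As a rough illustration of the rule-building Gerry describes, the sketch below scans fleet telemetry for systems matching a known issue signature so they can be flagged before the problem recurs. The field names and the signature are hypothetical and do not reflect how HPE InfoSight is actually implemented.

# Hypothetical sketch: flag systems whose telemetry matches a known issue signature.
# Field names and the signature are illustrative, not InfoSight internals.
known_issue = {"firmware": "3.2.1", "workload": "high-random-write"}

fleet_telemetry = [
    {"system": "array-01", "firmware": "3.2.1", "workload": "high-random-write"},
    {"system": "array-02", "firmware": "3.4.0", "workload": "sequential-read"},
    {"system": "array-03", "firmware": "3.2.1", "workload": "high-random-write"},
]

at_risk = [
    s["system"]
    for s in fleet_telemetry
    if all(s.get(k) == v for k, v in known_issue.items())
]

print("Systems to proactively patch or mitigate:", at_risk)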

Gardner: Rob, another area where 2+2=5 is when we can take those ML and data-driven insights that Gerry described across a larger addressable market of installed devices. And then, we can augment that with MyRoom-type technologies and the VR and AR capabilities that you described earlier.

What’s the new sum value when we can combine these insights with the capability to then deliver the knowledge remotely and richly? 

Autonomous IT reduces human error 

Brothers: That’s a really great point. The whole idea is to attain what we call autonomous IT. That means to have IT systems that are more on the self-repair side, and that have product pieces shipped prior to things going wrong. 

One of the biggest and most-costly pieces of downtime is from human error. If we can pull the human touch and human interaction out of the IT environment, we save each company hundreds of thousands of dollars a year. That’s what all this data and information will provide to the IT vendors. They can then say, “Look, let’s take the human interactions out of it. We know that’s one of the most-costly sides of the equation.”

If we can do that in an autonomous fashion — where we’re optimizing systems on a regular basis, equipment is being shipped to the facility prior to anything breaking, we can schedule any downtime during quiet times, and make sure that workloads are moved properly — then that’s the endgame. It gets to the point where the human factor gets more removed and we’re relying more on the intelligence of the systems to do more.

That’s definitely the direction we’re moving in, and what we’re seeing here is definitely heading in that direction.

Gardner: Yes, and in that case, you’re not necessarily buying IT support, you’re buying IT insurance.

Brothers: Yes, exactly. That gets back to the consumption models. HPE is one of the leaders in that space with HPE GreenLake. They were one of the pioneers to come up with a solution such as that, which takes the whole IT burden off of IT’s plate and puts it back on the vendor.

Nolan: We have a term for that concept that one of my colleagues uses. They call it invisible IT. That’s really what a lot of customers are looking for. As Rob said, we’re still some ways from that. But it’s a noble goal, and we’re all in to try and achieve it.

Gardner: So we know what the end-goal is, but we’re still in the progression to it. But in the meantime, it’s important to demonstrate to people value and return on investment (ROI).

Do we have any HPE Pointnext Tech Care examples, Gerry? Rob already mentioned a few of his studies that show dramatic improvements. But do we have use cases and/or early-adoption patterns? How do we measure when you do this well, and what do you get?

Benefits already abound

Nolan: There are a ton of benefits. For example, we already have extensive Tech Tip video libraries. We have chat implemented. We have the moderated forums up and running. We have lots of different elements of the experience already live in certain product areas, especially in storage.

Of course, many HPE products are already connected through HPE InfoSight or other tools, which means those systems are being monitored on a 24 x 7 basis. The software already monitors, predicts, and mitigates issues before they occur, as well as provides all sorts of insights and recommendations. This allows both the customer and our support experts to engage and take remediation action before anything bad happens. 

Customers seem to love this richer experience approach. Yes, there’s a lot more data and a lot more insights. But to have those experts on hand, to be able to gain or build an action plan from all of that data, is really important.

Now, in terms of some of the benefits that we’re seeing in the storage space, those customers that are connected are seeing 73 percent fewer trouble tickets and 69 percent faster time-to-resolution. To date, since InfoSight was first deployed in that storage environment alone, we’ve measured about 1.5 million hours of saved productivity time.

So there are real benefits when you combine being connected with ML tools such as InfoSight. When the rich value available in HPE Pointnext Tech Care comes together, it further reduces downtime, improves performance, and helps reach the end-goal that Rob talked about, the autonomous IT or invisible IT. 

Gardner: Rob, we started our conversation about what’s changed in tech support. What’s changed when it comes to the key performance indicators (KPIs) for evaluating tech support and services?

Brothers: The big, new KPIs that we’re seeing don’t just evaluate the experience that the enterprise has with its IT vendors. Although that’s obviously extremely important, it’s also about how that experience correlates to the experiences my end-users are receiving.

You’re beginning to see those measurements come to the fore. An enterprise has its own SLAs and KPIs with its end-users. How does that match the KPIs and SLAs I have back to my IT vendors? You’re beginning to see those merge and come together. You’re beginning to see new metrics put in place where you can evaluate the vendor by how well you’re delivering user experiences to your own end-users. 

It takes a bit of time and energy to align that because it’s a fairly complex measurement to put in place. But we’re beginning to see enterprises seek that level of value from the vendors. And the vendors are stepping up, right? They’re beginning to show dashboards back to the enterprise that say, “Hey, here’s the SLA, here are the KPIs, here are the performance metrics that we’re collecting, and they should correlate fairly well to what you’re providing to your end-user customers.”

Gardner: Gerry, if we properly align these values, it better fits with digital transformation because people have to perceive the underlying digital technologies as an enabler, not as a hurdle. Is HPE Pointnext Tech Care an essential part of digital transformation when we think about that change of perception?

Incident management transforms

Nolan: It totally is. One of our early Pointnext customers is a large, US retailer. They’ve gone through a situation where they had a bunch of technology. Each one had its own individual support contract. And they’ve moved to a more centralized and simpler approach where they have one support experience, which we actually deliver across each of their different products — and they’re seeing huge benefits.

They had been firefighting, with their small IT team predominantly focused on dealing with issues and support calls around hardware- and update-type problems. They were measuring themselves on incidents — how many incidents — and trying to keep that number at a manageable level.

Well, now, they’ve totally changed. The incidents have almost disappeared — and now they’re focused on innovation. How fast can they get new applications to their business? How fast can they get new projects to market in support of the business?

They’re just one customer who has gone through this transformation where they’re using all of the things we just talked about and it’s delivering significant benefits to them and to their IT group. And the IT group, in turn, are now heroes to their business partners around the US.

Gardner: I want to close with some insights into how organizations should prepare themselves. Rob, if you want to gain this new level of capability across your IT organization, and you want the consumers of IT in your enterprise to look to IT for solutions and innovation, what should you be thinking about now? What should you put in place to take advantage of the offerings that organizations such as HPE are providing with HPE Pointnext Tech Care?

Evaluating vendor experiences

Brothers: It all starts with the deployment process. When you’re looking at and evaluating vendors, it’s not just, “Hey, how is the product? Is the product going to perform and do its task?” 

Some 99 percent of the time, the stand-alone IT system you’re procuring is going to solve the issue you’re looking to solve. The key is how well that vendor is going to get the system up and running in your environment, connected to everything it needs to be connected to, and then support and optimize it for the long run.

It’s really more about that life cycle experience. So, as an enterprise, you need to think differently on how you want to engage with your IT vendor. You need to think about all the different performance KPIs, and match that back to your end-user customer.

The thought process for evaluating vendors, in my opinion, is shifting. It’s more about the type of experience I get with this vendor than whether the product does its job. That’s one of the big transitional phases I’m seeing right now. Enterprises are thinking more about the experience they have with their partners than about whether the product is doing the job. 

Gardner: Gerry, what do you recommend people do in order to get prepared to take advantage of such offerings as HPE Pointnext Tech Care?

Nolan: Following on from what Rob said, customers can already decide what experience they would like. HPE Pointnext Tech Care will be the embedded support experience that comes with their HPE products. It’s going to be very easy to buy because it’s going to be right there embedded with the product when the product is being configured and when the quote is being put together. 

HPE Pointnext Tech Care is a very simple, easy, and fully integrated experience. Customers are buying a full product experience, not a product with a support experience chosen separately on the side. If they want something broader than just a product experience — what I call the warm blanket around their whole enterprise environment — we have another experience called Datacenter Care that provides that.

We also have other experiences. We can, for example, manage the environment for them using our management capabilities. And then, of course, we have our HPE GreenLake as-a-service on-premises experience. We’ve designed each of these experiences so they can totally live together and work together. You can also move and evolve from one to the other. You can buy products that come with HPE Pointnext Tech Care and then easily move to a broader Datacenter Care to cover the whole environment.

We can take on and manage some of that environment and then we can transition workloads to the as-a-service model. We’re trying to make it as easy and as fast as possible for customers to onboard through any and all of these experiences.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise Pointnext Services.

Work from anywhere unlocks once-hidden productivity and creativity talent gems

Now that hybrid work models have been the norm for a year, what’s the long-term impact on worker productivity? Will the pandemic-induced shift to work from anywhere agility translate into increased employee benefits — and better business outcomes — over time?

The next BriefingsDirect workspace innovation discussion explores how a bellwether UK accounting services firm has shown how consistent, secure, and efficient digital work experiences lead to heightened team collaboration and creative new workflows.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about the ways that distributed work models fuel innovation, please welcome our guests, Chris Madden, Director of IT and Operations for Kreston Reeves, LLP in the UK, and Tim Minahan, Executive Vice President of Business Strategy and Chief Marketing Officer at Citrix. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tim, we’ve been in a work-from-anywhere mode for a year. Is this panning out as so productive and creative that people are considering making it a permanent feature of their businesses?

Minahan: Dana, if there’s one small iota of a silver lining in this global crisis we’ve all been going through together it’s that it has shone a light on the importance of flexible and remote work models.

Companies are now rethinking their workforce strategies and work models — as well as the role the office will play in this new world of work. And employees are, too. They’re voting with their feet, moving out of high-cost, high-rent districts like San Francisco and New York because they realize they can not only do their work effectively remotely, but they can also be more productive and have a better work life.

A few data points that are important: This isn’t a temporary shift. The pandemic has opened folks’ eyes to what’s possible with remote work. In fact, in a recent Gartner study, 82 percent of executives surveyed plan to make remote work and flexible work a more permanent part of their workforce and cost-management strategies — and it’s for very good business reasons.

As the pandemic has proven, this distributed work model can significantly lower real estate and IT costs. But more importantly, the companies that we talk to, the most forward-looking ones, are realizing that flexible work models make them more attractive as an employer. And that prompts them to rethink their staffing strategies because they have access to new pools of talent and in-demand skills of workers that live well beyond commuting distance to one of their work hubs.

Businesses work from anywhere

Such flexible work models can also advance other key corporate initiatives like sustainability and diversity, which are increasingly becoming board-level priorities at most companies. Those companies that remain laggards — that are still somewhat reluctant to embrace remote work or flexible work as a more permanent part of their strategies — may soon be forced to change as their employees look for more flexible work approaches.

We’ve heard about the mass exodus from some of those large metropolitan areas to more suburban — and even rural locales. At Citrix, our own research of thousands of workers and IT and business executives finds that more than three-quarters of workers now prefer to shift to a more remote and flexible work model — even if it means taking a pay cut. And 80 percent of workers say that flexible work arrangements will be a top selection criterion when evaluating employers in the future.

Gardner: Chris, based on your experience at Kreston Reeves, do you agree that these changes to a more flexible and hybrid work location model are here to stay?

Madden: I would. At Kreston Reeves, we are expecting to move permanently to two or three days a week in an office, with the remaining time working from home and away from the office. That’s for many of the reasons already covered, such as reduced commuting time, reduced commuting cost, more time at home with family, and a better work-life balance. It’s also a lot better for the environment, because people travel less and put fewer greenhouse gases into the atmosphere.

Gardner: We certainly hear how there are benefits to the organization. But how about the end users, the customers? Have your experiences at Kreston Reeves led you to believe that you can maintain the quality of service to your customers and consumers?

Madden: It’s probably ultimately going to be a balance. I don’t think it will shift totally one way or go back to how it was. I think for our customers and clients, there are distinct advantages, depending on the type of work. There isn’t always a need to go and have a face-to-face meeting that can take a lot of time for people, time that they could spend elsewhere in their business.

Depending on the nature of the interactions, quite a lot will shift to video calling, which has become the norm over the last year even as in the past people may have thought it impersonal. So I think that will become a lot more accepted, and face-to-face meetings will be then kept for those meetings that really require everybody to sit down together.

Gardner: It sounds like we’re into a more fit-for-purpose approach. If it’s really necessary, that’s fine, we can do it. But if it’s not necessary, there are benefits to alleviating the pressure on people.

Tell us, Chris, about how your organization operates and how you reacted to the pandemic.

Madden: Yes, we began the best part of 10 years ago, when we moved onto Citrix as the platform to distribute computing services to our users. Over the years, we have upgraded that and added on the remote-access solutions. And so, when it came to early 2020 and the pandemic, we were ready to take off. We could see where we were heading in terms of lockdowns, so we closed two or three of our offices — just to see how the system coped.

It was designed to do that, but would it really work when we actually closed the offices and everybody worked from home? Well, it worked brilliantly, and was very easy to deal with. And then a few days after that, the UK government announced the first national lockdown and everybody had to work from home within a day.

From our point of view, it worked really well. The only wrinkles in the whole process were to get everybody the appropriate apps on their phones to make sure they could have remote access using multifactor authentication. But otherwise, it was very seamless; the system was designed to cope with everybody working from anywhere — and it did.

Gardner: Chris, we often hear that there is a three-legged stool when it comes to supporting business process — as in people, technology, and process. Did you find that any of those three was primary? What led you to succeed in making such a rapid transition when it comes to the three pillars?

A new world of flexible work

Madden: I think it’s all three of those things. The technology is the enabler, but the people need to be taken with you, and the processes have to adapt for new ways of working. I don’t think any one of those three would lead. You have to do all three together.

Gardner: Tim, how does Citrix enable organizations to keep all three of those plates in the air spinning, if you will, especially on that point about the right applications on the right device at the right time?

Minahan: What’s clear in our research — and what we’re seeing from our customers — is that we’re accelerating to a new world of work. And it’s a more hybrid and flexible world where the employee experience becomes a key differentiator.

To the point Chris was making, success is going to go to those organizations that can deliver a consistent and secure work experience across any and all work channels — all the Slacks, all of the apps, all the Teams, and in any work location.

Whether work needs to be done in the office, on the road, or at home, delivering that consistent and secure work experience — so employees have secure and reliable access to all their work resources — needs to come together to service end customers regardless of where they’re at.

Kreston Reeves is not alone in what they have experienced. We’re seeing this across every industry. In addition to the change in work models, we are also seeing a rapid acceleration of digitization efforts, whether in the financial services sector or in other areas such as retail and healthcare. Companies may have had plans to digitize their business, but over the past year they have had to digitize out of necessity.

For example, there’s the healthcare provider in your neck of the woods, up in the Boston area, Dana, that has seen a 27-times increase in monthly telemedicine visits. During the COVID crisis, they went from 9,000 virtual visits per month to over 250,000 per month — and they don’t think they’re ever going to go back.

In the financial services sector, we consistently hear of customers hiring thousands of new advisors and loan officers in order to handle the demand — all in a remote and digital environment. What’s so exciting, as I said earlier, is that as companies begin to use these approaches as key enablers, it liberates them to rethink their workforce strategies and reach for new skills and new talent well beyond commuting distance to one of their work hubs.

It’s not just about, “Should Sam or Suzy come back and work in the office full time?” That’s a component of the equation. It’s not even about, “Do Sam and Suzy perform at their best even when they’re working at home?” It’s about, “Hey, what should our workforce look like? Can we now reach skills and talent that were previously inaccessible to us because we can empower them with a consistent work experience through a digital workspace strategy?”

Gardner: How about that, Chris? Have you been simply repaving work-in-the-office paths with a different type of work from home? Or are you reinventing and exploring new business processes and workflows as a result of the flexibility?

Remote work retains trust, security

Madden: There is much more willingness amongst businesses and the people working in businesses to move quickly with technology. We’re past being cautious. With the pandemic, and the pressure that that brings, people are more willing to move faster — and be less concerned about understanding everything that they may want to know before embracing technology.

The other thing is relationships with clients. There is a balance; we don’t want to go as far as some industries, where firms never see their clients any longer because everything is done remotely and everything is automated through apps and technology.

And the correct balance that we will be mindful of as we embrace remote working — and as we have more virtual meetings with clients — is that we still need to maintain the relationship of being a trusted advisor to the client — rather than commoditizing our product.

Gardner: I suppose one of the benefits to the way the technology is designed is that you can turn the knobs. You can experiment with those relationships. Perhaps one client will require a certain value toward in-person and face-to-face engagements. Another might not. But the fact is the technology can accommodate that dynamic shift. It gives us, I think, some powerful tools.

Madden: Absolutely. The key is that for those clients who really want to embrace the modern world and do everything digitally, there is a solution. If a client would still like to be very traditional and have lots of invoices and things on paper and send those into their accountant, that, too, can be accommodated.

But it is about moving the industry forward over time. Gradually, I can see that technology will become a bigger contributor to the overall service that we provide. It will probably do the basic accountancy work, producing an end result that a human then looks at and provides the answer back to the client.

Gardner: Now, of course, the materials that you’re dealing with are often quite sensitive and there are business regulations. How did the reaction of your workforce and your customer base come down on the issues of privacy, control, and security?

Madden: The clients trust that we will get it right and therefore look to us to provide the secure solution for them. So, for example, there are clients who have an awful lot of information to send us and cannot come into an office to hand over whatever that is.

We can give them new technologies that they haven’t used in the past, such as Citrix ShareFile, to share those documents with us securely and efficiently, but in a way that allows us to bring those documents into our systems and into the software we need to use to produce the accounts and the audits for the clients.

Gardner: Tim, you mentioned earlier that sometimes when people are forced into a shift in behavior, it’s liberating. Has that been the case with people’s perceptions around privacy and security as well?

Minahan: If you’re going to provide a consistent and secure work experience, the other thing folks are beginning to see as they embrace hybrid and more distributed work models is that their security posture needs to evolve too. People aren’t all coming into the office every day to sit at their desk on the corporate network, which had much better-defined parameters and arguably was easier to secure.

Now, in a truly distributed work environment, you need to provide a digital workspace that gives employees access to all the work resources they need — and that is not just their virtual desktops, but all of their software-as-a-service (SaaS) apps, web apps, and mobile apps — all in one unified experience that’s accessible from any location.

It also needs to be secure. It needs to be wrapped in a holistic and contextual security model that fosters not just zero trust access into that workspace, but ongoing monitoring and app protection to detect and proactively remediate any access anomalies, whether initiated by a user, a bot, or another app.

And so, that is another dynamic we’re seeing. Companies are accelerating their embrace of new more contextual zero trust access security models as they look forward to preparing themselves for how they’re going to operate in a post-pandemic world.
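
As a simple illustration of the contextual, zero trust evaluation Tim describes, the following Python sketch scores an access request on a few signals and decides whether to allow it, require step-up authentication, or block it. The signals, weights, and thresholds are assumptions made up for this example and do not describe Citrix’s actual policy engine.

# Hypothetical contextual access check: signals, weights, and thresholds are illustrative.
def evaluate_access(request):
    risk = 0
    if not request["device_managed"]:
        risk += 40                       # unmanaged device raises risk
    if request["location"] not in request["usual_locations"]:
        risk += 30                       # unfamiliar location raises risk
    risk += 10 * request["recent_anomalies"]

    if risk >= 70:
        return "block"
    if risk >= 40:
        return "step-up-authentication"  # e.g., prompt for MFA
    return "allow"

print(evaluate_access({
    "device_managed": True,
    "location": "home-office",
    "usual_locations": ["home-office", "london-hq"],
    "recent_anomalies": 0,
}))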

Gardner: Chris, I suppose another challenge has been the heterogeneity of the various apps and data across the platforms and sources that you’re managing. How has working with a digital workspace environment helped you provide a singular view for your employees and end customers? How do workspace environments help mitigate what had been a long-term integration issue for IT consumption?

Madden: For us, whether we are working from home remotely or are in an office, we are consuming the same desktop with the same software and apps as if we were sitting in an office. It’s really exactly the same. From a colleague’s point of view, whether they are working from home in a pandemic or sitting in their office in Central London, they are getting exactly the same experience with exactly the same tools.

And so for them, it’s been a very easy transition. They’re not having to learn the technology and different ways to access things. They can focus instead on doing the client work and making sure that their home arrangement is sorted out.

Gardner: Tim, regardless of whether it’s a SaaS app, cloud app, on-premises data — as long as that workspace is mobile and flexible — the complexity is hidden?

Workspace unifies and simplifies tasks

Minahan: Well, there is another challenge that the pandemic has shone a light on, which is the dirty little secret of the business world: our work environment is too complex. For the past 30 years, we’ve been giving employees access to new applications and devices. And more recently, chat and collaboration tools — all with the intent to help get work done.

While on an independent basis, each of these tools adds value and efficiency, collectively they’ve created a highly fragmented and complex work environment that oftentimes interrupts, distracts, and stresses out employees. It keeps them possibly from getting their actual work done.

Just to give you a sense, with some real statistics: On any given workday, the typical employee uses more than 30 critical apps to get their work done, oftentimes needing to navigate four or more just to complete a single business process. They spend more than 20 percent of their time searching across all of these apps and all of these collaboration channels to find the information they need to make decisions to do their jobs.

To make matters worse, now we’ve empowered these apps and these communication and collaboration channels. They’re all vying for our attention throughout the day, shouting at us about things we need to get done, and oftentimes distracting us from our core work. By some estimates, all of these notifications, chats, texts, and other disruptions interrupt us from our core work about every two minutes. That means the typical employee gets interrupted and forced to switch context between apps, emails, and other chat channels more than 350 times each day. Not surprisingly, what we are seeing is a huge productivity gap — and it is turning our top talent into task rabbits.

As companies think through this next phase of work, how do they provide a consistent and secure work experience and a digital workspace environment for employees no matter where they’re working? It not only needs to be unified — giving them access to everything they need — and secure, ensuring that corporate information, applications, and networks remain protected no matter where employees are doing the work, but it also needs to be intelligent.

Leveraging intelligent capabilities such as machine learning (ML), artificial intelligence (AI) assistance, bots, and micro apps personalizes and simplifies work execution. It’s what I call adding an experience layer between an employee and their work resources. This simplifies their interactions and work execution across all of the enterprise apps, content, and other resources so employees are not overwhelmed and can perform at their best no matter where work needs to get done.

Gardner: Chris, are you interested in elevating people from task rabbits to a higher order of value to the business and their end users and customers? And is the digital environment and workplace a part of that?

Madden: Absolutely. There are lots of processes, across many firms and multiple campuses, that have grown up over the years and have always been done that way. This is a perfect time to reappraise how we do those things smarter digitally, using robotic process automation (RPA) tools and AI to take out a lot of the rework of moving data from one system into another to produce the end result for the client.

There is a lot of that on our radar for the coming year or two. We want to free our people up to do more value-added work — and it would be more interesting work for those people. It will give a better quality of role for people, which will help us to attract better talent. And given the fact that people now have a taste of a different work-life balance, there will be a lot of pressure on new recruits to our business to continue with that.

Gardner: Chris, now that your organization has been at this for a year — really thrust into much more remote flexible work habits — were there any unexpected and positive spins? Things that you didn’t anticipate, but you could only find out with 20–20 hindsight?

Virtual increases overall efficiency

Madden: Yes. One is the speed at which our clients were happy to switch to video meetings and virtual audits. Previously, on audits, we would send a team of people to a client’s premises and they would look through the paperwork, look at the stock in a warehouse, et cetera, and perform the audit physically. We were able to move quickly to doing that virtually.

For example, if we’re looking in a warehouse to check that a certain amount of stock is actually present, we can now do that by a video call and walk around the warehouse and explain what we’re looking for and see that on the screen and say, “Yes, okay, we know that that stock is actually available.” It was a really big shift in mindset for our regulators, for ourselves, and for our clients, which is a great positive because it means that we can become much more efficient going forward.

The other one that sticks out in my mind is the efficiency of our people. When you’re at home, focusing on the work and without the distractions of an office, the noise, and the conversations, people are generally more efficient. There is still the need for a balance because we don’t want everybody just sitting at home in silence staring at a screen. We miss out on some of the richness of business relationships and conversations with colleagues, but it was interesting how productivity generally increased during the lockdown.

Gardner: Tim, is that what you’re finding more generally around the globe among the Citrix installed base, that productivity has been on the uptick even after a 20- or 30-year period where, in many respects and measurements, productivity has been flat?

Minahan: Yes, that is a trend we have been seeing for decades. Despite the introduction of more technology, employee productivity continued to trend down, ironically, until the pandemic. We talked with employees, executives, and through our own research and it shows that more than 80 percent of employees feel that they’re as, if not more, productive when working from home — for a lot of the reasons that Chris mentions. What they’ve seen at Kreston Reeves has continued to be sustained.

It’s introduced the need for more collaborative work management tools in the work environment in order to foster and facilitate that higher level of engagement and that more efficient execution that we mentioned earlier. But overall, whether it’s the capability to avoid the lengthy commute or the ability to avoid distractions, employees are indeed seeing themselves as more productive.

In fact, we’re seeing a lot of customers now talk about how they need to rethink the very role of the office. Where it’s not just a place where people come to punch their virtual time cards, but is a place that’s more purpose-built for when you need to get together with a client or with other teammates to foster collaboration. You still keep the flexibility to work remotely to focus on innovation, creativity, and work execution that oftentimes, as Chris indicated, can be distracting or difficult to achieve strictly in an office environment.

Gardner: Chris, what’s interesting to me about your business is you’re in a relationship with so many client companies. And you were forced to go digital very rapidly — but so were they. Is there a digital transformation accelerant at work here? Because they all had to go digital at the same time, is there a network effect?

Because your customers have gone digital, Chris, could you then be better digital providers in your relationships together?

Collaborative communication

Madden: To an extent. It depends on the type of client industry that they’re in. In the UK, certain industries have been shut for a long time and therefore, they are not moving digitally. They are just stuck waiting until they are able to reopen. In the meantime, there’s probably very little going on in those businesses.

Those businesses that are open and working are very much embracing modern technology. So, one of the things that we’ve done for our audit clients, particularly, is providing different ways in which they can communicate with us. Previously, we probably had a straightforward, one-way approach. Now, we are giving clients three or four different ways they can communicate and collaborate with us, which helps everybody and moves things along a lot more quickly.

It is going to be interesting post-pandemic. Will people intrinsically go back to what they were always doing? Will what drove us forward keep us creating and becoming more digital or will the instinct be to go back to how it was because that’s how people are more comfortable?

Gardner: Yes, it will be interesting to see if there’s an advantage for those who embrace digital methods more and whether that causes a competitive advantage that the other organizations will have to react to. So we’re in for an interesting ride for a few more years yet.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Citrix.

How to gain advanced cyber resilience and recovery across the IT ops and SecOps divide

Cyber attacks are on the rise, harming brands and supply chains while fomenting consumer and employee distrust — as well as leading to costly interruptions and service blackouts.

At the same time, more remote workers and extended-enterprise processes due to the pandemic demand higher levels of security across all kinds of business workflows.

Stay with us now as the next BriefingsDirect discussion explores why comprehensive cloud security solutions need to go beyond on-premises threat detection and remediation to significantly strengthen extended digital business workflows.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about ways to shrink the attack surface and dynamically isolate process security breaches, please join Karl Klaessig, Director of Product Marketing for Security Operations at ServiceNow, and E.G. Pearson, Security Architect at Unisys. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Karl, why are digital workflows so essential now for modern enterprises, and why are better security solutions needed to strengthen digital businesses?

Klaessig: Dana, you touched on cyber attacks being on the rise. It’s a really scary time if you think about MGM Resorts and some of the really big attacks in 2020 that took us all by surprise. And 23 percent of consumers have had their email or social media accounts hacked, taken over, or used. These are all huge threats to our everyday life as businesses and consumers.

And when we look at so many of us now working from home, this huge new attack surface space is going to continue. In a recent Gartner chief financial officer (CFO) survey, 74 percent of companies have the intent to shift employees to work from home (WFH) permanently.

These are huge numbers indicating a mad dash to build and scale remote worker infrastructures. At the end of the day, the teams that E.G. and I represent as vendors strive hard to support these businesses as they seek to scale and address the explosive impact on cyber resilience and cyber operations in their enterprises.

Gardner: E.G., we have these new, rapidly evolving adoption patterns around extended digital businesses and workflows. Do the IT and security personnel, who perhaps cut their teeth in legacy security requirements, need to think differently? Do they have different security requirements now?

IT security requirements rise

Pearson: As someone who did cut their teeth in the legacy parts, I say, “Yes,” because things are new. Things are different.

The legacy IT world was all about protecting what they know about, and it’s hard to change. The new world is all about automation, right? It impacts everything we want to do and everything that we can do. Why wouldn’t we try to make our jobs as simple and easy as possible?

When I first got into IT, one of my friends told me that the easiest thing you can do is script everything that you possibly can, just to make your life simpler. Nowadays, with the way digital workflows are going, it’s not just about automating the simple things — now we’re able to easily automate the complex ones, too. We’re making it so anybody can jump in and get this automation going as quickly as possible.

Gardner: Karl, now that we’re dealing with extended digital workflows and expanded workplaces, how has the security challenge changed? What are we up against?

Klaessig: The security challenge has changed dramatically. What’s the impact of Internet of things (IoT) and edge computing? We’ve essentially created a much larger attack surface area, right?

What’s changed in a very positive way is that this expanded surface has driven automation and the capability to not only secure workflows but to collaborate on those workflows.

We have to have the capability to quickly detect, respond, and remediate. Let’s be honest, we need automated security for all of the remote solutions now being utilized – virtually overnight – by hundreds of thousands of people. Automation is going to be the driver. It’s what really rises to the top to help in this.

Gardner: E.G., one of the good things with the modern IT landscape is that we can do remote access for security in ways that we couldn’t before. So, for IoT, as Karl mentioned, we’re talking about branch offices — not just sensors or machines.

We increasingly have a very distributed environment, and we can get in there with our security teams in a virtual sense. We have automation, but we also have the virtual capability to reach just about everywhere. 

Pearson: Nowadays, IoT is huge. Operational technology (OT) is huge. Data is huge. Take your pick, it’s all massive in scope nowadays. Branch offices? Nowadays, all of us are our own branch office sitting at our homes.

Now, everybody is a field employee. The world changed overnight. And the biggest concern is how do we protect every branch office and every individual who’s out there? It used to be simpler, you used to create a site-to-site virtual private network (VPN) or you had communications that could be easily taken care of.

Now the communication is open to everybody because your kids want to watch Disney in the living room while you’re trying to work in your office while your wife is doing work for her job three rooms down. The world is different.

The networks that we have to work through are different. Now, instead of trying to protect an all-encompassing environment, it’s about moving to more individual or granular levels of security, of protecting individual endpoints or systems.

I now have smart thermostats and a smart doorbell. I don’t want anybody attaching to those. I don’t need somebody talking to my kids through those things. In the same vein, I don’t need somebody attaching to my company’s OT environment and doing something silly inside of there. So, in my opinion, it’s less about the overarching IT environment, and more about how to protect the individuals.

Gardner: To protect all of those vulnerable individuals then, what are the new solutions? How are Unisys Stealth and the ServiceNow Platform coming together to help solve these issues?

Collaborate to protect individuals

Klaessig: Well, there are a couple of areas I’ll touch on. One is that Unisys has an uncanny capability to do isolation and initially contain a breach or threat. That is absolutely paramount for our customers. We need to get a very quick handle on how to investigate and respond. Our teams are all struggling to scale faster and faster with higher volume. So, every minute bought is a huge minute gained. Right out of the gate, between Unisys and ServiceNow, that buys us time — and every second counts. It’s invaluable.

Another thing that’s driving our solutions is the better ties between IT and security; there’s much more collaboration. For a long time, they tended to be in separate towers, so to speak. But the codependencies and collaborative drivers between Unisys and ServiceNow mean that those groups are so much more effective. The IT and security teams collaborate thanks to the things we do in the workloads and the automation between both of our solutions. It becomes extremely efficient and effective.

Gardner: E.G., why is your technology, Unisys Stealth for Dynamic Isolation, a good fit with ServiceNow? Why is that a powerful part of this automation drive?

Pearson: The nice part about dynamic isolation is it’s just a piece of what we can do as a whole with Unisys Stealth. Our Stealth core product is doing identity-based microsegmentation. And, by nature, it flows into software-defined networking, and it’s based on a zero trust model.

The reason that’s important is, in software-defined networking, we’re gathering tons of information about what’s happening across your network. So, in addition to what’s happening at the perimeter with firewalls, you are able to get really good, granular information about what’s happening inside of your environment, too.

We’re able to gather that and send all of that fantastic information over the ServiceNow Platform to your source, whatever it may be. ServiceNow is a fantastic jumping point for us to be able to get all that information into what would have been separate systems. Now they can all talk together through the ServiceNow Platform. 
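
For readers who want a concrete picture of what “sending that information over to the ServiceNow Platform” can look like, here is a minimal Python sketch using ServiceNow’s generic Table API. The instance URL, credentials, table choice, and field values are all placeholders for illustration; the actual Stealth-to-ServiceNow integration is a packaged product and does not necessarily work this way.

# Minimal sketch: post a detected network anomaly to a ServiceNow instance
# via the generic Table API. Instance, credentials, table, and field values
# are placeholders; the real integration is productized.
import requests

instance = "https://example.service-now.com"          # placeholder instance URL
auth = ("integration_user", "integration_password")   # placeholder credentials

event = {
    "short_description": "Anomalous east-west traffic from host finance-db-02",
    "description": "Unexpected protocol and port use detected by microsegmentation policy.",
    "severity": "1",
}

resp = requests.post(
    f"{instance}/api/now/table/incident",   # a security-specific table could be used instead
    auth=auth,
    headers={"Content-Type": "application/json", "Accept": "application/json"},
    json=event,
    timeout=10,
)
resp.raise_for_status()
print("Created record:", resp.json()["result"]["sys_id"])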

Klaessig: To add to that, this partnership solves the issues around security data volume so you can prioritize accurately because you’re not inundated. E.G. just described the perfect scenario, which is that the right data gets into the right solution to enable effective assessment and understanding to make prioritizations on threat responses and threat actions based on business impact.

That huge but managed amount of data that comes in is invaluable. It’s what drives everything to get to prioritizing the right incidents. 

Gardner: The way you’re describing how the solutions work together, it sounds like the IT people can get better awareness about security priorities. And the security people can perhaps get insights into making sure that the business-wide processes remain safe.

Critical care for large communities

Klaessig: You’re absolutely right because the continuous threat prioritization and breach protection means that the protective measures have to go through both IT and security. That collaboration and automation enables not just the operational resilience that IT is driving for, but also the cyber resilience that the security teams want. It is a handshake.

The shared data and workloads are part of security, but they reflect actual IT processes, and vice versa. That makes both more effective. 

Gardner: E.G., anything more to offer on this idea of roles, automation, and how your products come together?

Pearson: I wholeheartedly agree with Karl. IT and security can’t be siloed anymore. They can’t be separate organizations.

IT relies on what security operations puts in play, and security operations can’t do anything unless IT mitigates what security finds. So they can’t act individually any more. Otherwise, it’s like telling a football player to lace up their ice skates and go score a couple of goals.

Gardner: As we use microsegmentation and zero trust to attend to individual devices and users, can we provide a safer environment for sets of users or applications?

Pearson: Yes, we have to do this in smaller and smaller groups. It’s about being able to understand what those communities need and how to dynamically protect them. 

As we adjust to the pandemic and the humungous security breaches like we found at the end of 2020, protecting large communities can’t be done as easily. It’s so much easier to break those down into smaller chunks that can be best protected.

We can group things out based on use and the impact to the business. And again, this all contributes to the prioritization and the response when we coordinate between the two solutions, Unisys and ServiceNow.

Gardner: So it’s an identity-driven model but on steroids. It’s not just individual people. It’s critical groups.

Klaessig: Well said.

Pearson: Yes.

Gardner: How can people consume this, whether you’re in IT, security personnel, or even an end user? If you’re trying to protect yourself, how do you avail yourself of what ServiceNow and Unisys have put together?

Speed for bad-to-worse scenarios

Klaessig: The key is we target enterprises. That’s where we work together and that’s where ServiceNow workflows go. But to your point, nowadays I’m essentially a lone, solo office person, right? With that in mind, we need to remember those new best practices.

The appropriate workflows and processes within our collective solutions must reflect the actual individual users and processes. It goes back to our comments a couple of minutes ago, which is what do you use most? How often do you use it? When do you use it, and how critical is it? Also, who else is involved?

That’s something we haven’t touched on up until now — who else will be impacted? At the end of the day, what is the impact? In other words, if someone just had a credential stolen, I need the quick isolation from Unisys based on the areas of IT impacted. I can do that in ServiceNow, and then the appropriate response puts a workflow out and it’s automated into IT and security. That’s critical. And that’s the starting point for the other processes and workflows.

Gardner: We now need to consider what happens when you inevitably face some security issues. How does the ServiceNow Security Incident Response Platform and Unisys Stealth come together to help isolate, reduce, and stifle a threat rapidly?

Pearson: The reason such speed is important is that many of you have already been impacted by ransomware. How many of you have actually seen what ransomware will do if left unchecked for even just 30 minutes inside of a network? It’s horrible. That, to me, is your biggest need.

Whether it is just a regular end-user or if it’s a full-scale, enterprise-level-type workflow, speed is a huge reason that we need a solution to work and to work well. You have to be fast to keep bad things from going really, really wrong.

One of the biggest reasons we have come together with Stealth doing microsegmentation and building small communities and protecting them is to watch the flow of what happens with whom across ports and protocols because it is identity based. Who’s trying to access certain systems? We’re able to watch those things.

As we’re seeing that information, we’re able to say if something bad is happening on a specific system. We’re able to show that weird or bad traffic flow is occurring, send that to ServiceNow, and allow the automated operations to protect an endpoint or a server.

Because the process is automated, it brings the response down to less than 10 seconds, using automated workflows within ServiceNow. With dynamic isolation, we’re able to isolate that specific system and cut it off from doing anything else bad within a larger network.

That’s huge. That gives us the capability to take on something fast that could bring down an entire system. I have seen ransomware go 30 minutes unchecked, and it will completely ravage an entire file server, which brings down an entire company for three days until everything can be brought back up from the backups. Nobody has time for that. Nobody has time for the 30 minutes it took to do something silly to cost you three days of extra work, not to mention what else may come from that.

With our combined capabilities, Unisys Stealth provides the information we’re able to send to the ServiceNow Platform to have protection put in place to isolate and start to remediate within 10 seconds. That’s best for everybody because 10 seconds’ worth of damage is a whole lot easier to mitigate than 30 minutes’ worth.
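
To make that 10-second figure concrete, here is a hypothetical Python sketch of the kind of automated playbook being described: an anomaly alert arrives, a record is created, and an isolation call goes out, with no human in the loop. The create_security_incident and isolate_endpoint functions are invented stand-ins for the real ServiceNow and Unisys Stealth interfaces, which are not shown here.

# Hypothetical automated-isolation playbook. The two helper functions are
# invented placeholders standing in for the real product integrations.
import time

def create_security_incident(alert):
    # In practice this would call the ServiceNow API, as sketched earlier.
    print(f"Incident created for {alert['host']}: {alert['summary']}")

def isolate_endpoint(host):
    # Placeholder for a dynamic-isolation call that cuts the host off
    # from everything except the management plane.
    print(f"Isolation policy applied to {host}")

def handle_alert(alert):
    start = time.monotonic()
    create_security_incident(alert)
    isolate_endpoint(alert["host"])
    print(f"Contained in {time.monotonic() - start:.1f} seconds")

handle_alert({"host": "file-server-07", "summary": "Ransomware-like file activity"})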

Klaessig: Really well-said, E.G.

Gardner: I can see why 2+2=6 when it comes to putting your solutions together. ServiceNow gets the information from Stealth that something is wrong, but then you could put the best of what you do together to work.

Resolve to scale with automation

Klaessig: We do. And this leads us to do even more automation. How can you get to that discovery point faster, and what does that mean to resolve the problem?

And there’s another angle to this. Our listeners and readers are probably saying, “I know we need to respond quickly, and, yes, you’re enabling me to do so. And, yes, you’re enabling me to isolate and do some orchestration that ties things up to buy me time. But how do I scale the teams that are already buried beyond belief today to go ahead and address that?”

That’s a bit overwhelming. And here’s another added wrinkle. E.G. mentioned ransomware, and the scary part is that in 2020 ransomware was paid 50 percent of the time, versus one-third of the time in 2019. Even putting aside the pandemic and natural disasters, this is what our teams are facing.

It again goes back to what you heard E.G. and I touch on, which is automation of security and IT is what’s critical here. Not only can you respond consistently quicker, but you’ll be able to scale your teams and skills — and that’s where the automation further kicks in.

In other words, businesses can’t take on this type of volume around security management with the teams they have in place today. That’s why automation is so critical. Comprehensive tooling increases detection on the Unisys side, and that gives them not only more time to respond but allows them to be more effective as well. As attacks escalate, they can’t just go ahead and add more people in time, right? This is where they need that automation to be able to scale with what they have.

It really pays off. We’ve seen customers benefit from a dollars and cents perspective, where they saw a 74 percent improvement in time-to-identify. And now 46 percent of their incidents are handled by automation, saving more than 8,700 hours annually for their teams. Just wrap your head around that. I mean, that’s just a huge advantage from putting these pieces together and automating and orchestrating like E.G. has been talking about.

Gardner: Is it too soon, Karl, to talk about bots and more automation where the automation is a bit more proactive? What’s going to happen when the data and the speed get even more useful, but more compressed when it comes to the response time? How smart are these systems going to get?

Get people to do the right thing

Klaessig: The reality is, we’re already going there. When you think of machine learning (ML) and artificial intelligence (AI), we’re already doing a certain amount of that in the products.

As we leverage more of the great data from Unisys, it helps determine who should resolve those vulnerabilities, because they have a history of dealing with those types of vulnerabilities. That’s just an example of being able to use ML to align the right people to the right resolution. Because, at the end of the day, it still comes down to certain people doing certain things, and it always will. But we can use that ML and AI to put those together very quickly, very accurately, and very efficiently. So, again, it takes that time to respond down to seconds, as E.G. mentioned.
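
To make the routing idea concrete, here is a minimal, hypothetical sketch using scikit-learn. The ticket text and engineer names are invented training data, and this is not a description of ServiceNow's built-in machine learning, just an illustration of matching new work to people with relevant resolution history.

```python
"""
Toy example: suggest an assignee for a new ticket based on who resolved
similar tickets before. All data below is made up for illustration.
"""
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Historical records: (ticket description, engineer who resolved it)
history = [
    ("openssl vulnerability on web tier", "alice"),
    ("windows smb patch missing on file server", "bob"),
    ("expired tls certificate on load balancer", "alice"),
    ("ransomware indicators on endpoint", "carol"),
    ("unpatched apache struts on app server", "bob"),
]
texts, resolvers = zip(*history)

# Simple text classifier: ticket words -> likely resolver.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, resolvers)

new_ticket = "suspicious smb traffic and missing windows patches"
print(model.predict([new_ticket])[0])  # suggests the engineer with relevant history
```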

Gardner: Are we going to get to a point where we simply say, “J.A.R.V.I.S., clean up the network”?

Pearson: I hope so! Going back to my old days of being an admin, I was an extremely lazy admin. If I could have just said, “J.A.R.V.I.S., remediate my servers,” I would have been all over it.

I don’t think there’s any way we can’t move toward more automation and ML. But I don’t want us to get to the point where Skynet refuses to delete the virus, saying, “I am the virus.” We don’t need that.

But being able to automate helps overcome the mundane, such as resetting somebody’s password or pulling a system offline that’s experiencing some sort of weird behavior, whatever it may be. Automating those types of things helps everybody go faster through their day, because if you’re working a helpdesk, you’ve already got 19 people with their hair on fire begging for your attention.

If you could cut off five of those people by automating and very easily allowing some AI to do the work for you, why wouldn’t you? I think their time is more valuable than the few dollars it’s going to cost to automate those processes.

Klaessig: That’s going to be the secret to success in 2021 and going forward. You can scale, and the way you’re going to scale is to take out those mundane tasks and automate all of those different things that can be automated.

As I mentioned, 46 percent of the security incidents became automated for our customer. That’s a huge advantage. And at the end of the day, putting J.A.R.V.I.S. aside, the more ML we can get into it, the better and more repeatable the processes and the workflows will be — and that much faster. That’s ultimately what we’re driving toward as well.

Gardner: Now that we understand the context of the problem, the challenges organizations face, and how these solutions come together, I’m curious how this actually gets embedded into organizations. Is this something that security people do, that the IT people do, that the helpdesk people do? Is it all of the above?

Everybody has a role to reap benefits

Pearson: The way we usually get this going is there needs to be buy-in from everybody because it’s going to touch a lot of folks. I’m willing to bet Karl’s going to say similar things. It’s nice to have everybody involved and to have everybody’s buy-in on this.

It usually starts for us at Unisys with what we’re doing with microsegmentation and with the networking and security groups. They need to talk to each other to be able to get this rolled out. We also need the general IT folks because they’re going to have to install it and get it rolled out to endpoints. And we need the server admins involved as well.

At the end of the day, this goes back to being a collaborative opportunity … for IT and security to join together. These solutions benefit both teams and can piggyback on investments they have already made elsewhere.

When it comes down to it, everybody’s going to have to be involved a little bit. But it generally starts with the security folks and the networking folks, saying, “How can I protect my environment just a little bit more than I was before?” And then it rolls from there.

Klaessig: And that’s a big advantage as well. Going forward, I strongly believe in this, and I’ve seen the results of it, as a driver toward greater collaboration. It is that type of deployment and should be done in that manner. And then, quite frankly, both organizations reap the benefits.

Pearson: Wholeheartedly.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Unisys and ServiceNow.


How HPE Pointnext ‘Moments’ provide a proven critical approach to digital business transformation

The next edition of the BriefingsDirect Voice of Innovation video podcast series explores new and innovative paths for businesses to attain digital transformation.

Even as a vast majority of companies profess to be seeking digital business transformation, few proven standards or broadly accepted methods stand out as the best paths to take.

And now, the COVID-19 pandemic has accelerated the need for bold initiatives to make customer engagement and experience optimization an increasingly data-driven and wholly digital affair.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. View the video.

Stay with us here to welcome a panel of experts as they detail a multi-step series of “Moments” that guide organizations on their transformations. Here to share the Hewlett Packard Enterprise (HPE) view on helping businesses effectively innovate for a new era of pervasive digital business are Craig Partridge, Yara Schuetz, Aviviere Telang, Christian Reichenbach, and Amos Ferrari of HPE Pointnext Services.

The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Craig, while some 80 percent of CEOs say that digital transformation initiatives are under way, and they’re actively involved, how much actual standardization — or proven methods — are available to them? Is everyone going at this completely differently? Or is there some way that we can help people attain a more consistent level of success?

Partridge: A few things have emerged that are becoming commonly agreed upon, if not commonly executed upon. So, let’s look at those things that have been commonly agreed-upon and that we see consistently in most of our customers’ digital transformation agendas.

The first principle would be — and no shock here — focusing on data and moving toward being a data-driven organization to gain insights and intelligence. That leads to being able to act upon those insights for differentiation and innovation.

It’s true to say that data is the currency of the digital economy. Such a hyper-focus on data implies all sorts of things, not least of all making sure you’re trusted to handle that data securely, with cybersecurity for all of the good things that come out of that data.

Another thing we’re seeing now as common in the way people think about digital transformation is that it’s a lot more about being at the edge. It’s about using technology to create and exchange value as they transact business-to-business (B2B) or business-to-consumer (B2C) in a variety of different environments. Sometimes those environments can be digitized themselves, the idea of physical digitization, and technology can be used to address people and personalities as well. So edge-centric thinking is another common ingredient.

These may not form an exact science, in terms of a standardized method or industry standard benchmark, but we are seeing these common themes now iterate as customers go through digital transformation.

Gardner: It certainly seems that if you want to scale digital transformation across organizations that there needs to be consistency, structure, and common understanding. On the other hand, if everyone does it the same way, you don’t necessarily generate differentiation.

How do you best attain a balance between standardization and innovation?

Partridge: It’s a really good question because there are components of what I just described that can be much more standardized to deliver the desired outcomes from these three pillars. If you look, for example, at cloud-use-enablement, increasingly there are ways to become highly standardized and mobilized around a cloud agenda.

Moving toward containerization, for example, and leveraging microservices or developing with an open API mindset — these principles are now pervasive in almost every industry. And that doesn’t vary much from industry to industry. IT has to bring its legacy environment to play in that discussion at high velocity and high agility. So there is standardization on that side of it.

The variation kicks in as you pivot toward the edge and in thinking about how to create differentiated digital products and services, as well as how you generate new digital revenue streams and how you use digital channels to reach your customers, citizens, and partners. That’s where we’re seeing a high degree of variability. A lot of that is driven by the industry. For example, if you’re in manufacturing you’re probably looking at how technology can help pinpoint pain or constraints in key performance indicators (KPIs), like overall equipment effectiveness, and in addressing technology use across the manufacturing floor.

If you’re in retail, however, you might be looking at how digital channels can accelerate and outpace the four-walled retail experiences that companies may have relied on pre-pandemic.

Gardner: Craig, before we drill down into the actual Moments, were there any visuals that you wanted to share to help us appreciate the bigger picture of a digital transformation journey?

Partridge: Yes, let me share a couple of observations. As a team, we engage in thousands of customer conversations around the world. And what we’re hearing is exactly what we saw from a recent McKinsey report.


There are a number of reasons why seven out of 10 respondents in this particular survey say they are stalled in attaining digital execution and gaining digital business value. Those are centered around four key areas. First of all, communication. It sounds like such a simple problem statement, but it is sometimes so hard to communicate what is quite a complex agenda in a way that is simple enough for as many people as possible — key stakeholders — to rally behind and to make real inside the organization. Sometimes it’s a simple thing of, “How do I visualize and communicate my digital vision?” If you can’t communicate really clearly, then you can’t build that guiding coalition behind you to help execute.

A second barrier to progress centers on complexity: having a lot of suspended, spinning plates at the same time and trying to figure out the relationships and dependencies between all of the initiatives that are running. Can I de-duplicate or de-risk some of what I’m doing to get that done quicker? That tends to be a major barrier.

The third one you mentioned, Dana, which is, “Am I doing something different? Am I really trying to unlock the business models and value that are uniquely mine? Am I changing or reshaping my business and my market norms?” The differentiation challenge is really hard.


The fourth barrier is when you do have an idea or initiative agenda, then how to lay out the key building blocks in a way that’s going to get results quickly. That’s a prioritization question. Customers can get stuck in a paralysis-by-analysis mode. They’re not quite sure what to establish first in order to make progress and get to that minimum viable product as quickly as possible. Those are the top four things we see.

To get over those things, you need a clear transformation strategy and clarity on what it is you’re trying to do. As I always say, digital transformation spans everything from the edge, to business models, to how you engage with customers and clients, through to the technology assembled to deliver those experiences and differentiation. To do all of that, you have to have a distinctive transformation strategy. It leads to an acceleration capability, getting beyond the barriers, and planning the digital capabilities in the right sequence.

You asked, Dana, at the opening if there are emerging models to accomplish all of this. We have established at HPE something called Digital Next Advisory. That’s our joint customer engagement framework, through which we diagnose and pivot beyond the barriers that we commonly see in customers’ digital ambitions. So that’s a high-level view of where we see things going, Dana.

Gardner: Why do you call your advisory service subsets “Moments,” and why have you ordered them the way you did?

Moments create momentum for digital

Partridge: We called them Moments because in our industry if you start calling things services then people believe, “Oh, well, that sounds like just a workshop that I’ll pay for.” It doesn’t sound very differentiated.

We also like the way it expresses co-innovation and co-engagement. A moment is something to be experienced with someone else. So there are two sides to that equation.

In terms of how we sequence them, actually they’re not sequenced. And that’s key. One of the things we do as a team across the world is to work out where the constraint points and barriers are. So think of it as a methodology.

And as with any good methodology, there are a lot of tools in the toolkit. The key for us as practitioners in the Digital Next Advisory service is to know what tool to bring at the right point to the customer.

Sometimes that’s going to mean a communication issue, so let’s go solve for that particular problem first. Or, in some cases, it’s needing a differentiated technology partner, like HPE, to come in and create a vision, or a value proposition, that’s going to be different and unique. And so we would engage more specifically around that differentiation agenda.

There’s no sequencing; the sequencing is unique to each customer. And the right Moment is to make sure that the customer understands it is bidirectional. This is a co-engagement framework between two parties.

Gardner: All right, very good. Let’s welcome back Yara.

Schuetz: To reiterate what Craig mentioned, when we engage with a customer on a complex phenomenon such as digital transformation, it’s important to find common ground where we can and then move forward in the digital transformation journey specific to each of our customers.

Common core beliefs drive outcomes

We have three core beliefs. One is being edge-centric. Within the edge-centric core belief, we believe there are two business goals and outcomes that our customers are trying to achieve.


In the top left, we have the human edge-centric journey, which is all about redefining customer experiences. In this journey, for example, a corporate initiative could address the experiences of two personas: the customer or the employee.

These initiatives are designed to increase revenues and productivity via digital engagements such as new services, for example mobile apps. To complement this human edge journey, we have the physical journey, or the physical edge. Gaining insight and control means dealing with the physical edge. It’s about using, for example, Internet of things (IoT) technology for the environment the organization works in, operates in, or provides services in. So the business objective in this journey consists of improving efficiency by means of digitizing the edge.

Complementary to the edge-centric side, we also have the core belief that the enterprise of the future will be cloud-enabled. By being cloud-enabled, we again separate the cloud-enabled capabilities into two distinct journeys.

The bottom right-hand journey is about modernization and optimization. In this journey, initiatives address how IT can modernize its legacy environment with, for example, multi-cloud agility. It also includes optimization and management of services delivery, and deciding where different workloads are best hosted. We’re talking about on-premises as well as different cloud models to focus the IT journey. That also includes software development, especially accelerating development.

This journey also involves development improvement around personas. The aim is to speed up time-to-value with cloud-native adoption, for example leveraging microservices or containerization to shift innovation quickly over to the edge, using cloud platforms and APIs.

The third core belief that the enterprise of the future should strive for is the data-driven, intelligence journey, which is all about analyzing and using data to create intelligence to innovate and differentiate from competitors. As a result, they can better target, for example, business analytics and insights using machine learning (ML) or artificial intelligence (AI). Those initiatives generate or consume data from the other journeys.

And complementary to this aspect is bringing trust to all of the digital initiatives. It’s directly linked to the intelligence journey, because the data generated or consumed by the four journeys needs to be dealt with in a connected organization, with resiliency and cybersecurity playing leading roles, resulting in trust for internal as well as external stakeholders.

At the center is the operating model. And that journey really builds the center of the framework because skills, metrics, practices, and governance models have to be reshaped, since they dictate the outcomes of all digital transformation efforts.

These components are the enabling considerations to keep in mind when you’re pursuing different business goals such as driving revenues, building productivity, or modernizing existing environments via multi-cloud agility. What many companies are really asking for right now is to put all of that in the context of everything-as-a-service.

Everything-as-a-service does not just belong to, for example, the cloud-enabled side. It’s not only about how you’re consuming technology. It also applies to the edge side for our customers, and in how they deliver, create, and monetize their services to their customers.

Gardner: Yara, please tell us how organizations are using all of this in practice. What are people actually doing?

Communicate clearly with Activate

Schuetz: One of the core challenges we’ve experienced together with customers is that they have trouble framing and communicating their transformation efforts in an easily understandable way across their entire organizations. That’s not an easy task for them.

Communication tension points tend to be, for example, how to really describe digital transformation. Is there any definition that really suits my business? And how can I visualize, easily communicate, and articulate that to my entire organization? How does what I’m trying to do with technology make sense in a broader context within my company?

So within the Activate Moment, we familiarize them with the digital journey map. This captures their digital ambition and communicates a clear transformation and execution strategy. The digital journey map is used as a model throughout the conversations. This tends to improve how an abstract and complex phenomenon like digital transformation can be delivered as something visual and simple to communicate.

Besides simplification, the digital journey map in the Activate Moment also helps describe an overview and gives a structure of various influencing categories and variables, as well as their relationship with each other, in the context of digital transformation. It provides our customers guidance on certain considerations, and, of course, all the various possibilities of the application of technology in their business.

For example, at the edge, when we bring the digital journey map into the customer conversation in our Activate Moment, we don’t just talk about the edge generally. We refer to specific customer needs and what their edge might be.

In the financial industry, for example, we talk about branch offices as their edge. In manufacturing, we’re talking about production lines as their edges. If in retail you have public customers, we talk about the venues as the edge and how, in times like this and the new normal, they can redefine experience and drive value for their customers there.

Of course, this also serves as inspiration for internal stakeholders. They might say, “Okay, if I link these initiatives, or if I’m talking about this topic in the intelligence space, [how does that impact] the digitization of research and development? What does that mean in that context? And what else do I need to consider?”

Such inspiration means they can tie all of that together into a holistic and effective digital transformation strategy. The Activate Moment engages more innovation on the customer-centric side, too, by bringing insights into the different and various personas at a customer’s edge. They can have different digital ambitions and different digital aspirations that they want to prosper from and bring into the conversation.

Gardner: Thanks again, Yara. On the thinking around personas and the people, how does the issue of defining a new digital corporate culture fit into the Activate Moment?

Schuetz: It fits in pretty well because we are addressing various personas with our Activate Moment. For the chief digital officer (CDO), for example, the impact of the digital initiatives on the digital backbone are really key. She might ask, “Okay, what data will be captured and processed? And which insights will we drive? And how do we make these initiatives trusted?”

Gardner: We’re going to move on now to the next Moment, Align, and orchestrating initiatives with Aviviere. Tell us more about orchestrating initiatives and the Align Moment, please.

Align with the new normal and beyond

Telang: The Align Moment is designed to help organizations orchestrate their broad catalog of digital transformation initiatives. These are the core initiatives that drive the digital agenda. Over the last few years, as we’ve engaged with customers in various industries, we have found that one of the most common challenges they encounter in this transformation journey is a lack of coordination and alignment between their most critical digital initiatives.


And, frankly, that slows their time-to-market and reduces the value realized from their transformation efforts. Especially now, with the new normal that we find ourselves in, organizations are rapidly scaling up and broadening out their digital agenda.

As these organizations rapidly pivot to launching new digital experiences and business models, they need to rapidly coordinate their transformation agenda across an ever-increasing set of stakeholders, who sometimes have competing priorities. These stakeholders can be the various technology teams sitting in an IT or digital office, or perhaps the business units responsible for delivering these new experience models to market. Or they can be the internal functions that support internal operations and supply chains of the organizations.

We have found that these groups are not always well-aligned to the digital agenda. They are not operating as a well-oiled machine in their pursuit of that singular digital vision. In this new normal, speed is critical. Organizations have to get aligned to the conversation and execute on all of the digital agenda quickly. That’s where the Align Moment comes in. It is designed to generate deep insights that help organizations evaluate a catalog of digital initiatives across organizational silos and to identify an execution strategy that speeds up their time-to-market.


So what does that actually look like? During the Align Moment, we bring together a diverse set of stakeholders that own or contribute to the digital agenda. Some of the stakeholders may sit in the business units, some may sit in internal functions, or maybe even on the digital office. But we bring them together to jointly capture and evaluate the most critical initiatives that drive the core of the digital agenda.

The objective is to blend our own expertise and experience with that of our customers to jointly investigate and uncover the prerequisites and interdependencies that so often exist between these complex sets of enterprise-scale digital initiatives.

During the Align Moment, you might realize that the business units need to quickly recalibrate their business processes in order to meet the data security requirements coming in from the security or digital teams. For example, one of our customers found out during their own Align Moment that before they got too far down the path of developing their next generation of digital product, they needed to first build in data transparency and accessibility as a core design principle in their global data hub.

The methodology in the Align Moment significantly reduces execution risk as organizations embark on their multi-year transformation agendas. Quite frankly, these agendas are constantly evolving because the speed of the market today is so fast.

Our goal here is to drive a faster time-to-value for the entire digital agenda by coordinating the digital execution strategy across the organization. That’s what the Align Moment helps our customers with. That value has been brought to different stakeholders that we’ve engaged with.

The Align Moment has brought tremendous value to the CDO, for example. The CDO now has the ability to quickly make sense of, and in some cases coordinate, the complex web of digital initiatives running across their organizations, regardless of which silos they may be owned within. They can identify a path to execution that speeds up the realization of the entire digital agenda. I think of it as giving the CDO a dashboard through which they can now see their entire transformation on a singular framework.

We have also found that the Align Moment delivers a lot of value for digital initiative owners, because we work jointly across silos to de-risk the execution path that implements each initiative, whether it’s technology risk, process risk, or governance risk. That helps to highlight the dependencies between competing initiatives and competing priorities. And then, sequencing the work streams and efforts minimizes the risk of delays or mismatched deliverables, or mismatched outputs, between teams.

And then there is the chief information officer (CIO). This is a great tool for the CIO to take IT to the next level. They can elevate the impact of IT in the business, and in the various functions in the organization, by establishing agile, cross-functional work streams that can speed up the execution of the digital initiatives.

That’s in a nutshell what the Align Moment is about, helping our customers rapidly generate deep insights to help them orchestrate their digital agenda across silos, or break down silos, with the goal to speed up execution of their agendas.

Advance to the next big thing

Gardner: We’re now moving on to our next Moment, around stimulating differentiation, among other things. We now welcome back Christian to tell us about the Advance Moment.

Reichenbach: The train of thought here is that digital transformation is not only about optimizing businesses by using technology. We also want to emphasize that technology is used to transform businesses.

That means that we are using technology to differentiate the value propositions of our customers. And differentiation means, for example, new experiences for the customers of our customers, as well as new interactions with digital technology.

Further, it’s about establishing new digital business models, gaining new revenue streams, and expanding the ecosystem in a much broader sense. We want to leverage technology to differentiate the value propositions of our customers, and differentiation means you can’t get there by just copycatting, looking to your peers, and replicating what others are doing. That will not differentiate the value proposition.

Therefore, we specifically designed the Advance Moment, where we co-innovate and co-ideate together with our customers to find their next big thing and drive technology toward a much more differentiated value proposition.

Gardner: Christian, tell us more about the discrete steps that people need to take in order to get through that stimulation of differentiation.

Reichenbach: Differentiation comes from having new ideas and doing something different than in the past. That’s why we designed the Advance Moment to help our customers differentiate their unique value proposition.


The Advance Moment is designed as a thinking exercise that we do together with our customers across their diverse teams, meaning product owners, technology designers, engineers, and the CDO. This is a diverse team thinking about a specific problem they want to solve, but they shouldn’t think about it in isolation. They should think about what they can do differently in the future to establish new revenue streams, perhaps with a new digital ecosystem, to generate the new digital business models that we see all over the place in our customers’ annual reports.

Everyone is in the race to find the next big thing. We want to help them because we have the technology capabilities and experience to explain and discuss with our customers what is possible today with such leading technology as from HPE.

We can prove that we’ve done that. For example, we sat down with Continental, the second largest automotive parts supplier in the world, and ideated about how we could redefine the experience of a driver who is driving along the road. We came up with a data exchange platform that helps car manufacturers exchange data with each other so that the driver who’s sitting in the car gets new entertainment services that were not possible without a data exchange platform.

Our ideation and our Advance Moment are focused on redefining the experience and stimulating new ideas that are groundbreaking, not just copycatting what their peers are doing. And that, of course, will differentiate our customers’ value propositions in a unique way so that they can create new experiences and, ultimately, new revenue streams.

We’re addressing particular personas within our customer’s organization. That’s because today we see that the product owners in a company are powerful and are always asking themselves, “How can I bring my product to the next level? How can I differentiate my product so that it is not easily comparable with my peers?”

And, of course, the CDOs in the customer organizations are looking to orchestrate these initiatives, support the product owners and engineers, and build up the innovation engine with the right initiatives and right ideas. And, of course, when we’re talking about digital business transformation, we end up in the IT department because it has to operate somewhere.

So we bring in the experts from the IT department, as well as the CIO, to turn ideas quickly into realization. Turning ideas quickly into something meaningful for our customers is what we designed the Accelerate Moment for.

Gardner: We will move on next to Amos to learn about the Accelerate Moment and moving toward the larger digital transformation value.

Accelerate from ideas into value

Ferrari: When it comes to realizing digital transformation, let me ask you a question, Dana. What do you think is the key problem our customers have?

Gardner: Probably finding ways to get started and then finding realization of value and benefits so that they can prove their initiative is worthwhile.

Ferrari: Yes. Absolutely. It’s a problem of prioritization of investment. They know that they need to invest, they need to do something, and they ask, “Where should I invest first? Should I invest in the big infrastructure first?”

But these decisions can slow things down. Yet time-to-market and speed are the keys today. We all know that this is what is driving the behavior of the people in their transformations. And so the key thing is the Accelerate Moment. It’s the Moment where we engage with our customers via workshops with them.

We enable them to extrapolate from their digital ambition and identify what will enable them to move into the realization of their digital transformation. “Where should I start? What is my journey’s path? What is my path to value?” These are the main questions that the Accelerate Moment answers.


As you can see, this is a part of the entire HPE Digital Next Advisory services, and it’s enabling the customer to move critically to the realization of benefits. In this engagement, you start with the decision about the use cases and the technology. There are a number of key elements and decisions that the customer is making. And this is where we’re helping them with the Accelerate Moment.

To deliver an Accelerate Moment, we use a number of steps. First, we frame the initiative by having a good discussion about their KPIs. How are you going to measure them? What are the benefits? Because the business is what is driving this. We know that. And we understand how the technology is the link to the business use case. So we frame the initiative, understand the use cases, and scope out the use cases that advance the key KPIs that are essential for the customer. That is a key step in the Moment.

Another important thing to understand is that in a digital transformation, a customer is not alone. No customer is really alone in that. The transformation is not successful if they don’t think holistically about their digital ecosystem. A customer is successful when they think about the complete ecosystem, including not only the key internal stakeholders but also the other stakeholders surrounding them. Together, those stakeholders can help build new digital value and enable differentiation.

The next step is understanding the depth of technology across our digital journey map. The digital journey map helps customers to see beyond just one angle. They may have started only from the IT point of view, or only from the developer point of view, or just the end-user point of view. The reality is that IT is now becoming the value creator. But to be the value creator, they need to consider the technology of the entire company.

They need to consider edge-to-cloud, and data, as a full picture. This is where we can help them through a discussion about seeing the full technology that supports the value. How can you bring value to your full digital transformation?

The last step that we consider in the Accelerate Moment is to identify the elements surrounding your digital transformation that are the key building blocks and that will enable you to execute immediately. Those building blocks are key because they create what we call the minimum viable product.

They should build up a minimum viable product and surround it with the execution to realize the value immediately. They should do that without thinking, “Oh, maybe I need two or three years before I realize that value.” They need to change to asking, “How can I do that in a very short time by creating something that is simple and straightforward, by putting the key building blocks in place?”

This shows how everything is linked and how we need to best link it together. How? We link everything together with stories. And the stories are what help our key stakeholders realize what they need to create. The stories are about the different stakeholders and how they see themselves in the future of digital transformation. This is the way we show them how this is going to be realized.

The end result is that we will deliver a number of stories that are used to assemble the key building blocks. We create a narrative to enable them to see how the applied technology enables them to create value for their company and achieve key growth. This is the Accelerate Moment.

Gardner: Craig, as we’ve been discussing differentiation for your customers, what differentiates HPE Pointnext Services? Why are these four Moments the best way to obtain digital transformation?


Partridge: Differentiation is key for us, as well as for our customers, across a complex and congested landscape of partners that customers can choose from. Some of the differentiation we’ve touched on here. There is no one else in the market, as far as I’m aware, that has the edge-to-cloud digital journey map, which is HPE’s fundamental model. It allows us to holistically paint the story of not only digital transformation and digital ambition, but also to show you how to do that at the initiative level and how to plug in those building blocks.

I’m not aware of anybody else with the maturity of an edge-to-cloud model who can bring digital ambition to life: visualize it through the Activate Moment, orchestrate it through the Align Moment, create differentiation through the Advance Moment, and then get to quicker value with the Accelerate Moment.

Gardner: Craig, for those organizations interested in learning more, how do they get started? Where can they go for resources to gain the ability to innovate and be differentiated?

Partridge: If anybody viewing this has seen something that they want to grab on to, that they think can accelerate their own digital ambition, then simply pick up the phone and call HPE and your sales rep. We have sales organizations ranging from dedicated enterprise managers at some of the biggest customers around the world through to our inside-sales organization for small- to medium-sized businesses. Call your HPE sales rep and say the magic words, “I want to engage with a digital adviser and I’m interested in Digital Next Advisory.” That should be the flag that triggers a conversation with one of our digital advisers around the world.

Finally, there’s an email address, digitaladviser@hpe.com. If worse comes to worst, send an email to that address and we’ll get straight back to you. We want to make it as easy as possible, so just reach out to an HPE digital adviser.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. View the video. Sponsor: Hewlett Packard Enterprise.


How global data availability accelerates collaboration and delivers business insights

The next BriefingsDirect data strategy insights discussion explores the payoffs when enterprises overcome the hurdles of disjointed storage to obtain global data access.

By leveraging the latest in container and storage server technologies, the holy grail of inclusive, comprehensive, and actionable storage can be obtained. And such access extends across all deployment models – from hybrid cloud, to software-as-a-service (SaaS), to distributed data centers, and edge.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Stay with us here to examine the role that comprehensive data storage plays in delivering the rapid insights businesses need for digital business transformation with our guest, Denis Kennelly, General Manager, IBM Storage. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Denis, in our earlier discussions in this three-part series we learned about IBM’s vision for global consistent data, as well as the newest systems forming the foundation for these advances.

But let’s now explore the many value streams gained from obtaining global data access. We hear a lot about the rise of artificial intelligence (AI) adoption needed to support digital businesses. So what role does a modern storage capability — particularly with a global access function and value — play in that AI growth? 

Kennelly: As enterprises become increasingly digitally transformed, the amount of data they are generating is enormous. IDC predicts that something like 42 billion Internet of things (IoT) devices will be sold by 2025, so the role of storage is no longer centralized in data centers. It needs to be distributed across this entire hybrid cloud environment.

Discover and share AI data

For actionable AI, you want to build models on all of the data that’s been generated across this environment. Being able to discover and understand that data is critical, and that’s why it’s a key part of our storage capabilities. You need to run that storage on all of these highly distributed environments in a seamless fashion. You could be running anywhere — the data center, the public cloud, and at edge locations. But you want to have the same software and capabilities for all of these locations to allow for that essential seamless access.

That’s critical to enabling an AI journey because AI doesn’t just operate on the data sitting in a public cloud or data center. It needs to operate on all of the data if you want to get the best insights. You must get to the data from all of these locations and bring it together in a seamless manner.

Gardner: When we’re able to attain such global availability of data — particularly in a consistent context – how does that accelerate AI adoption? Are there particular use cases, perhaps around DevOps? How do people change their behavior when it comes to AI adoption, thanks to what the storage and data consistency can do for them?

Kennelly: First it’s about knowing where the data is and doing basic discovery. And that’s a non-trivial task because data is being generated across the enterprise. We are increasingly collaborating remotely and that generates a lot of extended data. Being able to access and share that data across environments is a critical requirement. It’s something that’s very important to us. 

Then — as you discover and share the data – you can also bring that data together into use by AI models. You can use it to actually generate better AI models across the various tiers of storage. But you don’t want to just end up saying, “Okay, I discovered all of the data. I’m going to move it to this certain location and then I’m going to run my analytics on it.”

Instead, you want to do the analytics in real time and in a distributed fashion. And that’s what’s critical about the next level of storage. Coming back to what’s hindering AI adoption, number one is that data discovery, because enterprises spend a huge amount of time just discovering the data. And when you get access, you need to have seamless access. And then, of course, as you build your AI models you need to infuse those analytics into the applications and capabilities that you’re developing.
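
To illustrate the distributed-analytics pattern Kennelly describes, here is a small, hypothetical Python sketch. The three location readers are stand-ins for whatever interface each tier exposes (they just return canned numbers here); the point is that each tier computes its own summary and only small results travel, rather than copying all raw data to one place first.

```python
"""
Toy federated aggregation: each location returns a (count, total) summary and
the global metric is combined centrally. The per-site readers are placeholders.
"""
from concurrent.futures import ThreadPoolExecutor


def summarize_datacenter():
    # Placeholder: would scan a local file system or database in practice.
    values = [12.1, 9.8, 11.4]
    return len(values), sum(values)


def summarize_cloud():
    # Placeholder: e.g. a query pushed down to an object store in a cloud region.
    values = [10.2, 10.9]
    return len(values), sum(values)


def summarize_edge():
    # Placeholder: e.g. a lightweight agent running on an edge gateway.
    values = [13.0, 12.7, 12.2]
    return len(values), sum(values)


def federated_mean():
    """Each tier computes locally; only tiny summaries cross the network."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(fn) for fn in
                   (summarize_datacenter, summarize_cloud, summarize_edge)]
        summaries = [f.result() for f in futures]
    count = sum(c for c, _ in summaries)
    total = sum(t for _, t in summaries)
    return total / count


if __name__ == "__main__":
    print(f"Global metric without centralizing raw data: {federated_mean():.2f}")
```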

And that leads to your question around DevOps, to be able to integrate the processes of generating and building AI models into the application development process so that we make sure the application developers can leverage those insights for the applications they are building.

Gardner: For many organizations, moving to hybrid cloud has been about application portability. But when it comes to the additional data mobility we gain from consistent global data access, there’s a potential greater value. Is there a second shoe to fall, if you will, Denis, when we can apply such data mobility in a hybrid cloud environment?

Access data across hybrid cloud 

Kennelly: Yes, and that second shoe is about to fall. The first part of our collective cloud journey was all about moving to the public cloud, moving everything to public clouds, and building applications with cloud-based data.

What we discovered in doing that is that life is not so simple, and we’re really now in a hybrid cloud world for many reasons. Because of that success, we now need the hybrid cloud approach.

The need for more cloud portability has led to technologies like containers to get portability across all of the environments — from data centers to clouds. As we roll out containers into production, however, the whole question of data becomes even more critical.

You can now build an application that runs in a certain environment, and containers allow you to move that application to other environments very quickly. But if the data doesn’t follow — if the data access doesn’t follow that application seamlessly — then you face some serious challenges and problems.

And that is the next shoe to drop, and it’s dropping right now. As we roll out these sophisticated applications into production, being able to copy data or get access to data across this hybrid cloud environment is the biggest challenge the industry is facing.
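
One way to picture "the data access follows the application" is an app that talks to the same S3-compatible interface wherever it is scheduled, with only configuration changing per environment. The endpoints, bucket name, and credentials below are invented for illustration; this is a sketch of the portability idea, not a description of any specific vendor implementation.

```python
"""
Sketch: one code path for data access, many environments. Endpoints, bucket
names, and credentials are hypothetical placeholders.
"""
import os

import boto3

# Only configuration changes per environment; the application code does not.
ENDPOINTS = {
    "datacenter": "https://s3.dc.example.internal",
    "cloud": "https://s3.eu-west-1.example-cloud.com",
    "edge": "https://s3.edge-site-7.example.internal",
}


def open_client(location: str):
    """Return an S3-compatible client for wherever this container is running."""
    return boto3.client(
        "s3",
        endpoint_url=ENDPOINTS[location],
        aws_access_key_id=os.environ.get("S3_KEY", "demo"),
        aws_secret_access_key=os.environ.get("S3_SECRET", "demo"),
    )


def load_orders(location: str) -> bytes:
    """Same call whether the app runs on-premises, in a cloud, or at the edge."""
    s3 = open_client(location)
    response = s3.get_object(Bucket="orders", Key="today.parquet")
    return response["Body"].read()
```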

Gardner: When we envision such expansive data mobility, we often think about location, but it also impacts the type of data – be it file, block, and object storage, for example. Why must there be global access geographically — but also in terms of the storage type and across the underlying technology platforms? 

Kennelly: We really have to hide that layer of complexity, the storage type and platform, from the application developer. At the end of the day, the application developer is looking for a consistent API through which to access the data services, whether that’s file, block, or object. They shouldn’t have to care about that level of detail.


It’s important that there’s a focus on consistent access via APIs for the developer. And then the storage subsystem has to take care of the federated global access of the data. Also, as we generate data, the storage subsystem should scale horizontally.

These are the design principles we have put into the IBM Storage platform. Number one, you get seamless and consistent access, be it file, object, or block storage. And we can scale horizontally as you generate data across that hybrid cloud environment.
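
As a simple illustration of that design principle, the sketch below hides two very different back ends behind one put/get interface, so application code never changes when the storage underneath does. The class and method names are invented for this example; a real platform would expose standard interfaces (for example S3 or CSI) rather than this toy abstraction.

```python
"""
Toy 'one consistent API, many back ends' abstraction. Names are illustrative.
"""
from abc import ABC, abstractmethod
from pathlib import Path


class DataStore(ABC):
    """What the developer sees: put/get, nothing about the back end."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...


class FileStore(DataStore):
    """File-backed implementation, e.g. an NFS mount in the data center."""

    def __init__(self, root: str):
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def put(self, key: str, data: bytes) -> None:
        (self.root / key).write_bytes(data)

    def get(self, key: str) -> bytes:
        return (self.root / key).read_bytes()


class ObjectStore(DataStore):
    """In-memory stand-in for an object store such as an S3-compatible bucket."""

    def __init__(self):
        self._objects = {}

    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data

    def get(self, key: str) -> bytes:
        return self._objects[key]


def save_model(store: DataStore, blob: bytes) -> None:
    # Application code is identical regardless of where the bytes land.
    store.put("model.bin", blob)


if __name__ == "__main__":
    for store in (FileStore("/tmp/demo-store"), ObjectStore()):
        save_model(store, b"weights")
        print(type(store).__name__, len(store.get("model.bin")), "bytes")
```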

Gardner: The good news is that global data access enablement can now be done with greater ease. The bad news is the global access enablement can be done anywhere, anytime, and with ease.

And so we have to also worry about access, security, permissions, and regulatory compliance issues. How do you open the floodgates, in a sense, for common access to distributed data, but at the same time put in the guardrails that allow for the management of that access in a responsible way?

Global data access opens doors

Kennelly: That’s a great question. As we introduce simplicity and ease of data access, we can’t just open it up to everybody. We have to make sure we have good authentication as part of the design, using things like two-factor authentication on the data-access APIs.
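
As a concrete, hedged example of that second factor on a data-access call, here is a minimal sketch using the pyotp library for time-based one-time passwords. The access_dataset function and its policy are invented for illustration; real systems would layer this onto existing identity and key-management infrastructure.

```python
"""
Minimal TOTP check in front of a data-access call. The access function and
its policy are illustrative only.
"""
import pyotp

# In practice the shared secret is provisioned per user or service at enrollment.
SECRET = pyotp.random_base32()
totp = pyotp.TOTP(SECRET)


def access_dataset(user: str, otp_code: str, dataset: str) -> bytes:
    """Serve data only when the one-time code checks out."""
    if not totp.verify(otp_code):
        raise PermissionError(f"Second factor failed for {user}")
    # Placeholder for the real read path (file, block, or object).
    return f"contents of {dataset}".encode()


if __name__ == "__main__":
    code = totp.now()  # what the user's authenticator app would display
    print(access_dataset("analyst-1", code, "sales-q3"))
```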

But that’s only half of the problem. In the security world, the unfortunate acceptance is that you probably are going to get breached. It’s in how you respond that really differentiates you and determines how quickly you can get the business back on its feet.

And so, when something bad happens, the third critical role for the storage subsystem to play is access control to the persistent storage. At the end of the day, that is what attackers are after. Being able to understand the typical behavior of those storage systems, and how data is usually being stored, forms a baseline against which you can understand when something out of the ordinary is happening.

Clearly, if you’re under a malware or CryptoLocker attack, you see a very different input/output (IO) pattern than you would normally see. We can detect that in real time, understand when it happens, and make sure you have protected copies of the data so you can quickly access them and get back to business and back online quickly.

Why is all of that important? Because we live in a world where it’s not a case of if it will happen, it’s really a case of when it will happen. How we respond is critical.
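
A toy version of that baselining idea is sketched below: learn what normal write rates look like, then flag samples that sit far outside the baseline, such as the write surge an encryption attack tends to produce. The numbers and threshold are arbitrary assumptions; production detection uses far richer signals than a single metric.

```python
"""
Toy I/O baseline check. Thresholds and sample data are arbitrary assumptions.
"""
from statistics import mean, stdev


def is_anomalous(history_writes_per_sec, current_writes_per_sec, z_threshold=4.0):
    """Flag the current sample if it sits far outside the learned baseline."""
    baseline_mean = mean(history_writes_per_sec)
    baseline_std = stdev(history_writes_per_sec) or 1.0  # avoid divide-by-zero
    z_score = (current_writes_per_sec - baseline_mean) / baseline_std
    return z_score > z_threshold


if __name__ == "__main__":
    normal_day = [120, 135, 110, 128, 140, 125, 118, 132]  # writes/sec baseline
    print(is_anomalous(normal_day, 131))   # False: within the normal range
    print(is_anomalous(normal_day, 2400))  # True: encryption-like write storm
```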

Gardner: Denis, throughout our three-part series we’ve been discussing what we can do, but we haven’t necessarily delved into specific use cases. I know you can’t always name businesses and reference customers, but how can we better understand the benefits of a global data access capability in the context of use cases?

In practice, when the rubber hits the road, how does global data storage access enable business transformation? Is there a key metric you look for to show how well your storage systems support business outcomes? 

Global data storage success

Kennelly: We’re at a point right now when customers are looking to drive new business models and to move much more quickly in their hybrid cloud environments.

There are enabling technologies right now facilitating that. There’s a lot of talk about edge with the advent of 5G networks, which enable a lot of this to happen. When you talk about seamless access and the capability to distribute data across these environments, you need the underlying network infrastructure to make that happen.

As we do that, we’re looking at a number of key business measures and metrics. We have done some independent surveys and analysis looking at the business value that we drive for our clients with a hybrid cloud platform and things like portability, agility, and seamless data access.

In terms of business value, we have four or five measures. For example, we can drive roughly 2.5 times more business value for our clients — everything from top-line growth to operational savings. And that’s something that we have tested with many clients independently.

One example that’s very relevant in the world we live in today: we have a cloud provider that needed to have more federated access to their global data, but they also wanted to distribute that through edge nodes in a consistent manner. That’s just one example of this happening in action.

Gardner: You know, some of the major consumers of analytics in businesses these days are data scientists, and they don’t always want to know what’s going on underneath the covers. On the other hand, what goes on underneath the covers can greatly impact how well they can do their jobs, which are often essential to digital business transformation.


For you to address a data scientist specifically about why global access for data and storage modernization is key, what would you tell them? How do you describe the value that you’re providing to someone like a data scientist who plays such a key role in analytics?

Kennelly: Well, data scientists talk a lot about data sets. They want access to data sets so they can test their hypothesis very quickly. In a nutshell, we surface data sets quicker and faster than anybody else at a price performance that leads the industry — and that’s what we do every day to enable data scientists.

Gardner: Throughout our series of three storage strategy discussions, we’ve talked about how we got here and what we’re doing. But we haven’t yet talked about what comes next.

These enabling technologies not only satisfy business imperatives and requirements now but set up organizations to be even more intelligent over time. Let’s look to the future for the expanding values when you do data access globally and across hybrid clouds well. 

Insight-filled future drives growth

Kennelly: Yes, you get to critically look at current and new business models. At the end of the day, this is about driving business growth. As you start to look at these environments — and we’ve talked a lot about analytics and data – it becomes about getting competitive advantage through real-time insights about what’s going on in your environments.

You become able to better understand your supply chain, what’s happening in certain products, and in certain manufacturing lines. You’re able to respond accordingly. There’s a big operational benefit in terms of savings. You don’t have to have excess capacity in the environment.

Also, in seeking new business opportunities, you will detect the patterns needed to gain insights you hadn’t had before by applying analytics and machine learning to what’s critical in your systems and markets. If you move your IT environment and centralize everything in one cloud, for example, that really hinders this progress. By being able to do the analytics on all of the data as it’s generated, in real time, you get unique insights that provide competitive advantage.

Gardner: And lastly, why IBM? What sets you apart from the competition in the storage market for obtaining these larger goals of distributed analytics, intelligence, and competitiveness?

Kennelly: We have shown over the years that we have been at the forefront of many transformations of businesses and industries. Going back to the electronic typewriter, if we want to go back far enough, or now to our business-to-business (B2B) or business-to-employee (B2E) models in the hybrid cloud — IBM has helped businesses make these transformations. That includes everything from storage to data and AI through to hybrid cloud platforms, with Red Hat Enterprise Linux, and right out to our business service consulting.

IBM has the end-to-end capabilities to make that all happen. It positions us as an ideal partner who can do so much.

I love to talk about storage and the value of storage, and I spend a lot of time talking with people in our business consulting group to understand the business transformations that clients are trying to drive and the role that storage has in that. Likewise, with our data science and data analytics teams that are enabling those technologies.

The combination of all of those capabilities as one idea is a unique differentiator for us in the industry. And it’s why we are developing the leading edge capabilities, products, and technology to enable the next digital transformations.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: IBM Storage.

How consistent storage services across all tiers and platforms attain data simplicity, compatibility, and lower cost

This BriefingsDirect Data Strategies Insights discussion series, Part 2, explores the latest technologies and products delivering common data services across today’s hybrid cloud, distributed data centers, and burgeoning edge landscapes.

New advances in storage technologies, standards, and methods have changed the game when it comes to overcoming the obstacles businesses too often face when seeking pervasive analytics across their systems and services. 

Stay with us now as we examine how IBM Storage is leveraging containers and the latest storage advances to deliver inclusive, comprehensive, and actionable storage.  

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To learn more about the future of storage strategies that accelerate digital transformation, please welcome Denis Kennelly, General Manager, IBM Storage. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: In our earlier discussion we learned about the business needs and IBM’s large-scale vision for global, consistent data. Let’s now delve beneath the covers into what enables this new era of data-driven business transformation. 

In our last discussion, we also talked about containers — how they had been typically relegated to application development. What should businesses know about the value of containers more broadly within the storage arena as well as across other elements of IT?

Containers for ease, efficiency

Kennelly: Sometimes we talk about containers as being unique to application development, but I think the real business value of containers is in the operational simplicity and cost savings. 

When you build applications on containers, they are container-aware. When you look at Kubernetes and the controls you have there as an operations IT person, you can scale up and scale down your applications seamlessly. 

As we think about that and about storage, we have to include storage under that umbrella. Traditionally, storage did a lot of that work independently. Now we are in a much more integrated environment where you have cloud-like behaviors. And you want to deliver those cloud-like behaviors end-to-end — be it for the applications, for the data, for the storage, and even for the network — right across the board. That way you can have a much more seamless, easier, and operationally efficient way of running your environment.

Containers are much more than just an application development tool; they are a key enabler to operational improvement across the board.
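
To make that elasticity concrete, here is a minimal sketch of scaling a containerized application up and down from an operations seat. It assumes a configured kubectl context and a deployment named web; both are illustrative details, not something described in the interview.

    # Minimal sketch: scale a hypothetical Kubernetes deployment up for a demand
    # spike and back down afterward. Assumes kubectl is configured for the target
    # cluster and a deployment named "web" exists; both names are illustrative.
    import subprocess

    def scale(deployment: str, replicas: int, namespace: str = "default") -> None:
        subprocess.run(
            ["kubectl", "scale", f"deployment/{deployment}",
             f"--replicas={replicas}", "-n", namespace],
            check=True,
        )

    scale("web", 10)  # scale up for a demand spike
    scale("web", 3)   # scale back down when the spike passes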

Gardner: Because hybrid cloud and multi-cloud environments are essential for digital business transformation, what does this container value bring to bridging the hybrid gap? How do containers lead to a consistent and actionable environment, without integrations and complexity thwarting wider use of assets around the globe?

Kennelly: Let’s talk about what a hybrid cloud is. To me, a hybrid cloud is the ability to run workloads on a public cloud and on a private cloud traditional data center. And even right out to edge locations in your enterprise where there are no IT people whatsoever. 

Being able to do that consistently across that environment — that’s what containers bring. They allow a layer of abstraction above the target environment, be it a bare-metal server, a virtual machine (VM), or a cloud service – and you can do that seamlessly across those environments.

That’s what a hybrid cloud platform is and what enables that are containers and being able to have a seamless runtime across this entire environment.

And that’s core to digital transformation, because when we start to think about where we are today as an enterprise, we still have assets sitting on the data center. Typically, what you see out there are horizontal business processes, such as human resources or sales, and you might want to move those more to a software as a service (SaaS) capability while still retaining your core, differentiating business processes.

For compliance or regulatory reasons, you may need to keep those assets in the data center. Maybe you can move some pieces. But at the same time, you want to have the level of efficiency you gain from cloud-like economics. You want to be able to respond to business needs, to scale up and scale down the environment, and not design the environment for a worst-case scenario. 

That’s why a hybrid cloud platform is so critical. And underneath that, why containers are a key enabler. Then, if you think about the data in storage, you want to seamlessly integrate that into a hybrid environment as well.

Gardner: Of course, the hybrid cloud environment extends these days more broadly with the connected edge included. For many organizations the edge increasingly allows real-time analytics capabilities by taking advantage of having compute in so many more environments and closer to so many more devices.

What is it about the IBM hybrid storage vision that allows for more data to reside at the edge without having to move it into a cloud, analyze it there, and move it back? How are containers enabling more data to stay local and still be part of a coordinated whole greater than the sum of the parts?

Data and analytics at the edge

Kennelly: As an industry, we swing from being centralized to decentralized — what I call a pendulum movement every few years. If you think back, we were in the mainframe era, where everything was very centralized. Then we went to distributed systems and decentralized everything.

With cloud we began to recentralize everything again. And now we are moving our clouds back out to the edge for a lot of reasons, largely because of egress and ingress challenges and to seek efficiency in moving more and more of that data. 

When I think about edge, I am not necessarily thinking about Internet of things (IoT) devices or sensors, but in a lot of cases this is about branch and remote locations. That’s where a core part of the enterprise operates, but not necessarily with an IT team there. And that part of the enterprise is generating data from what’s happening in that facility, be it a manufacturing plant, a distribution center, or many others.

As you generate that data, you also want to generate the analytics that are key to understanding how the business is reacting and responding. Do you want to move all that data to a central cloud to run analytics, and then take the result back out to that distribution center? You can do that, but it’s highly inefficient — and very costly. 

What our clients are asking for is to keep the data out at these locations and to run the analytics locally. But, of course, with all of the analytics you still want to share some of that data with a central cloud.

So, what’s really important is that you can share across this entire environment, be it from a central data center or a central cloud out to an edge location and provide what we call seamless access across this environment. 

With our technology, with things like IBM Spectrum Scale, you gain that seamless access. We abstract the data access so it looks as if you are accessing the data locally — or it could be back in the cloud. The application really doesn't care either way. That seamless access is core to what we are doing.
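
As a rough illustration of what that abstraction means for an application, the sketch below reads a file through a single global namespace; the mount point and file name are hypothetical, and the point is only that the read call looks the same wherever the data physically sits.

    # Illustrative only: with a global file system such as Spectrum Scale mounted
    # under one namespace, the application reads a path the same way whether the
    # blocks are cached locally at the edge or held back in a central cloud.
    # The mount point and file below are assumptions, not details from the interview.
    import csv

    MOUNT = "/gpfs/global/plant-7/telemetry"   # assumed mount point

    def latest_readings(path: str):
        with open(path, newline="") as f:      # same call, wherever the data lives
            return list(csv.DictReader(f))

    rows = latest_readings(f"{MOUNT}/line-3.csv")
    print(len(rows), "readings")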

Gardner: The IBM Storage portfolio is broad and venerable. It includes flash, disk, and tape, which continues to have many viable use cases. So, let’s talk about the products and how they extend the consistency and commonality that we have talked about and how that portfolio then buttresses the larger hybrid storage vision.

Storage supports all environments 

Kennelly: One of the key design points of our portfolio, particularly our flash line, is being able to run in all environments. We have one software code base across our entire portfolio. That code runs on our disk subsystems and disk controllers, but it can also run on your platform of choice. We absolutely support all platforms across the board. That's one design principle. 

Secondly, we embrace containers very heavily. And being able to run on containers and provide data services across those containers provides that seamless access that I talked about. That’s a second major design principle.

Yet as we look at our storage portfolio, we also want to make sure we optimize the storage itself and the customer's spend through tiered storage, with the ability to move data across those different tiers.

You mentioned tape storage. So, for example, at times you may want to move from fast, online, always-on, high-end storage to a lower tier of less expensive storage, maybe for data retention reasons. Or you may need an air-gap solution and want to move the data to what we call cold storage, on tape. We support that capability and we can manage your data across that environment. 

There are three core design principles to our IBM Storage portfolio. Number one is we can run seamlessly across these environments. Number two, we provide seamless access to the data across those environments. And number three, we support optimization of the storage for the use case needed, such as being able to tier the storage to your economic and workload needs.
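
To picture that third principle, here is a toy tiering policy, not IBM's implementation: it simply flags files that have not been accessed within a threshold as candidates for a colder, cheaper tier such as tape. The path and the 90-day cutoff are assumptions for illustration.

    # Toy tiering policy: files untouched for more than 90 days are marked as
    # candidates for migration to a colder (for example, tape) tier.
    # The mount point and threshold are illustrative assumptions.
    import os, time

    HOT_TIER = "/data/hot"          # hypothetical hot-tier mount
    COLD_AFTER_DAYS = 90

    def candidates_for_cold_tier(root: str, days: int):
        cutoff = time.time() - days * 86400
        for dirpath, _, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                if os.path.getatime(path) < cutoff:   # last access older than cutoff
                    yield path

    for path in candidates_for_cold_tier(HOT_TIER, COLD_AFTER_DAYS):
        print("migrate to cold tier:", path)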

Gardner: Of course, what people are also interested in these days is the FlashSystem performance. Tell us about some of the latest and greatest when it comes to FlashSystem. You have the new 5200, the high-end 9200, and those also complement some of your other products like the ESS 3200.

Flash provides best performance

Kennelly: Yes, we continue to expand the portfolio. With the FlashSystems, and some of our recent launches, some things don’t change. We’re still able to run across these different environments.

But in terms of price-performance, especially with the work we have done around our flash technology, we have optimized our storage subsystems to use standard flash technologies. In terms of price for throughput, when we look at this against our competitors, we offer twice the performance for roughly half the price. And this has been proven as we look at our competitors’ technology. That’s due to leveraging our innovations around what we call the FlashCore Module, wherein we are able to use standard flash in those disk drives and enable compression on the fly. That’s driving the roadmap in terms of throughput and performance at a very, very competitive price point.
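
Taken at face value, that claim compounds: twice the throughput at roughly half the price works out to about four times the throughput per dollar. The quick arithmetic below uses placeholder baseline numbers, not measured figures.

    # Back-of-the-envelope reading of the claim above; the competitor baseline
    # numbers are placeholders for illustration, not benchmark data.
    competitor_iops, competitor_price = 100_000, 100_000   # illustrative baseline
    ibm_iops, ibm_price = 2 * competitor_iops, 0.5 * competitor_price

    ratio = (ibm_iops / ibm_price) / (competitor_iops / competitor_price)
    print(f"relative price-performance: {ratio:.1f}x")     # -> 4.0x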

Gardner: Many of our readers and listeners, Denis, are focused on their digital business transformation. They might not be familiar with some of these underlying technological advances, particularly end-to-end Non-Volatile Memory Express (NVMe). So why are these systems doing things that just weren’t possible before?

Kennelly: A lot of it comes down to where the technology is today and the price points that we can get from flash from our vendors. And that’s why we are optimizing our flash roadmap and our flash drives within these systems. It’s really pushing the envelope in terms of performance and throughput across our flash platforms.

Gardner: The desired end-product for many organizations is better and pervasive analytics. And one of the great things about artificial intelligence (AI) and machine learning (ML) is it’s not only an output — it’s a feature of the process of enhancing storage and IT.

How are IT systems and storage using AI inside these devices and across these solutions? What is AI bringing to enable better storage performance at a lower price point?

Kennelly: We continue to optimize what we can do in our flash technology, as I said. But when you embark on an AI project, something like 70 to 80 percent of the spend is around discovery, gaining access to the data, and finding out where the data assets are. And we have capabilities like IBM Spectrum Discover that help catalog and understand where the data is and how to access that data. It’s a critical piece of our portfolio on that journey to AI.

We also have integrations with AI services like Cloudera out of the box so that we can seamlessly integrate with those platforms and help those platforms differentiate using our Spectrum Scale technology.

But in terms of AI, we have some really key enablers to help accelerate AI projects through discovery and integration with some of the big AI platforms.

Gardner: And these new storage platforms are knocking off some impressive numbers around high availability and low latency. We are also seeing a great deal of consolidation around storage arrays and managing storage as a single pool. 

On the economics of the IBM FlashSystem approach, these performance attributes are also being enhanced by reducing operational costs and moving from CapEx to OpEx purchasing.

Storage-as-a-service delivers

Kennelly: Yes, there is no question we are moving toward an OpEx model. When I talked about cloud economics and cloud-like flexibility at a technology level, that's only one side of the equation. 

On the business side, IT is demanding cloud consumption models, OpEx-type models, and pay-as-you-go. It’s not just a pure financial equation, it’s also how you consume the technology. And storage is no different. This is why we are doing a lot of innovation around storage-as-a-service. But what does that really mean? 

It means you ask for a service. “I need a certain type of storage with this type of availability, this type of performance, and this type of throughput.” Then we as a storage vendor take care of all the details behind that. We get the actual devices on the floor that meet those requirements and manage that. 

As those assets depreciate over a number of years, we replace and update those assets in a seamless manner to the client.

As the storage sits in the data center, maybe the customer says, “I want to move some of that data to a cloud instance.” We also offer a seamless capability to move the data over to the cloud and run that service on the cloud. 

We already have all the technology to do that and the platform support for all of those environments. What we are working on now is making sure we have a seamless consumption model and the business processes of delivering that storage-as-a-service, and how to replace and upgrade that storage over time — while making it all seamless to the client. 

I see storage moving quickly to this new storage consumption model, a pure OpEx model. That’s where we as an industry will go over the next few years.
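
One way to picture that consumption model is as a request object the consumer fills in and the provider fulfils behind the scenes, as Kennelly describes with "ask for a service." The field names and matching logic below are assumptions for illustration, not an IBM API.

    # Sketch of a storage-as-a-service request; the schema and fulfilment logic
    # are hypothetical, meant only to show the consumer/provider split.
    from dataclasses import dataclass

    @dataclass
    class StorageServiceRequest:
        capacity_tb: int
        availability: str       # e.g. "99.99%"
        performance_tier: str   # e.g. "premium", "balanced", "capacity"
        throughput_gbps: float
        location: str           # "on-prem" or a public-cloud region

    def fulfil(req: StorageServiceRequest) -> str:
        # The provider, not the client, decides what hardware or cloud capacity sits behind this.
        backing = "on-floor flash array" if req.location == "on-prem" else "cloud block storage"
        return f"{req.capacity_tb} TB, {req.performance_tier} tier, backed by {backing}"

    print(fulfil(StorageServiceRequest(200, "99.99%", "premium", 25.0, "on-prem")))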

Gardner: Another big element of reducing your total cost of ownership over time is in how well systems can be managed. When you have a common pool approach, a comprehensive portfolio approach, you also gain visibility, a single pane of glass when it comes to managing these systems.

Intelligent insights via storage

Kennelly: That’s an area we continue to invest in heavily. Our IBM Storage Insights platform provides tremendous insights in how the storage subsystems are running operationally. It also provides insights within the storage in terms of where you have space constraints or where you may need to expand. 

But that’s not just a manual dashboard that we present to an operator. We are also infusing AI quite heavily into that platform and using AIOps to integrate with Storage Insights to run storage operations at much lower costs and with more automation.

And we can do that in a consistent manner right across the environments, whether it’s a flash storage array, mainframe attached, or a tape device. It’s all seamless across the environment. You can see those tiers and storage as one platform and so are able to respond quickly to events and understand events as they are happening.
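
A simple example of turning that telemetry into an actionable event: estimate how many days remain before a pool fills and raise an expansion event when the estimate drops below a threshold. The samples and threshold are illustrative, not Storage Insights output.

    # Toy capacity forecast: fit a simple daily growth rate from recent samples
    # and flag the pool when it is projected to fill within 30 days.
    def days_until_full(used_tb: list, total_tb: float):
        if len(used_tb) < 2:
            return None
        daily_growth = (used_tb[-1] - used_tb[0]) / (len(used_tb) - 1)
        if daily_growth <= 0:
            return None                      # flat or shrinking, nothing to flag
        return (total_tb - used_tb[-1]) / daily_growth

    samples = [310, 312, 315, 319, 324]      # daily used TB in a hypothetical pool
    eta = days_until_full(samples, total_tb=400)
    if eta is not None and eta < 30:
        print(f"raise expansion event: pool projected full in ~{eta:.0f} days")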

Gardner: As we close out, Denis, for many organizations hybrid cloud means that they don’t always know what’s coming and lack control over predicting their IT requirements. Deciding in advance how things get deployed isn’t always an option.

How do the IBM FlashSystems, and your recent announcements in February 2021, provide a path to a crawl-walk-run adoption approach? How do people begin this journey regardless of the type of organization and the size of the organization?

Kennelly: We are introducing an update to our FlashSystem 5200 platform, which is our entry point platform. Now, that consistent software platform runs our storage software, IBM Spectrum Virtualize. It's the same software as in our high-end arrays at the very top of our pyramid of capabilities. 

As part of that announcement, we are also supporting other public cloud vendors. So you can run the software on our arrays, or you can move it out to run on a public cloud. You have tremendous flexibility and choice due to the consistent software platform.

And, as I said, it’s our entry point so the price is very, very competitive. This is a part of the market where we see tremendous growth. You can experience the best of the IBM Storage platform at a low-cost entry point, but also get the tremendous flexibility. You can scale up that environment within your data center and right out to your choice of how to use the same capabilities across the hybrid cloud.

There has been tremendous innovation by the IBM team to make sure that our software supports this myriad of platforms, but also at a price point that is the sweet spot of what customers are asking for now.

Gardner: It strikes me that we are on the vanguard of some major new advances in storage, but they are not just relegated to the largest enterprises. Even the smallest enterprises can take advantage and exploit these great technologies and storage benefits.

Kennelly: Absolutely. When we look at the storage market, the fastest growing part is at that lower price point — where it’s below $50K to $100K unit costs. That’s where we see tremendous growth in the market and we are serving it very well and very efficiently with our platforms. And, of course, as people want to scale and grow, they can do that in a consistent and predictable manner.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: IBM Storage.

How storage advances help businesses digitally transform across a hybrid cloud world

The next BriefingsDirect data strategies insights discussion explores how consistent and global storage models can best propel pervasive analytics and support digital business transformation.

Decades of disparate and uncoordinated storage solutions have hindered enterprises’ ability to gain common data services across today’s hybrid cloud, distributed data centers, and burgeoning edge landscapes.

Yet only a comprehensive data storage model that includes all platforms, data types, and deployment architectures will deliver the rapid insights that businesses need.

Stay with us to examine how IBM Storage is leveraging containers and the latest storage advances to deliver the holy grail of inclusive, comprehensive, and actionable storage.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about the future promise of the storage strategies that accelerate digital transformation, please welcome Denis Kennelly, General Manager, IBM Storage. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Clearly the world is transforming digitally. And hybrid cloud is helping in that transition. But what role specifically does storage play in allowing hybrid cloud to function in a way that bolsters and even accelerates digital transformation?

Kennelly: As you said, the world is undergoing a digital transformation, and that is accelerating in the current climate of a COVID-19 world. And, really, it comes down to having an IT infrastructure that is flexible, agile, has cloud-like attributes, is open, and delivers the economic value that we all need.

That is why we at IBM have a common hybrid cloud strategy. A hybrid cloud approach, we now know, is 2.5 times more economical than a public cloud-only strategy. And why is that? Because as customers transform — and transform their existing systems — the data and systems sit on-premises for a long time. As you move to the public cloud, you have to weigh the cost of transformation along with constraints such as data sovereignty and compliance. This is why hybrid cloud is a key enabler.

Hybrid cloud for transformation

Now, underpinning that, the core building block of the hybrid cloud platform, is containers and Kubernetes using our OpenShift technology. That's the key enabler to the hybrid cloud architecture and how we move applications and data within that environment.

As the customer starts to transform and looks at those applications and workloads as they move to this new world, being able to access the data is critical and being able to keep that access is a really important step in that journey. Integrating storage into that world of containers is therefore a key building block on which we are very focused today.

Storage is where you capture all that state, where all the data is stored. When you think about cloud, hybrid cloud, and containers — you think stateless. You think about cloud-like economics as you scale up and scale down. Our focus is bridging those two worlds and making sure that they come together seamlessly. To that end, we provide an end-to-end hybrid cloud architecture to help those customers in their transformation journeys.

Gardner: So often in this business, we’re standing on the shoulders of the giants of the past 30 years; the legacy. But sometimes legacy can lead to complexity and becomes a hindrance. What is it about the way storage has evolved up until now that people need to rethink? Why do we need something like containers, which seem like a fairly radical departure?

Kennelly: It comes back to the existing systems. You know, I think storage at the end of the day was all about the applications, the workloads that we ran. It was storage for storage’s sake. You know, we designed applications, we ran applications and servers, and we architected them in a certain fashion.

And, of course, they generated data and we wanted access to that data. That’s just how the world happened. When you get to a hybrid cloud world — I mean, we talk about cloud-like behavior, cloud-like economics — it manifests itself in the ability to respond.

If you’re in a digitally transformed business, you can respond to needs in your supply chain rapidly, maybe to a surge in demand based on certain events. Your infrastructure needs to respond to those needs versus having the maximum throughput capacity that would ever be needed. That’s the benefit cloud has brought to the industry, and why it’s so critically important.

Now, maybe traditionally storage was designed for the worst-case scenario. In this new world, we have to be able to scale up and scale down elastically, like we do with these workloads, in a cloud-like fashion. That's what has fundamentally changed and what we need to change in those legacy infrastructures. Then we can deliver more of an as-a-service, consumption-type model to meet the needs of the business.

Gardner: And on that economic front, digitally transformed organizations need data very rapidly, and in greater volumes — with that scalability to easily go up and down. How will the hybrid cloud model supported by containers provide faster data in greater volumes, and with a managed and forecastable economic burden?

Disparate data delivers insights

Kennelly: In a digitally transformed world, data is the raw material to a competitive advantage. Access to data is critical. Based on that data, we can derive insights and unique competitive advantages using artificial intelligence (AI) and other tools. But therein lies the question, right?

When we look at things like AI, a lot of our time and effort is spent on getting access to the data and being able to assemble that data and move it to where it is needed to gain those insights.

Being able to do that rapidly and at a low cost is critical to the storage world. And so that’s what we are very focused on, being able to provide those data services — to discover and access the data seamlessly. And, as required, we can then move the data very rapidly to build on those insights and deliver competitive advantage to a digitally transformed enterprise.

Gardner: Denis, in order to have comprehensive data access and rapidly deliver analytics at an affordable cost, the storage needs to run consistently across a wide variety of different environments — bare-metal, virtual machines (VMs), containers — and then to and from both public and private clouds, as well as the edge.

What is it about the way that IBM is advancing storage that affords this common view, even across that great disparity of environments?

Kennelly: That’s a key design principle for our storage platform, what we call global access or a global file system. We’re going right back to our roots of IBM Research, decades ago where we invented a lot of that technology. And that’s the core of what we’re still talking about today — to be able to have seamless access across disparate environments.

Access is one issue, right? You can get read-access to the data, but you need to do that at high performance and at scale. At the same time, we are generating data at a phenomenal rate, so you need to scale out the storage infrastructure seamlessly. That’s another critical piece of it. We do that with products or capabilities we have today in things like IBM Spectrum Scale.

But another key design principle in our storage platforms is being able to run in all of those environments — from bare-metal servers, to VMs, to containers, and right out to the edge footprints. So we are making sure our storage platform is designed and capable of supporting all of those platforms. It has to run on them as well as support the data services — the access services, the mobility services, and the like — seamlessly across those environments. That's what enables the hybrid cloud platform at the core of our transformation strategy.

Gardner: In addition to the focus on the data in production environments, we also should consider the development environment. What does your data vision include across a full life-cycle approach to data, if you will?

Be upfront with data in DevOps

Kennelly: It’s a great point because the business requirements drive the digital transformation strategy. But a lot of these efforts run into inertia when you have to change. The development processes teams within the organization have traditionally done things in a certain way. Now, all of a sudden, they’re building applications for a very different target environment — this hybrid cloud environment, from the public cloud, to the data center, and right out to the edge.

The economics we’re trying to drive require flexible platforms across the DevOpstool chain so you can innovate very quickly. That’s because digital transformation is all about how quickly you can innovate via such new services. The next question is about the data.

As you develop and build these transformed applications in a modern, DevOps cloud-like development process, you have to integrate your data assets early and make sure you know the data is available — both in that development cycle as well as when you move to production. It’s essential to use things like copy-data-management services to integrate that access into your tool chain in a seamless manner. If you build those applications and ignore the data, then it becomes a shock as you roll it into production.

This is the key issue. A lot of times we can get an application running in one scenario and it looks good, but as you start to extend those services across more environments — and haven’t thought through the data architecture — a lot of the cracks appear. A lot of the problems happen.

You have to design in the data access upfront in your development process and into your tool chains to make sure that’s part of your core development process.
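
One hedged sketch of what designing data access into the tool chain can look like: a pipeline step that requests a masked copy of a production dataset before integration tests run. The service call is stubbed and the names are hypothetical, not a specific copy-data-management product's API.

    # Illustrative CI step: provision a masked copy of production data, then run
    # integration tests against it. The functions and dataset name are placeholders.
    def request_masked_copy(dataset: str) -> str:
        # In practice this would call a copy-data-management API; stubbed here.
        copy_id = f"{dataset}-ci-snapshot"
        print(f"provisioned masked copy: {copy_id}")
        return copy_id

    def run_integration_tests(data_source: str) -> None:
        print(f"running tests against {data_source}")

    copy_ref = request_masked_copy("orders_db")
    run_integration_tests(copy_ref)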

Gardner: Denis, over the past several years we’ve learned that containers appear to be the gift that keeps on giving. One of the nice things about this storage transition, as you’ve described, is that containers were at first a facet of the development environment.

Developers leveraged containers first to solve many problems for runtimes. So it’s also important to understand the limits that containers had. Stateful and persistent storage hadn’t been part of the earlier container attributes.

How technically have we overcome some of the earlier limits of containers?

Containers create scalable benefits

Kennelly: You’re right, containers have roots in the open-source world. Developers picked up on containers to gain a layer of abstraction. In an operational context, it gives tremendous power because of that abstraction layer. You can quickly scale up and scale down pods and clusters, and you gain cloud-like behaviors very quickly. Even within IBM, we have containerized software and enabled traditional products to have cloud-like behaviors.

We were able to move to a scalable, cloud-like platform very quickly using container technology, which is a tremendous benefit as a developer. We then moved containers into operations to respond to business needs, such as when there's a spike in demand and you need to scale up the environment. Containers are amazing in how quickly and how simply that happens.

Now, with all of that power and the capability to scale up and scale down workloads, you also have a storage system sitting at the back end that has to respond accordingly. That’s because as you scale up more containers, you generate more input/output (IO) demands. How does the storage system respond?

Well, we have managed to integrate containers into the storage ecosystem. But, as an industry, we have some work to do. The integration of storage with containers is not just the simple IO channel to the storage. It also needs to be able to scale out accordingly, and to be managed. It's an area where we at IBM are focused, working closely with our friends at Red Hat, to make sure it's a very seamless integration that gives you consistent, global behavior.

Gardner: With security and cyber-attacks being so prominent in people’s minds in early 2021, what impacts do we get with a comprehensive data strategy when it comes to security? In the past, we had disparate silos of data. Sometimes, bad things could happen between the cracks.

So as we adopt containers consistently is there an overarching security benefit when it comes to having a common data strategy across all of your data and storage types?

Prevent angles of attack

Kennelly: Yes. It goes back to the hybrid cloud platform and having potentially multiple public clouds, data center workloads, edge workloads, and all of the combinations thereof. The new core is containers, but with applications running across that hybrid environment, we've expanded the attack surface beyond the data center.

By expanding the attack surface, unfortunately, we’ve created more opportunities for people to do nefarious things, such as interrupt the applications and get access to the data. But when people attack a system, the cybercriminals are really after the data. Those are the crown jewels of any organization. That’s why this is so critical.

Data protection then requires understanding when somebody is tampering with the data or gaining access to data and doing something nefarious with that data. As we look at our data protection technologies, and as we protect our backups, we can detect if something is out of the ordinary. Integrating that capability into our backups and data protection processes is critical because that’s when we see at a very granular level what’s happening with the data. We can detect if behavioral attributes have changed from incremental backups or over time.

We can also integrate that into business process because, unfortunately, we have to plan for somebody attacking us. It’s really about how quickly we can detect and respond very quickly to get the systems back online. You have to plan for the worst-case scenario.

That’s why we have such a big focus on making sure we can detect in real time when something is happening as the blocks are literally being written to the disk. We can then also unwind to when we seek a good copy. That’s a huge focus for us right now.

Gardner: When you have a comprehensive data infrastructure and can go global and access data across all of these different environments, it seems to me that you have set yourself up for a pervasive analytics capability, which is the gorilla in the room when it comes to digital business transformation. Denis, how does the IBM Storage vision help bring more pervasive and powerful analytics to better drive a digital business?

Climb the AI Ladder

Kennelly: At the end of the day, that’s what this is all about. It’s about transforming businesses, to drive analytics, and provide unique insights that help grow your business and respond to the needs of the marketplace.

It’s all about enabling top-line growth. And that’s only possible when you can have seamless access to the data very quickly to generate insights literally in real time so you can respond accordingly to your customer needs and improve customer satisfaction.

This platform is all about discovering that data to drive the analytics. We have a phrase within IBM, we call it “The AI Ladder.” The first rung on that AI ladder is about discovering and accessing the data, and then being able to generate models from those analytics that you can use to respond in your business.

We’re all in a world based on data. AI has a major role to play where we can look at business processes and understand how they are operating and then drive greater automation.That’s a huge focus for us — optimizing and automating existing business processes.

We’re all in a world based on data. And we’re using it to not only look for new business opportunities but for optimizing and automating what we already have today. AI has a major role to play where we can look at business processes and understand how they are operating and then, based on analytics and AI, drive greater automation. That’s a huge focus for us as well: Not only looking at the new business opportunities but optimizing and automating existing business processes.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: IBM Storage.

The future of work is happening now thanks to Digital Workplace Services

Businesses, schools, and governments have all had to rethink the proper balance between in-person and remote work. And because that balance is a shifting variable — and may well continue to be for years after the pandemic — it remains essential that the underlying technology be especially agile.

The next BriefingsDirect worker strategies discussion explores how a partnership behind a digital workplace services solution delivers a sliding scale for blended work scenarios. We’ll learn how Unisys, Dell, and their partners provide the time-proof means to secure applications intelligently — regardless of location.

We’ll also hear how an increasingly powerful automation capability makes the digital workplace easier to attain and support.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about the latest in cloud-delivered desktop modernization, please welcome Weston Morris, Global Strategy, Digital Workplace Services, Enterprise Services, at Unisys, and Araceli Lewis, Global Alliance Lead for Unisys at Dell Technologies. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solution.

Here are some excerpts:

Gardner: Weston, what are the trends, catalysts, and requirements transforming how desktops and apps are delivered these days?

Morris: We’ve all lived through the hype of virtual desktop infrastructure (VDI). Every year for the last eight or nine years has supposedly been the year of VDI. And this is the year it’s going to happen, right? It had been a slow burn. And VDI has certainly been an important part of the “bag of tricks” that IT brings to bear to provide workers with what they need to be productive.

COVID sends enterprises to cloud

But since the beginning of 2020, we’ve all seen — because of the COVID-19 pandemic — VDI brought to the forefront in the importance of having an alternative way of delivering a digital workplace to workers. This has been especially important in environments where enterprises had not invested in mobility, the cloud, or had not thought about making it possible for user data to reside outside of their desktop PCs.

Those enterprises had a very difficult time moving to a work-from-home (WFH) model — and they struggled with that. Their first instinct was, “Oh, I need to buy a bunch of laptops.” Well, everybody wanted laptops at the beginning of the pandemic, and secondly, they were being made in China mostly — and those factories were shut down. It was impossible to buy a laptop unless you had the foresight to do that ahead of time.

And that’s when the “aha” moment came for a lot of enterprises. They said, “Hey, cloud-based virtual desktops — that sounds like the answer, that’s the solution.” And it really is. They could set that up very quickly by spinning up essentially the digital workplace in the cloud and then having their apps and data stream down securely from the cloud to their end users anywhere. That’s been the big “aha” moment that we’ve had as we look at our customer base and enterprises across the world. We’ve done it for our own internal use.

Gardner: Araceli, it sounds like some verticals and certain organizations may have waited too long to get into the VDI mindset. But when the pandemic hit, they had to move quickly.

What is about the digital workplace services solution that you all are factoring together that makes this something that can be done quickly?

Lewis: It’s absolutely true that the pandemic elevated digital workplace technology from being a nice-to-have, or a luxury, to being an absolute must-have. We realized after the pandemic struck that public sector, education, and more parts of everyday work needed new and secure ways of working remotely. And it had to become instantaneously available for everyone.

You had every C-level executive across every industry in the United States shifting to the remote model within two weeks to 30 days, and it was also needed globally. Who better than Dell on laptops and these other endpoint devices to partner with Unisys globally to securely deliver digital workspaces to our joint customers? Unisys provided the security capabilities and wrapped those services around the delivery, whereas we at Dell have the end-user devices.

What we’ve seen is that the digitalization of it all can be done in the comfort of everyone’s home. You’re seeing them looking at x-rays, or a nurse looking into someone’s throat via telemedicine, for example. These remote users are also able to troubleshoot something that might be across the world using embedded reality, virtual reality (VR) embedded, and wearables.

We merged and blended all of those technologies into this workspaces environment with the best alliance partners to deliver what the C-level executives wanted immediately.

Gardner: The pandemic has certainly been an accelerant, but many people anticipated more virtual delivery of desktops and apps as inevitable. That’s because when you do it, you get other timely benefits, such as flexible work habits. Millennials tend to prefer location-independence, for example, and there are other benefits during corporate mergers and acquisitions and for dynamic business environments.

So, Weston, what are some of the other drivers that reward people when they make the leap to virtual delivery of apps and desktops?

Take the virtual leap, reap rewards

Morris: I’m thinking back to a conversation I had with you, Araceli, back in March. You were excited and energized around the topic of business continuity, which obviously started with the pandemic.

But, Dana, there are other forces at work that preceded the pandemic and that we know will continue after the pandemic. And mergers and acquisition are a very big one. We see a tremendous amount of activity there in the healthcare space, for example, which was affected in multiple ways by the pandemic. Pharmaceuticals and life sciences as well, there are multiple merger activities going on there.

One of the big challenges in a merger or acquisition is how to quickly get the acquired employees working as first-class citizens as quickly as possible. That’s always been difficult. You either give them two laptops, or two desktops, and say, “Here’s how you do the work in the new company, and here’s where you do the work in the old company.” Or you just pull the plug and say, “Now, you have to figure out how to do everything in a new way in web time, including human resources and all of those procedures in a new environment — and hopefully you will figure it all out.”

But with a cloud-based, virtual desktop capability — especially with cloud-bursting — you can quickly spin up as much capacity as you need and build upon the on-premises capabilities you already have, such as on Dell EMC VxRail, and then explode that into the cloud as needed using VMware Horizon to the Microsoft Azure cloud.

That’s an example of providing a virtual desktop for all of the newly acquired employees for them do their new corporate-citizen stuff while they keep their existing environment and continue to be productive by doing the job you hired them to do when you made the acquisition. That’s a very big use case that we’re going to continue to see going forward.

Gardner: Now, there were number of hurdles historically toward everyone adopting VDI. One of the major use cases was, of course, security and being able to control content by having it centrally located on your servers or on your cloud — rather than stored out on every device. Is that still a driving consideration, Weston? Are people still looking for that added level of security, or has that become passé?

Morris: Security has become even more important throughout the pandemic. In the past, to a large extent, the corporate firewall-as-secure-the-perimeter model has worked fairly well. And we’ve been punching holes in the firewall for several years now.

But with the pandemic — with almost everyone working from home — your office network just exploded. It now extends everywhere. Now you have to worry about how well secured any one person's home network is. Have they changed the default password on their home router? Have they updated the firmware on it? A lot of these things are beyond what the average worker thinks about.

But if we separate out the workload and put it into the cloud — so that you have the digital workplace sitting in the cloud — that is much more secure than a device sitting on somebody’s desk connected to a very questionable home network environment.

Gardner: Another challenge in working toward more modern desktop delivery has been cost, because it’s usually been capital-intensive and required upfront investment. But when you modernize via the cloud that can shift.

Araceli, what are some of the challenges that we’re now able to overcome when it comes to the economics of virtual desktop delivery?

Cost benefits of partnering

Lewis: The beautiful thing here is that in our partnership with Unisys and Dell Financial Services (DFS), we’re able to utilize different utility models when it comes to how we consume the technology.

We don’t have to have upfront capital expenditures. We basically look at different ways that we can do server and platform infrastructure. Then we can consume the technology in the most efficient manner, and that works with the books and how we’re going to depreciate. So, that’s extremely flexible.

You don’t have to have upfront capital expenditures. We basically look at different ways that we can do server and platform infrastructure. Then we can consume the technology in the most efficient manner, and that works with the books and how we’re going to depreciate. It’s extremely flexible.

And by partnering with Unisys, they secure those VDI solutions across all of the three core components: The VDI portion within the data center, the endpoint devices, and of course, the software. By partnering with Unisys in our alliance ecosystem, we get the best of DFS, Dell Technology, VMware software, and Unisys security capabilities.

Gardner: Weston, another issue that’s dogged VDI adoption is complexity for the IT department. When we think about VDI, we can’t only think about end users. What has changed for how the IT department deploys infrastructure, especially for a hybrid approach where VDI is delivered both from on-premises data centers as well as the cloud?

Intelligent virtual agents assist IT

Morris: Araceli and I have had several conversations about this. It’s an interesting topic. There has always been a lot of work to stand up VDI. If you’re starting from scratch, you’re thinking about storage, IOPS, and network capacity. Where are my apps? What’s the connectivity? How are we going to run it at optimal performance? After all, are the end users happy with the experience they’re getting? And how can I even know that what their experience is?

And now, all that’s changed thanks to the evolving technology. One is the advent of artificial intelligence (AI) and the use of personal intelligent virtual assistance. At home, we’re used to that, right? We ask AlexaSiri, or Cortana what’s going on with the weather? What’s happening in the news? We ask our virtual assistants all of these things and we expect to be able to get instant answers and help. Why is that not available in the enterprise for IT? Well, the answer is it is now available.

As you can imagine on the provisioning side, wouldn’t it be great if you were able to talk to a virtual assistant that understood the provisioning process? You simply answer questions posed by the assistant. What is it you need to provision? What is your load that you’re looking at? Do you have engineers that need to access virtual desktops? What types of apps might they need? What is the type of security?

Then the virtual assistant understands the business and IT processes to provision the infrastructure needed virtually in the cloud to make that all happen or to cloud-burst from your on-premises Dell VxRail into the cloud.

That is a very important game changer. The other aspect of the intelligent virtual agent is it now resides on the virtual desktop as well. I, as an at-home worker, may have never seen a virtual desktop before. And now, the virtual assistant pops up and guides the home worker through the process of connecting, explaining how their apps work, and saying, “I’m always here. I’m ready to give you help whenever possible.” But I think I’ll defer to the expert here.

Araceli, do you want to talk about the power of the hybrid environment and how that simplifies the infrastructure?

Multiple workloads managed

Lewis: Sure, absolutely. At Dell EMC, we are proud of the fact that Gartner rates us number one, as a leader in the category for pretty much all of the products that we’ve included in this VDI solution. When Unisys and my alliances team get the technology, it’s already been tested from a hyper-converged infrastructure (HCI) perspective. VxRail has been tested, tried-and-true as an automated system in which we combine servers, storage, network, and the software.

That way, Weston and I don't have to worry about what size we are going to use. We actually have T-shirt sizes already worked out for the number of VDI users that are needed. We have the graphics-intensive portion of it thought out. And we can basically deploy quickly and then put the workloads on them as we need to spin them up or spin them down or to add more.

We can adjust on the fly. That's a true testament to our HCI being the backbone of the solution. And we don't have to get into all of the testing, regression testing, and the automation and self-healing of it, because a lot of that management would otherwise have had to be done by enterprise IT or by a managed services provider; instead it's handled via the lifecycle management of the Dell EMC VxRail HCI solution.

That is a huge benefit, the fact that we deliver a solution from the value line and the hypervisor on up. We can then focus on the end-user services, and we don't have to be swapping out components or troubleshooting, because of all the refinement that Dell has done in that technology today.

Morris: Araceli, the first time you and your team showed me the cloud-bursting capability, it just blew me away. I know in the past how hard it was to expand any infrastructure. You showed me where, you know, every industry and every enterprise are going to have a core base of assumptions. So, why not put that under Dell VxRail?

Then, as you need to expand, cloud-burst into, in this case, Horizon running on Azure. And that can all be done now through a single dashboard. I don't have to be thinking, "Okay, now I have to have the separate workload, it's in the cloud, this other workload that's on my on-premises cloud with VxRail." It's all done through one, single dashboard that can be automated on the back end through a virtual agent, which is pretty cool.

Gardner: It sure seems in hindsight that the timing here was auspicious. Just as the virus was forcing people to rapidly find a virtual desktop solution, you had put together the intelligence and automation along with software-defined infrastructure like HCI. And then you also gained the ease in hybrid by bursting to the cloud.

And so, it seems that the way that you get to a solution like this has never been easier, just when it was needed to be easy for organizations such as small- to medium-sized businesses (SMBs) and verticals like public sector and education. So, was the alliance and partnering, in fact, a positive confluence of timing?

Greater than sum of parts

Morris: Yes. The perfect storm analogy certainly applies. It was great when I got the phone call from Araceli, saying, “Hey, we have this business continuity capability.” We at Unisys had been thinking about business continuity as well.

We looked at the different components that we each brought: Unisys with its security around Stealth, plus the capability to proactively monitor infrastructure and desktops, see what's going on, and automatically fix issues via the intelligent virtual agent and automation. We realized that the combination was a much better solution than the individual parts.

We could not make this happen without all of the cool stuff that Dell brings in terms of the HCI, the clients, and, of course, the very powerful VMware-based virtual desktops. And we added to that some things that we have become very good at in our digital workplace transformation. The result is something that can make a real difference for enterprises. You mentioned the public sector and education. Those are great examples of industries that really can benefit from this.

Gardner: Araceli, anything more to offer on how your solution came together, the partners and the constituent parts?

Lewis: Consistent infrastructure and operations, together with the help of our partner Unisys globally, deliver the services to end users. This was just a partnership that had to come together.

We at Dell couldn’t do it alone. We needed those data center spaces. We needed the capabilities of their architects and teams to deliver for us. We were getting so many requests early during the pandemic, an overwhelming amount of demand from every C-level suite across the country, and from every vertical and industry. We had to rely on Unisys as our trusted partner not only in the public sector but in healthcare and banking. But we knew if we partnered with them, we could give our community what they needed to get through the pandemic.

Gardner: And among those constituent parts, how important a part is Horizon? Why is it so important?

Lewis: VMware Horizon is the glue. It streamlines desktop and app delivery in various ways. The first would be by cloud-bursting. It actually gives us the capability to do that in a very simple fashion.

Secondly, it’s a single pane of glass. It delivers all of the business-critical apps to any device, anywhere on a single screen. So that makes it simple and comprehensive for the IT staff.

We can also deliver non-persistent virtual desktops. The advantage here is that it makes software patching and distribution a whole lot easier. We don't have all the complexity. If there were ever a security concern or issue, we simply blow away that non-persistent virtual desktop and start over, back at square one, whereas we would otherwise have to spend countless hours on backups and restores to get to where we are safe again. So, it pulls everything together: the end user gets a seamless interface, the IT staff don't have the complexity, and it gives us the best of both worlds while we get out to the cloud.

Gardner: Weston, on the intelligent agents and bots, do you have an example of how it works in practice? It’s really fascinating to me that you’re using AI-enabled robotic process automation (RPA) tools to help the IT department set this up. And you’re also using it to help the end-user learn how to onboard themselves, get going, and then get ongoing support.

Amelia AI ascertains answers

Morris: It’s an investment we began almost 24 months ago, branded as the Unisys InteliServe platform, which initially was intended to bring AI, automation, and analytics to the service desk. It was designed to improve the service desk experience and make it easier to use, make it scalable, and to learn over time what kinds of problems people needed help solving.

But we realized once we had it in place, “Wow, this intelligent virtual agent can almost be an enterprise personal assistant where it can be trained on anything, on any business process.” So, we’ve been training it on fixing common IT problems … password resets, can’t log in, can’t get to the virtual private network (VPN), Outlook crashes, those types of things. And it does very well at those sorts of activities.

But the core technology is also perfectly suited to be trained for IT processes as well as business processes inside of the enterprise. Take, for example, this particular scenario of supporting virtual desktops. If a customer has a specific process for provisioning virtual desktops, they may have specific pools of virtual desktop types, with certain capacities, and those can be created ahead of time, ready to go.

Then it’s just a matter of communicating with the intelligent virtual assistant to say, “I need to add more users to this pool,” or, “We need to remove users,” or, “We need to add a whole new pool.” The agent is branded as Amelia. It has a female voice, through it doesn’t have to be, but in most cases, it is.

When we speak with Amelia, she’s able to ask questions that guide the user through the process. They don’t have to know what the process is. They don’t do this very often, right? But she can be trained to be an expert on it.

Amelia collects the information needed, submits it to the RPA that communicates with Horizon, Azure, and the VxRail platforms to provision the virtual desktops as needed. And this can happen very quickly. Whereas in the past, it may have taken days or weeks to spin up a new environment for a new project, or for a merger and acquisition, or in this case, reacting to the pandemic, and getting people able to work from home.

By the same token, when the end users open up their virtual desktops, they connect to the Horizon workspace, and there is Amelia. She’s there ready to respond to totally different types of questions: “How do I use this?” “Where’s my apps?” “This is new to me, what do I do? How do I connect?” “What about working from home?” “What’s my VPN connection working like, and how do I get that connected properly?” “What about security issues?” There, she’s now able to help with the standard end-user-type issues as well.
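
To make that flow concrete, here is a minimal sketch, in Python, of how a request gathered by a virtual agent might be handed to an automation layer that provisions desktop pools. The function, endpoint, and field names are assumptions for illustration; they are not actual Unisys InteliServe, VMware Horizon, or Dell APIs.

import json
import urllib.request

def provision_desktop_pool(pool_name: str, user_count: int, gpu: bool = False) -> dict:
    """Submit a provisioning request assembled by the virtual agent."""
    payload = {
        "pool": pool_name,
        "users": user_count,
        "gpuEnabled": gpu,              # e.g., for CAD or graphics-heavy personas
        "target": "horizon-on-vxrail",  # assumed deployment target
    }
    req = urllib.request.Request(
        "https://automation.example.com/api/v1/desktop-pools",  # placeholder endpoint
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# The agent would call this once the conversation has yielded a pool name and
# head count, then report the outcome back to the requester.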

Gardner: Araceli, any examples of where this intelligent process automation has played out in the workplace? Do we have some ways of measuring the impact?

Simplify, then measure the impact

Lewis: We do. It’s given us, in certain use cases, the predictability and the benefit of a pay-as-you-grow linear scale, rather than the pay-by-the-seat type of solution. In the past, if we had a state or a government agency where they need, for example, 10,000 seats, we would measure them by the seat. If there’s a situation like a pandemic, or any other type of environment where we have to adjust quickly, how could we deliver 10,000 instances in the past?

Now, using Dell EMC ready-architectures with the technologies we’ve discussed — and with Unisys’ capabilities — we can provide such a rapid and large deployment in a pay-as-you-grow linear scale. We can predict what the pricing is going to be as they need to use it for these public sector agencies and financial firms. In the past, there was a lot of capital expenditures (CapEx). There was a lot of process, a lot of change, and there were just too many unknowns.

These modern platforms have simplified the management of the backends of the software and the delivery of it to create a true platform that we can quantify and measure — not only just financially, but from a time-to-delivery perspective as well.

Morris: I have an example of a particular customer where they had a manual process for onboarding. Such onboarding includes multiple steps, one of which is, “Give me my digital workplace.”

But there are other things, too. The training around gaining access to email, for example. That was taking almost 40 hours. Can you imagine a person starting their job, and 40 hours later they finally get the stuff they need to be productive? That’s a lot of downtime.

After using our automation, that transition was down to a little over eight hours. What that means is a person starts filling out their paperwork with HR on day one, gets oriented, and then the next day they have everything they need to be productive. What a big difference. And in the offboarding — it’s even more interesting. What happens when a person leaves the company? Maybe under unfavorable circumstances, we might say.

In the past, the manual processes for this customer took almost 24 hours before everything was turned off. What does that mean? That means that an unhappy, disgruntled employee has 24 hours. They can come in, download content, get access to materials or perhaps be disruptive, or even destructive, with the corporate intellectual property, which is very bad.

Through automation, this offboarding process is now down to six minutes. I mean, that person hasn’t even walked out of the room and they’ve been locked out completely from that IT environment. And that can be done even more quickly if we’re talking about a virtual desktop environment, in which the switch can be thrown immediately and completely. Access is completely and instantly removed from the virtual environment.
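
Reduced to code, the offboarding Morris describes amounts to a single automated routine along these lines. This is only a hedged sketch: the directory and VDI client objects stand in for whatever identity and virtual desktop tooling a customer actually uses.

from datetime import datetime, timezone

def offboard_user(username: str, directory, vdi) -> dict:
    """Revoke all access for a departing employee in one automated pass."""
    directory.disable_account(username)   # block any new logins immediately
    directory.revoke_sessions(username)   # invalidate existing tokens and sessions
    vdi.remove_entitlement(username)      # detach the user from the virtual desktop pool
    vdi.power_off_sessions(username)      # end any desktop session still running
    return {"user": username, "completed_at": datetime.now(timezone.utc).isoformat()}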

Gardner: Araceli, is there a best-of-breed, thin-client hardware approach that you’re using? What about use cases such as graphics-intense or computer-aided design (CAD) applications? What’s the end-point approach for some of these more intense applications?

Viable, virtual, and versatile solutions

Lewis: Being Dell Technologies, that was a perfect question for us, Dana. We understand the persona of the end users. As we roll out this technology, let’s say it’s for an engineering team where they do CAD drawings as an engineering group. If you look at the persona, and we partner with Unisys and look at what each end-user’s needs are, you can determine if they need more memory, more processing power, and if they need a more graphics-intensive device. We can do that. Our Wyse thin clients can do that, the Wyse 3000s and the 5000s.

But I don’t want to pinpoint one specific type of device per user because we could be talking about a doctor, or we could be talking about a nurse in an intensive care unit. She is going to need something more mobile. We can also provide end-user devices that are ruggedized, maybe for an oil field or a construction site. So, from an engineering perspective, we can adapt the end-user device to their persona and their needs, and we can meet all of those requirements. It’s not a problem.

Gardner: Weston, anything from your vantage point on the diversity and agility of those endpoint devices and why this solution is so versatile?

Morris: There is diversity at both ends. Araceli, you talked about being able, on the backend, to provision and scale up and down the capacity and capability of a virtual desktop to meet the personas’ needs.

And then on the end-user side, and you mentioned, Dana, Millennials. They may want choice of how they connect. Am I connecting in through my own personal laptop at home? Do I want to have access to a thin client when I want to go back to work? Do I want to come in through a mobile? And maybe I want to do all three in the same day? And they don’t want to lose work in between. That is all entirely possible with this infrastructure.

Gardner: Let’s look to the future. We’ve been talking about what’s possible now. But it seems to me that we’ve focused on the very definition of agility: It scales, it’s fast, and it’s automated. It’s applicable across the globe.

What comes next? What can you do with this technology now that you have it in place? It seems to me that we have an opportunity to do even more.

Morris: We’re not backing down from AI and automation. That is here to stay, and it’s going to continue to expand. People have finally realized the power of cloud-based VDI. That is now a very important tool for IT to have in their bag of tricks. They can respond to very specific use cases in a very fast, scalable, and effective way.

In the future we will see that AI continues to provide guidance, not only in the provisioning that we’ve talked about, not only in startup and use on the end-user side — but in providing analytics as to how the entire ecosystem is working. That’s not just the virtual desktops, but the apps that are in the cloud as well and the identity protection. There’s a whole security component that AI has to play a role in. It almost sounds like a pipe dream, but it’s just going to make life better. AI absolutely will do that when it’s used appropriately.

Lewis: I’m looking to the future on how we’re going to live and work in the next five to 10 years. It’s going to be tough to go back to what we were used to. And I’m thinking forward to the Internet of Things (IoT). There’s going to be an explosion of edge devices, of wearables, and how we incorporate all of those technologies will be a part of a persona.

Typically, we’re going to be carrying our work everywhere we go. So, how are we going to integrate all of the wearables? How are we going to make voice recognition more adaptable? VR, AI, robotics, drones — how are we going to tie all of that together?

Nowadays, we tie our home systems and our cooling and heating to all of the things around us to interoperate. I think that’s going to go ahead and continue to grow exponentially. I’m really excited that we’ve partnered with Unisys because we wouldn’t want to do something like this without a partner who is just so deeply entrenched in the solutions. I’m looking forward to that.

Gardner: What advice would you give to an organization that hasn’t bitten off the virtual desktop from the cloud and hybrid environment yet? What’s the best way to get started?

Morris: It’s really important to understand your users, your personas. What are they consuming? How do they want to consume it? What is their connectivity like? You need to understand that, if you’re going to make sure that you can deliver the right digital workplace to them and give them an experience that matters.

Lewis: At Dell Technologies, we know how important it is to retain our top and best talent. And because we’ve been one of the top places to work for the past few years, it’s extremely important to make sure that technology and access to technology help to enable our workforce.

I truly feel that any of our customers or end users who haven’t looked at VDI haven’t yet realized the benefits in savings and in keeping a competitive advantage in this fast-paced world, and they need to retain their talent, too. To do that they need to give their employees the best tools and the best capabilities to be the very best. They have to look at VDI in some way, shape, or form. As soon as we bring it to them, whether technically, financially, or for competitive factors, it really makes sense. It’s not a tough sell at all, Dana.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Unisys and Dell Technologies.


Customer experience management has never been more important or impactful

The next BriefingsDirect digital business innovation discussion explores how companies need to better understand and respond to their markets one subscriber at a time. By better listening inside of their products, businesses can remove the daylight between their digital deliverables and their customers’ impressions.

Stay with us now as we hear from a customer experience (CX) management expert at SAP on the latest ways that discerning customers’ preferences informs digital business imperatives.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about the business of best fulfilling customer wants and needs, please welcome Lisa Bianco, Global Vice President, Experience Management and Advocacy at SAP Procurement Solutions. The interview is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What was the catalyst about five years ago that led you there at SAP Procurement to invest in a team devoted specifically to CX innovation?

Bianco: As a business-to-business (B2B) organization, we recognized that B2B was changing and starting to look and feel more like business-to-consumer (B2C). The days of leaders dictating the solutions and products that their end users would be leveraging for day-to-day business tasks, like procurement or finance, were ending. We found we were competing with the experience an end user would have with the products or applications they use in their personal life.


We all know this; we’ve all been there. We would go to work to use the tools, and there used to be those times we would use the printer for our kids’ flyers for their birthday because it was a much better tool than what we had at home. And that had shifted.

But then business leaders were competing with rogue employees using tools like Amazon.com versus SAP Ariba’s solution for procurement to buy things for their businesses. And so with that maverick spend, companies weren’t having the same insights that they needed to make decisions. So, we knew that we had to ensure that that end-user experience at work replicated what they might feel at home. It reflected that shift in persona from a decision-maker to that of a user.

Gardner: Whether it’s B2B or B2C, there tends to be a group of people out there who are really good at productivity and will find ways to improve things if you only take the chance to listen and follow their lead, right?

Bianco: That’s exactly right.

Gardner: And what was it about B2B in the business environment that was plowing new ground when it came to listening rather than just coming up with a list of requirements, baking it into the software, and throwing it over the wall?

Leaders listen to customer experience

Bianco: The truth is, better listening in B2B resulted in a fundamental shift for leaders. All of a sudden, a chief procurement officer (CPO) who made a decision on a procurement solution, or a chief information officer (CIO) who made a decision on an enterprise resource planning (ERP) solution, was beginning to get flak from cross-functional leaders who were end users and couldn’t actually do their functions.

In B2B we found that we had to start understanding the feelings of employees and the feelings of our customers. And that’s not really what you do in B2B, right? Marketing and branding at SAP now said that the future of business has feelings. And that’s a shock. I can’t tell you how many times I have talked to leaders who say, “I want to switch the word empathy in our mission statement because that’s not strong leadership in B2B.”

But the truth is we had to shift. Society was shifting to that place and understanding that feelings allow us to understand the experiences because the experiences were that of people. We can only make so many decisions based on our operational data, right? You really have to understand the why.

We did have to carve out a new path, and it’s something we still do to this day. Many B2B companies haven’t evolved to an experience management program, because it’s tough. It’s really hard.

Gardner: If we can’t just follow the clicks, and we can’t discern feelings from the raw data, we need to do something more. What do we do? How do we understand why people feel good or bad about what they are doing?

Bianco: We get over that hurdle by having a corporate strategy that puts the customer at the center of all we do. I like to think of it as having a customer-centric decision-making platform. That’s not to say it’s a product. It’s really a shift in mindset that says, “We believe we will be a successful company if our customers’ feelings are positive, if their experiences are great.”

If you look at the disruptors such as Airbnb or Amazon, they prioritize CX over their own objectives as a business and their own business success, things like net-new software sales or renewal targets. They focus on the experiences that their customers have throughout their lifecycle.

That’s a big shift for corporate America because we are so ingrained in producing for the board and we are so ingrained in producing for the investors that oftentimes putting that customer first is secondary. It’s a systemic shift in culture and thinking that tends to be what we see in the emerging companies today as they grab such huge market share. It’s because they shifted that thinking.

Gardner: Right. And when you shift the thinking in the age of social media — and people can share what their impressions are — that becomes a channel and a marketing opportunity in itself. People aren’t in a bubble. They are able to say and even demonstrate in real time what their likes are, what their dislikes are, and that’s obvious to many other people around them.

Customer feedback ecosystem

Bianco: Dana, you are pointing out risk. And it’s so true. And this year, the disrupter that COVID-19 has created is a tectonic shift in our digitalization of customer feedback. And now, via social media and Twitter, if you are not at the forefront of understanding what your customers’ feelings are — and what they may or may not say — and you are not doing that in a proactive way, you run the risk of it playing out socially in a public forum. And the longer that goes unattended to, you start to lose trust.

When you start to lose trust, it is so much harder to fix than understanding in the lifecycle of a customer the problems that they face, fixing those and making that a priority.

Gardner: Why is this specifically important in procurement? Is there something about procurement, supply chain, and buying that this experience focus is important? Or does it cut across all functions in business?

Bianco: It’s across all functions in business. However, if you look at procurement in the world today, it incorporates a vast ecosystem. It’s one of those functions in business that includes buyers and suppliers. It includes logistics, and it’s complex. It is one of the core areas of a business. When that is disrupted it can have drastic effects on your business.

We saw that in spades this year. It affects your supply chain, where you can have alternative opportunities to regain your momentum after a disruption. It affects your workforce and all of the tools and materials necessary for your company to function when it shifts and moves home. And so with that, we look from SAP’s perspective at these personas that navigate through a multitude of products in your organization. And in procurement, because that ecosystem is there for our customers, understanding the experience of all of those parties allows for customers to make better decisions.

A really good example is one of the world’s largest consulting firms. They took 500,000 employees in offices around the world and found that they had to immediately put them in their homes. They had to make sure they had the products they needed, like computers, green screens, or leisure wear.

They learned what looks good enough on a virtual Zoom meeting. Procurement had to understand what their employees needed within a week’s time so that they didn’t lose revenue deploying the services that their customers had purchased and relied on them for.

Understanding that lifecycle really helps companies, especially now. Seeing the recent disruption made them able to understand exactly what they need to do and quickly make decisions to make experiences better to get their business back on track.

Gardner: Well, this is also the year or era of moving toward automation and using data and analytics more, even employing bots and robotic process automation (RPA). Is there something about that tack in our industry now that can be brought to CX management? Is there a synergy between not just doing this manually, but looking to automation and finding new insights using new tools?

Automate customer journeys

Bianco: It’s a really great insight into the future of understanding the experiences of a customer. A couple of things come to mind. We have all recognized the importance of having operational data: usage data, seeing where the clicks are throughout your product, and really documenting customer journey maps.

But if you automate the way you get feedback you don’t just have operational data; you need to get the feelings to come through with experience data. And that experience data can help drive where automation needs to happen. You can then embed that kind of feedback-loop-process in typical survey-type tools or embed them right into your systems.

And so that helps you understand areas where we can remove steps from the process, especially as many companies look to procurement to create automation. And so the more we can understand where we have those repetitive flows that we can automate, the better.

Gardner: Is that what you mean by listening inside of the product or does that include other things, too?

Bianco: It includes other things. As you may know, SAP purchased a company called Qualtrics. They are experts in experience management, and we have been able to move from and evolve from traditional net promoter score (NPS) surveys into looking at micro moments to get customer feedback as they are doing a function. We have embedded certain moments inside of our product that allow us to capture feedback in real time.
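
As a hedged illustration of what such an embedded micro-moment might look like from the product side, here is a minimal Python sketch. The endpoint and field names are assumptions for the sake of the example, not the actual Qualtrics or SAP Ariba interfaces.

import json
import urllib.request

def capture_micro_moment(user_id: str, screen: str, rating: int, comment: str = "") -> None:
    """Send a small, in-context feedback event as the user completes a task."""
    event = {
        "userId": user_id,
        "screen": screen,        # e.g., "purchase-requisition-submit"
        "rating": rating,        # 1-5 captured from an in-product prompt
        "comment": comment,
        "channel": "embedded",   # distinguishes this from survey-based NPS feedback
    }
    req = urllib.request.Request(
        "https://feedback.example.com/api/events",  # placeholder collector endpoint
        data=json.dumps(event).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)

# Aggregated across thousands of such events, this experience data can be set
# against operational data to show where action is most needed.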

Gardner: Lisa, a little earlier you alluded that there are elements of what happens in the B2C world as individual consumers and what we can then learn and take into the B2B world. Is there anything top of mind for you that you have experienced as a consumer that you said, “Aha, I want to be able to do that or bring that type of experience and insight to my B2B world?”

Customer service is king in B2B

Bianco: Yes, you know what happened to me just this week as a matter of fact? There is a show on TV right now about chess. With all of us being at home, many of us are consuming copious amounts of content. And I went and ordered a chess set, it came, it was beautiful, it was from Wayfair, and one of the pieces was broken.

I snapped a little picture of the piece that had broken and they had an amazing app that allowed me to say, “Look, I don’t need you to replace the whole thing, it’s just this one little piece, and if you can just send me that, that would be great.”

And they are like, “You know what? Don’t worry about sending it back. We are just going to send you a whole new set.” It was like a $100 set. So I now have two sets because they were gracious enough to see that I didn’t have a great experience. They didn’t want me to deal with sending it back. They immediately sent me the product that I wanted.

I am, like, where is that in B2B? Where is that in the complex area of procurement that I find myself? How can we get that same experience for our customers when something goes wrong?

When I began this program, we would try to figure out what is that chess set. Other organizations use garlic knots, like at pizza restaurants. While you and your kids wait 25 minutes for the pizza to be made, a lot of pizza shops offer garlic knots to make you happy so the wait doesn’t seem so long. What is that equivalent for B2B?

It’s hard. What we learned early on, and I am so grateful for, is that in B2B many end users and customers know how difficult it is to make some of their experiences better, because it’s complex. They have a lot of empathy for companies trying to go down such a path, in this case, for procurement.

But with that, what their garlic knot is, what their free product or chess set is, is when we tell them that their voice matters. It’s when we receive their feedback, understand their experience against our operational data, and let them know that we have the resources and budget to take action on their feedback and to make it better.

Either we show them that we have made it better or we tell them, “We hear what you are saying, but that doesn’t fit into our future.” You have to be able to have that complete feedback loop, otherwise you alienate your customer. They don’t want to feel like you are asking for their feedback but not doing anything with it.

And so that’s one of the most important things we learned here. That’s the thing that I witnessed from a B2C perspective and tried to replicate in B2B.

Gardner: Lisa, I’m sensing that there is an opportunity for the CX management function to become very important for overall digital business transformation. The way that Wayfair was able to help you with the chess set required integration, cooperation, and coordination between what were probably previously siloed parts of their organization.

That means the helpdesk, the ordering and delivering, exception management capabilities, and getting sign-off on doing this sort of thing. It had to mean breaking down those silos in process, data, and function. And that integration is often part of an all-important digital transformation journey.

So are you finding that people like yourself, who are spearheading the experience management for your customers, are in a catbird seat of identifying where silos, breakdowns, and gaps exist in the B2B supplier organizations?

Feedback fuels cross-training

Bianco: Absolutely. Here is what I have learned: I am going to focus on cloud, especially in companies that are either cloud companies or had been an on-premises company and are migrating to being a cloud company. SAP Ariba did this over the last 20 years. It has migrated from on-premises to cloud, so we have a great DNA understanding of that. SAP is out doing the same thing; many companies are.

And what’s important to realize, at least from my perspective — it was an “Aha” moment — is that there is a tendency in the B2C world for leadership to say, “Look, I am looking at all this data and feedback around customers. Can’t we just go fix this particular customer issue, and they are going to be happy?”

What we found in the B2B data was that most of the issues our customers were facing were systemic. It was broad strokes of consistent feedback about something that wasn’t working. We had to recognize that these systemic issues needed to be solved by a cross-functional group of people.

That’s really hard because so many folks have their own budgets, and they lead only a particular function. To think about how they might fix something more broadly took our organization quite a bit of time to wrap our heads around. Because now you need a center of excellence, a governance model that says that CX is at the forefront, and that you are going to have accountability in the business to act on that feedback and those actions. And you are going to compose a cross-functional, multilevel team to get it done.

It was funny early on, when we received feedback that customer support was a problem. Support was the problem. The support function was awful. I remember the head of support was like, “Oh, my gosh. I am going to get fired. I just hate my job. I don’t know what to do.”

When you look at the root cause you find that quality is a root-cause issue, but quality wasn’t just in one or another product — it was across many products. That broader quality issue led to how we enabled our support teams to understand how to better support those products. That quality issue also impacted how we went to market and how we showed the features and functions of the product.

We developed a team called the Top X Organization that aggregated cross-functional folks, held them accountable to a standard of a better outcome experience for our customers, and then led a program to hit certain milestones to transform that experience. But all that is a heavy lift for many companies.

Gardner: That’s fascinating. So, your CX advocates — by having that cross-functional perspective by nature — became advocates for better processes and higher quality at the organization level. They are not just advocating for the customer; they are actually advocating for the betterment of the business. Are you finding that and where do you find the people that can best do that?

Responsibility of active listening

Bianco: It’s not an easy task; those who do it well are few and far between. Again, it takes a corporate strategy. Dana, when you asked me the question earlier on, “What was the catalyst that brought you here?” I oftentimes chuckle. There isn’t a leader on the planet who isn’t going to have someone come to them, like I did at the time, and say, “Hey, I think we should listen to our customers.” Who wouldn’t want to do that? Everyone wants to do that. It sounds like a really good idea.

But, Dana, it’s about active listening. If you watch movies, there is often a scene where there is a husband and wife getting therapy. And the therapist says, “Hey, did you hear what she said?” or, “Did you hear what he said?” And the therapist has them repeat it back. Your marriage or a struggle you have with relationships is never going to get better just by going and sitting on the couch and talking to the therapist. It requires each of you to decide internally that you want this to be better, and that you are going to make the changes necessary to move that relationship forward.

It’s not dissimilar to the desire to have a CX organization, right? Everyone thinks it’s a great idea to show in their org chart that they have a leader of CX. But the truth is you have to really understand the responsibility of listening. And that responsibility sometimes devolves into just taking a survey. I’m all for sending a survey out to our customers, let’s do it. But that is the smallest part of a CX organization.

It’s really wrapped up in what the corporate strategy is going to be: A customer-centric, decision-making model. If we do that, are we prepared to have a governance structure that says we are going to fund and resource making experiences better? Are we going to acknowledge the feedback and act on it and make that a priority in business or not?

Oftentimes leaders get caught up in, “I just want to show I have a CX team and I am going to run a survey.” But they don’t realize the responsibility that gives them when now they have on paper all the things that they know they have an opportunity to make better for their customers.

Gardner: You have now had five years to make these changes. In theory this sounds very advantageous on a lot of levels and solves some larger strategic problems that you would have a hard time addressing otherwise.

So where’s the proof? Do you have qualitative, quantitative indicators? Maybe it’s one of those things that’s really hard to prove. But how do you rate customer advocacy and CX role? What does it get you when you do it well?

Feelings matter at all levels

Bianco: Really good point. We just came off of our five-year anniversary this week. We just had an NPS survey and we got some amazing trends. Over those five years we have seen improvement, with an even greater improvement in the last 18 months — an 11-point increase in our customer feedback. And that not only translates into the survey, as I mentioned, but it also translates with influencers and analysts.

Gartner has noted the increase in our ability to address CX issues and make them better. We can see that in terms of the 11-point increase. We can see that in terms of our reputation within our analyst community.

And we also see it in the data. Customers are saying, “Look, you are much more responsive to me.” We see a 35-percent decrease in customers complaining in their open text fields about support. We see customers mentioning less the challenges they have seen in the area of integration, which is so incredibly important.

And we also hear less from our own SAP leaders who felt like NPS just exposed the fact that they might not be doing their job well, which was initially the experience we got from leaders who were like, “Oh my gosh. I don’t want you to talk about anything that makes it look like I am not doing my job.” We created a culture where we have been more open to feedback. We now relish in that insight, versus feeling defensive.

And that’s a culture shift that took us five years to get to. Now you have leaders chomping at the bit to get those insights, get that data, and make the changes because we have proof. And that proof did start with an organizational change right in the beginning. It started with new leadership in certain areas like support. Those things translated into the success we have today. But now we have to evolve beyond that. What’s the next step for us?

Gardner: Before we talk about your next steps, for those organizations that are intrigued by this — that want to be more customer-centric and to understand why it’s important — what lessons have you learned? What advice do you have for organizations that are maybe just beginning on the CX path?

Bianco: How long is this show?

Gardner: Ten more minutes, tops.

Bianco: Just kidding. I mean gosh, I have learned a lot. If I look back — and I know some of my colleagues at IBM had a similar experience — the feedback is this. We started by deploying NPS. We just went out there and said we are going to do these NPS surveys and that’s going to shake the business into understanding how our customers are feeling.

We grew to understand that our customers came to SAP because of our products. And so I think I might have spent more time listening inside of the products. What does that mean? It certainly means embedding micro-moments of aggregated feedback in the product to help us — and our developers — understand what needs to be done. But that needs to be done in a very strategic way.

It’s also about making sure that any time anyone in the company wants to listen to customers, you ensure that you have the budget and the resources necessary to make that change — because otherwise you will alienate your customers.

Another area is you have to have executive leadership. It has to be at the root of your corporate objectives. Anything less than that and you will struggle. It doesn’t mean you won’t have some success, but when you are looking at the root of making experience better, it’s about action. That action needs to be taken by the folks responsible for your products or services. Those folks have to be incented, or they have to be looped in and committed to the program. There has to be a governance model that measures the experience of the customer based on how the customer interprets it — not how you interpret it.

If, as a company, you interpret success as net-new software sales, you have to shift that mindset. That’s not how your customers view their own success.

Gardner: That’s very important and powerful. Before we sign off, five years in, where do you go now? Is there an acceleration benefit, a virtuous adoption pattern of sorts when you do this? How do you take what you have done and bring it to a step-change improvement or to an even more strategic level?

Turn feedback into action

Bianco: The next step for us is to embed the experience program in every phase of the customer’s journey. That includes every phase of our engagement journey inside of our organization.

So from start to finish, what are the teams providing that experience, whether it’s a service or product? That would be one. And, again, that requires the governance that I mentioned. Because action is where it’s at — regardless of the feedback you are getting and how many places you listen. Action is the most important piece to making their experience better.

Another is to move beyond just NPS surveys. Again, it’s not that this is a new concept, but as I watched the impact of COVID-19 on accelerating digital feedback, social forums, and public forums, we measured that advocacy. It’s not just the, “Will you recommend this product to a friend or colleague?” In addition it’s about, “Will you promote this company or not?”

That is going to be more important than ever, because we are going to continue in a virtual environment next year. As much as we can help frame what that feedback might be — and be proactive — is where I see success for SAP in the future.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: SAP Ariba.


How to industrialize data science to attain mastery of repeatable intelligence delivery

Businesses these days are quick to declare their intention to become data-driven, yet the deployment of analytics and the use of data science remains spotty, isolated, and often uncoordinated.

To fully reach their digital business transformation potential, businesses large and small need to make data science more of a repeatable assembly line — an industrialization, if you will — of end-to-end data exploitation.

The next BriefingsDirect Voice of Analytics Innovation discussion explores the latest methods, tools, and thinking around making data science an integral core function that both responds to business needs and scales to improve every aspect of productivity.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about the ways that data and analytics behave more like a factory — and less like an Ivory Tower — please welcome Doug Cackett, EMEA Field Chief Technology Officer at Hewlett Packard Enterprise. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Doug, why is there a lingering gap — and really a gaping gap — between the amount of data available and the analytics that should be taking advantage of it?

Cackett: That’s such a big question to start with, Dana, to be honest. We probably need to accept that we’re not doing things the right way at the moment. Actually, Forrester suggests that something like 40 zettabytes of data are going to be under management by the end of this year, which is quite enormous.

And, significantly, more of that data is being generated at the edge through applications, Internet of Things (IoT), and all sorts of other things. This is where the customer meets your business. This is where you’re going to have to start making decisions as well.

So, the gap is two things. It’s the gap between the amount of data that’s being generated and the amount you can actually comprehend and create value from. In order to leverage that data from a business point of view, you need to make decisions at the edge.

You will need to operationalize those decisions and move that capability to the edge, where your business meets your customer. That’s the challenge where we’re all looking to machine learning (ML) — and the operationalization of all of those ML models into applications — to make the difference.

Gardner: Why does HPE think that moving more toward a factory model, industrializing data science, is part of the solution to compressing and removing this gap?

Data’s potential at the edge

Cackett: It’s a math problem, really, if you think about it. If there is exponential growth in data within your business, if you’re trying to optimize every step in every business process you have, then you’ll want to operationalize those insights by making your applications as smart as they can possibly be. You’ll want to embed ML into those applications.

Because, correspondingly, there’s exponential growth in the demand for analytics in your business, right? And yet, the number of data scientists you have in your organization — I mean, growing them exponentially isn’t really an option, is it? And, of course, budgets are also pretty much flat or declining.

So, it’s a math problem because we need to somehow square away that equation. We somehow have to generate exponentially more models for more data, getting to the edge, but doing that with fewer data scientists and lower levels of budget.

Industrialization, we think, is the only way of doing that. Through industrialization, we can remove waste from the system and improve the quality and control of those models. All of those things are going to be key going forward.

Gardner: When we’re thinking about such industrialization, we shouldn’t necessarily be thinking about an assembly line of 50 years ago — where there are a lot of warm bodies lined up. I’m thinking about the Lucille Ball assembly line, where all that candy was coming down and she couldn’t keep up with it.

Perhaps we need more of an ultra-modern assembly line, where it’s a series of robots and with a few very capable people involved. Is that a fair analogy?

Industrialization of data science

Cackett: I think that’s right. Industrialization is about manufacturing where we replace manual labor with mechanical mass production. We are not talking about that. Because we’re not talking about replacing the data scientist. The data scientist is key to this. But we want to look more like a modern car plant, yes. We want to make sure that the data scientist is maximizing the value from the data science, if you like.

We don’t want to go hunting around for the right tools to use. We don’t want to wait for the production line to play catch up, or for the supply chain to catch up. In our case, of course, that’s mostly data or waiting for infrastructure or waiting for permission to do something. All of those things are a complete waste of their time.

As you look at the amount of productive time data scientists spend creating value, that can be pretty small compared to their non-productive time — and that’s a concern. Part of the non-productive time, of course, has been with those data scientists having to discover a model and optimize it. Then they would do the steps to operationalize it.

But maybe doing the data and operations engineering things to operationalize the model can be much more efficiently done with another team of people who have the skills to do that. We’re talking about specialization here, really.

But there are some other learnings as well. I recently wrote a blog about it. In it, I looked at the modern Toyota production system and started to ask questions around what we could learn about what they have learned, if you like, over the last 70 years or so.

It was not just about automation, but also how they went about doing research and development, how they approached tooling, and how they did continuous improvement. We have a lot to learn in those areas.

For an awful lot of organizations that I deal with, they haven’t had a lot of experience around such operationalization problems. They haven’t built that part of their assembly line yet. Automating supply chains and mistake-proofing things — what Toyota called jidoka — are also really important. It’s a really interesting area to be involved with.

Gardner: Right, this is what US manufacturing, in the bricks-and-mortar sense, went through back in the 1980s when they moved to business process reengineering, adopted kaizen principles, and did what Deming and a greater emphasis on quality had done for the Japanese auto companies.

And so, back then there was a revolution, if you will, in physical manufacturing. And now it sounds like we’re at a watershed moment in how data and analytics are processed.

Cackett: Yes, that’s exactly right. To extend that analogy a little further, I recently saw a documentary about Morgan cars in the UK. They’re a hand-built kind of car company. Quite expensive, very hand-built, and very specialized.

And I ended up by almost throwing things at the TV because they were talking about the skills of this one individual. They only had one guy who could actually bend the metal to create the bonnet, the hood, of the car in the way that it needed to be done. And it took two or three years to train this guy, and I’m thinking, “Well, if you just automated the process, and the robot built it, you wouldn’t need to have that variability.” I mean, it’s just so annoying, right?

In the same way, with data science we’re talking about laying bricks — not Michelangelo hammering out the figure of David. What I’m really trying to say is a lot of the data science in our customer’s organizations are fairly mundane. To get that through the door, get it done and dusted, and give them time to do the other bits of finesse using more skills — that’s what we’re trying to achieve. Both [the basics and the finesse] are necessary and they can all be done on the same production line.

Gardner: Doug, if we are going to reinvent and increase the productivity generally of data science, it sounds like technology is going to be a big part of the solution. But technology can also be part of the problem.

What is it about the way that organizations are deploying technology now that needs to shift? How is HPE helping them adjust to the technology that supports a better data science approach?

Define and refine

Cackett: We can probably all agree that most of the tooling around MLOps is relatively young. The two types of company we see are those that haven’t yet gotten to the stage where they’re trying to operationalize more models, meaning they don’t really understand what the problem is yet, and those that are trying but haven’t yet refined a repeatable way of doing it.

Forrester research suggests that only 14 percent of organizations that they surveyed said they had a robust and repeatable operationalization process. It’s clear that the other 86 percent of organizations just haven’t refined what they’re doing yet. And that’s often because it’s quite difficult.

Many of these organizations have only just linked their data science to their big data instances or their data lakes. And they’re using those both for the workloads and to develop the models. And therein lies the problem. Often they get stuck with simple things, like trying to have everyone use a uniform environment. All of your data scientists are sharing the data and sharing the compute environment as well.

And data scientists can often be very destructive in what they’re doing. Maybe overwriting data, for example. To avoid that, you end up replicating the data. And if you’re going to replicate terabytes of data, that can take a long period of time. That also means you need new resources, maybe more compute power, and that means approvals, and it might mean new hardware, too.

Often the biggest challenge is in provisioning the environment for data scientists to work on, the data that they want, and the tools they want. That can all often lead to huge delays in the process. And, as we talked about, this is often a time-sensitive problem. You want to get through more tasks and so every delayed minute, hour, or day that you have becomes a real challenge.

The other thing that is key is that data science is very peaky. You’ll find that data scientists may need no resources or tools on Monday and Tuesday, but then they may burn every GPU you have in the building on Wednesday, Thursday, and Friday. So, managing that as a business is also really important. If you’re going to get the most out of the budget you have, and the infrastructure you have, you need to think differently about all of these things. Does that make sense, Dana?

Gardner: Yes. Doug, how is HPE Ezmeral being designed to help give data scientists more of what they need, how they need it, and to help close the gap between the ad hoc approach and the right kind of assembly-line approach?

Two assembly lines to start

Cackett: Look at it as two assembly lines, at the very minimum. That’s the way we want to look at it. And the first thing the data scientists are doing is the discovery.

The second is the MLOps processes. There will be a range of people operationalizing the models. Imagine that you’re a data scientist, Dana, and I’ve just given you a task. Let’s say there’s a high defection or churn rate from our business, and you need to investigate why.

First you want to find out more about the problem because you might have to break that problem down into a number of steps. And then, in order to do something with the data, you’re going to want an environment to work in. So, in the first step, you may simply want to define the project, determine how long you have, and develop a cost center.

You may next define the environment: Maybe you need CPUs or GPUs. Maybe you need them highly available and maybe not. So you’d select the appropriate-sized environment. You then might next go and open the tools catalog. We’re not forcing you to use a specific tool; we have a range of tools available. You select the tools you want. Maybe you’re going to use Python. I know you’re hardcore, so you’re going to code using Jupyter and Python.

And the next step, you then want to find the right data, maybe through the data catalog. So you locate the data that you want to use and you just want to push a button and get provisioned for that lot. You don’t want to have to wait months for that data. That should be provisioned straight away, right?

You can do your work, save all your work away into a virtual repository, and save the data so it’s reproducible. You can also then check the things like model drift and data drift and those sorts of things. You can save the code and model parameters and those sorts of things away. And then you can put that on the backlog for the MLOps team.

Then the MLOps team picks it up and goes through a similar data science process. They want to create their own production line now, right? And so, they’re going to seek a different set of tools. This time, they need continuous integration and continuous delivery (CICD), plus a whole bunch of data stuff they want to operationalize your model. They’re going to define the way that that model is going to be deployed. Let’s say, we’re going to use Kubeflow for that. They might decide on, say, an A/B testing process. So they’re going to configure that, do the rest of the work, and press the button again, right?

Clearly, this is an ongoing process. Fundamentally that requires workflow and automatic provisioning of the environment to eliminate wasted time, waiting for stuff to be available. It is fundamentally what we’re doing in our MLOps product.
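
As a rough sketch of the self-service request Cackett describes, the two production lines can be thought of as declarative specifications that a workflow engine consumes. The field names below are illustrative assumptions, not the actual HPE Ezmeral API; the Python is only meant to show the shape of the hand-off.

# Discovery: the data scientist declares what the project needs.
project_request = {
    "project": "churn-investigation",
    "cost_center": "analytics-emea",
    "environment": {"cpus": 8, "gpus": 1, "high_availability": False},
    "tools": ["jupyter", "python3"],                   # chosen from a tool catalog
    "datasets": ["crm.customers", "billing.events"],   # located via a data catalog
}

# MLOps: the operationalization team declares how the saved model goes live.
mlops_handoff = {
    "model_artifact": "s3://models/churn/v1",          # saved from the discovery work
    "serving": "kubeflow",                             # deployment framework of choice
    "rollout": {"strategy": "a-b-test", "traffic_split": 0.1},
    "monitoring": ["data_drift", "model_drift"],
}

# A workflow engine would provision the discovery environment from the first
# spec, then run the CI/CD pipeline described by the second to put the model
# into production.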

But in the wider sense, we also have consulting teams helping customers get up to speed, define these processes, and build the skills around the tools. We can also do this as-a-service via our HPE GreenLake proposition as well. Those are the kinds of things that we’re helping customers with.

Gardner: Doug, what you’re describing as needed in data science operations is a lot like what was needed for application development with the advent of DevOps several years ago. Is there commonality between what we’re doing with the flow and nature of the process for data and analytics and what was done not too long ago with application development? Isn’t that also akin to more of a cattle approach than a pet approach?

Operationalize with agility

Cackett: Yes, I completely agree. That’s exactly what this is about and for an MLOps process. It’s exactly that. It’s analogous to the sort of CICD, DevOps, part of the IT business. But a lot of that tool chain is being taken care of by things like Kubeflow and MLflow Project, some of these newer, open source technologies.

I should say that this is all very new, the ancillary tooling that wraps around the CICD. The CICD set of tools are also pretty new. What we’re also attempting to do is allow you, as a business, to bring these new tools and on-board them so you can evaluate them and see how they might impact what you’re doing as your process settles down.

The idea is to put them in a wrapper and make them available so we get a more dynamic feel to this. The way we’re doing MLOps and data science generally is progressing extremely quickly at the moment. So you don’t want to lock yourself into a corner where you’re trapped into a particular workflow. You want to be able to have agility. Yes, it’s very analogous to the DevOps movement as we seek to operationalize the ML model.

The other thing to pay attention to are the changes that need to happen to your operational applications. You’re going to have to change those so they can call the ML model at the appropriate place, get the result back, and then render that result in whatever way is appropriate. So changes to the operational apps are also important.
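
For instance, the change to an operational application might be as small as the following hedged Python sketch: call the deployed model at the right point in the flow and use the score it returns. The service URL follows a common model-serving convention, but the host, model name, and feature fields are assumptions.

import json
import urllib.request

def churn_risk(customer_features: dict) -> float:
    """Ask the deployed model for a churn score for one customer."""
    body = json.dumps({"instances": [customer_features]}).encode("utf-8")
    req = urllib.request.Request(
        "https://models.example.com/v1/models/churn:predict",  # placeholder endpoint
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["predictions"][0]

# The application would call churn_risk() when a customer record is opened and,
# for high scores, render a retention offer in the user interface.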

Gardner: You really couldn’t operationalize ML as a process if you’re only a tools provider. You couldn’t really do it if you’re a cloud services provider alone. You couldn’t just do this if you were a professional services provider.

It seems to me that HPE is actually in a very advantageous place to allow the best-of-breed tools approach where it’s most impactful, but also to start to put some standard glue around this — the industrialization. How is HPE in an advantageous place to have a meaningful impact on this difficult problem?

Cackett: Hopefully, we’re in an advantageous place. As you say, it’s not just a tool, is it? Think about the breadth of decisions that you need to make in your organization, and how many of those could be optimized using some kind of ML model.

You’d understand that it’s very unlikely that it’s going to be a tool. It’s going to be a range of tools, and that range of tools is going to be changing almost constantly over the next 10 and 20 years.

This is much more to do with a platform approach because this area is relatively new. Like any other technology, when it’s new it almost inevitably tends to be very technical in implementation. So using the early tools can be very difficult. Over time, the tools mature, with a mature UI and a well-defined process, and they become simple to use.

But at the moment, we’re way up at the other end. And so I think this is about platforms. And what we’re providing at HPE is the platform through which you can plug in these tools and integrate them together. You have the freedom to use whatever tools you want. But at the same time, you’re inheriting the back-end system. So, that’s Active Directory and Lightweight Directory Access Protocol (LDAP) integrations, and that’s linkage back to the data, your most precious asset in your business. Whether that be in a data lake or a data warehouse, in data marts or even streaming applications.

This is the melting pot of the business at the moment. And HPE has had a lot of experience helping our customers deliver value through information technology investments over many years. And that’s certainly what we’re trying to do right now.

Gardner: It seems that HPE Ezmeral is moving toward industrialization of data science, as well as other essential functions. But is that where you should start, with operationalizing data science? Or is there a certain order by which this becomes more fruitful? Where do you start?

Machine learning leads change

Cackett: This is such a hard question to answer, Dana. It’s so dependent on where you are as a business and what you’re trying to achieve. Typically, to be honest, we find that the engagement is normally with some element of change in our customers. That’s often, for example, where there’s a new digital transformation initiative going on. And you’ll find that the digital transformation is being held back by an inability to do the data science that’s required.

There is another Forrester report that I’m sure you’ll find interesting. It suggests that 98 percent of business leaders feel that ML is key to their competitive advantage. It’s hardly surprising, then, that ML is so closely related to digital transformation, right? Because that’s the stage on which organizations are competing, after all.

So we often find that that’s the starting point, yes. Why can’t we develop these models and get them into production in time to meet our digital transformation initiative? And then it becomes, “Well, what bits do we have to change? How do we transform our MLOps capability to be able to do this and do this at scale?”

Often this shift is led by an individual in an organization, and momentum then develops in the organization to make these changes. But the changes can be really small at the start, of course. You might start off with just a single ML problem related to digital transformation.

We acquired MapR some time ago, and that is now our HPE Ezmeral Data Fabric. It underpins a lot of the work that we’re doing. And so, we will often start with the data, to be honest with you, because a lot of the challenges in many organizations have to do with the data. And as businesses become more real-time and want to connect more closely to the edge, that’s really where the strengths of the data fabric approach come into play.

So another starting point might be the data. A new application at the edge, for example, has new, very stringent requirements for data and so we start there with building these data systems using our data fabric. And that leads to a requirement to do the analytics and brings us obviously nicely to the HPE Ezmeral MLOps, the data science proposition that we have.

Gardner: Doug, is the COVID-19 pandemic prompting people to bite the bullet and operationalize data science because they need to be fleet and agile and to do things in new ways that they couldn’t have anticipated?

Cackett: Yes, I’m sure it is. We know it’s happening; we’ve seen all the research. McKinsey has pointed out that the pandemic has accelerated a digital transformation journey. And inevitably that means more data science going forward because, as we talked about already with that Forrester research, some 98 percent think that it’s about competitive advantage. And it is, frankly. The research goes back a long way to people like Tom Davenport, of course, in his famous Harvard Business Review article. We know that customers who do more with analytics, or better analytics, outperform their peers on any measure. And ML is the next incarnation of that journey.

Gardner: Do you have any use cases of organizations that have taken the industrialization approach to data science? What has it done for them?

Financial services benefits

Cackett: I’m afraid names are going to have to be left out. But a good example is in financial services. They have a problem in the form of many regulatory requirements.

When HPE acquired BlueData it gained an underlying technology, which we’ve transformed into our MLOps and container platform. BlueData had a long history of containerizing very difficult, problematic workloads. In this case, this particular financial services organization had a real challenge. They wanted to bring on new data scientists. But the problem is, every time they wanted to bring a new data scientist on, they had to go and acquire a bunch of new hardware, because their process required them to replicate the data and completely isolate the new data scientist from the other ones. This was their process. That’s what they had to do.

So as a result, it took them almost six months to do anything. And there’s no way that was sustainable. It was a well-defined process, but it still involved a six-month wait each time.

So instead we containerized their Cloudera implementation and separated the compute and storage as well. That means we could now create environments on the fly, effectively within minutes. It also means that we can take read-only snapshots of data. A read-only snapshot is just a set of pointers, so it’s instantaneous.
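To make the “just a set of pointers” point concrete, here is a toy illustration (not the actual Data Fabric implementation) of why a read-only snapshot that copies references rather than blocks is effectively instantaneous no matter how large the dataset is.

```python
# Toy illustration (not the real Data Fabric internals): a "snapshot" copies
# only the pointers to data blocks, so its cost depends on the number of
# entries, not on the terabytes behind them.
live_volume = {
    "customers.parquet": "block-0001",
    "transactions.parquet": "block-0002",
    "features.parquet": "block-0003",
}

def take_snapshot(volume: dict) -> dict:
    # Copy the pointer table only; the underlying blocks are shared read-only.
    return dict(volume)

snapshot = take_snapshot(live_volume)
live_volume["transactions.parquet"] = "block-0099"   # writers move on...
print(snapshot["transactions.parquet"])              # ...the snapshot still sees block-0002
```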


They were able to scale out their data science without scaling up their costs or the number of people required. Interestingly, they have recently moved that on further as well, and are now doing all of that in a hybrid cloud environment. They only have to change two lines of code to push workloads into AWS, for example, which is pretty magical, right? And that’s where they’re doing the data science.
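The transcript does not say which two lines those are, so the snippet below is only an assumption-labeled sketch of what such a change often looks like in practice: the analysis code stays the same, and only the data URI and credentials profile change when the workload targets AWS.

```python
# Hypothetical before/after: the analysis is unchanged; only the data location
# (and implicitly the credentials profile) differs between on-premises and AWS.
import pandas as pd

# DATA_URI = "/mapr/prod-cluster/analytics/transactions.parquet"    # line 1: on-prem Data Fabric path
DATA_URI = "s3://example-bucket/analytics/transactions.parquet"     # line 1: AWS object store instead
STORAGE_OPTS = {"profile": "aws-analytics"}                         # line 2: cloud credentials profile

df = pd.read_parquet(DATA_URI, storage_options=STORAGE_OPTS)        # everything below stays the same
print(df.describe())
```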

Another good example that I can name is GM Finance, a fantastic example of how, having started in one area of the business — all about risk and compliance — they’ve been able to extend the value to things like credit risk.

But doing credit risk and risk in terms of insurance also means that they can look at policy pricing based on dynamic risk. For example, for auto insurance based on the way you’re driving. How about you, Dana? I drive like a complete idiot. So I couldn’t possibly afford that, right? But you, I’m sure you drive very safely.

But in this use case, because they have the data science in place, it means they know how a car is being driven. They are able to look at the value of the car at the end of that lease period and create more value from it.

These are the types of detailed business outcomes we’re talking about. This is about giving our customers the means to do more data science. And because the data science becomes better, you’re able to do even more data science and create momentum in the organization, which means you can do increasingly more data science. It’s really a very compelling proposition.

Gardner: Doug, if I were to come to you in three years and ask similarly, “Give me the example of a company that has done this right and has really reshaped itself.” Describe what you think a correctly analytically driven company will be able to do. What is the end state?

A data-science driven future

Cackett: I can answer that in two ways. One relates to talking to an ex-colleague who worked at Facebook. And I’m so taken with what they were doing there. Basically, he said, what originally happened at Facebook, in his very words, is that to create a new product in Facebook they had an engineer and a product owner. They sat together and they created a new product.

Sometime later, they would ask a data scientist to get involved, too. That person would look at the data and tell them the results.

Then they completely changed that around. What they now do is first find the data scientist and bring him or her on board as they’re creating a product. So they’re instrumenting up what they’re doing in a way that best serves the data scientist, which is really interesting.

The data science is built-in from the start. If you ask me what’s going to happen in three years’ time, as we move to this democratization of ML, that’s exactly what’s going to happen. I think we’ll end up genuinely being information-driven as an organization.

That will build the data science into the products and the applications from the start, not tack them on to the end.

Gardner: And when you do that, it seems to me the payoffs are expansive — and perhaps accelerating.

Cackett: Yes. That’s the competitive advantage and differentiation we started off talking about. But the technology has to underpin that. Without the technology, you can’t deliver the ML; without the ML, you won’t get the competitive advantage in your business; and so your digital transformation will also fail.

This is about getting the right technology with the right people in place to deliver these kinds of results.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


How remote work promises to deliver new levels of engagement, productivity, and innovation

The way people work has changed more in 2020 than in the previous 10 years combined — and that’s saying a lot. Even more than the major technological impacts of cloud, mobile, and big data, the COVID-19 pandemic has greatly accelerated and deepened global behavioral shifts.

The ways that people think about where and how to work may never be the same, and new technology alone could not have made such a rapid impact.

So now is the time to take advantage of a perhaps once-in-a-lifetime disruption for the better. Steps can be taken to make sure that such a sea change comes less with a price and more with a broad boon — to both workers and businesses.

The next BriefingsDirect work strategies panel discussion explores research into the future of work and how unprecedented innovation could very well mean a doubling of overall productivity in the coming years.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

We’re joined by a panel to hear insights on how a remote-first strategy leads to a reinvention of work expectations and payoffs. Please welcome our guests: Jeff Vincent, Chief Executive Officer at Lucid Technology Services; Ray Wolf, Chief Executive Officer at A2K Partners; and Tim Minahan, Executive Vice President of Business Strategy and Chief Marketing Officer at Citrix. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tim, you’ve done some new research at Citrix. You’ve looked into what’s going on with the nature of work and a shift from what seems to be from chaos to opportunity. Tell us about the research and why it fosters such optimism.

Minahan: Most of the world has been focused on the here-and-now, with how to get employees home safely, maintain business continuity, and keep employees engaged and productive in a prolonged work-from-home model. Yet we spent the bulk of the last year partnering with Oxford Analytica and Coleman Parkes to survey thousands of business and IT executives and to conduct qualitative interviews with C-level executives, academia, and futurists on what work is going to look like 15 years from now — in 2035 — and predict the role that technology will play.

Certainly, we’re already seeing an acceleration of the findings from the report. And if there’s any iota of a silver lining in this global crisis we’re all living through, it’s that it has caused many organizations to rethink their operating models, business models, and their work models and workforce strategies.

Work has no doubt changed forever. We’re seeing an acceleration of companies embracing new workforce strategies, reaching pools of talent in remote locales using technology, and opening up access to skill sets that were previously too costly to source near their office and work hubs.

Now they can access talent anywhere, enabling and elevating the skill sets of all employees by leveraging artificial intelligence (AI) and machine learning (ML) to help them perform as their best employees. They are ensuring that they can embrace entirely new work models, possibly even the Uber-fication of work by tapping into recent retirees, work-from-home parents, and caregivers who had opted-out of the workforce — not because they didn’t have the skills or expertise that folks needed — but because traditional work models didn’t support their home environment.

We’re seeing an acceleration of companies liberated by the fact that they realize work can happen outside of the office. Many executives across every industry have begun to rethink what the future of work is going to look like when we come out of this pandemic.

Gardner: Tim, one of the things that jumped out at me from your research was a majority feel that technology will make workers at least twice as productive by 2035. Why such a newfound opportunity for higher productivity, which had been fairly flat for quite a while? What has changed in behavior and technology that seems to be breaking us out of the doldrums when it comes to productivity?


Minahan: Certainly, the doubling of employee productivity is a factor of a couple things. Number one, new more flexible work models allow employees to work wherever they can do their best work. But more importantly, it is the emergence of the augmented worker, using AI and ML to help not just offer up the right information at the right time, but help employees make more informed decisions and speed up the decision-making process, as well as automating menial tasks so employees can focus on the strategic aspects of driving creativity and innovation for the business. This is one of the areas we think is the most exciting as we look forward to the future.

Gardner: We’re going to dig into that research more in our discussion. But let’s go to Jeff at Lucid Technology Services. Tell us about Lucid, Jeff, and why a remote-first strategy has been a good fit for you.

Remote services keep SMBs safe

Vincent: Lucid Technology Services delivers what amounts to a fractional chief information officer (CIO) service. Small- to medium-sized businesses (SMBs) need CIOs but don’t generally have the working capital to afford a full-time, always-on, and always-there CIO or chief technology officer (CTO). That’s where we fill the gap.

We bring essentially an IT department to SMBs, everything from budgeting to documentation — and all points in between. And one of the big things that has taught us to look forward is looking backward. In 1908, Henry Ford gave us the modern assembly line, which promptly gave us the Model T. And so horse-drawn buggy whip factories and buggy accessories suddenly became obsolete.

Something similar happened in the early 1990s. It was a fad called the Internet, and it revolutionized work in ways that could not have been foreseen up to that point in time. We firmly believe that we’re on the precipice of another revolution of work, just like then. The technology is mature at this point. We can move forward with it, using things like Citrix.

Gardner: Bringing a CIO-caliber function to SMBs sounds like it would be difficult to scale, if you had to do it in-person. So, by nature, you have been a pioneer in a remote-first strategy. Is it effective? Some people think you can’t be remote and be effective.

Vincent: Well, that’s not what we’ve been finding. This has been an evolution in my business for 20 years now. And the field has grown as the need has grown. Fortunately, the technology has kept pace with it. So, yes, I think we’re very effective.

Previously, let’s say you had a CPA firm of 15 providers, or a medical practice of three or four doctors with another 10 or so administrative and assistant staff on site all of the time. They hold privileged information and regulated data that need safeguarding.

Well, if you are Arthur Andersen, a large, national firm, or Kaiser Permanente, or some really large corporation that has an entire team of IT staff on-site, then that isn’t really a problem. But when you’re under 25 to 50 employees, that’s a real problem because even if you were compromised, you wouldn’t necessarily know it.


We leverage monitoring technology, such as next-generation firewalls, and a team of people looking after that network operation center (NOC) and help desk to head those problems off at the pass. If problems do develop, we can catch them when they’re still small. And with such a light, agile team that’s heavy on tech and the infrastructure behind it, a very few people can do a lot of work for a lot of people. That is the secret sauce of our success.

Gardner: Jeff, from your experience, how often is it the CIO who is driving the remote work strategy?

Vincent: I don’t think remote work prior to the pandemic could have been driven from any other seat than the CIO/CTO. It’s his or her job. It’s their entire ethos to keep a finger on the pulse of technology, where it’s going, and what it’s currently capable of doing.

In my experience, anybody else on the C-suite team has so much else going on. Everybody is wearing multiple hats and doing double-duty. So, the CTO is where that would have been driven.

But now, what I’ve seen in my own business, is that since the pandemic, as the CTO, I’m not generally leading the discussion — I’m answering the questions. That’s been very exciting and one of the silver linings I’ve seen through this very trying time. We’re not forcing the conversation anymore. We are responding to the questions. I certainly didn’t envision a pandemic shutting down businesses. But clearly, the possibility was there, and it’s been a lot easier conversation [about remote work] to have over the past several months.

The nomadic way of work

Gardner: Ray, tell us about A2K Partners. What do you have in common with Jeff Vincent at Lucid about the perceived value of a remote-first strategy?

Wolf: A2K Partners is a digital transformation company. Our secret sauce is we translate technology into the business applications, outcomes, and impacts that people care about.

Our company was founded by individuals who were previously in C-level business positions, running global organizations. We were the consumers of technology. And honestly, we didn’t want to spend a lot of time configuring the technologies. We wanted to speed things up, drive efficiency, and drive revenue and growth. So we essentially built the company around that.

We focus on work redesign, work orchestration, and employee engagement. We leverage platforms like Citrix for the future of work and for bringing in productivity enhancements to the actual processes of doing work. We ask, what’s the current state? What’s the future state? That’s where we spend a lot of our time.

As for a remote-first strategy, I want to highlight that our company is a nomadic company. We recruit people who want to live and work from anywhere. We think there’s a different mindset there. They are more apt to accept and embrace change. So untethered work is really key.

What we have been seeing with our clients — and in the conversations that we’re having today — is that leaders of every organization, at every level, are trying to figure out how to come out of this pandemic better than when they went in. Some actually feel like victims, and we’re encouraging them to see this as an opportunity.

Some statistics from the last three economic downturns: One very interesting one is that some companies that entered the downturn in the bottom 20 percent emerged in the top 20 percent after it. And you ask yourself, “How does a mediocre company all of a sudden rise to the top through a crisis?” This is where we’ve been spending our time, figuring out what plays they are running and how to better help them execute on them.


The companies that have decided to use this as a period to change the business model, change the services and products they’re offering, are doing it in stealth mode. They’re not noisy. There are no press releases. But I will tell you that next March, June, or September, what will come from them will create an Amazon-like experience for their customers and their employees.

Gardner: Tim, in listening to Jeff and Ray, it strikes me that they look at remote work not as the destination — but the starting point. Is that what you’re starting to see? Have people reconciled themselves with the notion that a significant portion of their workforce will probably be remote? And how do we use that as a starting point — and to what?

Minahan: As Jeff said, companies are rethinking their work models in ways they haven’t since Henry Ford. We just did OnePoll research with thousands of US-based knowledge workers. Some 47 percent have either relocated out of big metropolitan areas or are in the process of doing so right now. They can do that primarily because they’ve proven to themselves that they can be productive when not necessarily in the office.


Similarly, some 80 percent of companies are now looking at making remote work a more permanent part of their workforce strategy. And why is that? It is not merely a question of whether Sam or Sally should work in the office or at home. No, they’re fundamentally rethinking the role of work, the workforce, and what role the physical office should play.

And they’re seeing an opportunity, not just from real estate cost-reduction, but more so from access to talent. If we remember back nine months ago to before the great pandemic, we were having a different discussion. That discussion was the fact that there was a global talent shortage, according to McKinsey, of 95 million medium- to high-skilled workers.

That hasn’t changed. It was exacerbated at that time because we were organized around traditional work-hub models — where you build an office, build a call center, and you try like heck to hire people from around that area. Of course, if you happen to build in a metropolitan area right down the street from one of your top competitors — you can see the challenge.

In addition, there was a challenge around attaining the right skillsets to modernize and digitize your businesses. We’re also seeing an acceleration in the need for those skills because, candidly, very few businesses can continue to maintain their physical operations in light of the pandemic. They have had to go digital.

And so, as companies are rethinking all of this, they’re reviewing how to use technology to embrace a much more flexible work model, one that gives access to talent anywhere, just as Ray indicated. I like the nomadic work concept.


Now, how do I use technology to further raise the skillsets of all of my employees so they perform like the very best? This is where that interesting angle of AI and ML comes in: being able to offer up the right insights to guide employees to the right next step in a very simple way. At the same time, that approach removes the noise from their day and helps them focus on the tasks they need to get done to be productive. It gives them the space to be creative and innovative and to drive that next level of growth for their company.

Gardner: Jeff, it sounds like the remote work and the future of work that Tim is describing sets us up for a force-multiplier when it comes to addressable markets. And not just addressable markets in terms of your customers, who can be anywhere, but also that your workers can be anywhere. Is that one of the things that will lead to a doubling of productivity?

Workers and customers anywhere

Vincent: Certainly. And the thing about truth is that it’s where you find it. And if it’s true in one area of human operations, it’s going to at least have some application in every other. For example, I live in the Central Valley of California. Because of our climate, the geology, and the way this valley was carved out of the hillside, we have a disproportionately high ability to produce food. So one of the major industries here in the Central Valley is agriculture.

You can’t do what we do here just anywhere because of all those considerations: climate, soil, and rainfall, when it comes. The fact that we have one of the tallest mountain ranges right next to us gives us tons of water, even if it doesn’t rain a lot here in Fresno. But you can’t outsource any of those things. You can’t move any of those things — but that’s becoming a rarity.

If you focus on a remote-first workplace, you can source talent from anywhere; you can locate your business center anywhere. So you get a much greater recruiting tool both for clientele and for talent.

Another thing that has been driven by this pandemic is that people have been forced to go home, stay there, and work there. Either you’re going to figure out a way to get around the obstacles of not being able to go to the office or you’re going to have to close down, and nobody wants to do that. So they’ve learned to adapt, by and large.

And the benefits that we’re seeing are just manifold. They go into everything. Our business agility is much greater. The human considerations of your team members improve, too. They have had an artificial dichotomy between work responsibilities and home life. Think of a single parent trying to raise a family and put bread on the table.


Now, with the remote-first workplace, it becomes much easier. Your son, your daughter, they have a medical appointment; they have a school need; they have something going on in the middle of the day. Previously you had to request time off, schedule around that, and move other team members into place. And now this person can go and be there for their child, or their aging parents, or any of the other hundreds of things that can go sideways for a family.

With a cloud-based workforce, that becomes much less of a problem. You have still got some challenges you’ve got to overcome, but there are fewer of them. I think everybody is reaping the benefits of that because with fewer people needing to be in the office, that means you can have a smaller office. Fewer people on the roads means less environmental impact of moving around and commuting for an hour twice a day.

Gardner: Ray Wolf, what is it about technology that is now enabling these people to be flexible and adaptive? What do you look for in technology platforms to give those people the tools they need?

Do more with less

Wolf: First, let’s talk about the current technology situation. The average worker out there has eight applications and 10 windows open. The way technology is provisioned to some of our remote workers is working against them. We have these technologies for all. Just because you give someone access to a customer relationship management (CRM) system or a human resources (HR) system doesn’t necessarily make them more productive. It doesn’t take into consideration how they like to do work. When you bring on new employees, it leaves it up to the individual to figure out how to get stuff done.

With the new platforms, Citrix Workspace with intelligence, for example, we’re able to take those mundane tasks and lock them into muscle memory through automation. And so, what we do is free up time and energy using the Citrix platform. Then people can start moving up and essentially upskilling, taking on higher cognitive tasks, and building new products and services.


That’s what we love about it. The other side is that it’s no-code and low-code. The key here is figuring out where to get started and making sure that the workers have their fingerprints on the plan, because your workers today know exactly where the inefficiencies are. They know where the frustration is. We have a number of use cases where, in a matter of six weeks, we were able to unlock almost a day per week worth of productivity gains, for which one of our customers in the sales space, a sales vice president, coined the word “proactivity.”

For them, they were taking that one extra day a week and starting to be proactive by pursuing new sales and leads and driving revenue where they just didn’t have the bandwidth before.
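As a purely illustrative sketch of the automation pattern Wolf describes (and emphatically not the Citrix microapp builder itself, which is low-code and no-code), the mundane-task case often reduces to polling a system of record and surfacing only the decision that needs a human. Every endpoint and field name below is a hypothetical placeholder.

```python
# Hypothetical sketch of the pattern described above: watch a system of record
# for pending approvals and surface a one-click action, instead of making the
# employee log in and hunt for the task. No URLs or fields here are real APIs.
import requests

EXPENSE_API = "https://erp.internal.example.com/api/expenses"  # placeholder system of record

def pending_approvals(manager_id: str) -> list:
    """Return only the facts the approver needs, stripping out the other noise."""
    rows = requests.get(EXPENSE_API, params={"status": "pending", "approver": manager_id}, timeout=5).json()
    return [{"id": r["id"], "who": r["submitter"], "amount": r["amount"]} for r in rows]

def approve(expense_id: str) -> None:
    """One-click action surfaced to the user instead of a full ERP session."""
    requests.post(f"{EXPENSE_API}/{expense_id}/approve", timeout=5)

for item in pending_approvals("mgr-042"):
    print(f"[Approve] {item['who']}: ${item['amount']}")  # rendered as a notification card
```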

Through our own polling of about 200 executives, we discovered that 50 percent of companies are scaling down their resources because they are unsure of the future. And that leaves them with the situation of doing more with less. That’s why the automation platforms are ideal for freeing up time and energy, so they can deal with a reduced workforce but still gain the bandwidth to pursue new services and products. Then they can come out and be in that top 20 percent after the pandemic.

Gardner: Tim, I’m hearing Citrix Workspace referred to as an automation platform. How does Workspace not just help people connect, but helps them automate and accelerate productivity?

Keep talent optimized every day

Minahan: Ray put his finger on the pulse of the third dynamic we were seeing pre-pandemic, and it’s only been exacerbated. We talked first about the global shortage of medium- to high-skills talent. But then we talked about the acute shortage of digital skills that those folks need.

The third part is, if you’re lucky enough to have that talent, it’s likely they are very frustrated at work. A recent Gallup poll says 87 percent of employees are disengaged at work, and that’s being exacerbated by all of the things that Ray talked about. We’ve provided these workers with all of these tools, all these different channels, Teams and Slack and the like, and they’re meant to improve their performance in collaboration. But we have reached a tipping point of complexity that really has turned your top talent into task rabbits.

What Citrix does with our digital Workspace technology is it abstracts away all of that complexity. It provides unified access to everything an employee needs to be productive in one experience that travels with them. So, their work environment is this digital workspace — no matter what device they are on, no matter what location they are at, no matter what work channel they need to navigate across.

The second thing is that it wraps that in security, both secure access on the way in (I call it the bouncer at the front door), as well as ongoing contextual application of security policies. I call that the bodyguard who follows you around the club to make sure you stay out of trouble. And that gives IT the confidence that those employees can indeed work wherever they need to, and from whatever device they need to, with a level of comfort that their company’s information and assets are made secure.


But what gets exciting now is the intelligence components. Infusing this with ML and AI automates away and guides an employee through their work day. It automates away those menial tasks so they can focus on what’s important.

And that’s where folks like A2K come in. They can bring in their intellectual property and understanding of the business processes — using those low- to no-code tools — to actually develop extensions to the workspace that meet the needs of individual functions or individual industries and personalize the workspace experience for every individual employee.

Ray mentioned sales force productivity. They are also doing call center optimization. These are very discrete solutions that previously required users to navigate across multiple different applications, but are now handled through a microapp player that simplifies the engagement model for the employee, offering up the right insights and the right tasks at the right time so that they can do their very best work.

Gardner: Jeff Vincent, we have been talking about this in terms of worker productivity. But I’m wondering about leadership productivity. You are the CEO of a company that relies on remote work to a large degree. How do you find that tools like Citrix and remote-first culture works for you as a leader? Do you feel like you can lead a company remotely?

Workspace enhances leadership

Vincent: Absolutely. I’m trying to take a sip out of a fire hose, because everything I am hearing is exactly what we have been seeing — just put a bit more eloquently and with a bit more data behind it — for quite a long time now.

Leading a remote team really isn’t any different from leading a team you see in person. I mean, one of the aspects of leadership, as it pertains to this discussion, is having everybody know what is expected of them and when the due date is, and enabling them with the tools they need to get the work done on time and on budget, right?


And with Citrix Workspace technology, the workflows automate expense report approvals, calendar appointments, and the menial tasks that take up a lot of our time every single day. They now become seamless. They happen almost without effort. So that allows the leaders to focus on, “Okay, what does John need today to get done the task that’s due in a month or in a quarter? Where are we at with this prospect, this lead, or this project?”

And it allows everybody to take a moment, reflect on where they are, reflect on where they need to be, and then get more surgical with our people on getting there.

Gardner: Ray, also as a CEO, how do you see the intersection of technology, behavior, and culture coming together so that leaders like yourself are the ones going to be twice as productive?

Wolf: This goes to a human capital strategy, where you’re focusing on the numerator. So, the cost of your resources and the type of resource you need fit within a band. That’s the denominator.

The numerator is what productivity you get out of your workforce. There’s a number of things that have to come into play. It’s people, process, culture, and technology — but not independent or operating in a silo.

And that’s the big opportunity Jeff and Tim are talking about here. Imagine when we start to bring system-level thinking to how we do work both inside and outside of our company. It’s the ecosystem, like hiring Ray Wolf as the individual contributor, yet getting 13 Ray Wolfs; that’s great.

But what happens if we orchestrate the work between finance, HR, the supply chain, and procurement? And then we take it an even bigger step by applying this outside of our company with partners?


We’re working with a very large distributor right now with hundreds of resellers. In order to close deals, they have to get into the other partner’s CRM system. Well, today, that happens with about eight emails over a number of days. And that’s just inefficient. But with Citrix Workspace you’re able to cross-integrate processes inside and outside of your company in a secure manner, so that entire ecosystems work seamlessly. As an example, just think about the travel reservation systems, which are not owned by the airlines but are still a heart-lung function for them, and they have to work in unison.

We’re really jazzed about that. How did we discover this? Two things. One, I’m an aerospace engineer by first degree, so I saw this come together in complex machines, like jet engines. And then, second, by running a global company, I was spending 80 hours a week trying to reconcile disparate data: One data set says sales were up, another that productivity was up, and then my profit margins go down. I couldn’t figure it out without spending a lot of hours.

And then we started a new way of thinking, which is now accelerated with the Citrix Workspace. Disparate systems can work together. It makes clear what needs to be done, and then we can move to the next level, which is democratization of data. With that, you’re able to put information in front of people in synchronization. They can see complex supply chains complete, they can close sales quicker, et cetera. So, it’s really awesome.

I think we’re still at the tip of the iceberg. The innovation that I’m aware of on the product roadmap with Citrix is just awesome, and that’s why we’re here as a partner.

Gardner: Tim, we’re hearing about the importance of extended enterprise collaboration and democratization of data. Is there anything in your research that shows why that’s important and how you’re using that understanding of what’s important to help shape the direction of Citrix products?

Augmented workers arrive

Minahan: As Ray said, it’s about abstracting away that lower-level complexity, providing all the integrations, the source systems, the service security model, and providing the underlying workflow engines and tools. Then experts like Lucid and A2K can extend that to create new solutions for driving business outcomes.

From the research, we can expect the emergence of the augmented worker, number one. We’re already beginning to see it with bots and robotic process automation (RPA) systems. But at Citrix we’re going to be moving to a much higher level, where it will do things similar to what Ray and Jeff were saying, abstracting away a lot of the menial tasks that can be automated. But we can also perform at a higher level, tasks at a much more informed and rapid pace through use of AI, which can compress and analyze massive amounts of data that would take us a very long time individually. ML can adapt and personalize that experience for us.


Secondly, the research indicates that while robots will replace some tasks and jobs, they will also create many new jobs. And, according to our Work 2035 research, you’ll see a rise in demand for new roles, such as a bot or AI trainer, a virtual reality manager, advanced data scientists, privacy and trust managers, and design thinkers such as the folks at A2K and Lucid Technology Solutions are already doing. They are already working with clients to uncover the art of the possible and rethinking business process transformation.

Importantly, we also identified the need for flexibility of work: shifting your mindset from thinking about a workforce in terms of full-time equivalents (FTEs) to thinking in terms of pools of talent. You understand the individual skillsets that you need, bring them together and assemble them rather quickly to address a certain project or issue using digital Citrix Workspace technology, and then disassemble them just as quickly.

But you’ll also see a change in leadership. AI is going to take over a lot of those business decisions and possibly eliminate the need for some middle management teams. The bulk of our focus can be not so much managing as driving new creative ideas and innovation.

Gardner: I’d love to hear more from both Jeff and Ray about how businesses prepare themselves to best take advantage of the next stages of remote work. What do you tell businesses about thinking differently in order to take advantage of this opportunity?

Imagine what’s possible to work

Vincent: Probably the single biggest thing you can do to get prepared for the future of work is to rethink IT and your human capital, your team members. What do they need as a whole?

A business calls me up and says, “Our server is getting old, we need to get a new server.” And previously, I’d say, “Well, I don’t know if you actually need a server on-site, maybe we talk about the cloud.”

So educate yourself as a business leader on what out there is possible. Then take that step, listen to your IT staff, listen to your IT director, whoever that may be, and talk to them about what is out there and what’s really possible. The technology enabling remote work has grown exponentially, even in last few months, in its adoption and capabilities.

If you looked at the technology a year or two ago, that world doesn’t exist anymore. The technology has grown dramatically. The price point has come down dramatically. What is now possible wasn’t a few years ago.

So listen to your technology advisers, look at what’s possible, and prepare yourself for the next step. Take capital and reinvest it into the future of work.

Wolf: What we’re seeing that’s working the best is people are getting started anyway, anyhow. There really wasn’t a playbook set up for a pandemic, and it’s still evolving. We’re experiencing about 15 years’ worth of change in every three months of what’s going on. And there’s still plenty of uncertainty, but that can’t paralyze you.


We recommend that people fundamentally take a look at what your core business is. What do you do for a living? And then everything that enables you to do that is kind of ancillary or secondary.

When it comes to your workforce — whether it’s comprised of contractors or freelancers or permanent employees — no matter where they are, have a get stuff done mentality. It’s about what you are trying to get done. Don’t ask them about the systems yet. Just say, “What are you trying to get done?” And, “What will it take for you to double your speed and essentially only put half the effort into it?”

And listen. And then define, configure, and acquire the technologies that will enable that to happen. We need to think about what’s possible at the ground level, and not so much thinking about it all in terms of the systems and the applications. What are people trying to do every day and how do we make their work experience and their work life better so that they can thrive through this situation as well as the company?

Gardner: Tim, what did you find most surprising or unexpected in the research from the Work 2035 project? And is there a way for our audience to learn more about this Citrix research?

Minahan: One of the most alarming things to me from the Work 2035 project, the one where we’ve gotten the most visceral reaction, was the anticipation that, by 2035, in order to gain an advantage in the workplace, employees would literally be embedding microchips to help them process information and be far more productive in the workforce.

I’m interested to see whether that comes to bear or not, but certainly it’s very clear that the role of AI and ML — we’re only scratching the surface as we drive to new work models and new levels of productivity. We’re already seeing the beginnings of the augmented worker and just what’s possible when you have bots sitting — virtually and physically — alongside employees in the workplace.

We’re seeing the future of work accelerate much quicker than we anticipated. As we emerge out the other side of the pandemic, with the guidance of folks like Lucid and A2K, companies are beginning to rethink their work models and liberate their thinking in ways they hadn’t considered for decades. So it’s an incredibly exciting time.

Gardner: And where can people go to learn more about your research findings at Citrix?

Minahan: To view the Work 2035 project, you can find the foundational research at Citrix.com, but this is an ongoing dialogue that we want to continue to foster with thought leaders like Ray and Jeff, as well as academia and governments, as we all prepare not just technically but culturally for the future of work.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Citrix.


How the Journey to Modern Data Management is Paved with an Inclusive Edge-to-Cloud Data Fabric

The next BriefingsDirect Voice of Analytics Innovation discussion focuses on the latest insights into end-to-end data management strategies.

As businesses seek to gain insights for more elements of their physical edge — from factory sensors, myriad machinery, and across field operations — data remains fragmented. But a Data Fabric approach allows information and analytics to reside locally at the edge yet contribute to the global improvement in optimizing large-scale operations.

Stay with us now as we explore how edge-to-core-to-cloud dispersed data can be harmonized with a common fabric to make it accessible for use by more apps and across more analytics.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To learn more about the ways all data can be managed for today’s data-rich but too often insights-poor organizations, we’re joined by Chad Smykay, Field Chief Technology Officer for Data Fabric at Hewlett Packard Enterprise (HPE). The interview is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Chad, why are companies still flooded with data? It seems like they have the data, but they’re still thirsty for actionable insights. If you have the data, why shouldn’t you also have the insights readily available?

Smykay: There are a couple of reasons for that. We still see challenges for our customers today. One is just having a common data governance methodology. That’s not just about governing the security and audits, and the techniques around that — it’s about determining just what your data is.

I’ve gone into so many projects where they don’t even know where their data lives; just a simple matrix of where the data is, where it lives, and how it’s important to the business. This is really the first step that most companies just don’t do.

Gardner: What’s happening with managing data access when they do decide they want to find it? What’s been happening with managing the explosive growth of unstructured data from all corners of the enterprise?

Tame your data

Smykay: Five years ago, it was still the Wild West of data access. But we’re finally seeing some great standards being deployed, along with application programming interfaces (APIs) for that data access. Companies are now realizing there’s power in having one API to rule them all. In this case, we mostly see the Amazon S3 API.

There are some other great APIs for data access out there, but just having more standardized API access into multiple datatypes has been great for our customers. It allows for APIs to gain access across many different use cases. For example, business intelligence (BI) tools can come in via an API. Or an application developer can access the same API. So that approach really cuts down on my access methodologies, my security domains, and just how I manage that data for API access.
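Here is a minimal sketch of what “one API to rule them all” can look like in practice: the same S3 client code works against AWS or any S3-compatible object store simply by pointing at a different endpoint, so BI tools and application code share one access method. The endpoint, bucket, and credentials below are placeholders, not real systems.

```python
# Sketch: the same S3 API client works against AWS or any S3-compatible
# object store (an on-prem gateway, another cloud) by swapping endpoint_url.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.datafabric.example.com:9000",  # placeholder S3-compatible gateway
    aws_access_key_id="EXAMPLE_KEY",
    aws_secret_access_key="EXAMPLE_SECRET",
)

# Identical calls regardless of where the object store actually lives.
for obj in s3.list_objects_v2(Bucket="analytics").get("Contents", []):
    print(obj["Key"], obj["Size"])

body = s3.get_object(Bucket="analytics", Key="transactions/2021/01.parquet")["Body"].read()
```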

Gardner: And when we look to get buy-in from the very top levels of businesses, why are leaders now rethinking data management and exploitation of analytics? What are the business drivers that are helping technologists get the resources they need to improve data access and management?

Smykay: The business drivers gain when data access methods are as reusable as possible across the different use cases. It used to be that you’d have different point solutions, or different open source tools, needed to solve a business use-case. That was great for the short-term, maybe with some quarterly project or something for the year you did it in.

But then, down the road, say three years out, they would say, “My gosh, we have 10 different tools across the many different use cases we’re using.” It makes it really hard to standardize for the next set of use cases.


So that’s been a big business driver, gaining a common, secure access layer that can access different types of data. That’s been the biggest driver for our HPE Data Fabric. That and having common API access definitely reduces the management layer cost, as well as the security cost.

Gardner: It seems to me that such data access commonality, when you attain it, becomes a gift that keeps giving. The many different types of data often need to go from the edge to dispersed data centers and sometimes dispersed in the cloud. Doesn’t data access commonality also help solve issues about managing access across disparate architectures and deployment models?

Smykay: You just hit the nail on the head. Having commonality for that API layer really gives you the ability to deploy anywhere. When I have the same API set, it makes it very easy to go from one cloud provider, or one solution, to another. But that can also create issues in terms of where my data lives. You still have data gravity issues, for example. And if you don’t have portability of the APIs and the data, you start to see some lock-in with the either the point solution you went with or the cloud provider that’s providing that data access for you.

Gardner: Following through on the gift that keeps giving idea, what is it about the Data Fabric approach that also makes analytics easier? Does it help attain a common method for applying analytics?

Data Fabric deployment options

Smykay: There are a couple of things there. One, it allows you to keep the data where it may need to stay. That could be for regulatory reasons or just depend on where you build and deploy the analytics models. A Data Fabric helps you to start separating out your computing and storage capabilities, but also keeps them coupled for wherever the deployment location is.


For example, a lot of our customers today have the flexibility to deploy IT resources out in the edge. That could be a small cluster or system that pre-processes data. They may typically slowly trickle all the data back to one location, a core data center or a cloud location. Having these systems at the edge gives them the benefit of both pushing information out, as well as continuing to process at the edge. They can choose to deploy as they want, and to make the data analytics solutions deployed at the core even better for reporting or modeling.

Gardner: It gets to the idea of act locally and learn globally. How is that important, and why are organizations interested in doing that?

Smykay: It’s just-in-time, right? We want everything to be faster, and that’s what this Data Fabric approach gets for you.

In the past, we’ve seen edge solutions deployed, but you weren’t processing a whole lot at the edge. You were pushing along all the data back to a central, core location — and then doing something with that data. But we don’t have the time to do that anymore.

Unless you can change the laws of physics — last time I checked, they haven’t done that yet — we’re bound by the speed of light for these networks. And so we need to keep as much data and systems as we can out locally at the edge. Yet we need to still take some of that information back to one central location so we can understand what’s happening across all the different locations. We still want to make the rearview reporting better globally for our business, as well as allow for more global model management.
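A hedged sketch of that “act locally, learn globally” pattern: the edge node keeps the raw readings and pushes only a compact summary back to the core over the same object API, so global reporting and model management still see every site. The bucket name, key layout, and summary fields are assumptions made for illustration.

```python
# Illustrative edge pattern: pre-process raw sensor data locally, then ship a
# small summary to the core object store for global reporting and model updates.
import json
import statistics
import boto3

def summarize(readings: list, site: str) -> dict:
    return {
        "site": site,
        "count": len(readings),
        "mean": statistics.fmean(readings),
        "max": max(readings),
    }

readings = [21.4, 21.9, 35.2, 22.0, 21.7]            # raw data stays at the edge
summary = summarize(readings, site="plant-07")

core = boto3.client("s3")                             # core/cloud object store
core.put_object(
    Bucket="global-telemetry",                        # placeholder bucket
    Key="plant-07/2021-03-01/summary.json",
    Body=json.dumps(summary).encode("utf-8"),
)
```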

Gardner: Let’s look at some of the hurdles organizations have to overcome to make use of such a Data Fabric. What is it about the way that data and information exist today that makes it hard to get the most out of it? Why is it hard to put advanced data access and management in place quickly and easily?

Track the data journey

Smykay: It’s tough for most organizations because they can’t take the wings off the airplane while flying. We get that. You have to begin by creating some new standards within your organization, whether that’s standardizing on an API set for different datatypes, multiple datatypes, a single datatype.

Then you need to standardize the deployment mechanisms within your organization for that data. With the HPE Data Fabric, we give the ability to just say, “Hey, it doesn’t matter where you deploy. We just need some x86 servers and we can help you standardize either on one API or multiple APIs.”

We now support more than 10 APIs, as well as the many different data types that these organizations may have.


Typically, we see a lot of data silos still out there today with customers – and they’re getting worse. By worse, I mean they’re now all over the place between multiple cloud providers. I may use some of these cloud storage bucket systems from cloud vendor A, but I may use somebody else’s SQL databases from cloud vendor B, and those may end up having their own access methodologies and their own software development kits (SDKs).

Next you have to consider all the networking in the middle. And let’s not even bring up security and authorization to all of them. So we find that the silos still exist, but they’ve just gotten worse and they’ve just sprawled out more. I call it the silo sprawl.

Gardner: Wow. So, if we have that silo sprawl now, and that complexity is becoming a hurdle, the estimates are that we’re going to just keep getting more and more data from more and more devices. So, if you don’t get a handle on this now, you’re never going to be able to scale, right?

Smykay: Yes, absolutely. If you’re going to have diversity of your data, the right way to manage it is to make it use-case-driven. Don’t boil the ocean. That’s where we’ve seen all of our successes. Focus on a couple of different use cases to start, especially if you’re getting into newer predictive model management and using machine learning (ML) techniques.

But, you also have to look a little further out to say, “Okay, what’s next?” Right? “What’s coming?” When you go down that data engineering and data science journey, you must understand that, “Oh, I’m going to complete use case A, that’s going to lead to use case B, which means I’m going to have to go grab from other data sources to either enrich the model or create a whole other project or application for the business.”

You should create a data journey and understand where you’re going so you don’t just end up with silo sprawl.

Gardner: Another challenge for organizations is their legacy installations. When we talk about zettabytes of data coming, what is it about the legacy solutions — and even the cloud storage legacy — that organizations need to rethink to be able to scale?

Zettabytes of data coming

Smykay: It’s a very important point. Can we just have a moment of silence? Because now we’re talking about zettabytes of data. Okay, I’m in.

Some 20 years ago, we were talking about petabytes of data. We thought that was a lot of data, but if you look out to the future, some studies show connected Internet of Things (IoT) devices generating zettabytes of data.


If you don’t get a handle on where your data points are going to be generated, how they’re going to be stored, and how they’re going to be accessed now, this problem is just going to get worse and worse for organizations.

Look, Data Fabric is a great solution. We have it, and it can solve a ton of these problems. But as a consultant, if you don’t get ahead of these issues right now, you’re going to be under the umbrella of probably 20 different cloud solutions for the next 10 years. So, really, we need to look at the datatypes that you’re going to have to support, the access methodologies, and where those need to be located and supported for your organization.

Gardner: Chad, it wasn’t that long ago that we were talking about how to manage big data, and Hadoop was a big part of that. NoSQL and other open source databases in particular became popular. What is it about the legacy of the big data approach that also needs to be rethought?

Smykay: One common issue we often see is the tendency to go either/or. By that I mean saying, “Okay, we can do real-time analytics, but that’s a separate data deployment. Or we can do batch, rearview reporting analytics, and that’s a separate data deployment.” But one thing that our HPE Data Fabric has always been able to support is both — at the same time — and that’s still true.

So if you’re going down a big data or data lake journey — I think the term now is a data lakehouse, that’s a new one. For these, I basically need to be able to do my real-time analytics, as well as my traditional BI reporting or rearview-mirror reporting — and that’s what we’ve been doing for over 10 years. That either/or split is probably one of the biggest limitations we have seen.

But it’s a heavy lift to get that data from one location to another, just because of the metadata layer of Hadoop. And then you had dependencies with some of the NoSQL databases out there on Hadoop, which caused performance issues. You can only get so much performance out of those databases, which is why we have NoSQL databases out of the box in our Data Fabric — and we’ve never run into any of those issues.

Gardner: Of course, we can’t talk about end-to-end data without thinking about end-to-end security. So, how do we think about the HPE Data Fabric approach helping when it comes to security from the edge to the core?

Secure data from edge to core

Smykay: This is near-and-dear to my heart because everyone always talks about these great solutions out there to do edge computing. But I always ask, “Well, how do you secure it? How do you authorize it? How does my application authorization happen all the way back from the edge application to the data store in the core or in the cloud somewhere?”

That’s what I call auth sprawl, where those issues just add up. If we don’t have one way to secure and manage all of our different data types, then what happens is, “Okay, well, I have this object-based system out there, and it has its own authorization techniques.” It has its own authentication techniques. By the way, it has its own way of enforcing security in terms of who has access to what. And I haven’t even talked about monitoring yet, right? How do we monitor this solution?

So, now imagine doing that for each type of data that you have in your organization — whether it’s a SQL database, because an application requires it, or a file-based workload, or a block-based workload. You can see where this starts to steamroll and build up to be a huge problem within an organization, and we see that all the time.

And, by the way, when it comes to your application developers, that becomes the biggest annoyance for them. Why? Because when they want to go and create an application, they have to go and say, “Okay, wait. How do I access this data? Oh, it’s different. Okay. I’ll use a different key.” And then, “Oh, that’s a different authorization system. It’s a completely different way to authenticate with my app.”

I honestly think that’s why we’re seeing a ton of issues today in the security space. It’s why we’re seeing people get hacked. It happens all the way down to the application layer, as you often have this security sprawl that makes it very hard to manage all of these different systems.

Gardner: We’ve come up in this word sprawl several times now. We’re sprawling with this, we’re sprawling with that; there’s complexity and then there’s going to be even more scale demanded.

The bad news is there is quite a bit to consider when you want end-to-end data management that takes the edge into consideration and has all these other anti-sprawl requirements. The good news is a platform and standards approach with a Data Fabric forms the best, single way to satisfy these many requirements.

So let’s talk about the solutions. How does HPE Ezmeral generally — and the Ezmeral Data Fabric specifically — provide a common means to solve many of these thorny problems?

Smykay: We were just talking about security. We provide the same security domain across all deployments. That means having one web-based user interface (UI), or one REST API call, to manage all of those different datatypes.

We can be deployed across any x86 system. And having that multi-API access — we have more than 10 — allows for multi-data access. It includes everything from storing data into files, to storing data into event streams such as Kafka, to storing data in a NoSQL database as well. We’re soon going to be able to support blocks in our solution, too.
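
To make that multi-API idea concrete, here is a minimal, illustrative Python sketch of one process writing the same reading to the fabric as a file and as a stream event. The mount path, endpoint, and topic name are hypothetical, and the exact client libraries depend on your deployment; the point is simply that file and stream workloads can address one underlying store.

    # Illustrative only: paths, endpoints, and topic names are hypothetical.
    # Assumes the fabric is exposed as a POSIX/NFS mount and via a Kafka-compatible API.
    import json
    from kafka import KafkaProducer  # generic Kafka client; any Kafka-compatible API works

    FABRIC_MOUNT = "/mnt/datafabric"  # hypothetical mount point

    # 1. File access: append a reading through the ordinary filesystem API.
    with open(f"{FABRIC_MOUNT}/sensors/readings.csv", "a") as f:
        f.write("2021-03-01T12:00:00,exchanger-7,81.4\n")

    # 2. Stream access: publish the same reading to a Kafka-compatible stream.
    producer = KafkaProducer(
        bootstrap_servers=["fabric-gateway:9092"],  # hypothetical endpoint
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )
    producer.send("sensor-readings", {"ts": "2021-03-01T12:00:00",
                                      "device": "exchanger-7",
                                      "temp_c": 81.4})
    producer.flush()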

Gardner: It’s important for people to understand that HPE Ezmeral is a larger family and that the Data Fabric is a subset. But the whole seems to be greater than the sum of the parts. Why is that the case? How has what HPE is doing in architecting Ezmeral been a lot more than data management?

Smykay: Whenever you have this “whole is greater than the sum of the parts,” you start reducing so many things across the chain. When we talk about deploying a solution, that includes, “How do I manage it? How do I update it? How do I monitor it?” And then back to securing it.

Honestly, there is a great report from IDC that says it best. We show a 567-percent, five-year return on investment (ROI). That’s not from us, that’s IDC talking to our customers. I don’t know of a better business value from a solution than that. The report speaks for itself, but it comes down to these paper cuts of managing a solution. When you start to have multiple paper cuts, across multiple arms, it starts to add up in an organization.

Gardner: Chad, what is it about the HPE Ezmeral portfolio and the way the Data Fabric fits in that provides a catalyst to more improvement?

All data put to future use

Smykay: One, the HPE Data Fabric can be deployed anywhere. It can be deployed independently. We have hundreds and hundreds of customers. We have to continue supporting them on their journey of compute and storage, but today we are already shipping a solution where we can containerize the Data Fabric as a part of our HPE Ezmeral Container Platform and also provide persistent storage for your containers.

The HPE Ezmeral Container Platform comes with the Data Fabric as part of its persistent storage. That gives you full end-to-end management of the containers, not only the application APIs. That means the management and the data portability.

So, now imagine being able to ship the data by containers from your location, as it makes sense for your use case. That’s the powerful message. We have already been on the journey of separating out compute and storage; we’ve been down that road. That road is not going away, we have many customers for it, and it makes sense for many use cases. And we’re in general availability today. There are some other solutions out there that are still on a road map as far as we know, but at HPE we’re there today. Customers have this deployed. They’re going down their compute and storage separation journey with us.

Gardner: One of the things that gets me excited about the potential for Ezmeral is when you do this right, it puts you in a position to be able to do advanced analytics in ways that hadn’t been done before. Where do you see the HPE Ezmeral Data Fabric helping when it comes to broader use of analytics across global operations?

Smykay: One of our CMOs used to say it best, and Jack Morris has said it, too: “If it’s going to be about the data, it better be all about the data.”

When you automate data management across multiple deployments — managing it, monitoring it, keeping it secure — you can then focus on the actual use cases. You can focus on the data itself, right? That’s living in the HPE Data Fabric. That is the higher-level takeaway. Our users are not spending all their time and money worrying about the data lifecycle. Instead, they can now go use that data for their organizations and for future use cases.

HPE Ezmeral sets your organization up to use your data instead of worrying about your data. We are set up to support newer use cases with the Data Fabric, separating out compute and storage and running it in containers. We’ve been doing that for years. The high-level takeaway is that you can focus on using your data, not worrying about it.

Gardner: How about some of the technical ways that you’re doing this? Things like global namespaces, analytics-ready fabrics, and native multi-temperature management. Why are they important specifically for getting to where we can capitalize on those new use cases?

Smykay: The global namespace is probably the top feature we hear back about from our customers. It allows them to gain one view of the data with the same common security model. Imagine you’re a lawyer sitting at your computer: you double-click on a Data Fabric drive, and you can literally see all of your deployments globally. That helps with discovery. That helps with onboarding your data engineers and data scientists. Over the years, one of the biggest challenges has been that they spend a lot of time building up their data science and data engineering groups and on just discovering the data.

Global namespace means I’m reducing my discovery time to figure out where the data is. A lot of this analytics-ready value we’ve been supporting in the open source community for more than 10 years. There are a ton of Apache open source projects out there, like Presto, Hive, and Drill. Of course there’s also Spark, and we have been supporting Spark for many years. That’s pretty much the de facto standard we’re seeing when it comes to doing any kind of real-time processing or analytics on data.

As for multi-temperature, that feature allows you to decrease the cost of your deployment while still managing all your data in one location. There are a lot of different ways we do that. We use erasure coding. We can tier off to Amazon S3-compliant devices to reduce the overall cost of deployment.

These features contribute to making it still easier. You gain a common Data Fabric, common security layer, and common API layer.
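
To put a rough number on why erasure coding and tiering cut cost, here is a generic back-of-the-envelope comparison in Python. The 4+2 scheme and the 100 TB figure are illustrative assumptions, not HPE specifications: three-way replication stores every logical byte three times, while a 4+2 erasure-coded layout stores it 1.5 times.

    # Generic storage-overhead arithmetic; the scheme and dataset size are assumptions.
    def replication_overhead(copies: int) -> float:
        return float(copies)  # raw bytes stored per logical byte

    def erasure_coding_overhead(data_blocks: int, parity_blocks: int) -> float:
        return (data_blocks + parity_blocks) / data_blocks

    logical_tb = 100  # hypothetical warm-tier dataset
    print(logical_tb * replication_overhead(3))        # 300 TB raw with 3x replication
    print(logical_tb * erasure_coding_overhead(4, 2))  # 150 TB raw with 4+2 erasure coding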

Gardner: Chad, we talked about much more data at the edge, how that’s created a number of requirements, and the benefits of a comprehensive approach to data management. We talked about the HPE Data Fabric solution, what it brings, and how it works. But we’ve been talking in the abstract.

What about on the ground? Do you have any examples of organizations that have bitten off and made Data Fabric core for them? As an adopter, what do they get? What are the business outcomes?

Central view benefits businesses 

Smykay: We’ve been talking a lot about edge-to-core-to-cloud, and the one example that’s just top-of-mind is a big, tier-1 telecoms provider. This provider makes the equipment for your AT&Ts and your Vodafones. That equipment sits out on the cell towers. And they have many Data Fabric use cases, more than 30 with us. 

But the one I love most is real-time antenna tuning. They’re able to improve customer satisfaction in real time and reduce the need to physically return to hotspots on an antenna. They do it via real-time data collection on the antennas and then aggregating that across all of the different layers that they have in their deployments.

They gain a central view of all of the data using a modern API for the DevOps needs. They still centrally process data, but they also process it at the edge today. We replicate all of that data for them. We manage that for them and take a lot of the traditional data management tasks off the table for them, so they can focus on the use case of the best way to tune antennas.

Gardner: They have the local benefit of tuning the antenna. But what’s the global payback? Are there quantitative or qualitative business returns for them in doing that?

Smykay: Yes, but they’re pretty secretive. We’ve heard that they’ve gotten a payback in the millions of dollars, but an immediate, direct payback for them is in reducing the application development spend everywhere across the layer. That reduction is because they can use the same type of API to publish that data as a stream, and then use the same API semantics to secure and manage it all. They can then take that same application, which is deployed in a container today, and easily deploy it to any remote location around the world.

Gardner: There’s that key aspect of the application portability that we’ve danced around a bit. Any other examples that demonstrate the adoption of the HPE Data Fabric and the business pay-offs?

Smykay: Another one off the top of my head is a midstream oil and gas customer in the Houston area. This one’s not so much about edge-to-core-to-cloud. This is more about consolidation of use cases.

We discussed earlier that we can support both rearview reporting analytics as well as real-time reporting use cases. And in this case, they actually have multiple use cases, up to about five or six right now. Among them, they are able to do predictive failure reports for heat exchangers. These heat exchangers are deployed regionally and they are really temperamental. You have to monitor them all the time.

But now they have a proactive model where they can do a predictive failure monitor on those heat exchangers just by checking the temperatures on the floor cameras. They bring in all real-time camera data and they can predict, “Oh, we think we’re having an issue with this heat exchanger on this time and this day.” So that decreases management cost for them.
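
The discussion doesn’t describe their actual model, but a minimal sketch of the general idea (flag an exchanger when its temperature, or its rolling temperature trend, drifts past a limit) could look like the following; the readings, column names, and thresholds are all made up.

    # Hypothetical trend-based failure flagging on thermal-camera readings.
    import pandas as pd

    readings = pd.DataFrame({
        "ts": pd.date_range("2021-03-01", periods=8, freq="H"),
        "exchanger_id": "HX-12",
        "temp_c": [78, 79, 80, 83, 87, 92, 96, 101],  # made-up readings
    })

    readings["rolling_mean"] = readings["temp_c"].rolling(window=3).mean()
    readings["rolling_slope"] = readings["rolling_mean"].diff()

    ALERT_TEMP_C = 95   # hypothetical design limit
    ALERT_SLOPE = 3.0   # hypothetical degrees-per-hour drift threshold

    alerts = readings[(readings["temp_c"] > ALERT_TEMP_C) |
                      (readings["rolling_slope"] > ALERT_SLOPE)]
    print(alerts[["ts", "exchanger_id", "temp_c"]])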

They also gain a dynamic parts management capability for all of the inventory in their warehouses. They can deliver faster, not only on parts, but reduce their capital expenditure (CapEx) costs, too. They have also gained material measurement balances. When you push oil across a pipeline, they can detect where that balance is off and where they’re losing money, because if they are not pushing oil across the pipe at x amount of psi, they’re losing money.

So they’re able to dynamically detect that and fix it along the pipe. They also have a pipeline leak detection that they have been working on, which is modeled to detect corrosion and decay.

The point is there are multiple use cases. But because they’re able to start putting those data types together and continue to build off of it, every use case gets stronger and stronger.

Gardner: It becomes a virtuous adoption cycle; the more you can use the data generally, then the more value, then the more you invest in getting a standard fabric approach, and then the more use cases pop up. It can become very powerful.

This last example also shows the intersection of operational technology (OT) and IT. Together they can start to discover high-level, end-to-end business operational efficiencies. Is that what you’re seeing?

Data science teams work together

Smykay: Yes, absolutely. A Data Fabric is kind of the Kumbaya set among these different groups. If they’re able to standardize on the IT and developer side, it makes it easier for them to talk the same language. I’ve seen this with the oil and gas customer. Now those data science and data engineering teams work hand in hand, which is where you want to get in your organization. You want those IT teams working with the teams managing your solutions today. That’s what I’m seeing. As you get a better, more common data model or fabric, you get faster and you get better management savings by having your people working better together.

Gardner: And, of course, when you’re able to do data-driven operations, procurement, logistics, and transportation, you get to what we generally refer to as digital business transformation.

Chad, how does a Data Fabric approach then contribute to the larger goal of business transformation?

Smykay: Depending on the size of the organization, you’re talking to three to five different groups, and sometimes 10 different people, just to put a use case together. But as you create a common data access method, you see an organization where it’s easier and easier not only for your use cases, but for your businesses to work together on the goal of whatever you’re trying to do and use your data for.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

How COVID-19 teaches higher education institutes to embrace the latest IT to advance remote learning

Like many businesses, innovators in higher education have been transforming themselves for the digital age for years, but the COVID-19 pandemic nearly overnight accelerated the need for flexible new learning models.

As a result, colleges and universities must rapidly redefine and implement a new and dynamic balance between in-person and remote interactions. This new normal amounts to more than a repaving of centuries-old, in-class traditions of higher education with a digital wrapper. It requires re-invention — and perhaps a redefining — of the very act of learning itself.

The next BriefingsDirect panel discussion explores how such innovation today in remote learning may also hold lessons for how businesses and governments interact with and enlighten their workers, customers, and ultimately citizens.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to share recent experiences in finding new ways to learn and work during a global pandemic are Chris Foward, Head of Services for IT Services at The University of Northampton in the UK; Tim Minahan, Executive Vice President of Business Strategy and Chief Marketing Officer at Citrix; and Dr. Scott Ralls, President of Wake Tech Community College in Raleigh, North Carolina. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Scott, tell us about Wake Tech Community College and why you’ve been able to accelerate your path to broader remote learning?

Ralls: Wake Tech is the largest college in North Carolina, one of the largest community colleges in the United States. We have 75,000 total students across all of our different program areas spread over six different campuses.

In mid-March, we took an early step in moving completely online because of the COVID-19 pandemic. But if we had just started our planning at that point, I think we would have been in trouble; it would have been a big challenge for us, as it has been for much of higher education.

The journey really began six years earlier with a plan to move to a more online-supported, virtual-blended world. For us, the last six months have been about sprinting. We are on a journey that hasn’t been so much about changing our direction or changing our efficacy, but really sprinting the last one-fourth of the race. And that’s been difficult and challenging.

But it’s not been as challenging as if you were trying to figure out the directions from the very beginning. I’ve been very proud of our team, and I think things are going remarkably well here despite a very challenging situation.

Education sprints online

Gardner: Chris, please tell us about The University of Northampton and how the pandemic has accelerated change for you.

Foward: The University of Northampton has invested very heavily in its campus. A number of years ago, we built a new one called the Waterside campus. The Waterside campus was designed to work with active blended learning (ABL) as an approach to delivering all course work, and — similar to Wake Tech — we’ve faced challenges around how we deliver online teaching.

We were in a fortunate position because during the building of our new campus we implemented all-new technology from the ground up — from our plant-based systems right through to our backend infrastructure. We aimed at taking on new technologies that were either cloud-based or that allowed us to deliver teaching in a remote manner. That was done predominantly to support our ABL approach to delivery of education. But certainly the COVID-19 pandemic has sped up the uptake of those services.

Gardner: Chris, what was the impetus to the pre-pandemic blended learning? Why were you doing it? How did technology help support it?

Foward: The University of Northampton since 2014 has been moving toward its current institutional approach to learning and teaching. We never perceived this as a large-scale online learning or distance learning solution. But ABL does rely on fluent and thoughtful use of technologies for learning.

And this has stood the university in good stead in terms of how we actually deliver to our students. What our lecturers and teachers found is that the work they’ve done since 2014 really did stand us in good stead, as we were able to very quickly change from an on-campus-taught environment to a digital experience for our students.

Gardner: Scott, has technology enabled you to seek remote learning, or was remote learning the goal and then you had to go find the technology? What’s the relationship between remote learning and technology?

Ralls: For us, particularly in community colleges, it was more the latter, in that remote learning is an important priority for us because a majority of our students work. The convenience of remote learning started community colleges in the United States down that path much more quickly than other forms of higher education. And that helped us years ago to start thinking about what technologies are required.

Our college has been very thoughtful about the equity issues in remote learning. Some students succeed on remote learning platforms, while others struggle with them. It was much more about the need for remote learning to give working students the capacity and convenience they need, and then looking at the technologies and best practices to achieve those goals.

Businesses learn from schools’ success

Gardner: Tim, when you hear Chris and Scott describing what they are doing in higher education, does it strike you that they are leaders and innovators compared generally to businesses? Should businesses pay attention to what’s going on in higher education these days, particularly around remote, balanced, and blended interactions?

Minahan: Yes, I certainly think they are leading, Dana. That leadership comes from having been prepared for this in advance. If there’s any silver lining to this global crisis we are all living through, it’s that it’s caused organizations and participants in all industries to rethink how they work, school, and live.

Employers, having seen that work can now actually happen outside of an office, are catching up similarly. They’re rethinking their long-term workforce strategies and work models. They’re embracing more flexible and hybrid work approaches for the long-term.

And beyond lower costs and improved productivity and engagement, it is giving them access to new pools of talent that were previously inaccessible to them in the traditional work-hub model, where you build a big office or call center and then hire folks to fill them. Now, they can remotely reach talent in any location, including retirees, stay-at-home parents, and caretakers, who can be reactivated into the workforce.

Similar to the diversity of the student body you’re seeing at Wake Tech, to do this they need a foundation, a digital workspace platform, that allows them to deliver consistent and secure access to the resources that employees or staff — or in this case, students — need to do their very best work across any channel or location. That can be in the classroom, on the road, or as we’ve seen recently in the home.

I think going forward, you’re going to see not just higher education, which we are hearing about here, but all industries begin to embrace this blended model for some very real benefits, both to their employees and their constituents, but to their own organizations as well.

Gardner: Chris, because Northampton put an emphasis on technology to accomplish blended learning, was the typical technology of a few years back — traditional, stack-based enterprise IT — a hindrance? Did you need to rethink technology as you were trying to accomplish your education goals?

Tech learning advances agility

Foward: Yes, we did. When we built our new campus, we looked at what new technologies were coming onto the market. We then moved toward a couple of key suppliers to ensure that we received best-in-class services as well as easy-to-use products. We chose partners like Microsoft for our software programs, like Office, and those sorts of productivity products.

We chose Cisco for networking and servers, and we also pulled in Citrix for delivery of our virtual applications and desktops from any location, anywhere, anytime. It allows flexibility for our students to access the systems from a smartphone and see specific CAD-type models through the solutions we have. It allows our faculty of business and law to present some of the bespoke software that they use. We can tailor the solutions that they see within these environments to meet the educational needs and courses that they are attending.

Gardner: Scott, at Wake Tech, as president of the university, you’re probably not necessarily a technologist. But how do you not be a technologist nowadays when you’re delivering everything as remote learning? How has your relationship with technology evolved? Have you had to learn a lot more tech?

Ralls: Oh, absolutely, yes. And even my own use of technology has evolved quite a bit. I was always aware and had broad goals. But, as I mentioned, we started sprinting very quickly, and when you are sprinting you want to know what’s happening.

We are very fortunate to have a great IT team that is both thoughtful in its direction and very urgent in their movement. So those two things gave me a lot of confidence. It’s also allowed us to sprint to places that we wouldn’t have been able to had these circumstances not come along.

I will use an example. We have six campuses. I would do face-to-face forums with faculty, staff, and students, so three meetings on every campus, but only once a semester. Now, I do those kinds of forums most days with students, faculty, or staff using the technology. Many of us have found that, with the direction we were going, there are greater efficiencies to be achieved in many ways that we would not have tried had it not been for the [pandemic] circumstances.

And I think after we get past the issues we are facing with the pandemic, our world will be completely changed, because this has accelerated our movement in this direction and accelerated our use of these tools as well.

Gardner: Tim, we have seen over the years that the intersection between business and technology is not always the easiest relationship. Is what we’re seeing now as a result of the pandemic helping organizations attain the agility that they perhaps struggled to find before?

Minahan: Yes, indeed, Dana. As you just heard, another thing the pandemic has taught us is that agility is key. Fixed infrastructure — whether it’s real estate, work-hub-centric models, data centers with loads of servers, or on-premises applications — has proven to be an anchor during the pandemic. Organizations that rely heavily on such fixed infrastructure have had a much more difficult time shifting to a remote work or remote learning model to keep their employees and students safe and productive.

In fact, as an anecdote, we had one financial services customer, a CIO, recently say, “Hey, we can’t add servers and capacity fast enough.” And so, similar to Scott and Chris, we’re seeing an increasing number of our customers moving to adopt more variable operating models in everything they do. They are rethinking their real estate, staffing, and IT infrastructure. As a result, we’re seeing customers take their measured plans for a one- to three-year transition to the cloud and accelerate that to three months, or even a few weeks.

They’re also increasing adoption of digital workspaces so that they can provide a consistent and secure work or learning experience for employees or students across any channel or location. It really boils down to organizations building agility into their operations so they can scale up quickly in the face of the next inevitable, unplanned crisis — or opportunity.

Gardner: We’ve been talking about this through the lens of the higher education institute and the technology provider. But what’s been the experience over the past several months for the user? How are your students at Northampton adjusting to this, Chris? Is this rapid shift a burden or is there a silver lining to more blended and remote learning?

Easy-to-use options for student adoption

Foward: I’ll be honest, I think our students have yet to adopt it fully.

There are always challenges with new technology when it comes in. The uptake will be mainly driven in October when we see our mainstream student cohorts come onboard. I do think the types of technologies we have chosen are key, because making technology simple to use and easy to access will drive further adoption of those products.

What we have seen is that our staff’s uptake on our Citrix environment was phenomenal. And if there’s one positive to take from the COVID-19 situation it is the adoption of technology. Our staff has taken to it like ducks to water. Our IT team has delivered something exceptional, and I think our students will also see a massive benefit from these products, and especially the ease of use of these products.

So, yes, the key thing is making the products easily accessible and easy to use. If we overcomplicate it, you won’t get adoption and you won’t get an experience that customers need when they come to our education institutions.

Gardner: Dr. Ralls, have the students adjusted to these changes in a way that gives them agility as they absorb education?

Ralls: They have. All of us — whether we work, teach, or are students at Wake Tech — have gained more confidence in these environments than we had before. I have regular conversations with these students. There was a lot of uncertainty, just like for many of us working remotely. How would that all work?

And we’ve now seen that we can do it. Things will still change around the adjustments we need to make. And for many of our students, it isn’t just how things will change in the class, but in all of the things that they need around that class. For example, we have tutoring centers in our libraries. How do we make those work remotely and by appointment? We all wondered how that would work. And now we’ve seen that it can work, and it does work; and there’s an ease of doing that.

Because we are a community college, we’re an open-admissions college. Many of our students haven’t had the level of academic preparation or opportunity that others have had. And so for some of our students who have a sense of uncertainty or anxiety, we have found that there is a challenge for them to move to remote learning and to have confidence initially.

Sometimes we can see that in withdrawals, but we’ve also found that we can rally around our students using different tools. We have found the value of different types of remote learning that are effective. For example, we’re doing a lot of the HyFlex model now, which is a combination of hybrid and remote, online-based education.

Over time we have seen in many of our classes that where classes started as hybrid, students then shifted to more fully remote and online. So you see the confidence grow over time.

Gardner: Scott, another benefit of doing more online is that you gain a data trail. When it comes to retention, and seeing how your programs are working, you have a better sense of participation — and many other metrics. Does the data that comes along with remote learning help you identify students at risk, and are there other benefits?

Remote learning delivers data

Ralls: We’re a very data-focused college. For instance, even before we moved to more remote learning, every one of our courses had an online shell. We had already moved to where every course was available online. So we knew when our students were interacting.

One of the shifts we’ve seen at Wake Tech with more remote services is the expansion of those hours, as well as the ability to access counseling — and all of our services — remotely, through answer centers and other things.

But that means we had to change our way of thinking. Before, we knew when students took our courses, because they took them when you scheduled the courses. Now, as they are working remotely, we can also tell when they are working. And we know from many of our students that they are more likely to be online and engaged in our coursework between the hours of 5 pm and 10 pm, as opposed to 8 am and noon. Most of our operations, when we just had physical sites, ran from 8 am to 5 pm. Consequently, we have had to move the hours. I think that’s something that will always be different about us, and so that does give us that indication.
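
As a generic illustration of how that kind of signal falls out of centralized activity data (nothing here reflects Wake Tech’s actual systems, and the file and column names are hypothetical), counting learning-platform events by hour of day takes only a few lines of Python.

    # Hypothetical: count learning-platform activity events by hour of day.
    import pandas as pd

    events = pd.read_csv("lms_activity.csv", parse_dates=["event_time"])  # hypothetical export
    events["hour"] = events["event_time"].dt.hour

    by_hour = events.groupby("hour").size()
    evening = by_hour.loc[17:22].sum()  # roughly the 5 pm - 10 pm window
    morning = by_hour.loc[8:12].sum()   # roughly the 8 am - noon window
    print(f"evening events: {evening}, morning events: {morning}")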

One other thing about us that has been unique is that, because of who we are, we do so much technical education — that’s why we are called Wake Tech — and much of that is hands-on. You can’t do it fully remotely. But every one of our programs has found the value of remote-based access and support.

For example, we have a remarkable baking and pastry program. They have figured out how to help the students get all of their hands-on resources at home in their own kitchens. They no longer have to come into the labs for what they do. Every program has found that value, the best aspects of their program being remote, even if the full program cannot be remote because of its hands-on nature.

Gardner: Chris, is the capability to use the data that you get along the way at Northampton a benefit to you, and how?

Foward: Data is key for us in IT Services. We like to try and understand how people are using our systems and which applications they are using. It allows us to then fix the delivery of our applications more effectively. Our courses are also very data-driven. In our games art courses, for example, data allows us to design the materials more effectively for our students.

Gardner: Tim, when you are providing more value back through your technology, the data seems to be key as well. It’s about optimization and even reducing costs with better business and education outcomes. How does the data equation benefit Citrix’s customers, and how do you expect to improve on that?

Data enhances experiences

Minahan: Dana, data plays a major role in every aspect of what we do. When you think about the need to deliver digital workspaces by providing consistent and secure access to the resources — whether it’s employees or students — they need to be able to perform at their best wherever that work needs to get done. The data that we are gathering is applied in a number of different ways.

Number one is around the security model. I use the analogy of not just having security access in — the bouncer at the front door to make sure you have authenticated and are on the list to access the resources you need — but also having the bodyguard that follows you around the club, if you will, to constantly monitor your behavior and apply additional security policies.

The data is valuable for that because we understand the behavior of the individual user, whether they are typically accessing from a particular device or location or via the types of information or applications they access.
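
A toy sketch of that “bodyguard” idea, not Citrix’s actual analytics but an illustration with made-up profiles and thresholds, is to score each access request against what a user normally does and step up authentication when the score crosses a threshold.

    # Hypothetical behavioral scoring: flag accesses that deviate from a user's usual
    # device, country, and application set. Profiles and thresholds are illustrative.
    USER_PROFILE = {
        "usual_devices": {"laptop-4411"},
        "usual_countries": {"GB"},
        "usual_apps": {"email", "timesheets", "crm"},
    }

    def risk_score(access: dict, profile: dict) -> int:
        score = 0
        if access["device"] not in profile["usual_devices"]:
            score += 2
        if access["country"] not in profile["usual_countries"]:
            score += 3
        if access["app"] not in profile["usual_apps"]:
            score += 1
        return score

    request = {"device": "unknown-phone", "country": "US", "app": "crm"}
    if risk_score(request, USER_PROFILE) >= 3:
        print("step-up authentication required")  # e.g., prompt for a second factor
    else:
        print("allow")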

The second area is around performance. If we move to a much more distributed model, or a flexible or a blended model, vital to that is ensuring that those employees or students have reliable access to the applications and information they need to perform at their best. Being able to constantly monitor that environment allows for increasing bandwidth, or moving to a different channel as needed, so they get the best experience.

And then the last one gets very exciting. It is literally about productivity. Being able to push the right information or the right tasks, or even automate a particular task or remove it from their work stream in real time is vital to ensuring that we are not drowning in this cacophony of different apps and alerts — and all the noise that gets in the way of us actually doing our best work or learning. And so data is actually vital to our overall digital workspace strategy at Citrix.

Gardner: Chris, to attain an improved posture around ABL, that can mean helping students pick up wherever they left off — whether in a classroom, their workplace, at a bakery or in a kitchen at home. It requires a seamless transition regardless of their network and end device. How important is it to allow students to not have to start from scratch or find themselves lost in this collaboration environment? How is Citrix an important part of that?

Foward: With our ABL approach, we have small collaborative groups that work together to deliver or gain their learning.

We also ensure that the students have face-to-face contact with tutors, whether through distance learning or while on campus. And with the technology, we store all of the academic materials in one location, called our main site, which allows students to access them and learn as and when they need to.

Citrix plays a key part in that because we can deliver applications into that state quickly and seamlessly. It allows students to always be able to understand and see the applications they need for their specific courses. It allows them to experiment, discuss ideas, and get more feedback from our lecturers because they understand what materials are being stored and how to access them.

Gardner: Dr. Ralls, how do you at Wake Tech prevent learning gaps from occurring? How does the technology help students move seamlessly throughout their education process, regardless of the location or device?

Seamless tracking lets students thrive

Ralls: There are different types of gaps. In terms of courses, one of the things we found recently is that our students are looking for different types of access — perhaps replicating our seated courses to gain the value of synchronous experiences. We have had to make sure that all of our courses have that capacity, and that it works well.

Then, because many of our students are also in a work environment, they want an asynchronous capability. And so we are now working on making sure students know the difference and how to match those expectations.

Also, because we are an open access college — and as I like to say, we take the top 100 percent of our applicant students — for many of our students, gaps come not just within a course, but between courses or toward their goals. For many of our students who are first-generation students, higher education is new. They may have also been away from education for a period of time.

So we have to be much more intrusive, to help students and to monitor them, to make sure our students are making it from one place to the next. We need to make sure that learning makes sense to them and that they are making it to whatever their ultimate goals are.

We use technology to track that and to know when our students are getting close to leaving. We call that being like rumble strips on the side of the road. There are gaps that we are looking at, not just within courses, but between courses, on the way to our students’ academic goals.

Gardner: Tim, when I hear Chris and Scott describe these challenges in education, I think how impactful this can be for other businesses in general as they increasingly have blended workforces. They are going to face similar gaps too. What, from Citrix’s point of view, should businesses be learning from the experiences at University of Northampton and Wake Tech?

Minahan: I think Winston Churchill summed it up best: “Never let a good crisis go to waste.” Smart organizations are using the current crisis — not just to survive, but to thrive. They are using the opportunity to accelerate their digital transformation and rethink long-held work and operating models in ways they probably hadn’t before.

So as demonstrated both at Wake Tech and Northampton, and as Scott and Chris both said, for both school and work the future is definitely going to be blended.

We have, for example, another higher education customer, the University of Sydney, that was able to transition 20,000 students and faculty to an online learning environment last March, literally within a week. But that’s not the real story; it’s where they are going next with this.

As they entered the new school year in Sydney, they now have 100 core and software as a service (SaaS) applications that students can access through the digital workspace regardless of the type of device or their location. And they can ensure they have that consistent and secure and reliable experience with those apps. They say the student experience is as good, and sometimes even better, than what a student would have when using a locally installed app on a physical computer.

And now the university, most importantly, has used this remote learning model as an opportunity to reach new students — and even new faculty — in locations that they couldn’t have supported before due to geographic limitations of largely classroom-based models.

These are the types of things that businesses also have to think through. And as we hear from Wake Tech and Northampton, businesses can take a page from the courseware of many forward-thinking higher education organizations that are already in a blended learning model and see how that applies to their own business.

Gardner: Dr. Ralls, when you look to the future, what comes next? What would you like to see happen around remote learning, and what can the technologists like Citrix do to help?

Blended learning without walls

Ralls: Right now, there is so much greater efficiency than we had before. I think there is a way to bring that greater efficiency even more into our classrooms. For years we have talked about a flipped classroom, which really means that those things that are better accomplished outside, in a lab or in a shop, get done outside of the classroom.

We have to all get to a place where the learning process just doesn’t happen within the walls of the classrooms. So the ability for students to go back and review work, to pick up on work, to use multiple different tools to add and supplement what they are getting through a classroom-based experience, a shop-based experience — I think that’s what we are moving to.

For Wake Tech, this really hit us about March 15, 2020 when we went fully remote. We don’t want to go back to the way we were in April. We don’t want to be a fully remote, online college. But we also don’t want to be where we were in February.

This pandemic crisis has presented us with a greater acceleration of where we want to be, of where we can be. It’s what we aspire to be in terms of better education — not just more convenient access to education — but better educational opportunities through the many different options that technology brings to supplement the core work that we have always done through our seat-based environment.

Gardner: Chris, at Northampton, what’s the next step for the technology enabling these higher goals that Dr. Ralls just described? Where would you like to see the technology take Northampton students next?

Foward: The technology is definitely key to what we are trying to do as education providers: to provide the right skill sets as students move from higher education into business. Certainly, with the likes of Citrix, what was originally a commercially focused application, brought into our institution, we have allowed our students to gain access, understand how the system works, and understand how to use it.

And that’s similar with most of our technologies that we have brought in. It gives students more of a commercial feel for how operations should be running, how systems should be accessed, and the ways to use those systems.

Gardner: Tim, graduates from Wake Tech and from University of Northampton a year or two from now, they are going to be well-versed in these technologies, and this level of collaboration and seamless transitions between blended approaches. How are the companies they go to going to anticipate these new mindsets? What should businesses be doing to take full advantage of what these students have already been doing in these universities?

Students become empowered employees

Minahan: That’s a great point, and it is certainly something that business is grappling with now as we move beyond hiring Millennials to the next generation of highly educated, grown-up-on-the-Internet students with high expectations who are coming out of universities today.

For the next few years, it all boils down to the need to deliver a superior employee experience, to empower employees to perform at their best, and to do the jobs they were hired to do. We should not burden them, as we have in a lot of corporate America, with a host of different distractions, apps, and rules and regulations that keep them away from doing their core jobs.

And key to that, not surprisingly, is going to require a digital workspace environment that empowers and provides unified access to all of the resources and information that the employee needs to perform at their best across any work channel or location. They need a behind-the-scenes security model that ensures the security of the corporate assets, applications, and information — as well as the privacy of individuals — without getting in the way of work.

And then, at a higher level, as we talked about earlier, we need an intelligence model with more analytics built into that environment. It will then not just offer up a launch pad to access the resources you need, but will actually guide you through your day, presenting the right tasks and insights as you need them, and allowing you to get the noise out of your day so you can really create, innovate, and do your best work. And that will be whether work is in an office, on the road, or work as we have seen recently, in the home.

Gardner: I wouldn’t be surprised if the students coming out of these innovative institutes of higher learning are going to be the instigators of change and innovation in their employment environments. So a point on the arrow from education into the business realm.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Citrix.

The path to a digital-first enterprise is paved with an Emergence Model and Digital Transformation Playbook

The next BriefingsDirect digital business optimization discussion explores how open standards help support a playbook approach for organizations to improve and accelerate their digital transformation.

As companies chart a critical journey to become digital-first enterprises, they need new forms of structure to make rapid adaptation a regular recurring core competency. Stay with us as we explore how standards, resources, and playbooks around digital best practices can guide organizations through unprecedented challenges — and allow them to emerge even stronger as a result. 

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.  

Here to explain how to architect for ongoing disruptive innovation is our panel, Jim Doss, Managing Director at IT Management and Governance, LLC, and Vice Chair of the Digital Practitioner Work Group (DPWG) at The Open Group; Mike Fulton, Associate Vice President of Technology Strategy and Innovation at Nationwide and Academic Director of Digital Education at The Ohio State University; and Dave Lounsbury, Chief Technical Officer at The Open Group. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts: 

Gardner: Dave, the pressure from the COVID-19 pandemic response has focused a large, 40-year gathering of knowledge into a new digitization need. What is that new digitization need, and why are standards a crucial part of it?

Lounsbury: It’s not just digitization, but also the need to move to digital. That’s what we’re seeing here. The sad fact of this terrible pandemic is that it has forced us all to live a more no-contact, touch-free, and virtual life.

We’ve all experienced having to be on Zoom, or not going into work, or even when you’re out doing take-out at a restaurant. You don’t sign a piece of paper anymore; you scan something on your phone, and all of that is based on having the skills and the business processes to actually deliver some part of your business’s value digitally. 

This was always an evolution, and we’ve been working on it for years. But now, this pandemic has forced us to face the reality that you have to adopt digital in order to survive. And there’s a lot of evidence for that. I can cite McKinsey studies where the companies that realized this early and pivoted to digital delivery are reaping the business benefits. And, of course, that means you have to have both the technology, the digitization part, but also embrace the idea that you have to conduct some part of your business, or deliver your value, digitally. This has now become crystal clear in the focus of everyone’s mind.

Gardner: And what is the value in adopting standards? How do they help organizations from going off the rails or succumbing to complexity and chaos?

Lounsbury: There’s classically been a split between information technology (IT) in an organization and the people who are in the business. And something I picked up at one of the Center for Information Research meetings was that the minute an IT person talks about “the business,” you’ve gone off the rails.

If you’re going to deliver your business value digitally — even if it’s something simple like contactless payments or an integrated take-out order system — that knowledge might have been previously in an IT shop or something that you outsourced. Now it has to be in the line of business. 

Pandemic survival is digital

There has to be some awareness of these digital fundamentals at almost all levels of the business. And, of course, to do that quickly, people need a structure and a guidebook for what digital skills they need at different points of their organizational evolution. And that is where standards, complemented by education and training, play a big role.

Fulton: I want to hit on this idea of digitization versus digital. Dave made that point and I think it’s a good one. But in the context of the pandemic, it’s incredibly critical that we understand the value that digitization brings — as well as the value that digital brings.

When we talk about digitization, typically what we’re talking about is the application of technology inside of a company to drive productivity and improve the operating model of the company. In the context of the pandemic, that value becomes much more important. Driving internal productivity is absolutely critical.

We’re seeing that here at Nationwide. We are taking steps to apply digitization internally to increase the productivity of our organization and help us drive the cost down of the insurance that we provide to our customers very specifically. This is in response to the adjustment in the value equation in the context of the pandemic.

But then, the digital context is more about looking externally. Digital in this context is applying those technologies to the customer experience and to the business model. And that’s where contactless, as Dave was talking about, is so critically important.

There are so many ways now to interact with our customers, and in ways that don’t involve human beings. How to get things done in this pandemic, or to involve human beings in a different way — in a digital fashion — that’s where both digitization and digital are so critically important in this current context.

Gardner: Jim Doss, as organizations face a survival-of-the-fittest environment, how do we keep this a business transformation with technology pieces — and not the other way around?

Project management to product journey

Doss: The days of architecting IT and the business separately, or as a pure cascade or top-down exercise, are going away. Instead of those “inside-out” approaches, “outside-in” architectural thinking now keenly focuses on customer experiences and the value streams aligned to those experiences. Agile Architecture promotes enterprise segmentation to facilitate concurrent development and architecture refactoring, guided by architectural guardrails, a kind of lightweight governance structure that facilitates interoperability and keeps people from straying into dangerous territory.

If you read books like Team Topologies, The Open Group Digital Practitioner Body of Knowledge™️ (DPBoK), and Open Agile Architecture™️ Standards, they are designed for team cognitive load, whether they are IT teams or business teams. And doing things like the Inverse Conway Maneuver segments the organization into teams that deliver a product, a product feature, a journey, or a sub-journey.

Those are some really huge trends and the project-to-product shift is going on in business and IT. These trends have been going on for a few years. But when it comes to undoing 30 or 40 years of project management mentality in IT — we’re still at the beginning of the project-to-product shift. It’s massive. 

To summarize what David was saying, the business can no longer outsource digital transformation. As a matter of fact, by definition, you can't outsource digital transformation to IT anymore. This is a joint effort going forward.

Gardner: Dave, as we’re further defining digital transformation, this goes beyond just improving IT operations and systems optimization. Isn’t digital transformation also about redefining their total value proposition?

Lounsbury: Yes, that’s a very good point. We may have brushed over this point, but when we say and use the word digital, at The Open Group we really mean a change in the mindset of how you deliver your business.

This is not something that the technology team does. It’s a reorientation of your business focus and how you think about your interactions with the customer, as well as how you deliver value to the customer. How do you give them more ways of interacting with you? How do you give them more ways of personalizing their experience and doing what they want to do?

This goes very deep into the organization, to how you think about your value chains, your business model leverage, and things like that.

One of the things we see a lot of is people trying to do old processes faster. We have been doing that incremental improvement and efficiency work forever, applying machines to do part of the value-delivery job. But the essential decision now is thinking about the customer's view as being primarily a digital interaction, and giving them customization and web access, and letting them do the whole value chain digitally. That goes right to the top of the company and to how you structure your business model and value delivery.

Balanced structure for flexibility

Gardner: Mike Fulton, more structure comes with great value in that you can manage complexity and keep things from going off the rails. But some people think that too much structure slows you down. How do you reach the right balance? And does that balance vary from company to company, or are there general rules about finding that Nirvana between enough structure and too little?

Fulton: If we want to provide flexibility and speed, we have to move away from rules and start thinking more about guardrails, guidelines, and about driving things from a principled perspective.

That’s one of the biggest shifts we’re seeing in the digital space related to enterprise architecture (EA). Whereas, historically, architecture played a directional, governance role, what we’re seeing now is that architecture in a digital context provides guardrails for development teams to work within. And that way, it provides more room for flexibility and for choice at the lower levels of an organization as you’re building out your new digital products.

Those digital products still need to work in the context of a broader EA, and an architecture that’s been developed leveraging potentially new techniques, like what’s coming out of The Open Group with the Open Agile Architecture standard. That’s new, different, and critically important for thinking about architecture in a different way. But, I think, that’s where we provide flexibility — through the creation of guardrails.
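
One common way to make the guardrail idea concrete, though it is not something the panel prescribes here, is to express guardrails as automated checks that a product team can run in its own delivery pipeline rather than as a central review gate. The sketch below is purely illustrative and hypothetical: the service-manifest fields, the approved platform list, and the data-classification rule are all invented for the example.

```python
# Illustrative sketch only: architectural guardrails expressed as automated
# checks a delivery team runs locally or in CI, instead of a central review gate.
# The manifest fields and allowed values below are hypothetical.

APPROVED_PLATFORMS = {"aws", "azure"}        # platforms the EA team has vetted
PUBLIC_SAFE_DATA = {"public", "marketing"}   # data classes allowed on internet-facing services

def check_guardrails(manifest: dict) -> list:
    """Return a list of guardrail violations for a proposed digital product."""
    violations = []
    if manifest.get("platform") not in APPROVED_PLATFORMS:
        violations.append(f"platform '{manifest.get('platform')}' is not on the approved list")
    if manifest.get("internet_facing") and manifest.get("data_classification") not in PUBLIC_SAFE_DATA:
        violations.append("internet-facing services may not hold restricted data")
    return violations

if __name__ == "__main__":
    proposal = {"platform": "aws", "internet_facing": True, "data_classification": "restricted"}
    for violation in check_guardrails(proposal):
        print("Guardrail violation:", violation)
```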

Doss: The days are over for “Ivory Tower” EA – the top-down, highly centralized EA. Today, EA is responding to right-to-left and outside-in versus inside-out pressures. It has to be more about responding, as Mike said, to the customer-centric needs using market data, customer data, and continuous feedback.

EA is really different now. It responds to product needs, market needs, and all of the domain-driven design and other things that go along with that. 

Lounsbury: Sometimes we use the term agile, and it’s almost like a religious term. But agile essentially means you’re structured to respond to changes quickly and you learn from your mistakes through repeatedly refining your concepts. That’s actually a key part of what’s in the Open Agile Architecture Standard that Mike referred to.

The reason for this is fundamental to why people need to worry about digital right now. With digital, your customer interface is no longer your fancy storefront. It’s that black mirror on your phone, right? You have exactly the same six-by-two-and-a-half-inch screen that everybody else has to get your message across.

And so, the side effect of that is that the customer has much more power to select among competitors than they did in the past. There's been plenty of evidence that customers will pick convenience or safety over brand loyalty in a heartbeat these days.

Internally, that means as a business you have to have your teams structured so they can quickly respond to the marketplace, and not have to go all the way up the management chain for some big decision and then bring it all the way back down again. You'll be out-competed if you do it that way. There is a hyper-acceleration to "survival of the fittest" in business and IT; this has been called the Red Queen effect.

That’s why it’s essential to have agile not as a religion, but as the organizational agility to respond to outside-in customer pressures as a competitive factor in how you run your business. And, of course, that then pulls along the need to be agile in your business practices and in how you empower your agile teams. How do you give them the guardrails? How do you give them the infrastructure they need to succeed at all of those things?

It’s almost as if the pyramid has been turned on its head. It’s not a pyramid that comes down from the top of some high-level business decisions, but the pyramid grows backward from a point of interaction with the customers.

Gardner: Before we drill down on how to attain that organizational agility, let’s dwell on the challenges. What’s holding up organizations from attaining digital transformation now that they face an existential need for it?

Digital delivers agile advantage

Doss: We see a lot of companies that try to bring in digital technologies but really aren't employing the needed digital practices to bring the fuller intended value, so there's a cultural lag.

The digital technologies are often used in combination and mashed up in amazing ways to bring out new products and business models. But you need digital practices along with those digital technologies. There's a growing body of evidence that companies that actually get that are not just outperforming their industry peers by percentages — the difference is almost exponential.

The findings from the "State of DevOps" reports over the last few years give us clear evidence of this. Product teams are really driving a lot of the work and functionality across the silos, and increasingly into operations.

And this is why the standards and bodies of knowledge are so important — because you need these ideas. With The Open Group DPBoK, we've woven all of this together in one Emergence Model and kept these digital practices connected. That's the "P" in DPBoK, the practitioner. It's those digital practices that bring in the value.

Fulton: Jim makes a great point here. But in my context with Digital Executive Education at Ohio State, when we look at that journey to a digital enterprise we think of it in three parts: The vision, the transformation, and the execution.

The piece that Jim was just talking about speaks to execution. Once you're in a digital enterprise, how do you have the right capabilities and practices to create new digital products day to day? And that's absolutely critical.

But you also have to set the vision upfront. You have to be able to envision, as a leadership team of an organization, what a digital enterprise looks like. What is your blueprint for that digital enterprise? And so, you have to be able to figure that out. Then, once you have aligned that blueprint with your leadership team, you have to lead that digital transformation journey.

And that transformation takes you from the vision to the execution. And that's what I really love about The Open Group and the new direction around an open digital portfolio, a portfolio of digital standards that work together in concert to take you across that entire journey.

These are standards that help you envision the future, standards that help you drive that digital transformation, like the Open Agile Architecture Standard, and standards that help you with digital delivery, such as IT4IT. A critically important part of this journey is rethinking your digital delivery, because the vast majority of products that companies produce today are digital products.

But then, how do you actually deliver the capabilities and practices, and uplift the organization with the new skills to function in this digital enterprise once you get there? And you can’t wait. You have to bring people along that journey from the very start. The entire organization needs to think differently, and it needs to act differently, once you become a digital enterprise.

Lounsbury: Right. And that’s an important point, Mike, and one that’s come out of the digital thinking going on at The Open Group. A part of the digital portfolio is understanding the difference between “what a company is” and “what a company does” — that vision that you talked about – and then how we operate to deliver on that vision.

Dana, you began this with a question about the barriers and what's slowing progress down. Those things used to be vertically aligned. What the business is and does used to be decomposed through some top-down, reductionist approach: decompose and delegate all of the responsibilities. And if everybody does their job at the edge, then the vision will be realized. That's not true anymore because of the outside-in digital reality.

A big part of the challenge for most organizations is the old idea that, “Well, if we do that all faster, we’ll somehow be able to compete.” That is gone, right? That fundamental change and challenge for top- and middle-management is, “How do we make the transition to the structure that matches the new competitive environment of outside-in?”

“What does it mean to empower our team? What is the culture we need in our company to actually have a productive team at the edge?” Things like, “Are you escalating every decision up to a higher level of management?” You just don’t have time for that anymore.

Are people free to choose the tools and interfaces with the customers that they believe will maximize the customer experience? And if it doesn’t work out, how do you move on to the next step without being punished for the failure of your experiment? If it reflects negatively on you, that’s going to inhibit your ability to respond, too.

All of these techniques, all of these digital ways of working, to use Jim’s term, have to be brought into the organization. And, as Mike said, that’s where the power of standards comes in. That’s where the playbooks that The Open Group has created in the DPBoK Standard, the Open Agile Architecture Standard, and the IT4IT Reference Architecture actually give you the guidance on how to do that.

Part of the Emergence Model is knowing when to do what, at the right stage in your organization’s growth or transformation.

Gardner: And leading up to the Emergence Model, we've been talking about standards and playbooks. But what is a "playbook" when it comes to standards? And why is The Open Group ahead of the curve in extending the value when you have multiple open standards and playbooks?

Teams need playbook to win

Lounsbury: I’ll be honest, Dana, The Open Group is at a very exciting time. We’re in a bit of a transition. When there was a clear division between IT and business, there were different standards and different bodies of knowledge for how you adapt to each of those. A big part of the role of the enterprise architect was in bridging those two worlds.

The world has changed, and The Open Group is in the process of adapting to that. We’re looking to build on the robust and proven standards and build those into a much more coherent and unified digital playbook, where there is easy discoverability and navigability between the different standards. 

People today want to have quick access. They want to say, “Oh, what does it mean to have an agile team? What does it mean to have an outside-in mindset?” They want to quickly discover that and then drill in deeper. And that’s what we pioneered with the DPBoK, with the architecture of the document called the Emergence Model, and that’s been picked up by other standards of The Open Group. It’s clearly the direction we need to do more in.

Gardner: Mike, why are multiple standards acting in concert good?

Fulton: For me, when I think about why you need multiple standards, it’s because if you were to try to create a single standard that covered everything, that standard would become incomprehensible.

If you want an industry standard, you need to bring the right subject matter experts together, the best of the best, the right thought leaders — and that’s what The Open Group does. It brings thought leaders from across the world together to talk about specific topics to develop the best information that we have as an industry and to put that into our standards.

But it’s a rare bird, indeed, that can do that across multiple parts of an organization, or multiple capabilities, or multiple practices. And so by building these standards up individually, it allows us to tap into the right subject matter experts, the right passions, and the right areas of expertise.

But then, what The Open Group is now doing with the digital portfolio is intentionally bringing those standards together to make sure that they align. It brings the standards together to make sure that they have the same messaging, that we're all working with the same definitions, and that we're all thinking about big, broad concepts in the same way, and then allows us to dig down into the details with the right subject matter experts at the level of granularity needed to provide the appropriate benefits for industry.

Gardner: And how does the Emergence Model help harmonize multiple standards, particularly around the Digital Practitioner’s Workgroup?

Emergence Model scales

Lounsbury: We talked about outside-in, and there are a couple of ways you can approach how you organize such a topic. As Mike just said, there’s a lot of detail that you need to understand to fully grasp it.

But you don't always have to fully grasp everything at the start. And there are different ways you can look at organizations. You can look at the typical stack, the decomposition, the top-down view. You can look at lifecycles: when you start at the left and go to the right, what are all the steps in between?

And the third dimension, which we’re picking up on inside The Open Group, is the concept of scale through the Emergence Model. And that’s what we’ve tried to do, particularly in the DPBoK Standard. It’s the best example we have right now. And that approach is coming into other parts of our standards. The idea comes out of lean startup thinking, which comes out of lean manufacturing.

When you’re a startup, or starting a new initiative, there are a few critical things you have to know. What is your concept of digital value? What do you need to deliver that value? Things like that.

Then you ideally succeed and grow and then, “Wow, I need more people.” So now you have a team. Well, that brings in the idea of, “What does team management mean? What do I have to do to make a team productive? What infrastructure does it need?”

And then, with that, the success goes on because of the steps you’ve taken from the beginning. As you get into more complexity, you get into multiple teams, which brings in budgeting. You soon have large-scale enterprises, which means you have all sorts of compliance, accounting, and auditing. These things go on and on.

But you don’t know those things at the start. You do have to know them at the end. What you need to know at the start is that you have a map as to how to get there. And that’s the architecture, and the process to that is what we call the Emergence Model.
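
As a rough illustration of that map, the progression described here can be sketched as a simple lookup from organizational stage to the concerns that emerge at that stage. The stage names and concern lists below are paraphrased from this conversation, not taken from the DPBoK Standard itself, so treat the sketch as illustrative only.

```python
# Illustrative only: the scaling progression described above, as a simple map
# from organizational stage to the concerns that emerge at that stage.
# Stage names and concerns are paraphrased from this discussion, not the DPBoK text.

EMERGENCE_SKETCH = {
    "startup_or_new_initiative": ["concept of digital value", "what you need to deliver that value"],
    "single_team": ["team management", "team productivity", "team infrastructure"],
    "multiple_teams": ["coordination across teams", "budgeting"],
    "large_enterprise": ["compliance", "accounting", "auditing"],
}

def concerns_up_to(stage: str) -> list:
    """Earlier-stage concerns still apply as you grow, so accumulate them in order."""
    accumulated = []
    for name, concerns in EMERGENCE_SKETCH.items():
        accumulated.extend(concerns)
        if name == stage:
            break
    return accumulated

print(concerns_up_to("multiple_teams"))
```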

It is how you map to scale. And I should say, people think of this quite often in terms of, "Oh, it's just for a startup. I'm not a startup, I'm in a big company." But many big companies — Mike, I think you've had some experience with this — have many internal innovation centers. You do entrepreneurial funding for a small group of people and, depending on their success, feed them more resources.

So you have the need for an Emergence Model even inside of big companies. And, by the way, there are many use cases for using a pattern for success in how to do digital transformation. Don’t start from the top-down; start with some experiments and grow from the inside-out.

Doss: I refer to that as downscale digital piloting. You may be a massive enterprise, but if you’re going to adapt and adopt new business models, like your competitors and smaller innovators who are in your space, you need to think more like them.

Though I’m in a huge enterprise, I’m going to start some smaller initiatives and fence them off from governance and other things that slow those teams down. I’m going to bring in only lean aspects for those initiatives.

And then, you amplify what works and scale it to the enterprise. As David said, the smaller organizations now have a great guidebook for what's right around the corner. They're growing; they don't have just one product anymore, they have two or three products, and so the original product owner can't be in every product meeting.

So, all of those things are happening as a company grows and the DPBoK and Emergence Model is great for, “Hey, this is what’s around the corner.”

With a lot of other frameworks, you'd have to spend a lot of time extracting the scale-specific guidance on digital practices, and that's a lot of work, to be honest, and hard to get right. In the DPBoK, we built the guidance so it's much easier to move in either direction, going up-scale as well as down-scale with digital piloting.

Gardner: Mike, you’re on the pointy end of this, I think, in one of your jobs. 

Intentional innovation

Fulton: Yes, at Nationwide, in our technology innovation team, we are doing exactly what Dave and Jim have described. We create new digital products for the organization and we leverage a combination of lean startup methodologies, agile methodologies, and the Emergence Model from The Open Group DPBoK to help us think about what we need at different points in time in that lifecycle of a digital product.

And that's been really effective for us as we have brought new products to market. I shared the full story in a presentation at The Open Group about six months ago. But it is something that I believe is a really valuable tool for big enterprises trying to innovate. It helps you be very intentional about what you are using. What capabilities and components are you using that are lean versus more robust? What capabilities are you using that are implicit versus explicit, and at what point in time do you actually need to start writing things down?

At what point in time do you absolutely need to start leveraging those slightly bigger, more robust enterprise processes to be able to effectively bring a digital product to market versus using processes that might be more appropriate in a startup world? And I found the DPBoK to be incredibly helpful and instructive as we went through that process at Nationwide. 

Gardner: Are there any other examples of what’s working, perhaps even in the public sector? This is not just for private sector corporations. A lot of organizations of all stripes are trying to align, become more agile, more digital, and be more responsive to their end-users through digital channels. Any examples of what is working when it comes to the Emergence Model, rapid digitization, and leveraging of multiple standards appropriately?

Good governance digitally 

Doss: We’re really still in the early days with digital in the US federal government. I do a lot of work in the federal space, and I’ve done a lot of commercial work as well. 

They’re still struggling in the federal space with the project-to-product shift. 

There is still a huge focus on the legacy project management mentality. When you think about the legacy definition of a deliverable, the project is done at the deliverable. So, that supports “throw it over the wall and run the other way.”

Various forms of the plan-build-operate (PBO) IT organization structure still dominate in the federal space. Organizations that are PBO-aligned tend to push work from left to right across the P, B, and O silos, and the space between these silos is heavily stage-gated. So this inside-out thinking and the stage-gating also support "throw it over the wall and run the other way." In the federal space, waterfall is baked into nearly everything.

These are two huge digital anti-patterns that the federal space is really struggling with. 

Product management, for example, employs a single persistent team that remains with the work across the lifecycle and ties together those dysfunctional silos. Such “full product lifecycle teams” eliminate a lot of the communication and hand-off problems associated with such legacy structures.

The other problem in the federal space with the PBO IT org structure is that the real power resides in these silos, and the silos' management focus is downward into their own silo, not as much across the silos. So cross-functional initiatives such as EA, service ownership, product ownership, or digital initiatives might get some traction for a while, but such initiatives or functions have no real buying power or "go/no-go" decision authority, so they eventually get squashed by the silo heads, where the real power resides in such organizations.

In the US, I look over time for Congress, via new laws, or the Office of Management and Budget (OMB), via policy, to bring in some needed changes and governance in how IT organizations get structured and governed.

Ironically, these two digital anti-patterns have also led to the creation of lots of over-baked governance over the decades to try to assure that the intended value was still captured, which is like throwing more bad money after bad.

This is not just true in the federal space; it is also true in the commercial world. Such over-baked governance just happens to be really, really bad in the federal space.

For federal IT, you have laws like Clinger-Cohen and the Federal Information Technology Acquisition Reform Act (FITARA), policies and required checks by the OMB, Capital Planning and Investment Control, acquisition regulations, the DoD Architecture Framework, and I could go on — all of which require tons of artifacts and evidence of sound decision-making.

The problem is nobody is rationalizing these together… like figuring out what supersedes what when something new comes out. So the governance just gets more and more un-lean and over-bloated, and what you have at the end is agencies that are either misguided by out-of-date guidance or overburdened by over-bloated governance.

Fulton: I don't have nearly the level of depth in the government space that Jim does, but I do have a couple of examples to point people to if they are looking for more government-related examples. I point you to a couple here in Ohio, starting with Doug McCollough and his work with the City of Dublin in Ohio. He's done a lot of work with digital technologies and digital transformation at the city level.

And then again here in Ohio — and I'm just using Ohio references because I live in Ohio and I know a little bit more intimately what some of these folks are doing — Ervan Rodgers, CIO of the State of Ohio, has done a really nice job of focusing on digital capabilities and practices to build up across state employees.

The third I’ll point to is the work going on in India. There’s been a tremendous amount of really great work in India related to government, architecture, and getting to the digital transformation conversation at the government level. So, if folks are interested in more examples, more stories, I’d recommend you look into those three as places to start.

Lounsbury: The thing I think you're referring to there, Mike, is the IndEA (India Enterprise Architecture) initiative and the pivot to digital that several of the Indian states are making. We can certainly talk about that more on a different podcast.

I will toss in one ray of light to what Jim said. Transformation is almost always driven by an almost Darwinian force. Something has changed in your environment that causes you to evolve, and we've seen that in the federal sector, and the defense sector in particular, where in areas like avionics the cost of software is becoming unaffordable. They turned to modular, decomposable systems based on standards in order to achieve the necessary cost savings just to stay in business.

Similarly, in India, it was the utter need to deliver to a very diverse, large rural population that drove the needed digitization. And certainly, the U.S. federal sector and the defense sector are very aware of the disparity. And I think things like defense budget changes or changes in mission will drive some of the changes we've talked about, the ones the pandemic is urgently driving in the commercial sector.

So, it will happen. But I'll agree with Jim: it is probably the most challenging, ultimately top-down environment that you could possibly imagine for doing a transformation.

Gardner: In closing, what’s coming next from The Open Group, particularly around digital practitioner resources? How can organizations best exploit these resources?

Harmony on the horizon

Lounsbury: We’ve talked about the evolution The Open Group is going through, about the digital portfolio and the digital playbooks having all of our standards speak common language and working together.

A first step in that is to develop a set of principles by which we're going to do that evolution, and the document is called Principles for Open Digital Standards. You can get that from The Open Group bookstore, and if you want to find it quickly, go to The Open Group's Digital-First Enterprise page, which links to all of these standards.

Looking forward, there are activities going on in all of the forums of The Open Group, and the forums are voluntary organizations. But certainly the IT4IT Forum, the Digital Practitioner Workgroup, and large swaths of our architecture activity are working on how we can harmonize the language and bring common knowledge to our standards.

And then, to look beyond that, I think we need to address the problems of discoverability and navigability that I mentioned earlier, to give a coherent and easy-to-access picture of where a person can find what they need, when they need it.

Fulton: Dave, I think probably one of the most important pieces of work that will be delivered soon by The Open Group is putting a stake in the ground around what it means to be a digital product. And that’s something that I don’t think we’ve seen anywhere else in the industry. I think it will really move the ball forward and be a unifying document for the entire open digital portfolio.

And so, we have some great work that’s already gone on in the DPBoK and the Open Agile Architecture standard, but I think that digital product will be a rallying cry that will make all of the standards even more cohesive going forward.

Doss: And I’ll just add my final two cents here. I think a lot of it, Dana, is just awareness. People need to just understand that there’s a DPBoK Standard out there for digital practitioners. 

If you're in IT, you're not just an IT practitioner anymore; you're using digital technology and digital practices to bring lean, user-centric value to your business or mission. So, digital is the new best practice. And there's a framework and a body of knowledge out there now that supports and helps people transform in their careers. The same goes for Agile Architecture. So it's just the awareness that these things are out there. The most powerful thing to me is that both of the works I just mentioned have more than 500 references from the last 10 years of leading digital thinkers. So, again, the way these are structured, the way these are built, bringing in the scale-specific guidance and that sort of thing, is hugely powerful. There needs to be an increasing awareness that this stuff is out there.

Lounsbury: And if I can pick up on that awareness point, I do want to mention, as always, that The Open Group publishes its standards as freely available to all. You can go to that digital enterprise page or The Open Group Library to find them. We also have an active training ecosystem these days; everybody does that digital training.


There are ways of learning the standards in depth and getting certified as proficient in that knowledge. But I should also mention that we have at least two U.S. universities, and more interest in the international sector, offering graduate work in executive-level education. Mike has mentioned his executive teaching at Ohio State, and there are others as well.

Gardner: Right, and many of these resources are available at The Open Group website. There are also many events, many of them now virtual, as well as certification processes and resources. There’s always something new, it’s a very active place.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: The Open Group.

How The Open Group enterprise architecture portfolio enables an agile digital enterprise

The next BriefingsDirect agile business enablement discussion explores how a portfolio approach to standards has emerged as a key way to grapple with digital transformation.

As businesses seek to make agility a key differentiator in a rapidly changing world, applying enterprise architecture (EA) in concert with many other standards has never been more powerful. Stay with us here to explore how to define and corral a comprehensive standards resources approach for making businesses intrinsically agile and competitive. 

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about attaining agility via an embrace of a broad toolkit of standards, we are joined by our panel: Chris Frost, Principal Enterprise Architect and Distinguished Engineer, Application Technology Consulting Division, at Fujitsu; Sonia Gonzalez, The Open Group TOGAF® Product Manager; and Paul Homan, Distinguished Engineer and Chief Technology Officer, Industrial, at IBM Services. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Sonia, why is it critical to modernize businesses in a more comprehensive and structured fashion? How do standards help best compete in this digital-intensive era?

Gonzalez: The question is more important than ever. We need to be very quickly responding to changes in the market.

It’s not only that we have more technology trends and competitors. Organizations are also changing their business models — the way they offer products and services. And there’s much more uncertainty in the business environment.

The current situation with COVID-19 has made for a very unpredictable environment. So we need to be faster in the ways we respond. We need to make better use of our resources and to be able to innovate in how we offer our products and services. And since everybody else is also doing that, we must be agile and respond quickly. 

Gardner: Chris, how are things different now than a year ago? Is speed all that we’re dealing with when it comes to agility? Or is there something more to it?

Frost: Speed is clearly a very important part of it, and market trends are driving that need for speed and agility. But this has been building for a lot more than a year.

We now have, with some of the hyperscale cloud providers, the capability to deploy new systems and new business processes more quickly than ever before. And with some of the new technologies — like artificial intelligence (AI), data analytics, and 5G — there are new technological innovations that enable us to do things that we couldn't do before.

Faster, better, more agile

A combination of these things has come together in the last few years and produced a unique need now for speed. That's what I see in the market, Dana.

Gardner: Paul, when it comes to manufacturing and industrial organizations, how do things change for them in particular? Is there something about the data, the complexity? Why are standards more important than ever in certain verticals?

Homan: The industrial world in particular, focusing on engineering and manufacturing, has brought together the physical and digital worlds. And whilst these industries have not been as quick to embrace the technologies as other sectors have, we can now see how they are connected. That means connected products, connected factories and places of work, and connected ecosystems.

There are still so many more things that need to be integrated, and fundamentally EA comes back to the how – how do you integrate all of these things? A great deal of the connectivity we’re now seeing around the world needs a higher level of integration.

Gardner: Sonia, to follow this point on broader integration, does applying standards across different parts of any organization now make more sense than in the past? Why does one part of the business need to be in concert with the others? And how does The Open Group portfolio help produce a more comprehensive and coordinated approach to integration?

Integrate with standards

Gonzalez: Yes, what Paul mentioned about being able to integrate and interconnect is paramount for us. Our portfolio of standards, which is more than just the TOGAF® (The Open Group Architecture Framework) Standard, is like having a toolkit of different open standards that you can use to address different needs, depending upon your particular situation.

For example, there may be cases in which we need to build physical products across an extended industrial environment. In that case, certain kinds of standards will apply. Also critical is how the different standards will be used together to pursue interoperability. Boundaryless Information Flow is, therefore, one of our trademarks at The Open Group.

Other, more intangible cases, such as digital services, also need standards. For example, the Digital Practitioner Body of Knowledge (DPBoK™) provides a scale model to support the digital enterprise.

Other standards are coming around agile enterprises and best practices. They support making interconnections and interoperability faster — but at the same time having the proper consistency and integration to align with the overall strategy. At the end of the day, it's not enough to integrate from just a technical point of view. You need to bring new value to your businesses. You need to be aligned with your business model, with your business view, and with your strategy.

Therefore, the change is not only to integrate technical platforms, even though that is paramount, but also to change your business and operational model and to go deeper to cover your partners and the way your company is put together.

So, therefore, we have different standards that cover all of those different areas. As I said at the beginning, these form a toolkit from which you can choose different standards and make them work together, forming a portfolio of standards.

Gardner: So, whether we look to standards individually or together as a toolkit, it’s important that they have a real-world applicability and benefits. I’m curious, Paul and Chris, what’s holding organizations back from using more standards to help them?

Homan: When we use the term traditional enterprise architecture, it always needs to be adapted to suit the environment and the context. TOGAF, for example, has to be tailored to the organization and for the individual assignment.

But I’ve been around in the industry long enough to be familiar with a number of what I call anti-patterns that have grown up around EA practices and which are not helping with the need for agility. This comes from the idea that EA has heavy governance.

We have all witnessed such core practices — and I will confess to having been part of some of them. And these obviously fly in the face of agility and flexibility, of being able to push decisions out to the edge and pivot quickly, and to make mistakes and be allowed to learn from them. So, a kind of experimental attitude.

And so gaining such adaptation is more than just promoting good architectural decision-making within a set of guide rails — it allows decision-making to happen at the point of need. So that's the needed adaptation that I see.

Gardner: Chris, what challenges do you see organizations dealing with, and why are standards so important to helping them attain a higher level of agility?

Frost: The standards are important, not so much because they are standards but because they represent industry best practices. The way standards are developed in The Open Group is not some sort of theoretical exercise. It's very much member-driven, brought together by the members drawing on their practical experiences.

To me, the point is more about industry best practice, and not so much the standard. There are good things about standard ways of working, being able to share things, and everybody having a common understanding about what things mean. But that aspect of the standard that represents industry best practices — that’s the real value right now.

Coming back to what Paul said, there is a certain historical perspective here that we have to acknowledge. EA projects in the past — and certainly things I have been personally involved in — were often delivered in a very waterfall fashion. That created a certain perception that somehow EA means big-design-upfront-waterfall-style projects — and that absolutely isn’t the case.

That is one of the reasons why a certain adaptation is needed. Guidance about how to adapt is needed. The word adapt is very important because it’s not as if all of the knowledge and fundamental techniques that we have learned over the past few years are being thrown away. It’s a question of how we adapt to agile delivery, and the things we have been doing recently in The Open Group demonstrate exactly how to do that.

Gardner: And does this concept of a minimum viable architecture fit into that? Does that help people move past the notion of the older waterfall structure for EA?

Reach minimum viable architecture

Frost: Yes, very much it does. In architectural terms, the minimum viable architecture is like reaching first base, and that emphasizes the notion of rapidly getting to something that you can take forward to the next stage. You can get feedback, along with an acknowledgment that you will improve and iterate in the future. Those things are fundamental to agile working. So, yes, that minimum viable architecture concept is a really important one.

Gardner: Sonia, if we are thinking about a minimum viable architecture we are probably also working toward a maximum value standards portfolio. How do standards like TOGAF work in concert with other open standards, standards not in The Open Group? How do we get to that maximum value when it comes to a portfolio of standards?

Gonzalez: That's very important. First, it has to do with adapting the practice, and not only the standard. In order to face new challenges, especially ones involving agile and digital, the practices need to evolve, and therefore so do the standards, including the whole portfolio of The Open Group standards, which are constantly being evolved and improved. Our members are the ones contributing the content that follows the new trends, best practices, and uses for all of those practices.

The standards need to evolve to cover areas like digital and agile. And with the concept of minimum viable architecture, the standards are evolving to provide guidance on how EA as a practice supports agile. Actually, nothing in the standard says it has to be used in a waterfall way, even though some people may say that.

The TOGAF standard is now building out guidance for how people can use the standards to support the agile enterprise, delivering architecture in an agile way, and also supporting an agile approach, which means having a different view of how the practice is applied following this new shift and this new adaptation.

Adapt to sector-specific needs

The practice needs to be adapted, the standards need to evolve to fulfill that, and they need to be applied to specific situations. For example, architecting an organization around back-office processes is not the same as architecting one that is more customer-facing. For the former, the processes are heavier; they don't need to be that agile. Agile architecture is for customer-facing work that needs to support a faster pace.

So, you might have cases in which you need to mix different ways of applying the practices and standards: a less agile approach for the back office and a more agile approach for customer-facing applications, such as online banking.

Adaptation also depends on the nature of the company. The healthcare industry is one example. We cannot experiment that much in that area, because it is more about risk assessment and less subject to experimentation. For these kinds of organizations, a different approach is needed.

There is work in progress in different sectors. For example, we have a very good guide and case study about how to use the TOGAF standard along with the ArchiMate® modeling notation in the banking industry using the BIAN® Reference Model. That's a very good use case in The Open Group library. We also have work in progress in the forum around how governments architect. The IndEA Reference Model is another example of a reference model, in this case for government, and it has been put together based on open standards.

We also have work in progress around security, such as with SABSA, the framework for business security architecture. We have developed guidance about standards and security along with SABSA. We also have a partnership with the Object Management Group (OMG), in which we are pioneers and have a liaison to build products that will go to market to help practitioners use external standards along with our own portfolio.

Gardner: When we look at standards as promoting greater business agility, there might be people who look to the past and say, “Well, yes, but it was associated with a structured waterfall approach for so long.”

But what happens if you don’t have architecture and you try to be agile? What’s the downside if you don’t have enough structure; you don’t put in these best practices? What can happen if you try to be agile without a necessary amount of architectural integrity?

Guardrails required

Homan: I'm glad that you asked, because a number of organizations that I have worked with have experienced the results of diminishing their architectural governance. I won't name who they are, for obvious reasons, but I know of organizations that have embraced agility. They had great responses to being able to do things quickly, find things out, and move fleet-of-foot, and they combined that with cloud computing capabilities. They had great freedom in where they chose to source commodity cloud services.

And, as an enterprise architect looking in, that freedom created a massive number of mini-silos. As soon as those need to come together and scale — and scale is the big word — that's where the problems started. I've seen, for example, problems around common use of information and standards, and processes and workflows that don't cross between one cloud vendor and another. And these are end-customer-facing services and deliveries that frankly clash, from the same organization, from the same brand.

And those sorts of things came about because they weren’t using common reference architectures. There wasn’t a common understanding of the value propositions that were being worked toward, and they manifested because you could rapidly spin stuff out.

When you have a small, agile model with everybody co-located in a relatively contained space — where they can readily connect and communicate — great. But unfortunately, as soon as you disperse the model, have a round of additional development, and distribute to more geographies and markets with lots of different products, you behave like a large organization. It's inevitable that people are going to plough their own furrow and go in different directions. And so, you need to have a way of bringing it back together again.

And that's typically where people come in and start asking how to reintegrate. They love the freedom and want to keep the freedom, but they need to combine that with some gentle guardrails that allow them to exercise freedom of speed without diverging too much.

Frost: The word guardrails is really important because that is very much the emphasis of how agile architectures need to work. My observation is that, without some amount of architecture and planning, what tends to go wrong is some of the foundational things – such as using common descriptions of data or common underlying platforms. If you don’t get those right, different aspects of an overall solution can diverge and fail to integrate. 

Some of those things may include what we generally refer to as non-functional requirements, things like capacity, performance, and possibly safety or regulatory compliance. These are often things that easily get overlooked unless there is some degree of planning and architecture, with surrounding architecture definitions that think through how to incorporate those really important features.

A really important judgment point is what’s just enough architecture upfront to set down those important guardrails without going too far and going back into the big design upfront approach, which we want to avoid to still create the most freedom that we can.
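
One concrete way to read Frost's point about common descriptions of data is that a small, agreed, versioned schema, validated by every team at its integration points, can act as one of those guardrails without requiring a big design upfront. The sketch below is hypothetical: the customer-record fields and the validation approach are invented for illustration and are not drawn from any Open Group standard.

```python
# Hypothetical sketch: a shared "common description" of data that independently
# built services validate against at their boundaries, so they stay compatible.

from dataclasses import dataclass, field

# Agreed, versioned description of a customer record (fields invented for the example).
CUSTOMER_SCHEMA_V1 = {"customer_id": str, "email": str, "consent_given": bool}

@dataclass
class ValidationResult:
    ok: bool
    errors: list = field(default_factory=list)

def validate_record(record: dict, schema: dict = CUSTOMER_SCHEMA_V1) -> ValidationResult:
    """Check that a record carries every agreed field with the agreed type."""
    errors = []
    for name, expected_type in schema.items():
        if name not in record:
            errors.append(f"missing field: {name}")
        elif not isinstance(record[name], expected_type):
            errors.append(f"{name} should be {expected_type.__name__}")
    return ValidationResult(ok=not errors, errors=errors)

# Both the producing team and the consuming team run the same check.
print(validate_record({"customer_id": "C-42", "email": "a@example.com", "consent_given": True}))
```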

Gardner: Sonia, a big part of the COVID-19 response has been rapidly reorganizing or refactoring supply chains. This requires extended enterprise cooperation and ultimately integration. How are standards like TOGAF and the toolkit from The Open Group important to allow organizations to enjoy agility across organizational boundaries, perhaps under dire circumstances?

COVID-19 necessitates holistic view

Gonzalez: That is precisely when more architecture is needed, because you need to be able to put together a landscape, a whole view of your organization, which is now an extended organization. Your partners, customers, and alliances, all of your liaisons, are part of your value chain, and you need to have visibility over this.

You mentioned suppliers and providers. These are changing due to the current situation. The way they work is changing; everything is going more digital and virtual, with less face-to-face. So we need to change processes. We need to change value streams. And we need to be sure that we have the right capabilities. Having standards is spot-on here, because one of the advantages of standards, and open standards especially, is that you facilitate communication with other parties. If you are talking the same language, it will be easier to integrate and get people together.

Now that most people are working virtually, that implies the need for very good management of your whole portfolio of products and their lifecycle. To address all this complexity and to gain a holistic view of your capabilities, you need to have an architecture focus. Therefore, there are different standards that can fit together in those different areas.

For example, you may need to deliver more digital capabilities to work virtually. You may need to change your whole process view to become more efficient and allow such remote work, and to do that you use standards. In the TOGAF standard, we have a set of very good guidance for business architecture, business models, business capabilities, and value streams; all of them provide guidance on how to do that.

Another very good guide under the TOGAF standard umbrella is the Organization Map Guide. It's about much more than having a formal organizational chart for your company; it's about how you map different resources so you can respond quickly to changes in your landscape. Having a more dynamic view, a cross-cutting view of your working teams, is required to be agile and to have interdisciplinary teams work together. So you need to have architecture, and you need to have open standards, to address those challenges.

Gardner: And, of course, The Open Group is not standing still, along with many other organizations, in trying to react to the environment and help organizations become more digital and enhance their customer and end-user experiences. What are some of the latest developments at The Open Group?

Standards evolve steadily

Gonzalez: First, we are evolving our standards constantly. The TOGAF standard is evolving to address more of these agile-digital trends, and how to adopt new technology trends in a way that accords with your business model, your strategy, and your organizational culture. That's an improvement that is coming. Also, the structure of the standard has evolved to be easier to use and more agile. It has been designed to evolve through new and improved versions more frequently than in the past.

We also have other components coming into the portfolio. One of them is the Agile Architecture Standard, which is going to be released soon. That one is going straight into the agile space. It’s proposing a holistic view of the organization. This coupling between agile and digital is addressed in that standard. It is also suitable to be used along with the TOGAF standard. Both complement each other. The DPBoK is also evolving to address new trends in the market.

We also have other standards. The Microservices Architecture group is a very active working group that is delivering guidance on microservices architecture using the TOGAF standard. Another important one is the Zero Trust Architecture in the security space. Now more than ever, as we go virtual and rely on platforms, we need to be sure that we have proper consistency in security and compliance. We have, for example, the General Data Protection Regulation (GDPR) considerations, which are stronger than ever. Those kinds of security concerns are addressed in that specific context.

The IT4IT standard, which is another reference architecture, is evolving toward becoming more oriented to a digital product concept to precisely address all of those changes.

All of these standards, all the pieces, are moving together. There are other things coming as well, for example, standards to serve specific areas like oil, gas, and electricity, which are more facility-oriented, more physically oriented. We are also working on those to be sure that we are addressing all of the different possibilities.

Another very important thing here is that we are aiming for every standard we deliver into the market to have a certification program along with it. We have that for the TOGAF standard, the ArchiMate standard, IT4IT, and DPBoK. So the idea is to continue increasing our portfolio of certifications along with the portfolio of standards.

Furthermore, we have more credentials as part of TOGAF certification to allow people to go into specializations. For example, I'm TOGAF-certified, but I may also want to go for a Solution Architect Practitioner or a Digital Architect credential. So we are combining the different products and standards that we have into building blocks for this learning curve around certifications, which is an important part of our offering.

Gardner: I think it’s important to illustrate where these standards are put to work and how organizations find the right balance between a minimum viable architecture and a maximum value portfolio for agility.

So let’s go through our panel for some examples. Are there organizations you are working with that come to mind that have found and struck