How HPE Pointnext ‘Moments’ provide a proven critical approach to digital business transformation

The next edition of the BriefingsDirect Voice of Innovation video podcast series explores new and innovative paths for businesses to attain digital transformation.

Even as a vast majority of companies profess to be seeking digital business transformation, few proven standards or broadly accepted methods stand out as the best paths to take.

And now, the COVID-19 pandemic has accelerated the need for bold initiatives to make customer engagement and experience optimization an increasingly data-driven and wholly digital affair.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. View the video.

Stay with us here to welcome a panel of experts as they detail a multi-step series of “Moments” that guide organizations on their transformations. Here to share the Hewlett Packard Enterprise (HPE) view on helping businesses effectively innovate for a new era of pervasive digital business are:

The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Craig, while some 80 percent of CEOs say that digital transformation initiatives are under way, and they’re actively involved, how much actual standardization — or proven methods — are available to them? Is everyone going at this completely differently? Or is there some way that we can help people attain a more consistent level of success?

Partridge: A few things have emerged that are becoming commonly agreed upon, if not commonly executed upon. So, let’s look at those things that have been commonly agreed-upon and that we see consistently in most of our customers’ digital transformation agendas.

The first principle would be — and no shock here — focusing on data and moving toward being a data-driven organization to gain insights and intelligence. That leads to being able to act upon those insights for differentiation and innovation.

It’s true to say that data is the currency of the digital economy. Such a hyper-focus on data implies all sorts of things, not least of all making sure you’re trusted to handle that data securely, with cybersecurity around all of the good things that come out of that data.

Another thing we’re seeing now as common in the way people think about digital transformation is that it’s a lot more about the edge. It’s about using technology to create and exchange value as organizations transact in business-to-business (B2B) or business-to-consumer (B2C) activities across a variety of environments. Sometimes those environments can be digitized themselves — the idea of physical digitization — and technology can be used to address people and personalities as well. So edge-centric thinking is another common ingredient.

These may not form an exact science, in terms of a standardized method or industry standard benchmark, but we are seeing these common themes now iterate as customers go through digital transformation.

Gardner: It certainly seems that if you want to scale digital transformation across organizations that there needs to be consistency, structure, and common understanding. On the other hand, if everyone does it the same way, you don’t necessarily generate differentiation.

How do you best attain a balance between standardization and innovation?

Partridge: It’s a really good question because there are components of what I just described that can be much more standardized to deliver the desired outcomes from these three pillars. If you look, for example, at cloud enablement, increasingly there are ways to become highly standardized and mobilized around a cloud agenda.

Moving toward containerization and leveraging microservices, or developing with an open API mindset — these are now pervasive principles in almost every industry, and they don’t vary much from industry to industry. IT has to bring its legacy environment to play in that discussion at high velocity and high agility. So there is standardization on that side of it.

The variation kicks in as you pivot toward the edge and in thinking about how to create differentiated digital products and services, as well as how you generate new digital revenue streams and how you use digital channels to reach your customers, citizens, and partners. That’s where we’re seeing a high degree of variability. A lot of that is driven by the industry. For example, if you’re in manufacturing you’re probably looking at how technology can help pinpoint pain or constraints in key performance indicators (KPIs), like overall equipment effectiveness, and in addressing technology use across the manufacturing floor.

If you’re in retail, however, you might be looking at how digital channels can accelerate and outpace the four-walled retail experiences that companies may have relied on pre-pandemic.

Gardner: Craig, before we drill down into the actual Moments, were there any visuals that you wanted to share to help us appreciate the bigger picture of a digital transformation journey?

Partridge: Yes, let me share a couple of observations. As a team, we engage in thousands of customer conversations around the world. And what we’re hearing is exactly what we saw from a recent McKinsey report.

There are a number of reasons why seven out of 10 respondents in this particular survey say they are stalled in attaining digital execution and gaining digital business value. Those reasons center around four key areas. First of all, communication. It sounds like such a simple problem statement, but it is hard to communicate what is quite a complex agenda in a way that is simple enough for as many people as possible — key stakeholders — to rally behind and make real inside the organization. Sometimes it’s a simple thing of, “How do I visualize and communicate my digital vision?” If you can’t communicate really clearly, then you can’t build the guiding coalition behind you to help execute.

A second barrier to progress centers on complexity — having a lot of spinning plates suspended at the same time and trying to figure out the relationships and dependencies between all of the initiatives that are running. Can I de-duplicate or de-risk some of what I’m doing to get it done quicker? That tends to be a major barrier.

The third one you mentioned, Dana, which is, “Am I doing something different? Am I really trying to unlock the business models and value that are uniquely mine? Am I changing or reshaping my business and my market norms?” The differentiation challenge is really hard.

The fourth barrier is when you do have an idea or initiative agenda: how do you lay out the key building blocks in a way that’s going to get results quickly? That’s a prioritization question. Customers can get stuck in a paralysis-by-analysis mode. They’re not quite sure what to establish first in order to make progress and get to that minimum viable product as quickly as possible. Those are the top four things we see.

To get over those things, you need a clear transformation strategy and clarity on what it is you’re trying to do. Everything — from the edge and the business model, to how you engage with customers and clients, through to the technology assembly that delivers those experiences and that differentiation — has to flow from a distinctive transformation strategy. That leads to an acceleration capability: getting beyond the barriers and planning the digital capabilities in the right sequence.

You asked, Dana, at the opening if there are emerging models to accomplish all of this. We have established at HPE something called Digital Next Advisory. That’s our joint customer engagement framework, through which we diagnose and pivot beyond the barriers that we commonly see in customers’ digital ambitions. So that’s a high-level view of where we see things going, Dana.

Gardner: Why do you call your advisory service subsets “Moments,” and why have you ordered them the way you did?

Moments create momentum for digital

Partridge: We called them Moments because in our industry if you start calling things services then people believe, “Oh, well, that sounds like just a workshop that I’ll pay for.” It doesn’t sound very differentiated.

We also like the way it expresses co-innovation and co-engagement. A moment is something to be experienced with someone else. So there are two sides to that equation.

In terms of how we sequence them, actually they’re not sequenced. And that’s key. One of the things we do as a team across the world is to work out where the constraint points and barriers are. So think of it as a methodology.

And as with any good methodology, there are a lot of tools in the toolkit. The key for us as practitioners in the Digital Next Advisory service is to know what tool to bring at the right point to the customer.

Sometimes that’s going to mean a communication issue, so let’s go solve for that particular problem first. Or, in some cases, it’s needing a differentiated technology partner, like HPE, to come in and create a vision, or a value proposition, that’s going to be different and unique. And so we would engage more specifically around that differentiation agenda.

There’s no fixed sequencing; the sequencing is unique to each customer. And the right Moment is the one that makes sure the customer understands this is bidirectional — a co-engagement framework between two parties.

Gardner: All right, very good. Let’s welcome back Yara.

Schuetz: To reiterate what Craig mentioned, when we engage with a customer in a complex phenomenon such as digital transformation, it’s important to find common ground where we can and then move forward in the digital transformation journey specific to each of our customers.

Common core beliefs drive outcomes

We have three core beliefs. One is being edge-centric. Within the edge-centric core belief, we believe there are two business goals and outcomes that our customers are trying to achieve.

In the top left, we have the human edge-centric journey, which is all about redefining customer experiences. In this journey, for example, corporate initiatives could address the experiences of two personas: customers or employees.

These initiatives are designed to increase revenues and productivity via such digital engagements as new services — mobile apps, for example. Complementing this human edge journey, we have the physical journey, or the physical edge. Gaining insight and control means dealing with the physical edge. It’s about using, for example, Internet of Things (IoT) technology in the environments the organization works in, operates in, or provides services in. So the business objective in this journey consists of improving efficiency by means of digitizing the edge.

Complementary to the edge-centric side, we also have the core belief that the enterprise of the future will be cloud-enabled. By being cloud-enabled, we again separate the cloud-enabled capabilities into two distinct journeys.

The bottom right-hand journey is about modernization and optimization. In this journey, initiatives address how IT can modernize its legacy environment with, for example, multi-cloud agility. It also includes optimization and management of services delivery — determining where different workloads are best hosted, spanning on-premises as well as different cloud models. That also includes software development, especially accelerating development.

This journey also involves improving development around developer personas. The aim is to speed up time-to-value with cloud-native adoption — for example, rolling out microservices or containerization to shift innovation quickly over to the edge, using cloud platforms and APIs.

The third core belief that the enterprise of the future should strive for is the data-driven, intelligence journey, which is all about analyzing and using data to create intelligence to innovate and differentiate from competitors. As a result, they can better target, for example, business analytics and insights using machine learning (ML) or artificial intelligence (AI). Those initiatives generate or consume data from the other journeys.

And complementary to this aspect is bringing trust to all of the digital initiatives. It’s directly linked to the intelligence journey because the data generated or consumed by the four journeys needs to be handled in a connected organization, with resiliency and cybersecurity playing leading roles, resulting in trust for internal as well as external stakeholders.

At the center is the operating model. That journey forms the center of the framework because skills, metrics, practices, and governance models have to be reshaped, since they dictate the outcomes of all digital transformation efforts.

These components form the enabling considerations one must weigh when pursuing different business goals, such as driving revenues, building productivity, or modernizing existing environments via multi-cloud agility. To put that all in the context of what many companies are really asking for right now: it all comes down to everything-as-a-service.

Everything-as-a-service does not just belong to, for example, the cloud-enabled side. It’s not only about how you’re consuming technology. It also applies to the edge side for our customers, and in how they deliver, create, and monetize their services to their customers.

Gardner: Yara, please tell us how organizations are using all of this in practice. What are people actually doing?

Communicate clearly with Activate

Schuetz: One of the core challenges we’ve experienced together with customers is that they have trouble framing and communicating their transformation efforts in an easily understandable way across their entire organizations. That’s not an easy task for them.

Communication tension points tend to be, for example, how to really describe digital transformation. Is there any definition that really suits my business? And how can I visualize, easily communicate, and articulate that to my entire organization? How does what I’m trying to do with technology make sense in a broader context within my company?

So within the Activate Moment, we familiarize them with the digital journey map. This captures their digital ambition and communicates a clear transformation and execution strategy. The digital journey map is used as a model throughout the conversations. This tends to improve how an abstract and complex phenomenon like digital transformation can be delivered as something visual and simple to communicate.

Besides simplification, the digital journey map in the Activate Moment also helps describe an overview and gives a structure of various influencing categories and variables, as well as their relationship with each other, in the context of digital transformation.

It also provides our customers guidance on certain considerations and, of course, on all the various possibilities for applying technology in their business.

For example, at the edge, when we bring the digital journey map into the customer conversation in our Activate Moment, we don’t just talk about the edge generally. We refer to specific customer needs and what their edge might be.

In the financial industry, for example, we talk about branch offices as their edge. In manufacturing, we’re talking about production lines as their edges. In retail, where you have public customers, we talk about the venues as the edge and about how — in times like this and the new normal — they can redefine experiences and drive value there for their customers.

Of course, this also serves as inspiration for internal stakeholders. They might say, “Okay, if I link these initiatives, or if I’m talking about this topic in the intelligence space, [how does that impact] the digitization of research and development? What does that mean in that context? And what else do I need to consider?”

Such inspiration means they can tie all of that together into a holistic and effective digital transformation strategy. The Activate Moment engages more innovation on the customer-centric side, too, by bringing insights into the different and various personas at a customer’s edge. They can have different digital ambitions and different digital aspirations that they want to prosper from and bring into the conversation.

Gardner: Thanks again, Yara. On the thinking around personas and the people, how does the issue of defining a new digital corporate culture fit into the Activate Moment?

Schuetz: It fits in pretty well because we are addressing various personas with our Activate Moment. For the chief digital officer (CDO), for example, the impact of the digital initiatives on the digital backbone are really key. She might ask, “Okay, what data will be captured and processed? And which insights will we drive? And how do we make these initiatives trusted?”

Gardner: We’re going to move on now to the next Moment, Align, and orchestrating initiatives with Aviviere. Tell us more about the orchestrating initiatives and the Align Moment, please.

Align with the new normal and beyond

Telang: The Align Moment is designed to help organizations orchestrate their broad catalog of digital transformation initiatives. These are the core initiatives that drive the digital agenda. Over the last few years, as we’ve engaged with customers in various industries, we have found that one of the most common challenges they encounter in this transformation journey is a lack of coordination and alignment between their most critical digital initiatives.

And, frankly, that slows their time-to-market and reduces the value realized from their transformation efforts. Especially now, with the new normal that we find ourselves in, organizations are rapidly scaling up and broadening out their digital agenda.

As these organizations rapidly pivot to launching new digital experiences and business models, they need to rapidly coordinate their transformation agenda across an ever-increasing set of stakeholders — who sometimes have competing priorities. These stakeholders can be the various technology teams sitting in an IT or digital office, or perhaps the business units responsible for delivering these new experience models to market. Or they can be the internal functions that support the internal operations and supply chains of the organization.

We have found that these groups are not always well-aligned to the digital agenda. They are not operating as a well-oiled machine in pursuit of that singular digital vision. In this new normal, speed is critical. Organizations have to get aligned and execute on the digital agenda quickly. That’s where the Align Moment comes in. It is designed to generate deep insights that help organizations evaluate a catalog of digital initiatives across organizational silos and identify an execution strategy that speeds up their time-to-market.

So what does that actually look like? During the Align Moment, we bring together a diverse set of stakeholders who own or contribute to the digital agenda. Some of those stakeholders may sit in the business units, some in internal functions, or maybe even in the digital office. But we bring them together to jointly capture and evaluate the most critical initiatives that drive the core of the digital agenda.

The objective is to blend our own expertise and experience with that of our customers to jointly investigate and uncover the prerequisites and interdependencies that so often exist between these complex sets of enterprise-scale digital initiatives.

During the Align Moment, you might realize that the business units need to quickly recalibrate their business processes in order to meet the data security requirements coming in from the business unit or the digital team. For example, one of our customers found out during their own Align Moment that before they got too far down the path of developing their next generation of digital product, they needed to first build in data transparency and accessibility as a core design principle in their global data hub.

The methodology in the Align Moment significantly reduces execution risk as organizations embark on their multi-year transformation agendas. Quite frankly, these agendas are constantly evolving because the speed of the market today is so fast.

Our goal here is to drive a faster time-to-value for the entire digital agenda by coordinating the digital execution strategy across the organization. That’s what the Align Moment helps our customers with, and that value shows up differently for the different stakeholders we’ve engaged with.

The Align Moment has brought tremendous value to the CDO, for example. The CDO now has the ability to quickly make sense of — and in some cases even coordinate — the complex web of digital initiatives running across the organization, regardless of which silos they may be owned within. They can identify a path to execution that speeds up the realization of the entire digital agenda. I think of it as giving the CDO a dashboard through which they can see their entire transformation on a singular framework.

We have found that the Align Moment delivers a lot of value for digital initiative owners, because we work jointly across silos to de-risk the execution path that implements each initiative — whether it’s technology risk, process risk, or governance risk.

That helps to highlight the dependencies between competing initiatives and competing priorities. And then, sequencing the work streams and efforts minimizes the risk of delays or mismatched deliverables and outputs between teams.

And then there is the chief information officer (CIO). This is a great tool for the CIO to take IT to the next level. They can elevate the impact of IT in the business, and in the various functions in the organization, by establishing agile, cross-functional work streams that can speed up the execution of the digital initiatives.

That’s in a nutshell what the Align Moment is about, helping our customers rapidly generate deep insights to help them orchestrate their digital agenda across silos, or break down silos, with the goal to speed up execution of their agendas.

Advance to the next big thing

Gardner: We’re now moving on to our next Moment, around stimulating differentiation, among other things. We now welcome back Christian to tell us about the Advance Moment.

Reichenbach: The train of thought here is that digital transformation is not only about optimizing businesses by using technology. We also want to emphasize that digital technology can be used to transform businesses themselves.

That means that we are using technology to differentiate the value propositions of our customers. And differentiation means, for example, new experiences for the customers of our customers, as well as new interactions with digital technology.

Further, it’s about establishing new digital business models, gaining new revenue streams, and expanding the ecosystem in a much broader sense. We want to leverage technology to differentiate the value propositions of our customers — and you can’t differentiate by just copycatting, looking to your peers, and replicating what others are doing. That will not differentiate the value proposition.

Therefore, we specifically designed the Advance Moment, where we co-innovate and co-ideate together with our customers to find their next big thing and drive technology toward a much more differentiated value proposition.

Gardner: Christian, tell us more about the discrete steps that people need to take in order to get through that stimulation of differentiation.

Reichenbach: Differentiation comes from having new ideas and doing something different than in the past. That’s why we designed the Advance Moment to help our customers differentiate their unique value proposition.

The Advance Moment is designed as a thinking exercise that we do together with our customers across their diverse teams, meaning product owners, technology designers, engineers, and the CDO. This is a diverse team thinking about a specific problem they want to solve, but they shouldn’t think about it in isolation. They should think about what they do differently in the future to establish new revenue streams with maybe a new digital ecosystem to generate the new digital business models that we see all over the place in the annual reports from our customers.

Everyone is in the race to find the next big thing. We want to help them because we have the technology capabilities and experience to explain and discuss with our customers what is possible today with such leading technology as from HPE.

We can prove that we’ve done that. For example, we sat down with Continental, the second largest automotive parts supplier in the world, and ideated about how we could redefine the experience of a driver driving along the road. We came up with a data exchange platform that helps car manufacturers exchange data with each other, so that the driver sitting in the car gets new entertainment services that were not possible without a data exchange platform.

Our ideation and our Advance Moment are focused on redefining the experience and stimulating new ideas that are groundbreaking — and are not just copycatting what their peers are doing. And that, of course, will differentiate the value propositions from our customers in a unique way so that they can create new experiences and ultimately new revenue streams.

We’re addressing particular personas within our customer’s organization. That’s because today we see that the product owners in a company are powerful and are always asking themselves, “How can I bring my product to the next level? How can I differentiate my product so that it is not easily comparable with my peers?”

And, of course, the CDOs in customer organizations are looking to orchestrate these initiatives, support the product owners and engineers, and build up the innovation engine with the right initiatives and right ideas. And, of course, when we’re talking about digital business transformation, we end up in the IT department, because it all has to operate somewhere.

So we bring in the experts from the IT department, as well as the CIO, to turn ideas quickly into reality. And turning ideas quickly into something meaningful for our customers is what we designed the Accelerate Moment for.

Gardner: We will move on next to Amos and learn about the Accelerate Moment — moving toward the larger digital transformation value.

Accelerate from ideas into value

Ferrari: When it comes to realizing digital transformation, let me ask you a question, Dana. What do you think is the key problem our customers have?

Gardner: Probably finding ways to get started and then finding realization of value and benefits so that they can prove their initiative is worthwhile.

Ferrari: Yes. Absolutely. It’s a problem of prioritization of investment. They know that they need to invest, they need to do something, and they ask, “Where should I invest first? Should I invest in the big infrastructure first?”

But these decisions can slow things down. Yet time-to-market and speed are the keys today. We all know that this is what is driving the behavior of people in their transformations. And so the key thing is the Accelerate Moment. It’s the Moment where we engage with our customers via workshops.

We enable them to extrapolate from their digital ambition and identify what will enable them to move into the realization of their digital transformation. “Where should I start? What is my journey’s path? What is my path to value?” These are the main questions that the Accelerate Moment answers.

As you can see, this is a part of the entire HPE Digital Next Advisory service, and it enables the customer to move quickly to the realization of benefits. In this engagement, you start with decisions about the use cases and the technology. There are a number of key elements and decisions that the customer is making, and this is where we help them with the Accelerate Moment.

To deliver an Accelerate Moment, we use a number of steps. First, we frame the initiative by having a good discussion about their KPIs. How are you going to measure them? What are the benefits? Because the business is what is driving this — we know that — and we understand how the technology links to the business use case. So we frame the initiative, understand the use cases, and scope out the use cases that advance the key KPIs essential for the customer. That is a key step in the Moment.

Another important thing to understand is that in a digital transformation, a customer is not alone. No customer is really alone in this, and they won’t be successful if they don’t think holistically about their digital ecosystems. A customer is successful when they think about the complete ecosystem — including not only the key internal stakeholders but also the other stakeholders surrounding them. Together, those stakeholders enable them to build new digital value and achieve customer differentiation.

The next step is understanding the depth of technology across our digital journey map. The digital journey map helps customers see beyond just one angle. They may have started only from the IT point of view, or only from the developer point of view, or just the end-user point of view. The reality is that IT is now becoming the value creator. But to be the value creator, they need to consider the technology of the entire company.

They need to consider edge-to-cloud, and data, as a full picture. This is where we can help them through a discussion about seeing the full technology that supports the value. How can you bring value to your full digital transformation?

The last step that we consider in the Accelerate Moment is to identify the elements surrounding your digital transformation that are the key building blocks and that will enable you to execute immediately. Those building blocks are key because they create what we call the minimum viable product.

They should build up a minimum viable product and surround it with the execution to realize the value immediately. They should do that without thinking, “Oh, maybe I need two or three years before I realize that value.” They need to change to asking, “How can I do that in a very short time, by creating something simple and straightforward and putting the key building blocks in place?”

This shows how everything is linked and how we need to best link it all together. How? We link everything together with stories. The stories help our key stakeholders realize what they need to create. They are about the different stakeholders and how those stakeholders see themselves in the future of the digital transformation. This is how we show them the way it is going to be realized.

The end result is that we deliver a number of stories used to assemble the key building blocks. We create a narrative that enables them to see how the applied technology creates value for their company and achieves key growth. This is the Accelerate Moment.

Gardner: Craig, as we’ve been discussing differentiation for your customers, what differentiates HPE Pointnext Services? Why are these four Moments the best way to obtain digital transformation?


Partridge: Differentiation is key for us, as well as for our customers, across a complex and congested landscape of partners that customers can choose from. Some of that differentiation we’ve touched on here. There is no one else in the market, as far as I’m aware, that has the edge-to-cloud digital journey map. It is HPE’s fundamental model, and it allows us to holistically paint the story of not only digital transformation and digital ambition, but also to show you how to do that at the initiative level and how to plug in those building blocks.

I’m not aware of anybody else who, with the maturity of an edge-to-cloud model, can bring digital ambition to life: visualize it through the Activate Moment, orchestrate it through the Align Moment, create differentiation through the Advance Moment, and then get to quicker value with the Accelerate Moment.

Gardner: Craig, for those organizations interested in learning more, how do they get started? Where can they go for resources to gain the ability to innovate and be differentiated?

Partridge: If anybody viewing this has seen something they want to grab on to, something they think can accelerate their own digital ambition, then simply pick up the phone and call HPE and your sales rep. We have sales organizations ranging from dedicated enterprise managers at some of the biggest customers around the world through to our inside-sales organization for small- to medium-sized businesses. Call your HPE sales rep and say the magic words: “I want to engage with a digital adviser, and I’m interested in Digital Next Advisory.” That should be the flag that triggers a conversation with one of our digital advisers around the world.

Finally, there’s an email address. If worse comes to worst, send an email there and we’ll get straight back to you. We make it as easy as possible; just reach out to an HPE digital adviser.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. View the video. Sponsor: Hewlett Packard Enterprise.



How global data availability accelerates collaboration and delivers business insights

The next BriefingsDirect data strategy insights discussion explores the payoffs when enterprises overcome the hurdles of disjointed storage to obtain global data access.

By leveraging the latest in container and storage server technologies, the holy grail of inclusive, comprehensive, and actionable storage can be obtained. And such access extends across all deployment models – from hybrid cloud, to software-as-a-service (SaaS), to distributed data centers, and edge.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Stay with us here to examine the role that comprehensive data storage plays in delivering the rapid insights businesses need for digital business transformation with our guest, Denis Kennelly, General Manager, IBM Storage. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Denis, in our earlier discussions in this three-part series we learned about IBM’s vision for global consistent data, as well as the newest systems forming the foundation for these advances.

But let’s now explore the many value streams gained from obtaining global data access. We hear a lot about the rise of artificial intelligence (AI) adoption needed to support digital businesses. So what role does a modern storage capability — particularly with a global access function and value — play in that AI growth? 

Kennelly: As enterprises become increasingly digitally transformed, the amount of data they generate is enormous. IDC predicts something like 42 billion connected Internet of Things (IoT) devices by 2025, so the role of storage can no longer be confined to the data center. Storage needs to be distributed across this entire hybrid cloud environment.

Discover and share AI data

For actionable AI, you want to build models on all of the data that’s been generated across this environment. Being able to discover and understand that data is critical, and that’s why it’s a key part of our storage capabilities. You need to run that storage on all of these highly distributed environments in a seamless fashion. You could be running anywhere — the data center, the public cloud, and at edge locations. But you want to have the same software and capabilities for all of these locations to allow for that essential seamless access.

That’s critical to enabling an AI journey because AI doesn’t just operate on the data sitting in a public cloud or data center. It needs to operate on all of the data if you want to get the best insights. You must get to the data from all of these locations and bring it together in a seamless manner.
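The idea that AI should operate on all of the data, wherever it lives, can be sketched in a few lines: each site exposes its records as a stream, and a global statistic is computed without bulk-copying any shard to a central location. The site names and values below are invented for illustration; they are not from IBM's products.

```python
# Hypothetical sketch: compute one global statistic across data shards
# that stay where they are. Each "site" exposes an iterator over its
# records; nothing is copied wholesale to a central store.

def streaming_mean(sites: dict[str, list[float]]) -> float:
    total, count = 0.0, 0
    for name, records in sites.items():
        for value in records:   # in practice, read remotely as a stream
            total += value
            count += 1
    return total / count

sites = {
    "data-center": [10.0, 12.0],
    "public-cloud": [11.0],
    "edge-plant-7": [9.0, 13.0, 11.0],
}
global_mean = streaming_mean(sites)   # aggregate over all six samples
```

The same pattern generalizes to richer statistics (counts, histograms, gradient updates) so long as they can be accumulated incrementally.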

Gardner: When we’re able to attain such global availability of data — particularly in a consistent context – how does that accelerate AI adoption? Are there particular use cases, perhaps around DevOps? How do people change their behavior when it comes to AI adoption, thanks to what the storage and data consistency can do for them?

Kennelly: First it’s about knowing where the data is and doing basic discovery. And that’s a non-trivial task because data is being generated across the enterprise. We are increasingly collaborating remotely and that generates a lot of extended data. Being able to access and share that data across environments is a critical requirement. It’s something that’s very important to us. 

Then — as you discover and share the data – you can also bring that data together into use by AI models. You can use it to actually generate better AI models across the various tiers of storage. But you don’t want to just end up saying, “Okay, I discovered all of the data. I’m going to move it to this certain location and then I’m going to run my analytics on it.”

Part 1 in the IBM Storage innovation series

Part 2 in the series 

Instead, you want to do the analytics in real time and in a distributed fashion. And that’s what’s critical about the next level of storage.

Coming back to what’s hindering AI adoption, number one is data discovery, because enterprises spend a huge amount of time just discovering the data. And when you get access, it needs to be seamless access. And then, of course, as you build your AI models, you need to infuse those analytics into the applications and capabilities that you’re developing.

And that leads to your question around DevOps, to be able to integrate the processes of generating and building AI models into the application development process so that we make sure the application developers can leverage those insights for the applications they are building.

Gardner: For many organizations, moving to hybrid cloud has been about application portability. But when it comes to the additional data mobility we gain from consistent global data access, there’s a potential greater value. Is there a second shoe to fall, if you will, Denis, when we can apply such data mobility in a hybrid cloud environment?

Access data across hybrid cloud 

Kennelly: Yes, and that second shoe is about to fall. The first part of our collective cloud journey was all about moving everything to the public cloud and building applications with cloud-based data.

What we discovered in doing that is that life is not so simple, and we’re really now in a hybrid cloud world for many reasons. Because of that reality, we now need the hybrid cloud approach.

The need for more cloud portability has led to technologies like containers to get portability across all of the environments — from data centers to clouds. As we roll out containers into production, however, the whole question of data becomes even more critical.


You can now build an application that runs in a certain environment, and containers allow you to move that application to other environments very quickly. But if the data doesn’t follow — if the data access doesn’t follow that application seamlessly — then you face some serious challenges and problems.

And that is the next shoe to drop, and it’s dropping right now. As we roll out these sophisticated applications into production, being able to copy data or get access to data across this hybrid cloud environment is the biggest challenge the industry is facing.

Gardner: When we envision such expansive data mobility, we often think about location, but it also impacts the type of data – be it file, block, and object storage, for example. Why must there be global access geographically — but also in terms of the storage type and across the underlying technology platforms? 

Kennelly: We really have to hide that layer of complexity, the storage type and platform, from the application developer. At the end of the day, the application developer is looking for a consistent API through which to access the data services, whether that’s file, block, or object. They shouldn’t have to care about that level of detail.


It’s important that there’s a focus on consistent access via APIs for the developer. Then the storage subsystem has to take care of the federated, global access to the data. Also, as you generate data, the storage subsystem should scale horizontally.

These are the design principles we have put into the IBM Storage platform. Number one, you get seamless and consistent access, be it file, object, or block storage. Number two, we can scale horizontally as you generate data across that hybrid cloud environment.
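The point about hiding file, block, and object details behind one consistent API can be illustrated with a small routing layer. The class and backend names below are hypothetical stand-ins, not IBM's actual interfaces.

```python
# Illustrative sketch: application code uses one read/write API while a
# router dispatches "scheme://key" URLs to different storage backends.
from __future__ import annotations
import pathlib

class LocalBackend:
    """File-style backend rooted at a local directory."""
    def __init__(self, root: str):
        self.root = pathlib.Path(root)
    def read(self, key: str) -> bytes:
        return (self.root / key).read_bytes()
    def write(self, key: str, data: bytes) -> None:
        path = self.root / key
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_bytes(data)

class ObjectBackend:
    """Object-style backend; an in-memory dict stands in for a bucket."""
    def __init__(self):
        self.blobs: dict[str, bytes] = {}
    def read(self, key: str) -> bytes:
        return self.blobs[key]
    def write(self, key: str, data: bytes) -> None:
        self.blobs[key] = data

class DataStore:
    """Routes URLs to backends, so application code is identical
    regardless of where or how the data is stored."""
    def __init__(self):
        self.backends = {}
    def mount(self, scheme: str, backend) -> None:
        self.backends[scheme] = backend
    def _split(self, url: str):
        scheme, _, key = url.partition("://")
        return self.backends[scheme], key
    def read(self, url: str) -> bytes:
        backend, key = self._split(url)
        return backend.read(key)
    def write(self, url: str, data: bytes) -> None:
        backend, key = self._split(url)
        backend.write(key, data)

store = DataStore()
store.mount("mem", ObjectBackend())
store.write("mem://models/v1.bin", b"weights")
```

The developer-facing calls (`read`, `write`) never change; only the mounted backends do, which is the essence of the "consistent API" argument.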

Gardner: The good news is that global data access can now be enabled with greater ease. The bad news is that such access can be enabled anywhere, anytime, and with ease.

And so we have to also worry about access, security, permissions, and regulatory compliance issues. How do you open the floodgates, in a sense, for common access to distributed data, but at the same time put in the guardrails that allow for the management of that access in a responsible way?

Global data access opens doors

Kennelly: That’s a great question. As we introduce simplicity and ease of data access, we can’t just open it up to everybody. We have to make sure we have good authentication as part of the design, using things like two-factor authentication on the data-access APIs.
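As one concrete illustration of the two-factor checks mentioned here, the HOTP algorithm from RFC 4226 underlies many one-time-code schemes. This is a generic sketch of that published standard, not IBM's implementation.

```python
# Sketch of HOTP (RFC 4226): HMAC-SHA1 over a big-endian counter,
# dynamically truncated to a short numeric code. Shown only to
# illustrate the kind of second factor a data-access API might require.
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    msg = struct.pack(">Q", counter)                      # 8-byte counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# RFC 4226 Appendix D test vector: secret "12345678901234567890"
code0 = hotp(b"12345678901234567890", 0)
```

A time-based variant (TOTP) simply derives the counter from the current clock interval.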

But that’s only half of the problem. In the security world, the unfortunate acceptance is that you probably are going to get breached. It’s in how you respond that really differentiates you and determines how quickly you can get the business back on its feet.

And so, when something bad happens, another critical role for the storage subsystem is access control to the persistent storage. At the end of the day, that is what people are after. Being able to understand the typical behavior of those storage systems, and how data is usually being stored, forms a baseline against which you can understand when something out of the ordinary is happening.


Clearly, if you’re under a malware or CryptoLocker attack, you see a very different input/output (I/O) pattern than you would normally see. We can detect that in real time, understand when it happens, and make sure you have protected copies of the data so you can quickly access them and get the business back online.

Why is all of that important? Because we live in a world where it’s not a case of if it will happen, but when. How we respond is critical.
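The baseline-and-deviation idea can be sketched simply: keep a rolling window of write rates and flag any sample that strays several standard deviations from the norm, the kind of signal a CryptoLocker-style write burst produces. The window size and threshold below are illustrative choices, not product parameters.

```python
# Illustrative anomaly detector over I/O samples: a rolling baseline of
# writes-per-second, with a z-score check against new observations.
from collections import deque
import statistics

class IOBaseline:
    def __init__(self, window: int = 60, threshold: float = 3.0):
        self.samples = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, writes_per_sec: float) -> bool:
        """Record a sample; return True if it looks anomalous."""
        anomalous = False
        if len(self.samples) >= 10:            # need some history first
            mean = statistics.fmean(self.samples)
            stdev = statistics.pstdev(self.samples) or 1.0
            anomalous = abs(writes_per_sec - mean) / stdev > self.threshold
        self.samples.append(writes_per_sec)
        return anomalous

mon = IOBaseline()
for _ in range(10):                 # warm up with normal traffic
    mon.observe(95.0)
    mon.observe(105.0)
steady = mon.observe(102.0)         # near baseline
spike = mon.observe(10000.0)        # encryption-style write burst
```

In a real system the flagged event would trigger snapshotting or access revocation rather than just a boolean.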

Gardner: Denis, throughout our three-part series we’ve been discussing what we can do, but we haven’t necessarily delved into specific use cases. I know you can’t always name businesses and reference customers, but how can we better understand the benefits of a global data access capability in the context of use cases?

In practice, when the rubber hits the road, how does global data storage access enable business transformation? Is there a key metric you look for to show how well your storage systems support business outcomes? 

Global data storage success

Kennelly: We’re at a point right now when customers are looking to drive new business models and to move much more quickly in their hybrid cloud environments.

There are enabling technologies right now facilitating that. There’s a lot of talk about edge with the advent of 5G networks, which enable a lot of this to happen. When you talk about seamless access and the capability to distribute data across these environments, you need the underlying network infrastructure to make that happen.


As we do that, we’re looking at a number of key business measures and metrics. We have done some independent surveys and analysis looking at the business value that we drive for our clients with a hybrid cloud platform and things like portability, agility, and seamless data access.

In terms of business value, we have four or five measures. For example, we can drive roughly 2.5 times more business value for our clients — everything from top-line growth to operational savings. And that’s something that we have tested with many clients independently.

One example that’s very relevant in the world we live in today: we have a cloud provider that needed more federated access to its global data, but also wanted to distribute that data through edge nodes in a consistent manner. That’s just one instance of this happening in action.

Gardner: You know, some of the major consumers of analytics in businesses these days are data scientists, and they don’t always want to know what’s going on underneath the covers. On the other hand, what goes on underneath the covers can greatly impact how well they can do their jobs, which are often essential to digital business transformation.


For you to address a data scientist specifically about why global access for data and storage modernization is key, what would you tell them? How do you describe the value that you’re providing to someone like a data scientist who plays such a key role in analytics?

Kennelly: Well, data scientists talk a lot about data sets. They want access to data sets so they can test their hypothesis very quickly. In a nutshell, we surface data sets quicker and faster than anybody else at a price performance that leads the industry — and that’s what we do every day to enable data scientists.

Gardner: Throughout our series of three storage strategy discussions, we’ve talked about how we got here and what we’re doing. But we haven’t yet talked about what comes next.

These enabling technologies not only satisfy business imperatives and requirements now but set up organizations to be even more intelligent over time. Let’s look to the future for the expanding values when you do data access globally and across hybrid clouds well. 

Insight-filled future drives growth

Kennelly: Yes, you get to critically look at current and new business models. At the end of the day, this is about driving business growth. As you start to look at these environments — and we’ve talked a lot about analytics and data – it becomes about getting competitive advantage through real-time insights about what’s going on in your environments.

You become able to better understand your supply chain, what’s happening in certain products, and in certain manufacturing lines. You’re able to respond accordingly. There’s a big operational benefit in terms of savings. You don’t have to have excess capacity in the environment.


Also, in seeking new business opportunities, you will detect the patterns needed to gain insights you hadn’t had before by applying analytics and machine learning to what’s critical in your systems and markets. If you move your IT environment and centralize everything in one cloud, for example, that really hinders the progress.

By being able to do that with all of the data as it’s generated, in real time, you get truly unique insights that provide competitive advantage.

Gardner: And lastly, why IBM? What sets you apart from the competition in the storage market for obtaining these larger goals of distributed analytics, intelligence, and competitiveness?

Kennelly: We have shown over the years that we have been at the forefront of many transformations of businesses and industries. Going back to the electronic typewriter, if we want to go back far enough, or now to our business-to-business (B2B) or business-to-employee (B2E) models in the hybrid cloud — IBM has helped businesses make these transformations. That includes everything from storage to data and AI through to hybrid cloud platforms, with Red Hat Enterprise Linux, and right out to our business service consulting.

IBM has the end-to-end capabilities to make that all happen. It positions us as an ideal partner who can do so much.

I love to talk about storage and the value of storage, and I spend a lot of time talking with people in our business consulting group to understand the business transformations that clients are trying to drive and the role that storage has in that. Likewise, with our data science and data analytics teams that are enabling those technologies.

The combination of all of those capabilities as one idea is a unique differentiator for us in the industry. And it’s why we are developing the leading edge capabilities, products, and technology to enable the next digital transformations.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: IBM Storage.



How consistent storage services across all tiers and platforms attain data simplicity, compatibility, and lower cost

This BriefingsDirect Data Strategies Insights discussion series, Part 2, explores the latest technologies and products delivering common data services across today’s hybrid cloud, distributed data centers, and burgeoning edge landscapes.

New advances in storage technologies, standards, and methods have changed the game when it comes to overcoming the obstacles businesses too often face when seeking pervasive analytics across their systems and services. 

Stay with us now as we examine how IBM Storage is leveraging containers and the latest storage advances to deliver inclusive, comprehensive, and actionable storage.  

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To learn more about the future of storage strategies that accelerate digital transformation, please welcome Denis Kennelly, General Manager, IBM Storage. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: In our earlier discussion we learned about the business needs and IBM’s large-scale vision for global, consistent data. Let’s now delve beneath the covers into what enables this new era of data-driven business transformation. 

In our last discussion, we also talked about containers — how they had been typically relegated to application development. What should businesses know about the value of containers more broadly within the storage arena as well as across other elements of IT?

Containers for ease, efficiency

Kennelly: Sometimes we talk about containers as being unique to application development, but I think the real business value of containers is in the operational simplicity and cost savings. 

When you build applications on containers, they are container-aware. When you look at Kubernetes and the controls you have there as an operations IT person, you can scale up and scale down your applications seamlessly. 

As we think about that and about storage, we have to include storage under that umbrella. Traditionally, storage did a lot of that work independently. Now we are in a much more integrated environment where you have cloud-like behaviors. And you want to deliver those cloud-like behaviors end-to-end — be it for the applications, for the data, for the storage, and even for the network — right across the board. That way you can have a much more seamless, easier, and operationally efficient way of running your environment.

Containers are much more than just an application development tool; they are a key enabler to operational improvement across the board.

Gardner: Because hybrid cloud and multi-cloud environments are essential for digital business transformation, what does this container value bring to bridging the hybrid gap? How do containers lead to a consistent and actionable environment, without integrations and complexity thwarting wider use of assets around the globe?

Kennelly: Let’s talk about what a hybrid cloud is. To me, a hybrid cloud is the ability to run workloads on a public cloud, on a private cloud or traditional data center, and even right out to edge locations in your enterprise where there are no IT people whatsoever.

Being able to do that consistently across that environment — that’s what containers bring. They allow a layer of abstraction above the target environment, be it a bare-metal server, a virtual machine (VM), or a cloud service – and you can do that seamlessly across those environments.

That’s what a hybrid cloud platform is and what enables that are containers and being able to have a seamless runtime across this entire environment.

And that’s core to digital transformation, because when we think about where we are today as an enterprise, we still have assets sitting in the data center. Typically, horizontal business processes, such as human resources or sales, might move to a software-as-a-service (SaaS) capability while you still retain your core, differentiating business processes.

For compliance or regulatory reasons, you may need to keep those assets in the data center. Maybe you can move some pieces. But at the same time, you want to have the level of efficiency you gain from cloud-like economics. You want to be able to respond to business needs, to scale up and scale down the environment, and not design the environment for a worst-case scenario. 

That’s why a hybrid cloud platform is so critical. And underneath that, why containers are a key enabler. Then, if you think about the data in storage, you want to seamlessly integrate that into a hybrid environment as well.

Gardner: Of course, the hybrid cloud environment extends these days more broadly with the connected edge included. For many organizations the edge increasingly allows real-time analytics capabilities by taking advantage of having compute in so many more environments and closer to so many more devices.

What is it about the IBM hybrid storage vision that allows for more data to reside at the edge without having to move it into a cloud, analyze it there, and move it back? How are containers enabling more data to stay local and still be part of a coordinated whole greater than the sum of the parts?

Data and analytics at the edge

Kennelly: As an industry, we go from being centralized to decentralized, in what I call a pendulum movement every few years. Think back to the mainframe era, where everything was very centralized. Then we went to distributed systems and decentralized everything.

With cloud we began to recentralize everything again. And now we are moving our clouds back out to the edge for a lot of reasons, largely because of egress and ingress costs and the inefficiency of moving more and more of that data.


When I think about edge, I am not necessarily thinking about Internet of things (IoT) devices or sensors, but in a lot of cases this is about branch and remote locations. That’s where a core part of the enterprise operates, but not necessarily with an IT team there. And that part of the enterprise is generating data from what’s happening in that facility, be it a manufacturing plant, a distribution center, or many others.

As you generate that data, you also want to generate the analytics that are key to understanding how the business is reacting and responding. Do you want to move all that data to a central cloud to run analytics, and then take the result back out to that distribution center? You can do that, but it’s highly inefficient — and very costly. 

What our clients are asking for is to keep the data out at these locations and to run the analytics locally. But, of course, with all of the analytics you still want to share some of that data with a central cloud.

So, what’s really important is that you can share across this entire environment, be it from a central data center or a central cloud out to an edge location and provide what we call seamless access across this environment. 

With our technology, such as IBM Spectrum Scale, you gain that seamless access. We abstract the data access so it appears you are accessing the data locally, even when it sits back in the cloud. The application really doesn’t care where the data resides. That seamless access is core to what we are doing.
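The edge pattern described here (run the analytics where the data is generated, and share only what is needed centrally) can be sketched as a summarize-then-merge pipeline. The sites and readings below are invented for illustration.

```python
# Illustrative sketch: each edge site reduces its raw readings to a
# small summary; only the summaries travel to the central cloud,
# where they are merged into a global view.

def summarize(readings: list[float]) -> dict:
    return {"n": len(readings), "sum": sum(readings), "max": max(readings)}

def merge(summaries: list[dict]) -> dict:
    n = sum(s["n"] for s in summaries)
    total = sum(s["sum"] for s in summaries)
    return {"n": n, "mean": total / n, "max": max(s["max"] for s in summaries)}

plant = summarize([4.0, 6.0])   # stays at the manufacturing plant
depot = summarize([5.0])        # stays at the distribution center
central = merge([plant, depot]) # only a few numbers per site move
```

Shipping three numbers per site instead of the raw readings is the efficiency argument in miniature.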

Gardner: The IBM Storage portfolio is broad and venerable. It includes flash, disk, and tape, which continues to have many viable use cases. So, let’s talk about the products and how they extend the consistency and commonality that we have talked about and how that portfolio then buttresses the larger hybrid storage vision.

Storage supports all environments 

Kennelly: One of the key design points of our portfolio, particularly our flash line, is being able to run in all environments. We have one software code base across our entire portfolio. That code runs on our disk subsystems and disk controllers, but it can also run on your platform of choice; we absolutely support all platforms across the board. That’s one design principle.

Secondly, we embrace containers very heavily. And being able to run on containers and provide data services across those containers provides that seamless access that I talked about. That’s a second major design principle.

Yet as we look at our storage portfolio, we also want to make sure we optimize the storage and optimize the spend by the customer by tiered storage and being able to move data across those different tiers of storage.


You mentioned tape storage. So, for example, at times you may want to move from fast, always-on, high-end storage to a lower tier of less expensive storage such as tape, maybe for data-retention reasons. You may then need an air-gap solution and want to move the data to what we call cold storage, that is, tape. We support that capability and can manage your data across that environment.

There are three core design principles to our IBM Storage portfolio. Number one, we can run seamlessly across these environments. Number two, we provide seamless access to the data across those environments. And number three, we support optimization of the storage for the use case at hand, such as being able to tier the storage to your economic and workload needs.
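The tiering principle can be sketched as a small placement policy: each object's tier follows its access recency and retention needs. The tier names and thresholds here are illustrative choices, not IBM product defaults.

```python
# Illustrative tiering policy: choose a storage tier from how recently
# an object was accessed and whether it is a retention copy.

def pick_tier(days_since_access: int, retention_copy: bool) -> str:
    if retention_copy:
        return "tape"    # air-gapped cold copy kept for retention
    if days_since_access <= 7:
        return "flash"   # hot, always-on data
    if days_since_access <= 90:
        return "disk"    # warm capacity tier
    return "tape"        # cold data migrates to the cheapest tier

placement = pick_tier(days_since_access=1, retention_copy=False)
```

A real policy engine would also weigh throughput targets and cost per gigabyte, but the decision shape is the same.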

Gardner: Of course, what people are also interested in these days is FlashSystem performance. Tell us about some of the latest and greatest when it comes to FlashSystem. You have the new 5200, the high-end 9200, and those also complement some of your other products like the ESS 3200.

Flash provides best performance

Kennelly: Yes, we continue to expand the portfolio. With the FlashSystems, and some of our recent launches, some things don’t change. We’re still able to run across these different environments.

But in terms of price-performance, especially with the work we have done around our flash technology, we have optimized our storage subsystems to use standard flash technologies. In terms of price for throughput, when we look at this against our competitors, we offer twice the performance for roughly half the price. And this has been proven as we look at our competitors’ technology. That’s due to leveraging our innovations around what we call the FlashCore Module, wherein we are able to use standard flash in those disk drives and enable compression on the fly. That’s driving the roadmap in terms of throughput and performance at a very, very competitive price point.
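The general idea of compressing data inline on the write path can be sketched with a software codec. Here zlib stands in for the hardware compression Kennelly attributes to the FlashCore Module, so this is illustrative only.

```python
# Illustrative sketch: compress each block as it is written and
# decompress transparently on read, trading a little CPU (or, in
# hardware, dedicated logic) for effective capacity.
import zlib

def write_compressed(block: bytes) -> bytes:
    return zlib.compress(block, level=6)

def read_decompressed(stored: bytes) -> bytes:
    return zlib.decompress(stored)

payload = b"sensor-reading:42;" * 200   # repetitive data compresses well
stored = write_compressed(payload)
```

The round trip is lossless, and for redundant data the stored form is far smaller than the logical payload, which is where the effective price-per-terabyte gain comes from.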

Gardner: Many of our readers and listeners, Denis, are focused on their digital business transformation. They might not be familiar with some of these underlying technological advances, particularly end-to-end Non-Volatile Memory Express (NVMe). So why are these systems doing things that just weren’t possible before?


Kennelly: A lot of it comes down to where the technology is today and the price points that we can get from flash from our vendors. And that’s why we are optimizing our flash roadmap and our flash drives within these systems. It’s really pushing the envelope in terms of performance and throughput across our flash platforms.

Gardner: The desired end-product for many organizations is better and pervasive analytics. And one of the great things about artificial intelligence (AI) and machine learning (ML) is it’s not only an output — it’s a feature of the process of enhancing storage and IT.

How are IT systems and storage using AI inside these devices and across these solutions? What is AI bringing to enable better storage performance at a lower price point?

Kennelly: We continue to optimize what we can do in our flash technology, as I said. But when you embark on an AI project, something like 70 to 80 percent of the spend is around discovery, gaining access to the data, and finding out where the data assets are. And we have capabilities like IBM Spectrum Discover that help catalog and understand where the data is and how to access that data. It’s a critical piece of our portfolio on that journey to AI.

We also have integrations with AI services like Cloudera out of the box so that we can seamlessly integrate with those platforms and help those platforms differentiate using our Spectrum Scale technology.

So, in terms of AI, we have some key enablers to help accelerate AI projects through discovery and integration with the big AI platforms.

Gardner: And these new storage platforms are knocking off some impressive numbers around high availability and low latency. We are also seeing a great deal of consolidation around storage arrays and managing storage as a single pool. 

On the economics of the IBM FlashSystem approach, these performance attributes are also being enhanced by reducing operational costs and moving from CapEx to OpEx purchasing.

Storage-as-a-service delivers

Kennelly: Yes, there is no question we are moving toward an OpEx model. When I talked about cloud economics and cloud-like flexibility at a technology level, that's only one side of the equation.

On the business side, IT is demanding cloud consumption models, OpEx-type models, and pay-as-you-go. It’s not just a pure financial equation, it’s also how you consume the technology. And storage is no different. This is why we are doing a lot of innovation around storage-as-a-service. But what does that really mean? 

It means you ask for a service. “I need a certain type of storage with this type of availability, this type of performance, and this type of throughput.” Then we as a storage vendor take care of all the details behind that. We get the actual devices on the floor that meet those requirements and manage that. 

As those assets depreciate over a number of years, we replace and update those assets in a seamless manner to the client.

As the storage sits in the data center, maybe the customer says, “I want to move some of that data to a cloud instance.” We also offer a seamless capability to move the data over to the cloud and run that service on the cloud. 

We already have all the technology to do that and the platform support for all of those environments. What we are working on now is making sure we have a seamless consumption model and the business processes of delivering that storage-as-a-service, and how to replace and upgrade that storage over time — while making it all seamless to the client. 

I see storage moving quickly to this new storage consumption model, a pure OpEx model. That’s where we as an industry will go over the next few years.

Gardner: Another big element of reducing your total cost of ownership over time is in how well systems can be managed. When you have a common pool approach, a comprehensive portfolio approach, you also gain visibility, a single pane of glass when it comes to managing these systems.

Intelligent insights via storage

Kennelly: That’s an area we continue to invest in heavily. Our IBM Storage Insights platform provides tremendous insights in how the storage subsystems are running operationally. It also provides insights within the storage in terms of where you have space constraints or where you may need to expand. 

But that’s not just a manual dashboard that we present to an operator. We are also infusing AI quite heavily into that platform and using AIOps to integrate with Storage Insights to run storage operations at much lower costs and with more automation.

And we can do that in a consistent manner right across the environments, whether it's a flash storage array, mainframe-attached storage, or a tape device. It's all seamless across the environment. You can see those tiers of storage as one platform, and so you are able to respond quickly to events and understand them as they are happening.

Gardner: As we close out, Denis, for many organizations hybrid cloud means that they don’t always know what’s coming and lack control over predicting their IT requirements. Deciding in advance how things get deployed isn’t always an option.

How do the IBM FlashSystems, and your recent announcements in February 2021, provide a path to a crawl-walk-run adoption approach? How do people begin this journey, regardless of the type and size of their organization?

Kennelly: We are introducing an update to our FlashSystem 5200 platform, which is our entry point. It runs our storage software, IBM Spectrum Virtualize, the same software as in our high-end arrays at the very top of our pyramid of capabilities.

As part of that announcement, we are also supporting other public cloud vendors. So you can run the software on our arrays, or you can move it out to run on a public cloud. You have tremendous flexibility and choice due to the consistent software platform.

And, as I said, it's our entry point, so the price is very competitive. This is a part of the market where we see tremendous growth. You can experience the best of the IBM Storage platform at a low-cost entry point, but also gain tremendous flexibility. You can scale up the environment within your data center and extend the same capabilities right out across the hybrid cloud.

There has been tremendous innovation by the IBM team to make sure that our software supports this myriad of platforms, but also at a price point that is the sweet spot of what customers are asking for now.

Gardner: It strikes me that we are on the vanguard of some major new advances in storage, but they are not just relegated to the largest enterprises. Even the smallest enterprises can take advantage and exploit these great technologies and storage benefits.

Kennelly: Absolutely. When we look at the storage market, the fastest-growing part is at that lower price point, at unit costs below $50K to $100K. That's where we see tremendous growth in the market, and we are serving it very well and very efficiently with our platforms. And, of course, as people want to scale and grow, they can do that in a consistent and predictable manner.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: IBM Storage.

How storage advances help businesses digitally transform across a hybrid cloud world

The next BriefingsDirect data strategies insights discussion explores how consistent and global storage models can best propel pervasive analytics and support digital business transformation.

Decades of disparate and uncoordinated storage solutions have hindered enterprises’ ability to gain common data services across today’s hybrid cloud, distributed data centers, and burgeoning edge landscapes.

Yet only a comprehensive data storage model that includes all platforms, data types, and deployment architectures will deliver the rapid insights that businesses need.

Stay with us to examine how IBM Storage is leveraging containers and the latest storage advances to deliver the holy grail of inclusive, comprehensive, and actionable storage.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about the future promise of the storage strategies that accelerate digital transformation, please welcome Denis Kennelly, General Manager, IBM Storage. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Clearly the world is transforming digitally. And hybrid cloud is helping in that transition. But what role specifically does storage play in allowing hybrid cloud to function in a way that bolsters and even accelerates digital transformation?

Kennelly: As you said, the world is undergoing a digital transformation, and that is accelerating in the current climate of a COVID-19 world. And, really, it comes down to having an IT infrastructure that is flexible, agile, has cloud-like attributes, is open, and delivers the economic value that we all need.

That is why we at IBM have a common hybrid cloud strategy. A hybrid cloud approach, we now know, is 2.5 times more economical than a public cloud-only strategy. And why is that? Because as customers transform — and transform their existing systems — the data and systems sit on-premises for a long time. As you move to the public cloud, the cost of transformation has to overcome other constraints such as data sovereignty and compliance. This is why hybrid cloud is a key enabler.

Hybrid cloud for transformation

Now, underpinning that, the core building block of the hybrid cloud platform is containers and Kubernetes, using our OpenShift technology. That's the key enabler of the hybrid cloud architecture and how we move applications and data within that environment.

As the customer starts to transform and looks at the applications and workloads moving to this new world, being able to access the data, and to keep that access, is a critical step in the journey. Integrating storage into the world of containers is therefore a key building block, and one we are very focused on today.

Storage is where you capture all that state, where all the data is stored. When you think about cloud, hybrid cloud, and containers — you think stateless. You think about cloud-like economics as you scale up and scale down. Our focus is bridging those two worlds and making sure that they come together seamlessly. To that end, we provide an end-to-end hybrid cloud architecture to help those customers in their transformation journeys.
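That bridge between stateless containers and stateful storage shows up concretely in how a containerized application asks for state. As a minimal sketch using standard Kubernetes objects (not IBM-specific configuration; the image name is a placeholder), the pod stays disposable while a PersistentVolumeClaim carries the state:

```yaml
# A container requests durable state through a PersistentVolumeClaim;
# the cluster's storage layer provisions and attaches the volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 50Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: stateful-app
spec:
  containers:
    - name: app
      image: registry.example.com/app:latest   # placeholder image
      volumeMounts:
        - mountPath: /var/data
          name: data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data
```

The pod can be rescheduled or scaled away, but the claim, and the data behind it, persists, which is the separation of stateless compute from stateful storage that Kennelly describes.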

Gardner: So often in this business, we’re standing on the shoulders of the giants of the past 30 years; the legacy. But sometimes legacy can lead to complexity and becomes a hindrance. What is it about the way storage has evolved up until now that people need to rethink? Why do we need something like containers, which seem like a fairly radical departure?

Kennelly: It comes back to the existing systems. Storage, at the end of the day, was all about the applications and workloads that we ran; it was never storage for storage's sake. We designed and ran applications on servers, and we architected them in a certain fashion.

And, of course, they generated data and we wanted access to that data. That’s just how the world happened. When you get to a hybrid cloud world — I mean, we talk about cloud-like behavior, cloud-like economics — it manifests itself in the ability to respond.

If you’re in a digitally transformed business, you can respond to needs in your supply chain rapidly, maybe to a surge in demand based on certain events. Your infrastructure needs to respond to those needs versus having the maximum throughput capacity that would ever be needed. That’s the benefit cloud has brought to the industry, and why it’s so critically important.

Now, maybe traditionally storage was designed for the worst-case scenario. In this new world, we have to be able to scale up and scale down elastically, as these workloads do, in a cloud-like fashion. That's what has fundamentally changed, and what we need to change in those legacy infrastructures. Then we can deliver more of an as-a-service, consumption-type model to meet the needs of the business.

Gardner: And on that economic front, digitally transformed organizations need data very rapidly, and in greater volumes — with that scalability to easily go up and down. How will the hybrid cloud model supported by containers provide faster data in greater volumes, and with a managed and forecastable economic burden?

Disparate data delivers insights

Kennelly: In a digitally transformed world, data is the raw material to a competitive advantage. Access to data is critical. Based on that data, we can derive insights and unique competitive advantages using artificial intelligence (AI) and other tools. But therein lies the question, right?

When we look at things like AI, a lot of our time and effort is spent on getting access to the data and being able to assemble that data and move it to where it is needed to gain those insights.

Being able to do that rapidly and at a low cost is critical to the storage world. And so that’s what we are very focused on, being able to provide those data services — to discover and access the data seamlessly. And, as required, we can then move the data very rapidly to build on those insights and deliver competitive advantage to a digitally transformed enterprise.

Gardner: Denis, in order to have comprehensive data access and rapidly deliver analytics at an affordable cost, the storage needs to run consistently across a wide variety of different environments — bare-metal, virtual machines (VMs), containers — and then to and from both public and private clouds, as well as the edge.

What is it about the way that IBM is advancing storage that affords this common view, even across that great disparity of environments?

Kennelly: That’s a key design principle for our storage platform, what we call global access or a global file system. We’re going right back to our roots of IBM Research, decades ago where we invented a lot of that technology. And that’s the core of what we’re still talking about today — to be able to have seamless access across disparate environments.

Access is one issue, right? You can get read-access to the data, but you need to do that at high performance and at scale. At the same time, we are generating data at a phenomenal rate, so you need to scale out the storage infrastructure seamlessly. That’s another critical piece of it. We do that with products or capabilities we have today in things like IBM Spectrum Scale.

But another key design principle in our storage platform is being able to run in all of those environments — bare-metal servers, VMs, containers, and right out to edge footprints. So we are making sure our storage platform is designed to support all of those platforms. It has to run on them as well as support the data services — the access services, the mobility services, and the like — seamlessly across those environments. That's what enables the hybrid cloud platform at the core of our transformation strategy.

Gardner: In addition to the focus on the data in production environments, we also should consider the development environment. What does your data vision include across a full life-cycle approach to data, if you will?

Be upfront with data in DevOps

Kennelly: It’s a great point because the business requirements drive the digital transformation strategy. But a lot of these efforts run into inertia when you have to change. The development processes teams within the organization have traditionally done things in a certain way. Now, all of a sudden, they’re building applications for a very different target environment — this hybrid cloud environment, from the public cloud, to the data center, and right out to the edge.

The economics we're trying to drive require flexible platforms across the DevOps tool chain so you can innovate very quickly. That's because digital transformation is all about how quickly you can innovate via such new services. The next question is about the data.

As you develop and build these transformed applications in a modern, DevOps cloud-like development process, you have to integrate your data assets early and make sure you know the data is available — both in that development cycle as well as when you move to production. It’s essential to use things like copy-data-management services to integrate that access into your tool chain in a seamless manner. If you build those applications and ignore the data, then it becomes a shock as you roll it into production.

This is the key issue. A lot of times we can get an application running in one scenario and it looks good, but as you start to extend those services across more environments — and haven’t thought through the data architecture — a lot of the cracks appear. A lot of the problems happen.

You have to design in the data access upfront in your development process and into your tool chains to make sure that’s part of your core development process.

Gardner: Denis, over the past several years we’ve learned that containers appear to be the gift that keeps on giving. One of the nice things about this storage transition, as you’ve described, is that containers were at first a facet of the development environment.

Developers leveraged containers first to solve many problems for runtimes. So it's also important to understand the limits containers had: stateful, persistent storage wasn't part of their early attributes.

How technically have we overcome some of the earlier limits of containers?

Containers create scalable benefits

Kennelly: You’re right, containers have roots in the open-source world. Developers picked up on containers to gain a layer of abstraction. In an operational context, it gives tremendous power because of that abstraction layer. You can quickly scale up and scale down pods and clusters, and you gain cloud-like behaviors very quickly. Even within IBM, we have containerized software and enabled traditional products to have cloud-like behaviors.

We were able to move to a scalable, cloud-like platform very quickly using container technology, which is a tremendous benefit for a developer. We then moved containers into operations to respond to business needs, such as when there's a spike in demand and you need to scale up the environment. Containers are amazing in how quickly and how simply that happens.

Now, with all of that power and the capability to scale up and scale down workloads, you also have a storage system sitting at the back end that has to respond accordingly. That’s because as you scale up more containers, you generate more input/output (IO) demands. How does the storage system respond?

Well, we have managed to integrate containers into the storage ecosystem. But, as an industry, we have some work to do. The integration of storage with containers is not just the simple IO channel to the storage; it also needs to scale out accordingly, and to be managed. It's an area where we at IBM are working closely with our friends at Red Hat to make sure the integration is seamless and gives you consistent, global behavior.
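In Kubernetes terms, that integration point between an external array and the container platform is the Container Storage Interface (CSI): the storage vendor ships a CSI driver, and a StorageClass lets volume claims be provisioned dynamically so the storage scales out with the workload. A sketch with an illustrative provisioner name (not a specific IBM or Red Hat driver):

```yaml
# StorageClass mapping container volume requests to an external array
# through a CSI driver; claims referencing this class are provisioned
# dynamically, so storage scales out with the container workload.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: block-gold
provisioner: example.csi.vendor.com   # illustrative CSI driver name
parameters:
  pool: gold-pool                     # hypothetical array storage pool
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
```

Any PersistentVolumeClaim that names this class gets a volume carved from the array on demand, which is the "more than an IO channel" management integration described above.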

Gardner: With security and cyber-attacks being so prominent in people’s minds in early 2021, what impacts do we get with a comprehensive data strategy when it comes to security? In the past, we had disparate silos of data. Sometimes, bad things could happen between the cracks.

So as we adopt containers consistently is there an overarching security benefit when it comes to having a common data strategy across all of your data and storage types?

Prevent angles of attack

Kennelly: Yes. It goes back to the hybrid cloud platform, with potentially multiple public clouds, data center workloads, edge workloads, and all of the combinations thereof. The new core is containers, but with applications running across that hybrid environment, we've expanded the attack surface beyond the data center.

By expanding the attack surface, unfortunately, we’ve created more opportunities for people to do nefarious things, such as interrupt the applications and get access to the data. But when people attack a system, the cybercriminals are really after the data. Those are the crown jewels of any organization. That’s why this is so critical.

Data protection then requires understanding when somebody is tampering with the data or gaining access to data and doing something nefarious with that data. As we look at our data protection technologies, and as we protect our backups, we can detect if something is out of the ordinary. Integrating that capability into our backups and data protection processes is critical because that’s when we see at a very granular level what’s happening with the data. We can detect if behavioral attributes have changed from incremental backups or over time.

We can also integrate that into business process because, unfortunately, we have to plan for somebody attacking us. It's really about how quickly we can detect an attack and respond to get the systems back online. You have to plan for the worst-case scenario.

That’s why we have such a big focus on making sure we can detect in real time when something is happening as the blocks are literally being written to the disk. We can then also unwind to when we seek a good copy. That’s a huge focus for us right now.

Gardner: When you have a comprehensive data infrastructure, can go global and access data across all of these different environments, it seems to me that you have set yourself up for a pervasive analytics capability, which is the gorilla in the room when it comes to digital business transformation. Denis, how does the IBM Storage vision help bring more pervasive and powerful analytics to better drive a digital business?

Climb the AI Ladder

Kennelly: At the end of the day, that’s what this is all about. It’s about transforming businesses, to drive analytics, and provide unique insights that help grow your business and respond to the needs of the marketplace.

It’s all about enabling top-line growth. And that’s only possible when you can have seamless access to the data very quickly to generate insights literally in real time so you can respond accordingly to your customer needs and improve customer satisfaction.

This platform is all about discovering that data to drive the analytics. We have a phrase within IBM, we call it “The AI Ladder.” The first rung on that AI ladder is about discovering and accessing the data, and then being able to generate models from those analytics that you can use to respond in your business.

We’re all in a world based on data. AI has a major role to play where we can look at business processes and understand how they are operating and then drive greater automation.That’s a huge focus for us — optimizing and automating existing business processes.

We’re all in a world based on data. And we’re using it to not only look for new business opportunities but for optimizing and automating what we already have today. AI has a major role to play where we can look at business processes and understand how they are operating and then, based on analytics and AI, drive greater automation. That’s a huge focus for us as well: Not only looking at the new business opportunities but optimizing and automating existing business processes.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: IBM Storage.

The future of work is happening now thanks to Digital Workplace Services

Businesses, schools, and governments have all had to rethink the proper balance between in-person and remote work. And because that balance is a shifting variable — and may well continue to be for years after the pandemic — it remains essential that the underlying technology be especially agile.

The next BriefingsDirect worker strategies discussion explores how a partnership behind a digital workplace services solution delivers a sliding scale for blended work scenarios. We’ll learn how Unisys, Dell, and their partners provide the time-proof means to secure applications intelligently — regardless of location.

We’ll also hear how an increasingly powerful automation capability makes the digital workplace easier to attain and support.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about the latest in cloud-delivered desktop modernization, please welcome Weston Morris, Global Strategy, Digital Workplace Services, Enterprise Services, at Unisys, and Araceli Lewis, Global Alliance Lead for Unisys at Dell Technologies. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Weston, what are the trends, catalysts, and requirements transforming how desktops and apps are delivered these days?

Morris: We’ve all lived through the hype of virtual desktop infrastructure (VDI). Every year for the last eight or nine years has supposedly been the year of VDI. And this is the year it’s going to happen, right? It had been a slow burn. And VDI has certainly been an important part of the “bag of tricks” that IT brings to bear to provide workers with what they need to be productive.

COVID sends enterprises to cloud

But since the beginning of 2020, we've all seen — because of the COVID-19 pandemic — VDI brought to the forefront as an alternative way of delivering a digital workplace to workers. This has been especially important in environments where enterprises had not invested in mobility or the cloud, or had not thought about letting user data reside outside of their desktop PCs.

Those enterprises had a very difficult time moving to a work-from-home (WFH) model — and they struggled with that. Their first instinct was, "Oh, I need to buy a bunch of laptops." Well, for one thing, everybody wanted laptops at the beginning of the pandemic; for another, they were mostly being made in China — and those factories were shut down. It was impossible to buy a laptop unless you had the foresight to order ahead of time.

And that’s when the “aha” moment came for a lot of enterprises. They said, “Hey, cloud-based virtual desktops — that sounds like the answer, that’s the solution.” And it really is. They could set that up very quickly by spinning up essentially the digital workplace in the cloud and then having their apps and data stream down securely from the cloud to their end users anywhere. That’s been the big “aha” moment that we’ve had as we look at our customer base and enterprises across the world. We’ve done it for our own internal use.

Gardner: Araceli, it sounds like some verticals and in certain organizations they may have waited too long to get into the VDI mindset. But when the pandemic hit, they had to move quickly.

What is about the digital workplace services solution that you all are factoring together that makes this something that can be done quickly?

Lewis: It’s absolutely true that the pandemic elevated digital workplace technology from being a nice-to-have, or a luxury, to being an absolute must-have. We realized after the pandemic struck that public sector, education, and more parts of everyday work needed new and secure ways of working remotely. And it had to become instantaneously available for everyone.

You had every C-level executive across every industry in the United States shifting to the remote model within two weeks to 30 days, and it was also needed globally. Who better than Dell on laptops and these other endpoint devices to partner with Unisys globally to securely deliver digital workspaces to our joint customers? Unisys provided the security capabilities and wrapped those services around the delivery, whereas we at Dell have the end-user devices.

What we’ve seen is that the digitalization of it all can be done in the comfort of everyone’s home. You’re seeing them looking at x-rays, or a nurse looking into someone’s throat via telemedicine, for example. These remote users are also able to troubleshoot something that might be across the world using embedded reality, virtual reality (VR) embedded, and wearables.

We merged and blended all of those technologies into this workspaces environment with the best alliance partners to deliver what the C-level executives wanted immediately.

Gardner: The pandemic has certainly been an accelerant, but many people anticipated more virtual delivery of desktops and apps as inevitable. That’s because when you do it, you get other timely benefits, such as flexible work habits. Millennials tend to prefer location-independence, for example, and there are other benefits during corporate mergers and acquisitions and for dynamic business environments.

So, Weston, what are some of the other drivers that reward people when they make the leap to virtual delivery of apps and desktops?

Take the virtual leap, reap rewards

Morris: I’m thinking back to a conversation I had with you, Araceli, back in March. You were excited and energized around the topic of business continuity, which obviously started with the pandemic.

But, Dana, there are other forces at work that preceded the pandemic and that we know will continue after the pandemic. And mergers and acquisition are a very big one. We see a tremendous amount of activity there in the healthcare space, for example, which was affected in multiple ways by the pandemic. Pharmaceuticals and life sciences as well, there are multiple merger activities going on there.

One of the big challenges in a merger or acquisition is how to quickly get the acquired employees working as first-class citizens as quickly as possible. That’s always been difficult. You either give them two laptops, or two desktops, and say, “Here’s how you do the work in the new company, and here’s where you do the work in the old company.” Or you just pull the plug and say, “Now, you have to figure out how to do everything in a new way in web time, including human resources and all of those procedures in a new environment — and hopefully you will figure it all out.”

But with a cloud-based, virtual desktop capability — especially with cloud-bursting — you can quickly spin up as much capacity as you need and build upon the on-premises capabilities you already have, such as on Dell EMC VxRail, and then explode that into the cloud as needed using VMware Horizon to the Microsoft Azure cloud.

That’s an example of providing a virtual desktop for all of the newly acquired employees for them do their new corporate-citizen stuff while they keep their existing environment and continue to be productive by doing the job you hired them to do when you made the acquisition. That’s a very big use case that we’re going to continue to see going forward.

Gardner: Now, there were a number of historical hurdles to everyone adopting VDI. One of the major use cases was, of course, security: being able to control content by having it centrally located on your servers or in your cloud, rather than stored out on every device. Is that still a driving consideration, Weston? Are people still looking for that added level of security, or has that become passé?

Morris: Security has become even more important throughout the pandemic. In the past, to a large extent, the secure-the-perimeter corporate firewall model worked fairly well. And we've been punching holes in that firewall for several years now.

But with the pandemic — with almost everyone working from home — your office network just exploded. It now extends everywhere. Now you have to worry about how well secured any one person's home network is. Have they changed the default password on their home router? Have they updated its firmware? A lot of these things are beyond what the average worker should have to think about.

But if we separate out the workload and put it into the cloud — so that you have the digital workplace sitting in the cloud — that is much more secure than a device sitting on somebody’s desk connected to a very questionable home network environment.

Gardner: Another challenge in working toward more modern desktop delivery has been cost, because it’s usually been capital-intensive and required upfront investment. But when you modernize via the cloud that can shift.

Araceli, what are some of the challenges that we’re now able to overcome when it comes to the economics of virtual desktop delivery?

Cost benefits of partnering

Lewis: The beautiful thing here is that in our partnership with Unisys and Dell Financial Services (DFS), we’re able to utilize different utility models when it comes to how we consume the technology.

We don’t have to have upfront capital expenditures. We basically look at different ways that we can do server and platform infrastructure. Then we can consume the technology in the most efficient manner, and that works with the books and how we’re going to depreciate. So, that’s extremely flexible.

You don’t have to have upfront capital expenditures. We basically look at different ways that we can do server and platform infrastructure. Then we can consume the technology in the most efficient manner, and that works with the books and how we’re going to depreciate. It’s extremely flexible.

And by partnering with Unisys, they secure those VDI solutions across all three core components: the VDI portion within the data center, the endpoint devices, and, of course, the software. By partnering with Unisys in our alliance ecosystem, we get the best of DFS, Dell Technologies, VMware software, and Unisys security capabilities.

Gardner: Weston, another issue that’s dogged VDI adoption is complexity for the IT department. When we think about VDI, we can’t only think about end users. What has changed for how the IT department deploys infrastructure, especially for a hybrid approach where VDI is delivered both from on-premises data centers as well as the cloud?

Intelligent virtual agents assist IT

Morris: Araceli and I have had several conversations about this. It's an interesting topic. There has always been a lot of work to stand up VDI. If you're starting from scratch, you're thinking about storage, IOPS, and network capacity. Where are my apps? What's the connectivity? How are we going to run it at optimal performance? After all, are the end users happy with the experience they're getting? And how can I even know what their experience is?

And now, all that’s changed thanks to the evolving technology. One is the advent of artificial intelligence (AI) and the use of personal intelligent virtual assistance. At home, we’re used to that, right? We ask AlexaSiri, or Cortana what’s going on with the weather? What’s happening in the news? We ask our virtual assistants all of these things and we expect to be able to get instant answers and help. Why is that not available in the enterprise for IT? Well, the answer is it is now available.

As you can imagine on the provisioning side, wouldn’t it be great if you were able to talk to a virtual assistant that understood the provisioning process? You simply answer questions posed by the assistant. What is it you need to provision? What is your load that you’re looking at? Do you have engineers that need to access virtual desktops? What types of apps might they need? What is the type of security?

Then the virtual assistant understands the business and IT processes to provision the infrastructure needed virtually in the cloud to make that all happen or to cloud-burst from your on-premises Dell VxRail into the cloud.

That is a very important game changer. The other aspect of the intelligent virtual agent is it now resides on the virtual desktop as well. I, as an at-home worker, may have never seen a virtual desktop before. And now, the virtual assistant pops up and guides the home worker through the process of connecting, explaining how their apps work, and saying, “I’m always here. I’m ready to give you help whenever possible.” But I think I’ll defer to the expert here.

Araceli, do you want to talk about the power of the hybrid environment and how that simplifies the infrastructure?

Multiple workloads managed

Lewis: Sure, absolutely. At Dell EMC, we are proud of the fact that Gartner rates us number one, as a leader in the category for pretty much all of the products that we’ve included in this VDI solution. When Unisys and my alliances team get the technology, it’s already been tested from a hyper-converged infrastructure (HCI) perspective. VxRail has been tested, tried-and-true as an automated system in which we combine servers, storage, network, and the software.

That way, Weston and I don't have to worry about what size we're going to use. We already have T-shirt sizes worked out for the number of VDI users needed. We have the graphics-intensive portion thought out. And we can deploy quickly and then put the workloads on them, spinning them up or down or adding more as needed.

We can adjust on the fly. That's a true testament to our HCI being the backbone of the solution. And we don't have to get into all of the testing, regression testing, automation, and self-healing of it. A lot of that management would otherwise have to be done by enterprise IT or by a managed services provider, but it's handled instead by the lifecycle management of the Dell EMC VxRail HCI solution.

That is a huge benefit, the fact that we deliver a solution from the value line and the hypervisor on up. We can then focus on the end-user services, and we don't have to be swapping out components or troubleshooting, because of all the refinement that Dell has done in that technology.

Morris: Araceli, the first time you and your team showed me the cloud-bursting capability, it just blew me away. I know how hard it was in the past to expand any infrastructure. You showed me that every industry and every enterprise has a core base of demand it can assume. So, why not put that on Dell VxRail?

Then, as you need to expand, you cloud-burst into, in this case, Horizon running on Azure. And that can all be done now through a single dashboard. I don't have to be thinking, "Okay, now I have this separate workload in the cloud, and this other workload on my on-premises cloud with VxRail." It's all done through one, single dashboard that can be automated on the back end through a virtual agent, which is pretty cool.

Gardner: It sure seems in hindsight that the timing here was auspicious. Just as the virus was forcing people to rapidly find a virtual desktop solution, you had put together the intelligence and automation along with software-defined infrastructure like HCI. And then you also gained the ease in hybrid by bursting to the cloud.

And so, it seems that the way that you get to a solution like this has never been easier, just when it was needed to be easy for organizations such as small- to medium-sized businesses (SMBs) and verticals like public sector and education. So, was the alliance and partnering, in fact, a positive confluence of timing?

Greater than sum of parts

Morris: Yes. The perfect storm analogy certainly applies. It was great when I got the phone call from Araceli, saying, “Hey, we have this business continuity capability.” We at Unisys had been thinking about business continuity as well.

We looked at the different components that we each brought: Unisys with its Stealth security and its capability to proactively monitor infrastructure and desktops, see what's going on, and automatically fix issues via the intelligent virtual agent and automation. And we realized that this was really a great solution, a much better solution than the individual parts.

We could not make this happen without all of the cool stuff that Dell brings in terms of the HCI, the clients, and, of course, the very powerful VMware-based virtual desktops. And we added to that some things that we have become very good at in our digital workplace transformation. The result is something that can make a real difference for enterprises. You mentioned the public sector and education. Those are great examples of industries that really can benefit from this.

Gardner: Araceli, anything more to offer on how your solution came together, the partners and the constituent parts?

Lewis: Consistent infrastructure and operations, with the help of our partner, Unisys, deliver the services to end users globally. This was just a partnership that had to come together.

We were getting so many requests early during the pandemic, an overwhelming amount of demand from every vertical and industry. We had to rely on Unisys as our trusted partner not only in the public sector but in healthcare and banking.

We at Dell couldn’t do it alone. We needed those data center spaces. We needed the capabilities of their architects and teams to deliver for us. We were getting so many requests early during the pandemic, an overwhelming amount of demand from every C-level suite across the country, and from every vertical and industry. We had to rely on Unisys as our trusted partner not only in the public sector but in healthcare and banking. But we knew if we partnered with them, we could give our community what they needed to get through the pandemic.

Gardner: And among those constituent parts, how important a part is Horizon, and why?

Lewis: VMware Horizon is the glue. It streamlines desktop and app delivery in various ways. The first would be by cloud-bursting. It actually gives us the capability to do that in a very simple fashion.

Secondly, it’s a single pane of glass. It delivers all of the business-critical apps to any device, anywhere on a single screen. So that makes it simple and comprehensive for the IT staff.

We can also deliver non-persistent virtual desktops. The advantage here is that it makes software patching and distribution a whole lot easier. We don't have all the complexity. If there were ever a security concern or issue, we simply blow away that non-persistent virtual desktop and start over. It gets us back to square one, whereas we would otherwise have to spend countless hours on backups and restores to get to a safe state again. So, it pulls everything together: the end user gets a seamless interface, the IT staff avoid the complexity, and we get the best of both worlds as we extend out to the cloud.

Gardner: Weston, on the intelligent agents and bots, do you have an example of how it works in practice? It's really fascinating to me that you're using AI-enabled robotic process automation (RPA) tools to help the IT department set this up. And you're also using it to help end users onboard themselves, get going, and then get ongoing support.

Amelia AI ascertains answers

Morris: It’s an investment we began almost 24 months ago, branded as the Unisys InteliServe platform, which initially was intended to bring AI, automation, and analytics to the service desk. It was designed to improve the service desk experience and make it easier to use, make it scalable, and to learn over time what kinds of problems people needed help solving.

But we realized once we had it in place, “Wow, this intelligent virtual agent can almost be an enterprise personal assistant where it can be trained on anything, on any business process.” So, we’ve been training it on fixing common IT problems … password resets, can’t log in, can’t get to the virtual private network (VPN), Outlook crashes, those types of things. And it does very well at those sorts of activities.

But the core technology is also perfectly suited to be trained for IT processes as well as business processes inside of the enterprise. For example, for this particular scenario of supporting virtual desktops. If a customer has a specific process for provisioning virtual desktops, they may have specific pools of types of virtual desktops, certain capacities, and those can be created ahead of time, ready to go.

Then it’s just a matter of communicating with the intelligent virtual assistant to say, “I need to add more users to this pool,” or, “We need to remove users,” or, “We need to add a whole new pool.” The agent is branded as Amelia. It has a female voice, through it doesn’t have to be, but in most cases, it is.

When we speak with Amelia, she’s able to ask questions that guide the user through the process. They don’t have to know what the process is. They don’t do this very often, right? But she can be trained to be an expert on it.

Amelia collects the information needed, submits it to the RPA that communicates with Horizon, Azure, and the VxRail platforms to provision the virtual desktops as needed. And this can happen very quickly. Whereas in the past, it may have taken days or weeks to spin up a new environment for a new project, or for a merger and acquisition, or in this case, reacting to the pandemic, and getting people able to work from home.
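
To make that hand-off concrete, here is a minimal illustrative sketch, with entirely hypothetical names and numbers (the real Unisys InteliServe, RPA, and VMware Horizon integrations are not depicted), of an assistant's collected answers being validated and shaped into a provisioning request:

```python
# Illustrative sketch only. PROFILES, build_provisioning_request, and the
# payload fields are hypothetical; no real InteliServe, RPA, or Horizon
# API is shown here.

PROFILES = {"task": 2, "knowledge": 4, "engineering": 8}  # assumed vCPUs per desktop type

def build_provisioning_request(pool: str, users: int, profile: str) -> dict:
    """Validate the answers a virtual assistant collected and shape them
    into a request an RPA layer could submit to the VDI platform."""
    if profile not in PROFILES:
        raise ValueError(f"unknown desktop profile: {profile!r}")
    if users <= 0:
        raise ValueError("user count must be positive")
    return {
        "pool": pool,
        "users": users,
        "profile": profile,
        "vcpus_total": users * PROFILES[profile],  # sizing hint for cloud-bursting
    }

request = build_provisioning_request("finance-pool", 250, "knowledge")
```

In practice the assistant's dialog management and the RPA submission would replace the hard-coded call at the end; the point is only that structured, validated answers can drive provisioning in minutes rather than days.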

By the same token, when the end users open up their virtual desktops, they connect to the Horizon workspace, and there is Amelia. She's there, ready to respond to totally different types of questions: "How do I use this?" "Where are my apps?" "This is new to me, what do I do? How do I connect?" "What about working from home?" "Is my VPN connection working, and how do I get it connected properly?" "What about security issues?" There, she's now able to help with the standard end-user issues as well.

Gardner: Araceli, any examples of where this intelligent process automation has played out in the workplace? Do we have some ways of measuring the impact?

Simplify, then measure the impact

Lewis: We do. It’s given us, in certain use cases, the predictability and the benefit of a pay-as-you-grow linear scale, rather than the pay-by-the-seat type of solution. In the past, if we had a state or a government agency where they need, for example, 10,000 seats, we would measure them by the seat. If there’s a situation like a pandemic, or any other type of environment where we have to adjust quickly, how could we deliver 10,000 instances in the past?

Now, using Dell EMC ready-architectures with the technologies we’ve discussed — and with Unisys’ capabilities — we can provide such a rapid and large deployment in a pay-as-you-grow linear scale. We can predict what the pricing is going to be as they need to use it for these public sector agencies and financial firms. In the past, there was a lot of capital expenditures (CapEx). There was a lot of process, a lot of change, and there were just too many unknowns.
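
As a rough sketch of that pricing point, with entirely assumed numbers (no real Dell, DFS, or Unisys pricing is implied), the difference between sizing up front for peak demand and paying linearly for actual use looks like this:

```python
# Assumed, illustrative prices only.
UPFRONT_PER_SEAT = 900.0   # CapEx model: buy capacity for the peak on day one
MONTHLY_PER_SEAT = 35.0    # pay-as-you-grow model: charged per active seat per month

def capex_cost(peak_seats: int) -> float:
    # You must size, and pay, for the peak up front, even if it is rarely reached.
    return UPFRONT_PER_SEAT * peak_seats

def payg_cost(active_seats_by_month: list) -> float:
    # You pay each month only for the seats actually in use: linear and predictable.
    return sum(MONTHLY_PER_SEAT * seats for seats in active_seats_by_month)

# A pandemic-style year: ramp from 1,000 seats to a 10,000-seat peak and back down.
usage = [1000, 4000, 10000, 10000, 8000, 6000, 5000, 4000, 3000, 2000, 1500, 1000]
print(capex_cost(max(usage)))  # cost of sizing for the peak
print(payg_cost(usage))        # cost that scales with actual demand
```

The exact crossover depends entirely on the assumed prices and the shape of demand; the point Lewis makes is the predictability of the linear model when demand spikes and then recedes.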

These modern platforms have simplified the back-end management of the software and its delivery, creating a true platform that we can quantify and measure — not just financially, but from a time-to-delivery perspective as well.

Morris: I have an example of a particular customer where they had a manual process for onboarding. Such onboarding includes multiple steps, one of which is, “Give me my digital workplace.”

But there are other things, too. The training around gaining access to email, for example. That was taking almost 40 hours. Can you imagine a person starting their job, and 40 hours later they finally get the stuff they need to be productive? That’s a lot of downtime.

After using our automation, that transition was down to a little over eight hours. What that means is a person starts filling out their paperwork with HR on day one, gets oriented, and then the next day they have everything they need to be productive. What a big difference. And in the offboarding — it’s even more interesting. What happens when a person leaves the company? Maybe under unfavorable circumstances, we might say.

In the past, the manual processes for this customer took almost 24 hours before everything was turned off. What does that mean? That means that an unhappy, disgruntled employee has 24 hours. They can come in, download content, get access to materials or perhaps be disruptive, or even destructive, with the corporate intellectual property, which is very bad.

Through automation, this offboarding process is now down to six minutes. I mean, that person hasn't even walked out of the room and they've been locked out completely from the IT environment. And that can be done even more quickly in a virtual desktop environment, in which the switch can be thrown immediately and completely. Access is instantly and completely removed from the virtual environment.

Gardner: Araceli, is there a best-of-breed, thin-client hardware approach that you’re using? What about use cases such as graphics-intense or computer-aided design (CAD) applications? What’s the end-point approach for some of these more intense applications?

Viable, virtual, and versatile solutions

Lewis: Being Dell Technologies, that was a perfect question for us, Dana. We understand the personas of the end users. Let's say we're rolling out this technology for an engineering team that does CAD drawings. If you look at the persona, and we partner with Unisys to look at what each end user needs, you can determine whether they need more memory, more processing power, or a more graphics-capable device. We can do that. Our Wyse thin clients can do that, the Wyse 3000s and the 5000s.

But I don’t want to pinpoint one specific type of device per user because we could be talking about a doctor, or we could be talking about a nurse in an intensive care unit. She is going to need something more mobile. We can also provide end-user devices that are ruggedized, maybe in an oil field or in a construction site. So, from an engineering perspective, we can adopt the end-user device to their persona and their needs and we can meet all of those requirements. It’s not a problem.

Gardner: Weston, anything from your vantage point on the diversity and agility of those endpoint devices and why this solution is so versatile?

Morris: There is diversity at both ends. Araceli, you talked about being able, on the back end, to provision and scale the capacity and capability of a virtual desktop up and down to meet each persona's needs.

And then on the end-user side, as you mentioned, Dana, Millennials want choice in how they connect. Am I connecting through my own personal laptop at home? Do I want a thin client when I go back to the office? Do I want to come in through a mobile device? And maybe I want to do all three in the same day, without losing work in between. That is all entirely possible with this infrastructure.

Gardner: Let’s look to the future. We’ve been talking about what’s possible now. But it seems to me that we’ve focused on the very definition of agility: It scales, it’s fast, and it’s automated. It’s applicable across the globe.

What comes next? What can you do with this technology now that you have it in place? It seems to me that we have an opportunity to do even more.

Morris: We’re not backing down from AI and automation. That is here to stay, and it’s going to continue to expand. People have finally realized the power of cloud-based VDI. That is now a very important tool for IT to have in their bag of tricks. They can respond to very specific use cases in a very fast, scalable, and effective way.

In the future we will see that AI continues to provide guidance, not only in the provisioning that we’ve talked about, not only in startup and use on the end-user side — but in providing analytics as to how the entire ecosystem is working. That’s not just the virtual desktops, but the apps that are in the cloud as well and the identity protection. There’s a whole security component that AI has to play a role in. It almost sounds like a pipe dream, but it’s just going to make life better. AI absolutely will do that when it’s used appropriately.

Lewis: I’m looking to the future on how we’re going to live and work in the next five to 10 years. It’s going to be tough to go back to what we were used to. And I’m thinking forward to the Internet of Things (IoT). There’s going to be an explosion of edge devices, of wearables, and how we incorporate all of those technologies will be a part of a persona.

Typically, we’re going to be carrying our work everywhere we go. So, how are we going to integrate all of the wearables? How are we going to make voice recognition more adaptable? VR, AI, robotics, drones — how are we going to tie all of that together?

Nowadays, we tie our home systems, cooling, and heating to all of the things around us so they interoperate. I think that's going to continue to grow exponentially. I'm really excited that we've partnered with Unisys, because we wouldn't want to do something like this without a partner that is so deeply entrenched in these solutions. I'm looking forward to that.

Gardner: What advice would you give to an organization that hasn't yet taken the leap to virtual desktops delivered from a cloud and hybrid environment? What's the best way to get started?

Morris: It’s really important to understand your users, your personas. What are they consuming? How do they want to consume it? What is their connectivity like? You need to understand that, if you’re going to make sure that you can deliver the right digital workplace to them and give them an experience that matters.

Lewis: At Dell Technologies, we know how important it is to retain our top and best talent. And because we’ve been one of the top places to work for the past few years, it’s extremely important to make sure that technology and access to technology help to enable our workforce.

I truly feel that any of our customers or end users who haven't looked at VDI haven't yet realized the benefits: the savings, and keeping a competitive advantage in this fast-paced world. They need to retain their talent, too. To do that, they need to give their employees the best tools and capabilities to be their very best. They have to look at VDI in some way, shape, or form. As soon as we bring it to them — whether for technical, financial, or competitive reasons — it really makes sense. It's not a tough sell at all, Dana.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Unisys and Dell Technologies.


Customer experience management has never been more important or impactful

The next BriefingsDirect digital business innovation discussion explores how companies need to better understand and respond to their markets one subscriber at a time. By better listening inside of their products, businesses can remove the daylight between their digital deliverables and their customers’ impressions.

Stay with us now as we hear from a customer experience (CX) management expert at SAP on the latest ways that discerning customers’ preferences informs digital business imperatives.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about the business of best fulfilling customer wants and needs, please welcome Lisa Bianco, Global Vice President, Experience Management and Advocacy at SAP Procurement Solutions. The interview is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What was the catalyst about five years ago that led you there at SAP Procurement to invest in a team devoted specifically to CX innovation?

Bianco: As a business-to-business (B2B) organization, we recognized that B2B was changing; it was starting to look and feel more like business-to-consumer (B2C). The days of leaders dictating the solutions and products their end users would leverage for day-to-day business — like procurement or finance — were ending. We found we were competing with the experience an end user would have with the products or applications they use in their personal life.

We all know this; we've all been there. There used to be times we would go to work and use the office printer for our kids' birthday flyers, because it was a much better tool than what we had at home. And that has shifted.

But then business leaders were competing with rogue employees using consumer tools rather than SAP Ariba's solution for procurement to buy things for their businesses. And with that maverick spend, companies weren't getting the insights they needed to make decisions. So, we knew that we had to ensure that the end-user experience at work replicated what they might feel at home. It reflected that shift in persona from decision-maker to user.

Gardner: Whether it’s B2B or B2C, there tends to be a group of people out there who are really good at productivity and will find ways to improve things if you only take the chance to listen and follow their lead, right?

Bianco: That’s exactly right.

Gardner: And what was it about B2B in the business environment that was plowing new ground when it came to listening rather than just coming up with a list of requirements, baking it into the software, and throwing it over the wall?

Leaders listen to customer experience

Bianco: The truth is, better listening in B2B resulted in a significant shift for leaders. All of a sudden, a chief procurement officer (CPO) who had made a decision on a procurement solution, or a chief information officer (CIO) who had made a decision on an enterprise resource planning (ERP) solution, was beginning to get flak from cross-functional leaders who were end users and couldn't actually do their jobs.

In B2B we found that we had to start understanding the feelings of employees and the feelings of our customers. And that's not really what you do in B2B, right? Marketing and branding at SAP now say that the future of business has feelings. And that's a shock. I can't tell you how many times I have talked to leaders who say, "I want to switch out the word empathy in our mission statement, because that's not strong leadership in B2B."

But the truth is we had to shift. Society was shifting to that place and understanding that feelings allow us to understand the experiences because the experiences were that of people. We can only make so many decisions based on our operational data, right? You really have to understand the why.

We did have to carve out a new path, and it’s something we still do to this day. Many B2B companies haven’t evolved to an experience management program, because it’s tough. It’s really hard.

Gardner: If we can’t just follow the clicks, and we can’t discern feelings from the raw data, we need to do something more. What do we do? How do we understand why people feel good or bad about what they are doing?

Bianco: We get over that hurdle by having a corporate strategy that puts the customer at the center of all we do. I like to think of it as having a customer-centric decision-making platform. That’s not to say it’s a product. It’s really a shift in mindset that says, “We believe we will be a successful company if our customers’ feelings are positive, if their experiences are great.”

If you look at the disruptors such as Airbnb or Amazon, they prioritize CX over their own objectives as a business and their own business success, things like net-new software sales or renewal targets. They focus on the experiences that their customers have throughout their lifecycle.

That’s a big shift for corporate America because we are so ingrained in producing for the board and we are so ingrained in producing for the investors that oftentimes putting that customer first is secondary. It’s a systemic shift in culture and thinking that tends to be what we see in the emerging companies today as they grab such huge market share. It’s because they shifted that thinking.

Gardner: Right. And when you shift the thinking in the age of social media — and people can share what their impressions are — that becomes a channel and a marketing opportunity in itself. People aren’t in a bubble. They are able to say and even demonstrate in real time what their likes are, what their dislikes are, and that’s obvious to many other people around them.

Customer feedback ecosystem

Bianco: Dana, you are pointing out risk. And it's so true. This year, the disruption that COVID-19 has created is a tectonic shift in the digitalization of customer feedback. Now, via social media and Twitter, if you are not at the forefront of understanding what your customers' feelings are — and what they may or may not say — and you are not doing that proactively, you run the risk of it playing out in a public forum. And the longer that goes unattended, the more you start to lose trust.

When you start to lose trust, it is so much harder to fix than it would have been to understand the problems customers face across their lifecycle, fix those, and make that a priority.

Gardner: Why is this specifically important in procurement? Is there something about procurement, supply chain, and buying that this experience focus is important? Or does it cut across all functions in business?

Bianco: It’s across all functions in business. However, if you look at procurement in the world today, it incorporates a vast ecosystem. It’s one of those functions in business that includes buyers and suppliers. It includes logistics, and it’s complex. It is one of the core areas of a business. When that is disrupted it can have drastic effects on your business.

We saw that in spades this year. It affects your supply chain, where you can have alternative opportunities to regain your momentum after a disruption. It affects your workforce and all of the tools and materials necessary for your company to function when it shifts and moves home. And so with that, we look from SAP’s perspective at these personas that navigate through a multitude of products in your organization. And in procurement, because that ecosystem is there for our customers, understanding the experience of all of those parties allows for customers to make better decisions.

A really good example is one of the world’s largest consulting firms. They took 500,000 employees in offices around the world and found that they had to immediately put them in their homes. They had to make sure they had the products they needed, like computers, green screens, or leisure wear.

They learned what looks good enough on a virtual Zoom meeting. Procurement had to understand what their employees needed within a week's time so that they didn't lose revenue deploying the services that their customers had purchased and relied on them for.

Understanding that lifecycle really helps companies, especially now. The recent disruption made them understand exactly what they needed to do and quickly make decisions to improve experiences and get their business back on track.

Gardner: Well, this is also the year or era of moving toward automation and using data and analytics more, even employing bots and robotic process automation (RPA). Is there something about that tack in our industry now that can be brought to CX management? Is there a synergy between not just doing this manually, but looking to automation and finding new insights using new tools?

Automate customer journeys

Bianco: It’s a really great insight into the future of understanding the experiences of a customer. A couple of things come to mind. As you look at operational data, we have all recognized the importance of having operational data; so usage data, seeing where the clicks are throughout your product. Really documenting customer journey maps.


But if you automate the way you get feedback, you don't just have operational data; you need the feelings to come through as experience data. And that experience data can help drive where automation needs to happen. You can then embed that kind of feedback loop in typical survey-type tools, or embed it right into your systems.

That helps you understand areas where we can remove steps from the process, especially as many companies look to procurement to create automation. The more we can find those repetitive flows and automate them, the better.
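The pairing described here — repetitive operational flows plus experience feedback pointing at the same step — can be sketched in a few lines. This is a hypothetical illustration (the event names, sentiment scores, and scoring formula are invented for the example), not SAP's actual method:

```python
from collections import Counter

# Hypothetical operational data: click-stream events per process step.
usage_events = ["approve_invoice", "approve_invoice", "approve_invoice",
                "create_po", "approve_invoice", "create_po", "receive_goods"]

# Hypothetical experience data: sentiment-scored feedback per step
# (-1 very negative .. +1 very positive).
feedback = {
    "approve_invoice": [-0.8, -0.5, -0.6],
    "create_po": [0.2, -0.1],
    "receive_goods": [0.4],
}

def automation_candidates(events, feedback, top_n=2):
    """Rank steps that are both highly repetitive and poorly rated:
    score = repetition count * (1 - mean sentiment)."""
    counts = Counter(events)
    scores = {}
    for step, n in counts.items():
        sentiments = feedback.get(step, [0.0])
        mean_sent = sum(sentiments) / len(sentiments)
        scores[step] = n * (1.0 - mean_sent)
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(automation_candidates(usage_events, feedback))
```

The repetitive, disliked step surfaces first, which is exactly the kind of candidate a procurement team would look to automate.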

Gardner: Is that what you mean by listening inside of the product or does that include other things, too?

Bianco: It includes other things. As you may know, SAP purchased a company called Qualtrics. They are experts in experience management, and we have been able to move from and evolve from traditional net promoter score (NPS) surveys into looking at micro moments to get customer feedback as they are doing a function. We have embedded certain moments inside of our product that allow us to capture feedback in real time.
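The NPS mentioned here comes from the standard 0-10 "would you recommend?" question: the score is the percentage of promoters (9-10) minus the percentage of detractors (0-6). A minimal sketch, with invented responses:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6),
    from 0-10 'would you recommend?' responses."""
    if not scores:
        raise ValueError("no responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# e.g. responses captured from in-product micro-moments
print(nps([10, 9, 9, 8, 7, 6, 3, 10]))  # 25
```

The same calculation works whether responses come from a traditional survey or from micro-moments embedded in the product.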

Gardner: Lisa, a little earlier you alluded to elements of what happens in the B2C world, as individual consumers, that we can then learn from and take into the B2B world. Is there anything top of mind that you have experienced as a consumer where you said, "Aha, I want to bring that type of experience and insight to my B2B world?"

Customer service is king in B2B

Bianco: Yes, you know what happened to me just this week, as a matter of fact? There is a show on TV right now about chess. With all of us being at home, many of us are consuming copious amounts of content. So I went and ordered a chess set from Wayfair. It came, it was beautiful — and one of the pieces was broken.

I snapped a little picture of the piece that had broken and they had an amazing app that allowed me to say, “Look, I don’t need you to replace the whole thing, it’s just this one little piece, and if you can just send me that, that would be great.”

And they are like, “You know what? Don’t worry about sending it back. We are just going to send you a whole new set.” It was like a $100 set. So I now have two sets because they were gracious enough to see that I didn’t have a great experience. They didn’t want me to deal with sending it back. They immediately sent me the product that I wanted.


I am, like, where is that in B2B? Where is that in the complex area of procurement that I find myself? How can we get that same experience for our customers when something goes wrong?

When I began this program, we tried to figure out what our chess set is. Other organizations use garlic knots, as at pizza restaurants. While you and your kids wait 25 minutes for the pizza to be made, a lot of pizza shops offer garlic knots to make you happy so the wait doesn't seem so long. What is the equivalent for B2B?

It’s hard. What we learned early on, and I am so grateful for, is that in B2B many end users and customers know how difficult it is to make some of their experiences better, because it’s complex. They have a lot of empathy for companies trying to go down such a path, in this case, for procurement.

But with that, their garlic knot — their free product, their chess set — is when we tell them that their voice matters. It's when we receive their feedback, understand their experience against our operational data, and let them know that we have the resources and budget to act on their feedback and make things better.

Either we show them that we have made it better or we tell them, “We hear what you are saying, but that doesn’t fit into our future.” You have to be able to have that complete feedback loop, otherwise you alienate your customer. They don’t want to feel like you are asking for their feedback but not doing anything with it.

And so that’s one of the most important things we learned here. That’s the thing that I witnessed from a B2C perspective and tried to replicate in B2B.

Gardner: Lisa, I’m sensing that there is an opportunity for the CX management function to become very important for overall digital business transformation. The way that Wayfair was able to help you with the chess set required integration, cooperation, and coordination between what were probably previously siloed parts of their organization.

That means the helpdesk, the ordering and delivery, exception-management capabilities, and getting sign-off on doing this sort of thing. It had to mean breaking down those silos in process, data, and function. And that integration is often part of an all-important digital transformation journey.

So are you finding that people like yourself, who are spearheading the experience management for your customers, are in a catbird seat of identifying where silos, breakdowns, and gaps exist in the B2B supplier organizations?

Feedback fuels cross-training

Bianco: Absolutely. Here is what I have learned. I am going to focus on cloud, especially on companies that are either cloud companies or were on-premises companies migrating to the cloud. SAP Ariba did this over the last 20 years; it migrated from on-premises to cloud, so we have a great understanding of that in our DNA. SAP is now doing the same thing; many companies are.

And what’s important to realize, at least from my perspective — it was an “Aha” moment — is that there is a tendency in the B2C world leadership to say, “Look, I am looking at all this data and feedback around customers. Can’t we just go fix this particular customer issue, and they are going to be happy?”


What we found in the B2B data was that most of the issues our customers were facing were systemic. It was broad strokes of consistent feedback about something that wasn’t working. We had to recognize that these systemic issues needed to be solved by a cross-functional group of people.

That’s really hard because so many folks have their own budgets, and they lead only a particular function. To think about how they might fix something more broadly took our organization quite a bit of time to wrap our heads around. Because now you need a center of excellence, a governance model that says that CX is at the forefront, and that you are going to have accountability in the business to act on that feedback and those actions. And you are going to compose a cross-functional, multilevel team to get it done.

It was funny early on, when we received feedback that customer support was a problem. Support was the problem; the support function was awful. I remember the head of support saying, "Oh, my gosh. I am going to get fired. I just hate my job. I don't know what to do."

When you look for the root cause, you find that quality is the issue — and quality wasn't just in one or another product, it was across many products. That broader quality issue led to how we enabled our support teams to better support those products. It also impacted how we went to market and showed the features and functions of the product.

We developed a team called the Top X Organization that aggregated cross-functional folks, held them accountable to a standard of a better outcome experience for our customers, and then led a program to hit certain milestones to transform that experience. But all that is a heavy lift for many companies.

Gardner: That’s fascinating. So, your CX advocates — by having that cross-functional perspective by nature — became advocates for better processes and higher quality at the organization level. They are not just advocating for the customer; they are actually advocating for the betterment of the business. Are you finding that and where do you find the people that can best do that?

Responsibility of active listening

Bianco: It’s not an easy task, it’s for few and far between. Again, it takes a corporate strategy. Dana, when you asked me the question earlier on, “What was the catalyst that brought you here?” I oftentimes chuckle. There isn’t a leader on the planet who isn’t going to have someone come to them, like I did at the time, and say, “Hey, I think we should listen to our customers.” Who wouldn’t want to do that? Everyone wants to do that. It sounds like a really good idea.

But, Dana, it’s about active listening. If you watch movies, there is often a scene where there is a husband and wife getting therapy. And the therapist says, “Hey, did you hear what she said?” or, “Did you hear what he said?” And the therapist has them repeat it back. Your marriage or a struggle you have with relationships is never going to get better just by going and sitting on the couch and talking to the therapist. It requires each of you to decide internally that you want this to be better, and that you are going to make the changes necessary to move that relationship forward.

It’s not dissimilar to the desire to have a CX organization, right? Everyone thinks it’s a great idea to show in their org chart that they have a leader of CX. But the truth is you have to really understand the responsibility of listening. And that responsibility sometimes devolves into just taking a survey. I’m all for sending a survey out to our customers, let’s do it. But that is the smallest part of a CX organization.


It’s really wrapped up in what the corporate strategy is going to be: A customer-centric, decision-making model. If we do that, are we prepared to have a governance structure that says we are going to fund and resource making experiences better? Are we going to acknowledge the feedback and act on it and make that a priority in business or not?

Oftentimes leaders get caught up in, "I just want to show I have a CX team and I am going to run a survey." But they don't realize the responsibility that creates once they have on paper all the things they know they have an opportunity to make better for their customers.

Gardner: You have now had five years to make these changes. In theory this sounds very advantageous on a lot of levels and solves some larger strategic problems that you would have a hard time addressing otherwise.

So where’s the proof? Do you have qualitative, quantitative indicators? Maybe it’s one of those things that’s really hard to prove. But how do you rate customer advocacy and CX role? What does it get you when you do it well?

Feelings matter at all levels

Bianco: Really good point. We just came off of our five-year anniversary this week. We just had an NPS survey, and we got some amazing trends. Over five years we have seen improvement, with an even greater gain in the last 18 months — an 11-point increase in our customer feedback. And that not only shows up in the survey, as I mentioned; it also shows up with influencers and analysts.

Gartner has noted the increase in our ability to address CX issues and make them better. We can see that in terms of the 11-point increase. We can see that in terms of our reputation within our analyst community.

And we also see it in the data. Customers are saying, "Look, you are much more responsive to me." We see a 35-percent decrease in customers complaining about support in their open-text fields. We see customers mentioning less often the challenges they have seen in the area of integration, which is so incredibly important.
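A metric like the 35-percent decrease in support complaints can come from comparing open-text mention rates across survey waves. This sketch uses naive keyword matching on invented comments purely for illustration; a production analysis would use topic or sentiment models:

```python
def mention_rate(comments, keywords):
    """Share of open-text comments that mention any of the keywords.
    (Keyword matching is just the simplest stand-in for topic modeling.)"""
    hits = sum(1 for c in comments if any(k in c.lower() for k in keywords))
    return hits / len(comments)

def pct_change(before, after):
    return round(100 * (after - before) / before)

# Hypothetical open-text fields from two survey waves
wave1 = ["support never calls back", "great product",
         "support ticket lost", "slow support response", "fine"]
wave2 = ["works well", "support was quick", "no issues",
         "love the new UI", "integration was smooth"]

before = mention_rate(wave1, ["support"])
after = mention_rate(wave2, ["support"])
print(pct_change(before, after))  # negative means fewer mentions
```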


And we also hear less from our own SAP leaders who felt that NPS just exposed that they might not be doing their jobs well — which was initially the reaction we got from leaders: "Oh my gosh. I don't want you to talk about anything that makes it look like I am not doing my job." We created a culture where we are more open to feedback. We now relish that insight, versus feeling defensive.

And that’s a culture shift that took us five years to get to. Now you have leaders chomping at the bit to get those insights, get that data, and make the changes because we have proof. And that proof did start with an organizational change right in the beginning. It started with new leadership in certain areas like support. Those things translated into the success we have today. But now we have to evolve beyond that. What’s the next step for us?

Gardner: Before we talk about your next steps, for those organizations that are intrigued by this — that want to be more customer-centric and to understand why it’s important — what lessons have you learned? What advice do you have for organizations that are maybe just beginning on the CX path?

Bianco: How long is this show?

Gardner: Ten more minutes, tops.

Bianco: Just kidding. I mean, gosh, I have learned a lot. If I look back — and I know some of my colleagues at IBM had a similar experience — the lesson is this. We started by deploying NPS. We just went out there and said we are going to do these NPS surveys and that's going to shake the business into understanding how our customers are feeling.

We grew to understand that our customers came to SAP because of our products. So I think I might have spent more time listening inside of the products. What does that mean? It certainly means embedding micro-moments of aggregating feedback in the product to help us understand — and to allow our developers to understand — what they need to do. But that needs to be done in a very strategic way.

It’s also about making sure that any time anyone in the company wants to listen to customers, you ensure that you have the budget and the resources necessary to make that change — because otherwise you will alienate your customers.

Another area is you have to have executive leadership. It has to be at the root of your corporate objectives. Anything less than that and you will struggle. It doesn’t mean you won’t have some success, but when you are looking at the root of making experience better, it’s about action. That action needs to be taken by the folks responsible for your products or services. Those folks have to be incented, or they have to be looped in and committed to the program. There has to be a governance model that measures the experience of the customer based on how the customer interprets it — not how you interpret it.

If, as a company, you interpret success as net-new software sales, you have to shift that mindset. That’s not how your customers view their own success.

Gardner: That’s very important and powerful. Before we sign off, five years in, where do you go now? Is there an acceleration benefit, a virtuous adoption pattern of sorts when you do this? How do you take what you have done and bring it to a step-change improvement or to an even more strategic level?

Turn feedback into action

Bianco: The next step for us is to embed the experience program in every phase of the customer’s journey. That includes every phase of our engagement journey inside of our organization.

So from start to finish, what are the teams providing that experience, whether it’s a service or product? That would be one. And, again, that requires the governance that I mentioned. Because action is where it’s at — regardless of the feedback you are getting and how many places you listen. Action is the most important piece to making their experience better.

This requires governance because action is where it’s at — regardless of the feedback. Taking action is the most important piece to making the customer experience better.

Another is to move beyond just NPS surveys. Again, it's not that this is a new concept, but as I watched COVID-19 accelerate digital feedback in social and public forums, we began to measure that advocacy. It's not just, "Would you recommend this product to a friend or colleague?" It's also, "Will you promote this company or not?"

That is going to be more important than ever, because we are going to continue in a virtual environment next year. Helping frame what that feedback might be — and being proactive — is where I see success for SAP in the future.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: SAP Ariba.


How to industrialize data science to attain mastery of repeatable intelligence delivery

Businesses these days are quick to declare their intention to become data-driven, yet the deployment of analytics and the use of data science remains spotty, isolated, and often uncoordinated.

To fully reach their digital business transformation potential, businesses large and small need to make data science more of a repeatable assembly line — an industrialization, if you will — of end-to-end data exploitation.

The next BriefingsDirect Voice of Analytics Innovation discussion explores the latest methods, tools, and thinking around making data science an integral core function that both responds to business needs and scales to improve every aspect of productivity.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about the ways that data and analytics behave more like a factory — and less like an Ivory Tower — please welcome Doug Cackett, EMEA Field Chief Technology Officer at Hewlett Packard Enterprise. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Doug, why is there a lingering gap — and really a gaping gap — between the amount of data available and the analytics that should be taking advantage of it?

Cackett: That’s such a big question to start with, Dana, to be honest. We probably need to accept that we’re not doing things the right way at the moment. Actually, Forrester suggests that something like 40 zettabytes of data are going to be under management by the end of this year, which is quite enormous.

And, significantly, more of that data is being generated at the edge through applications, Internet of Things (IoT), and all sorts of other things. This is where the customer meets your business. This is where you’re going to have to start making decisions as well.

So, the gap is two things. It’s the gap between the amount of data that’s being generated and the amount you can actually comprehend and create value from. In order to leverage that data from a business point of view, you need to make decisions at the edge.

You will need to operationalize those decisions and move that capability to the edge where your business meets your customer. That’s the challenge we’re all looking for machine learning (ML) — and the operationalization of all of those ML models into applications — to make the difference.

Gardner: Why does HPE think that moving more toward a factory model, industrializing data science, is part of the solution to compressing and removing this gap?

Data’s potential at the edge

Cackett: It’s a math problem, really, if you think about it. If there is exponential growth in data within your business, if you’re trying to optimize every step in every business process you have, then you’ll want to operationalize those insights by making your applications as smart as they can possibly be. You’ll want to embed ML into those applications.

Because, correspondingly, there’s exponential growth in the demand for analytics in your business, right? And yet, the number of data scientists you have in your organization — I mean, growing them exponentially isn’t really an option, is it? And, of course, budgets are also pretty much flat or declining.

There’s exponential growth in the demand for analytics in your business. And yet the number of data scientists in your organization, growing them, is not exponential. And budgets are pretty much flat or declining.

So, it’s a math problem because we need to somehow square away that equation. We somehow have to generate exponentially more models for more data, getting to the edge, but doing that with fewer data scientists and lower levels of budget.

Industrialization, we think, is the only way of doing that. Through industrialization, we can remove waste from the system and improve the quality and control of those models. All of those things are going to be key going forward.

Gardner: When we’re thinking about such industrialization, we shouldn’t necessarily be thinking about an assembly line of 50 years ago — where there are a lot of warm bodies lined up. I’m thinking about the Lucille Ball assembly line, where all that candy was coming down and she couldn’t keep up with it.

Perhaps we need more of an ultra-modern assembly line, where it’s a series of robots and with a few very capable people involved. Is that a fair analogy?

Industrialization of data science

Cackett: I think that’s right. Industrialization is about manufacturing where we replace manual labor with mechanical mass production. We are not talking about that. Because we’re not talking about replacing the data scientist. The data scientist is key to this. But we want to look more like a modern car plant, yes. We want to make sure that the data scientist is maximizing the value from the data science, if you like.

We don’t want to go hunting around for the right tools to use. We don’t want to wait for the production line to play catch up, or for the supply chain to catch up. In our case, of course, that’s mostly data or waiting for infrastructure or waiting for permission to do something. All of those things are a complete waste of their time.

As you look at the amount of productive time data scientists spend creating value, that can be pretty small compared to their non-productive time — and that’s a concern. Part of the non-productive time, of course, has been with those data scientists having to discover a model and optimize it. Then they would do the steps to operationalize it.

But maybe doing the data and operations engineering things to operationalize the model can be much more efficiently done with another team of people who have the skills to do that. We’re talking about specialization here, really.

But there are some other learnings as well. I recently wrote a blog about it. In it, I looked at the modern Toyota production system and started to ask what we could learn from what they have learned, if you like, over the last 70 years or so.

It was not just about automation, but also how they went about doing research and development, how they approached tooling, and how they did continuous improvement. We have a lot to learn in those areas.

An awful lot of the organizations I deal with haven't had much experience with such operationalization problems. They haven't built that part of their assembly line yet. Automating supply chains and mistake-proofing things — what Toyota calls jidoka — is also really important. It's a really interesting area to be involved with.

Gardner: Right, this is what US manufacturing, in the bricks-and-mortar sense, went through back in the 1980s when it moved to business process reengineering, adopted kaizen principles, and did what Deming and a greater emphasis on quality had done for the Japanese auto companies.

And so, back then there was a revolution, if you will, in physical manufacturing. And now it sounds like we’re at a watershed moment in how data and analytics are processed.

Cackett: Yes, that’s exactly right. To extend that analogy a little further, I recently saw a documentary about Morgan cars in the UK. They’re a hand-built kind of car company. Quite expensive, very hand-built, and very specialized.

And I ended up almost throwing things at the TV, because they were talking about the skills of this one individual. They had only one guy who could bend the metal to create the bonnet — the hood — of the car in the way it needed to be done. And it took two or three years to train this guy. I'm thinking, "Well, if you just automated the process, and a robot built it, you wouldn't have that variability." I mean, it's just so annoying, right?

In the same way, with data science we’re talking about laying bricks — not Michelangelo hammering out the figure of David. What I’m really trying to say is a lot of the data science in our customer’s organizations are fairly mundane. To get that through the door, get it done and dusted, and give them time to do the other bits of finesse using more skills — that’s what we’re trying to achieve. Both [the basics and the finesse] are necessary and they can all be done on the same production line.

Gardner: Doug, if we are going to reinvent and increase the productivity generally of data science, it sounds like technology is going to be a big part of the solution. But technology can also be part of the problem.

What is it about the way that organizations are deploying technology now that needs to shift? How is HPE helping them adjust to the technology that supports a better data science approach?

Define and refine

Cackett: We can probably all agree that most of the tooling around MLOps is relatively young. The companies we see are of two types: those that haven't yet gotten to the stage where they're trying to operationalize more models — in other words, they don't really understand what the problem is yet — and those that are trying but haven't refined the process.

Forrester research suggests that only 14 percent of organizations that they surveyed said they had a robust and repeatable operationalization process. It’s clear that the other 86 percent of organizations just haven’t refined what they’re doing yet. And that’s often because it’s quite difficult.

Many of these organizations have only just linked their data science to their big data instances or data lakes. And they're using those both for the workloads and to develop the models. Therein lies the problem. Often they get stuck with simple things, like trying to have everyone use a uniform environment, where all of your data scientists share both the data and the compute environment.


And data scientists can often be very destructive in what they’re doing. Maybe overwriting data, for example. To avoid that, you end up replicating the data. And if you’re going to replicate terabytes of data, that can take a long period of time. That also means you need new resources, maybe new more compute power and that means approvals, and it might mean new hardware, too.

Often the biggest challenge is in provisioning the environment for data scientists to work on, the data that they want, and the tools they want. That can all often lead to huge delays in the process. And, as we talked about, this is often a time-sensitive problem. You want to get through more tasks and so every delayed minute, hour, or day that you have becomes a real challenge.

The other thing that is key is that data science is very peaky. You’ll find that data scientists may need no resources or tools on Monday and Tuesday, but then they may burn every GPU you have in the building on Wednesday, Thursday, and Friday. So, managing that as a business is also really important. If you’re going to get the most out of the budget you have, and the infrastructure you have, you need to think differently about all of these things. Does that make sense, Dana?

Gardner: Yes. Doug, how is HPE Ezmeral designed to give data scientists more of what they need, how they need it, and to close the gap between the ad hoc approach and the right kind of assembly-line approach?

Two assembly lines to start

Cackett: Look at it as two assembly lines, at the very minimum. That’s the way we want to look at it. And the first thing the data scientists are doing is the discovery.

The second is the MLOps processes. There will be a range of people operationalizing the models. Imagine that you’re a data scientist, Dana, and I’ve just given you a task. Let’s say there’s a high defection or churn rate from our business, and you need to investigate why.

First you want to find out more about the problem because you might have to break that problem down into a number of steps. And then, in order to do something with the data, you’re going to want an environment to work in. So, in the first step, you may simply want to define the project, determine how long you have, and develop a cost center.

You may next define the environment: Maybe you need CPUs or GPUs. Maybe you need them highly available and maybe not. So you’d select the appropriate-sized environment. You then might next go and open the tools catalog. We’re not forcing you to use a specific tool; we have a range of tools available. You select the tools you want. Maybe you’re going to use Python. I know you’re hardcore, so you’re going to code using Jupyter and Python.

And the next step, you then want to find the right data, maybe through the data catalog. So you locate the data that you want to use and you just want to push a button and get provisioned for that lot. You don’t want to have to wait months for that data. That should be provisioned straight away, right?
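The self-service steps just walked through — define the project, size the environment, pick tools from a catalog, select data — amount to a structured provisioning request. This is a hypothetical sketch of what such a request might capture, not the actual HPE Ezmeral API:

```python
# Hypothetical self-service provisioning request. All field names, tools,
# and dataset identifiers here are invented for illustration.
request = {
    "project": {"name": "churn-investigation", "cost_center": "CC-4711",
                "duration_days": 30},
    "environment": {"cpus": 8, "gpus": 1, "highly_available": False},
    "tools": ["jupyter", "python3.9"],
    "datasets": ["crm.customers", "billing.invoices"],  # from the data catalog
}

REQUIRED = {"project", "environment", "tools", "datasets"}

def validate(req):
    """Reject a request until every section the workflow needs is present."""
    missing = REQUIRED - req.keys()
    if missing:
        raise ValueError(f"incomplete request: {sorted(missing)}")
    return True

print(validate(request))  # True once every section is filled in
```

The point of capturing the request as data is that "press the button" provisioning can then be driven by workflow automation rather than manual approvals.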

You can do your work, save it all away into a virtual repository, and save the data so it's reproducible. You can also then check for things like model drift and data drift. You can save away the code and the model parameters. And then you can put that on the backlog for the MLOps team.
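For the model-drift and data-drift checks mentioned above, one common heuristic is the Population Stability Index (PSI), which compares the score distribution seen at training time with live data. A minimal sketch (the thresholds are conventional rules of thumb, and the sample data is invented):

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a training-time sample ('expected')
    and live data ('actual'). Rule of thumb: < 0.1 stable, > 0.25 drifted."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def dist(xs):
        counts = [0] * bins
        for x in xs:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        return [max(c / len(xs), 1e-6) for c in counts]  # avoid log(0)

    e, a = dist(expected), dist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

train = [0.1 * i for i in range(100)]     # scores seen at training time
live_same = train[:]                      # no drift
live_shifted = [x + 5 for x in train]     # shifted distribution

print(psi(train, live_same))     # near 0: stable
print(psi(train, live_shifted))  # large: significant drift
```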

Then the MLOps team picks it up and goes through a similar process. They want to create their own production line now, right? So they’re going to select a different set of tools. This time they need continuous integration and continuous delivery (CICD), plus the data plumbing needed to operationalize your model. They’re going to define the way that model is deployed. Let’s say we’re going to use Kubeflow for that. They might decide on, say, an A/B testing process. So they’re going to configure that, do the rest of the work, and press the button again, right?
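The A/B testing step Doug mentions is, at its core, traffic-splitting between an incumbent model and a candidate. A minimal Python sketch of that idea (the model callables here are stand-ins invented for illustration, not Kubeflow’s actual interfaces):

```python
import random

def ab_router(model_a, model_b, fraction_b=0.1, seed=None):
    """Route a fraction of scoring requests to candidate model B,
    and the rest to incumbent model A."""
    rng = random.Random(seed)  # seedable for reproducible experiments
    def score(features):
        model = model_b if rng.random() < fraction_b else model_a
        return model(features)
    return score

# Stand-in "models": any callable that scores a feature dict.
model_a = lambda features: 0.2   # incumbent churn score
model_b = lambda features: 0.8   # candidate churn score
score = ab_router(model_a, model_b, fraction_b=0.1, seed=42)
risk = score({"tenure_months": 3})
```

In a real deployment the router sits in the serving layer, and the outcomes of each arm are logged so the candidate can be promoted or rolled back.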

Clearly, this is an ongoing process. Fundamentally, it requires workflow and automatic provisioning of the environment to eliminate time wasted waiting for resources to become available. That is fundamentally what we’re doing in our MLOps product.

But in the wider sense, we also have consulting teams helping customers get up to speed, define these processes, and build the skills around the tools. We can also do this as-a-service via our HPE GreenLake proposition as well. Those are the kinds of things that we’re helping customers with.

Gardner: Doug, what you’re describing as needed in data science operations is a lot like what was needed for application development with the advent of DevOps several years ago. Is there commonality between what we’re doing with the flow and nature of the process for data and analytics and what was done not too long ago with application development? Isn’t that also akin to more of a cattle approach than a pet approach?

Operationalize with agility

Cackett: Yes, I completely agree. That’s exactly what this is about and for an MLOps process. It’s exactly that. It’s analogous to the sort of CICD, DevOps, part of the IT business. But a lot of that tool chain is being taken care of by things like Kubeflow and MLflow Project, some of these newer, open source technologies.

I should say that all of this is very new: the ancillary tooling that wraps around CICD, and the CICD tools themselves, are still pretty new. What we’re also attempting to do is allow you, as a business, to bring in these new tools and on-board them so you can evaluate them and see how they might fit what you’re doing as your process settles down.

The idea is to put these tools in a wrapper and make them available so we get a more dynamic feel to this. The way we’re doing MLOps and data science generally is progressing extremely quickly at the moment, so you don’t want to lock yourself into a corner where you’re trapped in a particular workflow. You want to have agility. Yes, it’s very analogous to the DevOps movement as we seek to operationalize the ML model.

The other thing to pay attention to is the changes that need to happen to your operational applications. You’re going to have to change those so they can call the ML model at the appropriate place, get the result back, and then render that result in whatever way is appropriate. So changes to the operational apps are also important.
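Those changes to operational applications usually amount to calling the deployed model at the decision point and rendering the result. A hypothetical Python sketch, where `render_offer` and the model callable are invented for illustration:

```python
def render_offer(customer, churn_model):
    """Hypothetical operational-app hook: call the deployed ML model at
    the decision point, get the result back, and render it appropriately."""
    churn_risk = churn_model(customer)  # in production, often an HTTP call to a serving endpoint
    if churn_risk > 0.7:
        return "Show retention offer"
    return "Show standard page"

# Stand-in for a deployed churn model endpoint.
model = lambda c: 0.9 if c["months_since_last_order"] > 6 else 0.1
decision = render_offer({"months_since_last_order": 12}, model)
```

The design point is that the app changes are small and localized: one call out to the model, one branch on the result.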

Gardner: You really couldn’t operationalize ML as a process if you were only a tools provider. You couldn’t do it if you were a cloud services provider alone. You couldn’t do it if you were only a professional services provider.

It seems to me that HPE is actually in a very advantageous place to allow a best-of-breed tools approach where it’s most impactful, but also to start putting some standard glue around this: the industrialization. Why is HPE in an advantageous place to have a meaningful impact on this difficult problem?

Cackett: Hopefully, we’re in an advantageous place. As you say, it’s not just a tool, is it? Think about the breadth of decisions that you need to make in your organization, and how many of those could be optimized using some kind of ML model.

You’d understand that it’s very unlikely to be a single tool. It’s going to be a range of tools, and that range of tools is going to be changing almost constantly over the next 10 to 20 years.

This is much more of a platform approach, because this area is relatively new. Like any other technology, when it’s new it almost inevitably tends to be very technical in implementation, so the early tools can be very difficult to use. Over time the tools mature, with a polished UI and a well-defined process, and they become simple to use.

But at the moment, we’re way up at the other end. And so I think this is about platforms. What we’re providing at HPE is the platform through which you can plug in these tools and integrate them together. You have the freedom to use whatever tools you want, but at the same time you inherit the back-end systems: Active Directory and Lightweight Directory Access Protocol (LDAP) integrations, and linkage back to the data, your most precious asset in your business, whether that data sits in a data lake, a data warehouse, data marts, or even streaming applications.

This is the melting pot of the business at the moment. HPE has a lot of experience helping our customers deliver value through information technology investments over many years, and that’s certainly what we’re trying to do right now.

Gardner: It seems that HPE Ezmeral is moving toward industrialization of data science, as well as other essential functions. But is that where you should start, with operationalizing data science? Or is there a certain order by which this becomes more fruitful? Where do you start?

Machine learning leads change

Cackett: This is such a hard question to answer, Dana. It’s so dependent on where you are as a business and what you’re trying to achieve. Typically, to be honest, we find that the engagement is normally with some element of change in our customers. That’s often, for example, where there’s a new digital transformation initiative going on. And you’ll find that the digital transformation is being held back by an inability to do the data science that’s required.

There is another Forrester report that I’m sure you’ll find interesting. It suggests that 98 percent of business leaders feel that ML is key to their competitive advantage. It’s hardly surprising, then, that ML is so closely tied to digital transformation, because that’s the stage on which organizations are competing, after all.

So we often find that that’s the starting point, yes. Why can’t we develop these models and get them into production in time to meet our digital transformation initiative? And then it becomes, “Well, what bits do we have to change? How do we transform our MLOps capability to be able to do this and do this at scale?”

Often this shift is led by an individual in the organization, and momentum develops to make these changes. But the changes can be really small at the start, of course. You might start off with just a single ML problem related to digital transformation.

We acquired MapR some time ago, which is now our HPE Ezmeral Data Fabric, and it underpins a lot of the work we’re doing. So we will often start with the data, to be honest with you, because a lot of the challenges in many organizations have to do with the data. As businesses become more real-time and want to connect more closely to the edge, that’s where the strengths of the data fabric approach come into play.

So another starting point might be the data. A new application at the edge, for example, has new, very stringent requirements for data and so we start there with building these data systems using our data fabric. And that leads to a requirement to do the analytics and brings us obviously nicely to the HPE Ezmeral MLOps, the data science proposition that we have.

Gardner: Doug, is the COVID-19 pandemic prompting people to bite the bullet and operationalize data science because they need to be fleet and agile and to do things in new ways that they couldn’t have anticipated?

Cackett: Yes, I’m sure it is. We know it’s happening; we’ve seen all the research. McKinsey has pointed out that the pandemic has accelerated a digital transformation journey. And inevitably that means more data science going forward because, as we talked about already with that Forrester research, some 98 percent think that it’s about competitive advantage. And it is, frankly. The research goes back a long way to people like Tom Davenport, of course, in his famous Harvard Business Review article. We know that customers who do more with analytics, or better analytics, outperform their peers on any measure. And ML is the next incarnation of that journey.

Gardner: Do you have any use cases of organizations that have taken the industrialization approach to data science? What has it done for them?

Financial services benefits

Cackett: I’m afraid names are going to have to be left out. But a good example is in financial services. They have a problem in the form of many regulatory requirements.

When HPE acquired BlueData it gained an underlying technology, which we’ve transformed into our MLOps and container platform. BlueData had a long history of containerizing very difficult, problematic workloads. In this case, this particular financial services organization had a real challenge. They wanted to bring on new data scientists. But the problem is, every time they wanted to bring a new data scientist on, they had to go and acquire a bunch of new hardware, because their process required them to replicate the data and completely isolate the new data scientist from the other ones. This was their process. That’s what they had to do.

So as a result, it took them almost six months to do anything. And there’s no way that was sustainable. It was a well-defined process, but it still involved a six-month wait each time.

So instead we containerized their Cloudera implementation and separated the compute and storage as well. That means we can now create environments on the fly, effectively within minutes. It also means we can take read-only snapshots of data. A read-only snapshot is just a set of pointers, so it’s instantaneous.
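The “set of pointers” snapshot Doug describes is the classic copy-on-write idea: duplicating references is near-instant, while the underlying data blocks stay put. A toy Python illustration of the concept (not the HPE Ezmeral Data Fabric implementation):

```python
class SnapshottableStore:
    """Toy copy-on-write store: a snapshot copies only the dictionary of
    references (pointers), never the underlying data blocks themselves."""
    def __init__(self):
        self._blocks = {}  # key -> data block

    def write(self, key, data):
        # Writing installs a new block under the key; old snapshots still
        # point at the previous block, so they remain a consistent view.
        self._blocks[key] = data

    def snapshot(self):
        # O(number of keys) pointer copy: effectively instantaneous
        # compared with replicating the data itself.
        return dict(self._blocks)

store = SnapshottableStore()
store.write("customers", ["alice", "bob"])
snap = store.snapshot()                                # read-only view for a new data scientist
store.write("customers", ["alice", "bob", "carol"])    # live data moves on independently
```

This is why the snapshot needs no extra hardware: new data scientists get an isolated, consistent view without any data being copied.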

They were able to scale out their data science without scaling up their costs or the number of people required. Interestingly, they’ve recently moved that on further as well. They are now doing all of that in a hybrid cloud environment, and they only have to change two lines of code to push workloads into AWS, for example, which is pretty magical, right? And that’s where they’re doing the data science.

Another good example that I can name is GM Finance, a fantastic example of how, having started in one business area, risk and compliance, they’ve been able to extend the value to things like credit risk.

But doing credit risk and risk in terms of insurance also means that they can look at policy pricing based on dynamic risk. For example, for auto insurance based on the way you’re driving. How about you, Dana? I drive like a complete idiot. So I couldn’t possibly afford that, right? But you, I’m sure you drive very safely.

But in this use case, because they have the data science in place, they can know how a car is being driven. They can then assess the value of the car at the end of the lease period and create more value from it.
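Usage-based pricing of the kind described here boils down to scaling a base premium by a risk factor derived from telematics data on how the car is actually driven. A deliberately simplified Python sketch; the weights and field names are invented for illustration, and real actuarial models are far richer:

```python
def dynamic_premium(base_premium, telematics):
    """Toy usage-based-insurance pricing: scale the base premium by a
    risk factor derived from driving behavior (illustrative only)."""
    risk = 1.0
    # Hypothetical weightings: harsher braking and more night driving
    # both push the risk factor, and hence the premium, upward.
    risk += 0.3 * telematics.get("hard_brakes_per_100km", 0) / 10
    risk += 0.2 * telematics.get("pct_night_driving", 0) / 100
    return round(base_premium * risk, 2)

safe = dynamic_premium(100.0, {"hard_brakes_per_100km": 1, "pct_night_driving": 5})
risky = dynamic_premium(100.0, {"hard_brakes_per_100km": 12, "pct_night_driving": 40})
```

A careful driver’s premium barely moves, while an aggressive driving profile raises it substantially, which is exactly the dynamic-risk pricing described above.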

These are the types of detailed business outcomes we’re talking about. This is about giving our customers the means to do more data science. And because the data science gets better, momentum builds in the organization, and you can do increasingly more of it. It’s really a very compelling proposition.

Gardner: Doug, if I were to come to you in three years and ask, “Give me an example of a company that has done this right and has really reshaped itself,” what would you say? Describe what a truly analytics-driven company will be able to do. What is the end state?

A data-science driven future

Cackett: I can answer that in two ways. One comes from talking to an ex-colleague who worked at Facebook, and I’m so taken with what they were doing there. Basically, he said that originally, to create a new product at Facebook, they had an engineer and a product owner. They sat together and created the new product.

Sometime later, they would ask a data scientist to get involved, too. That person would look at the data and tell them the results.

Then they completely changed that around. What they now do is first find the data scientist and bring him or her on board as they’re creating a product. So they’re instrumenting up what they’re doing in a way that best serves the data scientist, which is really interesting.

The data science is built-in from the start. If you ask me what’s going to happen in three years’ time, as we move to this democratization of ML, that’s exactly what’s going to happen. I think we’ll end up genuinely being information-driven as an organization.

Data science will be built into products and applications from the start, not tacked on at the end.

Gardner: And when you do that, it seems to me the payoffs are expansive — and perhaps accelerating.

Cackett: Yes. That’s the competitive advantage and differentiation we started off talking about. But the technology has to underpin that. If you can’t deliver the ML, you won’t get the competitive advantage in your business, and your digital transformation will fail.

This is about getting the right technology with the right people in place to deliver these kinds of results.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


How remote work promises to deliver new levels of engagement, productivity, and innovation

The way people work has changed more in 2020 than the previous 10 years combined — and that’s saying a lot. Even more than the major technological impacts of cloud, mobile, and big data, the COVID-19 pandemic has greatly accelerated and deepened global behavioral shifts.

The ways that people think about where and how to work may never be the same, and new technology alone could not have made such a rapid impact.

So now is the time to take advantage of a perhaps once-in-a-lifetime disruption for the better. Steps can be taken to make sure that such a sea change comes less with a price and more with a broad boon — to both workers and businesses.

The next BriefingsDirect work strategies panel discussion explores research into the future of work and how unprecedented innovation could very well mean a doubling of overall productivity in the coming years.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

We’re joined by a panel to hear insights on how a remote-first strategy leads to a reinvention of work expectations and payoffs. Please welcome our guests: Jeff Vincent, Chief Executive Officer at Lucid Technology Services; Ray Wolf, Chief Executive Officer at A2K Partners; and Tim Minahan, Executive Vice President of Business Strategy and Chief Marketing Officer at Citrix. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tim, you’ve done some new research at Citrix. You’ve looked into what’s going on with the nature of work and a shift from what seems to be from chaos to opportunity. Tell us about the research and why it fosters such optimism.

Minahan: Most of the world has been focused on the here-and-now, with how to get employees home safely, maintain business continuity, and keep employees engaged and productive in a prolonged work-from-home model. Yet we spent the bulk of the last year partnering with Oxford Analytica and Coleman Parkes to survey thousands of business and IT executives and to conduct qualitative interviews with C-level executives, academia, and futurists on what work is going to look like 15 years from now — in 2035 — and predict the role that technology will play.

Certainly, we’re already seeing an acceleration of the findings from the report. And if there’s any iota of a silver lining in this global crisis we’re all living through, it’s that it has caused many organizations to rethink their operating models, business models, and their work models and workforce strategies.

Work has no-doubt forever changed. We’re seeing an acceleration of companies embracing new workforce strategies, reaching to pools of talent in remote locales using technology, and opening up access to skill sets that were previously too costly near their office and work hubs.

Now they can access talent anywhere, enabling and elevating the skill sets of all employees by leveraging artificial intelligence (AI) and machine learning (ML) to help them perform like their best employees. They are ensuring that they can embrace entirely new work models, possibly even the Uber-fication of work, by tapping into recent retirees, work-from-home parents, and caregivers who had opted out of the workforce, not because they lacked the skills or expertise, but because traditional work models didn’t support their home environment.

We’re seeing an acceleration of companies liberated by the fact that they realize work can happen outside of the office. Many executives across every industry have begun to rethink what the future of work is going to look like when we come out of this pandemic.

Gardner: Tim, one of the things that jumped out at me from your research was a majority feel that technology will make workers at least twice as productive by 2035. Why such a newfound opportunity for higher productivity, which had been fairly flat for quite a while? What has changed in behavior and technology that seems to be breaking us out of the doldrums when it comes to productivity?

Work 2035: Citrix Research

Reveals a More Intelligent Future

Minahan: Certainly, the doubling of employee productivity is a factor of a couple of things. Number one, new, more flexible work models allow employees to work wherever they can do their best work. But more importantly, it is the emergence of the augmented worker, using AI and ML not just to offer up the right information at the right time, but to help employees make more informed decisions and speed up the decision-making process, as well as to automate menial tasks so employees can focus on the strategic aspects of driving creativity and innovation for the business. This is one of the areas we think is the most exciting as we look toward the future.

Gardner: We’re going to dig into that research more in our discussion. But let’s go to Jeff at Lucid Technology Services. Tell us about Lucid, Jeff, and why a remote-first strategy has been a good fit for you.

Remote services keep SMBs safe

Vincent: Lucid Technology Services delivers what amounts to a fractional chief information officer (CIO) service. Small- to medium-sized businesses (SMBs) need CIOs but don’t generally have the working capital to afford a full-time, always-on, and always-there CIO or chief technology officer (CTO). That’s where we fill the gap.

We bring essentially an IT department to SMBs, everything from budgeting to documentation, and all points in between. One of the big ways we learned to look forward is by looking backward. In 1908, Henry Ford gave us the modern assembly line, which promptly gave us the Model T. And so buggy-whip factories and buggy-accessory makers suddenly became obsolete.

Something similar happened in the early 1990s. It was a fad called the Internet, and it revolutionized work in ways that could not have been foreseen up to that point in time. We firmly believe that we’re on the precipice of another revolution of work just like then. The technology is mature at this point. We can move forward with it, using things like Citrix.

Gardner: Bringing a CIO-caliber function to SMBs sounds like it would be difficult to scale, if you had to do it in-person. So, by nature, you have been a pioneer in a remote-first strategy. Is it effective? Some people think you can’t be remote and be effective.

Vincent: Well, that’s not what we’ve been finding. This has been an evolution in my business for 20 years now. And the field has grown as the need has grown. Fortunately, the technology has kept pace with it. So, yes, I think we’re very effective.

Previously, consider a CPA firm of 15 providers, or a medical practice of three or four doctors with another 10 or so administrative and assistant staff on site all of the time. They hold privileged information and data under regulation that needs safeguarding.

Well, if you are Arthur Andersen, a large, national firm, or Kaiser Permanente, or some really large corporation that has an entire team of IT staff on-site, then that isn’t really a problem. But when you’re under 25 to 50 employees, that’s a real problem because even if you were compromised, you wouldn’t necessarily know it.

We leverage monitoring technology, such as next-generation firewalls, and a team of people looking after the network operations center (NOC) and help desk to head those problems off at the pass. If problems do develop, we can catch them while they’re still small. And with such a light, agile team that’s heavy on tech and the infrastructure behind it, a very few people can do a lot of work for a lot of people. That is the secret sauce of our success.

Gardner: Jeff, from your experience, how often is it the CIO who is driving the remote work strategy?

Vincent: I don’t think remote work prior to the pandemic could have been driven from any other seat than the CIO/CTO. It’s his or her job. It’s their entire ethos to keep a finger on the pulse of technology, where it’s going, and what it’s currently capable of doing.

In my experience, anybody else on the C-suite team has so much else going on. Everybody is wearing multiple hats and doing double-duty. So, the CTO is where that would have been driven.

But now, what I’ve seen in my own business, is that since the pandemic, as the CTO, I’m not generally leading the discussion — I’m answering the questions. That’s been very exciting and one of the silver linings I’ve seen through this very trying time. We’re not forcing the conversation anymore. We are responding to the questions. I certainly didn’t envision a pandemic shutting down businesses. But clearly, the possibility was there, and it’s been a lot easier conversation [about remote work] to have over the past several months.

The nomadic way of work

Gardner: Ray, tell us about A2K Partners. What do you have in common with Jeff Vincent at Lucid about the perceived value of a remote-first strategy?

Wolf: A2K Partners is a digital transformation company. Our secret sauce is we translate technology into the business applications, outcomes, and impacts that people care about.

Our company was founded by individuals who were previously in C-level business positions, running global organizations. We were the consumers of technology. And honestly, we didn’t want to spend a lot of time configuring the technologies. We wanted to speed things up, drive efficiency, and drive revenue and growth. So we essentially built the company around that.

We focus on work redesign, work orchestration, and employee engagement. We leverage platforms like Citrix for the future of work and for bringing in productivity enhancements to the actual processes of doing work. We ask, what’s the current state? What’s the future state? That’s where we spend a lot of our time.

As for a remote-first strategy, I want to highlight that our company is a nomadic company. We recruit people who want to live and work from anywhere. We think there’s a different mindset there. They are more apt to accept and embrace change. So untethered work is really key.

What we have been seeing with our clients — and the conversations that we’re having currently today — is the leaders of every organization, at every level, are trying to figure out how we come out of this pandemic better than when we went in. Some actually feel victims, and we’re encouraging them that this is really an opportunity.

Consider some statistics from the last three economic downturns. One very interesting one is that some companies that entered the downturn in the bottom 20 percent emerged in the top 20 percent after it. And you ask yourself, “How does a mediocre company all of a sudden rise to the top through a crisis?” This is where we’ve been spending our time: figuring out what plays they are running and how to better help them execute on them.

As Work Goes Virtual, Citrix Research Shows

Companies Need to Follow Talent Fleeing Cities

The companies that have decided to use this as a period to change the business model, change the services and products they’re offering, are doing it in stealth mode. They’re not noisy. There are no press releases. But I will tell you that next March, June, or September, what will come from them will create an Amazon-like experience for their customers and their employees.

Gardner: Tim, in listening to Jeff and Ray, it strikes me that they look at remote work not as the destination — but the starting point. Is that what you’re starting to see? Have people reconciled themselves with the notion that a significant portion of their workforce will probably be remote? And how do we use that as a starting point — and to what?

Minahan: As Jeff said, companies are rethinking their work models in ways they haven’t since Henry Ford. We just did OnePoll research with thousands of US-based knowledge workers. Some 47 percent have either relocated out of big metropolitan areas or are in the process of doing so right now. They can do so primarily because they’ve proven to themselves that they can be productive without necessarily being in the office.

Similarly, some 80 percent of companies are now looking at making remote work a more permanent part of their workforce strategy. And why is that? It is not just merely should Sam or Sally work in the office or work at home. No, they’re fundamentally rethinking the role of work, the workforce, the office, and what role the physical office should play.

And they’re seeing an opportunity, not just from real estate cost-reduction, but more so from access to talent. If we remember back nine months ago to before the great pandemic, we were having a different discussion. That discussion was the fact that there was a global talent shortage, according to McKinsey, of 95 million medium- to high-skilled workers.

That hasn’t changed. It was exacerbated at that time because we were organized around traditional work-hub models — where you build an office, build a call center, and you try like heck to hire people from around that area. Of course, if you happen to build in a metropolitan area right down the street from one of your top competitors — you can see the challenge.

In addition, there was a challenge around attaining the right skillsets to modernize and digitize your businesses. We’re also seeing an acceleration in the need for those skills because, candidly, very few businesses can continue to maintain their physical operations in light of the pandemic. They have had to go digital.

And so, as companies are rethinking all of this, they’re reviewing how to use technology to embrace a much more flexible work model, one that gives access to talent anywhere, just as Ray indicated. I like the nomadic work concept.

Now, how do we use technology to further raise the skill sets of all employees so they perform like the very best? This is where the interesting angle of AI and ML comes in: offering up the right insights to guide employees to the right next step in a very simple way. At the same time, that approach removes the noise from their day and helps them focus on the tasks they need to get done to be productive. It gives them the space to be creative and innovative and to drive that next level of growth for their company.

Gardner: Jeff, it sounds like the remote work and the future of work that Tim is describing sets us up for a force-multiplier when it comes to addressable markets. And not just addressable markets in terms of your customers, who can be anywhere, but also that your workers can be anywhere. Is that one of the things that will lead to a doubling of productivity?

Workers and customers anywhere

Vincent: Certainly. And the thing about truth is that it’s where you find it. And if it’s true in one area of human operations, it’s going to at least have some application in every other. For example, I live in the Central Valley of California. Because of our climate, the geology, and the way this valley was carved out of the hillside, we have a disproportionately high ability to produce food. So one of the major industries here in the Central Valley is agriculture.

You can’t do what we do here just anywhere because of all those considerations: climate, soil, and rainfall, when it comes. The fact that we have one of the tallest mountain ranges right next to us gives us tons of water, even if it doesn’t rain a lot here in Fresno. But you can’t outsource any of those things. You can’t move any of those things — but that’s becoming a rarity.

If you focus on a remote-first workplace, you can source talent from anywhere; you can locate your business center anywhere. So you get a much greater recruiting tool both for clientele and for talent.

Another thing that has been driven by this pandemic is that people have been forced to go home, stay there, and work there. Either you’re going to figure out a way to get around the obstacles of not being able to go to the office or you’re going to have to close down, and nobody wants to do that. So they’ve learned to adapt, by and large.

And the benefits that we’re seeing are just manifold. They go into everything. Our business agility is much greater. The human considerations of your team members improve, too. They have had an artificial dichotomy between work responsibilities and home life. Think of a single parent trying to raise a family and put bread on the table.

Work Has Changed Forever, So That Experience

Must Be Empowered to Any Location

Now, with the remote-first workplace, it becomes much easier. Your son, your daughter, they have a medical appointment; they have a school need; they have something going on in the middle of the day. Previously you had to request time off, schedule around that, and move other team members into place. And now this person can go and be there for their child, or their aging parents, or any of the other hundreds of things that can go sideways for a family.

With a cloud-based workforce, that becomes much less of a problem. You still have some challenges to overcome, but there are fewer of them. I think everybody is reaping the benefits, because with fewer people needing to be in the office, you can have a smaller office. Fewer people on the roads means less environmental impact from commuting for an hour twice a day.

Gardner: Ray Wolf, what is it about technology that is now enabling these people to be flexible and adaptive? What do you look for in technology platforms to give those people the tools they need?

Do more with less

Wolf: First, let's talk about the current technology situation. The average worker out there has eight applications and 10 windows open. The way technology is provisioned to some of our remote workers is working against them. We hand these technologies to everyone, but just because you give someone access to a customer relationship management (CRM) system or a human resources (HR) system doesn't necessarily make them more productive. It doesn't take into consideration how they like to do work. And when you bring on new employees, it's left up to the individual to figure out how to get stuff done.

With the new platforms, Citrix Workspace with intelligence, for example, we're able to take those mundane tasks and lock them into muscle memory through automation. And so, what we do is free up time and energy using the Citrix platform. Then people can start moving up and essentially upskilling, taking on higher cognitive tasks, and building new products and services.


That’s what we love about it. The other side is that it’s no-code and low-code. The key here is just figuring out where to get started and making sure that the workers have their fingerprints on the plan, because your workers today know exactly where the inefficiencies are. They know where the frustration is. We have a number of use cases where, in a matter of six weeks, we were able to unlock almost a day per week worth of productivity gains, for which one of our customers in the sales space, a sales vice president, coined the word “proactivity.”

For them, they were taking that one extra day a week and starting to be proactive by pursuing new sales and leads and driving revenue where they just didn’t have the bandwidth before.

Through our own polling of about 200 executives, we discovered that 50 percent of companies are scaling down on their resources because they are unsure of the future. And that leaves them with the situation of doing more with less. That’s why the automation platforms are ideal for freeing up time and energy, so they can deal with a reduced workforce but still gain the bandwidth to pursue new services and products. Then they can come out and be in that top 20 percent after the pandemic.

Gardner: Tim, I’m hearing Citrix Workspace referred to as an automation platform. How does Workspace not just help people connect, but also help them automate and accelerate productivity?

Keep talent optimized every day

Minahan: Ray put his finger on the pulse of the third dynamic we were seeing pre-pandemic, and it’s only been exacerbated. We talked first about the global shortage of medium- to high-skills talent. But then we talked about the acute shortage of digital skills that those folks need.

The third part is, if you’re lucky enough to have that talent, it’s likely they are very frustrated at work. A recent Gallup poll says 87 percent of employees are disengaged at work, and that’s being exacerbated by all of the things that Ray talked about. We’ve provided these workers with all of these tools, all these different channels, Teams and Slack and the like, and they’re meant to improve their performance in collaboration. But we have reached a tipping point of complexity that really has turned your top talent into task rabbits.

What Citrix does with our digital Workspace technology is it abstracts away all of that complexity. It provides unified access to everything an employee needs to be productive in one experience that travels with them. So, their work environment is this digital workspace — no matter what device they are on, no matter what location they are at, no matter what work channel they need to navigate across.

The second thing is it wraps that in security, both secure access on the way in (I call it the bouncer at the front door), as well as ongoing contextual application of security policies. I call that the bodyguard who follows you around the club to make sure you stay out of trouble. And that gives IT the confidence that those employees can indeed work wherever they need to, and from whatever device they need to, with a level of comfort that their company’s information and assets are made secure.

What gets exciting now is the intelligence components. Infusing this with ML and AI automates away and guides an employee through their workday. It automates away those menial tasks so they can focus on what’s important.


And that’s where folks like A2K come in. They can bring in their intellectual property and understanding of the business processes — using those low- to no-code tools — to actually develop extensions to the workspace that meet the needs of individual functions or individual industries and personalize the workspace experience for every individual employee.

Ray mentioned sales force productivity. They are also doing call center optimization. So, these are very discrete solutions that before required users to navigate across multiple different applications but are now handled through a micro-app layer that simplifies the engagement model for the employee, offering up the right insights and the right tasks at the right time so that they can do their very best work.

Gardner: Jeff Vincent, we have been talking about this in terms of worker productivity. But I’m wondering about leadership productivity. You are the CEO of a company that relies on remote work to a large degree. How do you find that tools like Citrix and remote-first culture works for you as a leader? Do you feel like you can lead a company remotely?

Workspace enhances leadership

Vincent: Absolutely. I’m trying to take a sip out of a fire hose, because everything I am hearing is exactly what we have been seeing — just put a bit more eloquently and with a bit more data behind it — for quite a long time now.

Leading a remote team really isn’t any different from leading a team you see in person. One of the aspects of leadership, as it pertains to this discussion, is making sure everybody knows what is expected of them and when the due date is, and enabling them with the tools they need to get the work done on time and on budget, right?


And with Citrix Workspace technology, the workflows automate expense report approvals, they automate calendar appointments, and automate the menial tasks that take up a lot of our time every single day. They now become seamless. They happen almost without effort. So that allows the leaders to focus on, “Okay, what does John need today to get done the task that’s going to be due in a month or in a quarter? Where are we at with this prospect or this leader or this project?”

And it allows everybody to take a moment, reflect on where they are, reflect on where they need to be, and then get more surgical with our people on getting there.

Gardner: Ray, also as a CEO, how do you see the intersection of technology, behavior, and culture coming together so that leaders like yourself are the ones who are going to be twice as productive?

Wolf: This goes to a human capital strategy, where you’re focusing on the numerator of a ratio. The cost of your resources, and the type of resources you need, fit within a band; that’s the denominator.

The numerator is the productivity you get out of your workforce. A number of things have to come into play there. It’s people, process, culture, and technology — but not independent or operating in silos.

And that’s the big opportunity Jeff and Tim are talking about here. Imagine when we start to bring system-level thinking to how we do work both inside and outside of our company. It’s the ecosystem, like hiring Ray Wolf as the individual contributor, yet getting 13 Ray Wolfs; that’s great.

But what happens if we orchestrate the work between finance, HR, the supply chain, and procurement? And then we take it an even bigger step by applying this outside of our company with partners?

How Lucid Technology Services Adapts to the Work-from-Home Revolution

We’re working with a very large distributor right now with hundreds of resellers. In order to close deals, they have to get into the other partner’s CRM system. Well, today, that happens with about eight emails over a number of days. And that’s just inefficient. But with Citrix Workspace you’re able to cross-integrate processes inside and outside of your company in a secure manner, so that entire ecosystems work seamlessly. As an example, just think about the travel reservation systems, which are not owned by the airlines, but are still a heart-lung function for them, and they have to work in unison.

We’re really jazzed about that. How did we discover this? Two things. One, I’m an aerospace engineer by first degree, so I saw this come together in complex machines, like jet engines. And then, second, by running a global company, I was spending 80 hours a week trying to reconcile disparate data: One data set says sales were up, another that productivity was up, and then my profit margins go down. I couldn’t figure it out without spending a lot of hours.

And then we started a new way of thinking, which is now accelerated with the Citrix Workspace. Disparate systems can work together. It makes clear what needs to be done, and then we can move to the next level, which is democratization of data. With that, you’re able to put information in front of people in synchronization. They can see complex supply chains complete, they can close sales quicker, et cetera. So, it’s really awesome.

I think we’re still at the tip of the iceberg. The innovation that I’m aware of on the product roadmap with Citrix is just awesome, and that’s why we’re here as a partner.

Gardner: Tim, we’re hearing about the importance of extended enterprise collaboration and democratization of data. Is there anything in your research that shows why that’s important and how you’re using that understanding of what’s important to help shape the direction of Citrix products?

Augmented workers arrive

Minahan: As Ray said, it’s about abstracting away that lower-level complexity, providing all the integrations, the source systems, the service security model, and providing the underlying workflow engines and tools. Then experts like Lucid and A2K can extend that to create new solutions for driving business outcomes.

From the research, we can expect the emergence of the augmented worker, number one. We’re already beginning to see it with bots and robotic process automation (RPA) systems. But at Citrix we’re going to be moving to a much higher level, where, similar to what Ray and Jeff were saying, we abstract away a lot of the menial tasks that can be automated. We can also perform higher-level tasks at a much more informed and rapid pace through the use of AI, which can compress and analyze massive amounts of data that would take us a very long time individually. ML can adapt and personalize that experience for us.


Secondly, the research indicates that while robots will replace some tasks and jobs, they will also create many new jobs. According to our Work 2035 research, you’ll see a rise in demand for new roles, such as a bot or AI trainer, a virtual reality manager, advanced data scientists, privacy and trust managers, and design thinkers, such as the folks at A2K and Lucid Technology Solutions, who are already working with clients to uncover the art of the possible and rethink business process transformation.

Importantly, we also identified the need for flexibility of work. That means shifting your mindset from thinking about a workforce in terms of full-time equivalents (FTEs) to thinking in terms of pools of talent. You understand the individual skill sets you need, bring them together and assemble them rather quickly to address a certain project or issue using digital Citrix Workspace technology, and then disassemble them just as quickly.

But you’ll also see a change in leadership. AI is going to take over a lot of those business decisions and possibly eliminate the need for some middle management teams. The bulk of our focus can be not so much managing as driving new creative ideas and innovation.

Gardner: I’d love to hear more from both Jeff and Ray about how businesses prepare themselves to best take advantage of the next stages of remote work. What do you tell businesses about thinking differently in order to take advantage of this opportunity?

Imagine what’s possible at work

Vincent: Probably the single biggest thing you can do to get prepared for the future of work is to rethink IT and your human capital, your team members. What do they need as a whole?

A business calls me up and says, “Our server is getting old, we need to get a new server.” And previously, I’d say, “Well, I don’t know if you actually need a server on-site, maybe we talk about the cloud.”

So educate yourself as a business leader on what’s possible out there. Then take that step: listen to your IT staff, listen to your IT director, whoever that may be, and talk to them about what is out there and what’s really possible. The technology enabling remote work has grown exponentially, even in the last few months, in its adoption and capabilities.

If you looked at the technology a year or two ago, that world doesn’t exist anymore. The technology has grown dramatically. The price point has come down dramatically. What is now possible wasn’t a few years ago.

So listen to your technology advisers, look at what’s possible, and prepare yourself for the next step. Take capital and reinvest it into the future of work.

Wolf: What we’re seeing that’s working the best is people are getting started anyway, anyhow. There really wasn’t a playbook set up for a pandemic, and it’s still evolving. We’re experiencing about 15 years’ worth of change in every three months of what’s going on. And there’s still plenty of uncertainty, but that can’t paralyze you.


We recommend that people fundamentally take a look at what your core business is. What do you do for a living? And then everything that enables you to do that is kind of ancillary or secondary.

When it comes to your workforce — whether it’s comprised of contractors or freelancers or permanent employees — no matter where they are, have a get stuff done mentality. It’s about what you are trying to get done. Don’t ask them about the systems yet. Just say, “What are you trying to get done?” And, “What will it take for you to double your speed and essentially only put half the effort into it?”

And listen. And then define, configure, and acquire the technologies that will enable that to happen. We need to think about what’s possible at the ground level, and not so much thinking about it all in terms of the systems and the applications. What are people trying to do every day and how do we make their work experience and their work life better so that they can thrive through this situation as well as the company?

Gardner: Tim, what did you find most surprising or unexpected in the research from the Work 2035 project? And is there a way for our audience to learn more about this Citrix research?

Minahan: One of the most alarming things to me from the Work 2035 project, the one where we’ve gotten the most visceral reaction, was the anticipation that, by 2035, in order to gain an advantage in the workplace, employees would literally be embedding microchips to help them process information and be far more productive in the workforce.

I’m interested to see whether that comes to bear or not, but certainly it’s very clear that with the role of AI and ML we’re only scratching the surface as we drive to new work models and new levels of productivity. We’re already seeing the beginnings of the augmented worker and just what’s possible when you have bots sitting — virtually and physically — alongside employees in the workplace.

We’re seeing the future of work accelerate much quicker than we anticipated. As we emerge out the other side of the pandemic, with the guidance of folks like Lucid and A2K, companies are beginning to rethink their work models and liberate their thinking in ways they hadn’t considered for decades. So it’s an incredibly exciting time.

Gardner: And where can people go to learn more about your research findings at Citrix?

Minahan: To view the Work 2035 project, you can find the foundational research at, but this is an ongoing dialogue that we want to continue to foster with thought leaders like Ray and Jeff, as well as academia and governments, as we all prepare not just technically but culturally for the future of work.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Citrix.


How the Journey to Modern Data Management is Paved with an Inclusive Edge-to-Cloud Data Fabric

The next BriefingsDirect Voice of Analytics Innovation discussion focuses on the latest insights into end-to-end data management strategies.

As businesses seek to gain insights for more elements of their physical edge — from factory sensors, myriad machinery, and across field operations — data remains fragmented. But a Data Fabric approach allows information and analytics to reside locally at the edge yet contribute to the global improvement in optimizing large-scale operations.

Stay with us now as we explore how edge-to-core-to-cloud dispersed data can be harmonized with a common fabric to make it accessible for use by more apps and across more analytics.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To learn more about the ways all data can be managed for today’s data-rich but too often insights-poor organizations, we’re joined by Chad Smykay, Field Chief Technology Officer for Data Fabric at Hewlett Packard Enterprise (HPE). The interview is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Chad, why are companies still flooded with data? It seems like they have the data, but they’re still thirsty for actionable insights. If you have the data, why shouldn’t you also have the insights readily available?

Smykay: There are a couple reasons for that. We still see today challenges for our customers. One is just having a common data governance methodology. That’s not just to govern the security and audits, and the techniques around that — but determining just what your data is.

I’ve gone into so many projects where they don’t even know where their data lives; just a simple matrix of where the data is, where it lives, and how it’s important to the business. This is really the first step that most companies just don’t do.

Gardner: What’s happening with managing data access when they do decide they want to find it? What’s been happening with managing the explosive growth of unstructured data from all corners of the enterprise?

Tame your data

Smykay: Five years ago, it was still the Wild West of data access. But we’re finally seeing some great standards being deployed, and application programming interfaces (APIs) for that data access. Companies are now realizing there’s power in having one API to rule them all. In this case, we see mostly the Amazon S3 API.

There are some other great APIs for data access out there, but just having more standardized API access into multiple datatypes has been great for our customers. It allows for APIs to gain access across many different use cases. For example, business intelligence (BI) tools can come in via an API. Or an application developer can access the same API. So that approach really cuts down on my access methodologies, my security domains, and just how I manage that data for API access.
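The “one API to rule them all” idea can be sketched in a few lines. This is an illustrative stand-in, not HPE or AWS code: the class and function names are hypothetical, and the in-memory backend stands in for whatever S3-compatible store, file system, or database sits behind a common access layer. The point is that a BI tool and an application developer hit the same interface regardless of backend.

```python
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """One common access interface that many backends can implement."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(ObjectStore):
    """Stand-in backend; a real deployment might wrap an S3-compatible
    endpoint, an NFS mount, or a database behind this same interface."""
    def __init__(self):
        self._blobs = {}
    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data
    def get(self, key: str) -> bytes:
        return self._blobs[key]

def report_row_count(store: ObjectStore, key: str) -> int:
    """A 'BI tool' and an app developer share this one call path,
    so access methodology and security domain are managed once."""
    return len(store.get(key).splitlines())

store = InMemoryStore()
store.put("sales/q3.csv", b"region,total\nwest,10\neast,12\n")
print(report_row_count(store, "sales/q3.csv"))  # 3 (header plus two rows)
```

Swapping `InMemoryStore` for another `ObjectStore` implementation changes nothing above the interface, which is the lock-in point the discussion returns to below.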

Gardner: And when we look to get buy-in from the very top levels of businesses, why are leaders now rethinking data management and exploitation of analytics? What are the business drivers that are helping technologists get the resources they need to improve data access and management?

Smykay: The business drivers gain when data access methods are as reusable as possible across the different use cases. It used to be that you’d have different point solutions, or different open source tools, needed to solve a business use-case. That was great for the short-term, maybe with some quarterly project or something for the year you did it in.

But then, down the road, say three years out, they would say, “My gosh, we have 10 different tools across the many different use cases we’re using.” It makes it really hard to standardize for the next set of use cases.


So that’s been a big business driver, gaining a common, secure access layer that can access different types of data. That’s been the biggest driver for our HPE Data Fabric. That and having common API access definitely reduces the management layer cost, as well as the security cost.

Gardner: It seems to me that such data access commonality, when you attain it, becomes a gift that keeps giving. The many different types of data often need to go from the edge to dispersed data centers and sometimes dispersed in the cloud. Doesn’t data access commonality also help solve issues about managing access across disparate architectures and deployment models?

Smykay: You just hit the nail on the head. Having commonality for that API layer really gives you the ability to deploy anywhere. When I have the same API set, it makes it very easy to go from one cloud provider, or one solution, to another. But that can also create issues in terms of where my data lives. You still have data gravity issues, for example. And if you don’t have portability of the APIs and the data, you start to see some lock-in with either the point solution you went with or the cloud provider that’s providing that data access for you.

Gardner: Following through on the gift that keeps giving idea, what is it about the Data Fabric approach that also makes analytics easier? Does it help attain a common method for applying analytics?

Data Fabric deployment options

Smykay: There are a couple of things there. One, it allows you to keep the data where it may need to stay. That could be for regulatory reasons or just depend on where you build and deploy the analytics models. A Data Fabric helps you to start separating out your computing and storage capabilities, but also keeps them coupled for wherever the deployment location is.


For example, a lot of our customers today have the flexibility to deploy IT resources out at the edge. That could be a small cluster or system that pre-processes data and then slowly trickles the data back to one location, a core data center or a cloud location. Having these systems at the edge gives them the benefit of both pushing information out and continuing to process at the edge. They can choose to deploy as they want, and make the data analytics solutions deployed at the core even better for reporting or modeling.
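The pre-process-at-the-edge, report-at-the-core pattern can be sketched simply. This is an assumption-laden illustration (function names are ours, not an HPE API): each edge site reduces raw sensor readings to a compact summary, and the core merges summaries without ever moving the raw data.

```python
import statistics

def summarize_at_edge(readings):
    """Runs locally at an edge site: reduce raw sensor readings to a
    small summary before trickling it back to the core."""
    return {
        "count": len(readings),
        "mean": statistics.fmean(readings),
        "max": max(readings),
    }

def merge_at_core(summaries):
    """Runs at the core data center or cloud: combine per-site summaries
    for global reporting and model management."""
    total = sum(s["count"] for s in summaries)
    weighted_mean = sum(s["mean"] * s["count"] for s in summaries) / total
    return {"sites": len(summaries), "count": total, "mean": weighted_mean}

# Two edge sites summarize locally; only the summaries cross the network.
site_a = summarize_at_edge([20.1, 20.4, 20.2])
site_b = summarize_at_edge([24.9, 25.1])
print(merge_at_core([site_a, site_b]))
```

The bandwidth saved scales with the reduction ratio: five raw readings shrink to two small summary records here, and in practice millions of readings can shrink to a handful of aggregates per interval.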

Gardner: It gets to the idea of act locally and learn globally. How is that important, and why are organizations interested in doing that?

Smykay: It’s just-in-time, right? We want everything to be faster, and that’s what this Data Fabric approach gets for you.

In the past, we’ve seen edge solutions deployed, but you weren’t processing a whole lot at the edge. You were pushing along all the data back to a central, core location — and then doing something with that data. But we don’t have the time to do that anymore.

Unless you can change the laws of physics — last time I checked, they haven’t done that yet — we’re bound by the speed of light for these networks. And so we need to keep as much data and systems as we can out locally at the edge. Yet we need to still take some of that information back to one central location so we can understand what’s happening across all the different locations. We still want to make the rearview reporting better globally for our business, as well as allow for more global model management.

Gardner: Let’s look at some of the hurdles organizations have to overcome to make use of such a Data Fabric. What is it about the way that data and information exist today that makes it hard to get the most out of it? Why is it hard to put advanced data access and management in place quickly and easily?

Track the data journey

Smykay: It’s tough for most organizations because they can’t take the wings off the airplane while flying. We get that. You have to begin by creating some new standards within your organization, whether that’s standardizing on an API set for a single datatype or for multiple datatypes.

Then you need to standardize the deployment mechanisms within your organization for that data. With the HPE Data Fabric, we give the ability to just say, “Hey, it doesn’t matter where you deploy. We just need some x86 servers and we can help you standardize either on one API or multiple APIs.”

We now support more than 10 APIs, as well as the many different data types that these organizations may have.


Typically, we see a lot of data silos still out there today with customers – and they’re getting worse. By worse, I mean they’re now all over the place between multiple cloud providers. I may use some of these cloud storage bucket systems from cloud vendor A, but I may use somebody else’s SQL databases from cloud vendor B, and those may end up having their own access methodologies and their own software development kits (SDKs).

Next you have to consider all the networking in the middle. And let’s not even bring up security and authorization to all of them. So we find that the silos still exist, but they’ve just gotten worse and they’ve just sprawled out more. I call it the silo sprawl.

Gardner: Wow. So, if we have that silo sprawl now, and that complexity is becoming a hurdle, the estimates are that we’re going to just keep getting more and more data from more and more devices. So, if you don’t get a handle on this now, you’re never going to be able to scale, right?

Smykay: Yes, absolutely. If you’re going to have diversity of your data, the right way to manage it is to make it use-case-driven. Don’t boil the ocean. That’s where we’ve seen all of our successes. Focus on a couple of different use cases to start, especially if you’re getting into newer predictive model management and using machine learning (ML) techniques.

But, you also have to look a little further out to say, “Okay, what’s next?” Right? “What’s coming?” When you go down that data engineering and data science journey, you must understand that, “Oh, I’m going to complete use case A, that’s going to lead to use case B, which means I’m going to have to go grab from other data sources to either enrich the model or create a whole other project or application for the business.”

You should create a data journey and understand where you’re going so you don’t just end up with silo sprawl.

Gardner: Another challenge for organizations is their legacy installations. When we talk about zettabytes of data coming, what is it about the legacy solutions — and even the cloud storage legacy — that organizations need to rethink to be able to scale?

Zettabytes of data coming

Smykay: It’s a very important point. Can we just have a moment of silence? Because now we’re talking about zettabytes of data. Okay, I’m in.

Some 20 years ago, we were talking about petabytes of data. We thought that was a lot of data, but if you look out to the future, studies show connected Internet of Things (IoT) devices generating zettabytes of data.


If you don’t get a handle on where your data points are going to be generated, how they’re going to be stored, and how they’re going to be accessed now, this problem is just going to get worse and worse for organizations.

Look, Data Fabric is a great solution. We have it, and it can solve a ton of these problems. But as a consultant, if you don’t get ahead of these issues right now, you’re going to be under the umbrella of probably 20 different cloud solutions for the next 10 years. So, really, we need to look at the datatypes that you’re going to have to support, the access methodologies, and where those need to be located and supported for your organization.

Gardner: Chad, it wasn’t that long ago that we were talking about how to manage big data, and Hadoop was a big part of that. NoSQL and other open source databases in particular became popular. What is it about the legacy of the big data approach that also needs to be rethought?

Smykay: One common issue we often see is the tendency to go either/or. By that I mean saying, “Okay, we can do real-time analytics, but that’s a separate data deployment. Or we can do batch, rearview reporting analytics, and that’s a separate data deployment.” But one thing that our HPE Data Fabric has always been able to support is both — at the same time — and that’s still true.

So if you’re going down a big data or data lake journey (I think the term now is a data lakehouse, that’s a new one), basically I need to be able to do my real-time analytics as well as my traditional BI reporting, or rearview mirror reporting. And that’s what we’ve been doing for over 10 years. That’s probably one of the biggest limitations we have seen.

But it’s a heavy lift to get that data from one location to another, just because of the metadata layer of Hadoop. And then you had dependencies on some of the NoSQL databases out there on Hadoop, which caused performance issues. You can only get so much performance out of those databases, which is why we have NoSQL databases out of the box in our Data Fabric — and we’ve never run into any of those issues.
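The both-at-once idea, real-time analytics and batch reporting over the same data, can be sketched as one shared event log with two read paths. This is an illustrative stand-in of the pattern, not the actual Data Fabric implementation; the class and method names are hypothetical.

```python
class EventLog:
    """One shared data layer feeding both paths at once: a streaming
    consumer (real-time analytics) and a batch reader (rearview reporting),
    instead of two separate data deployments."""
    def __init__(self):
        self._events = []
        self._subscribers = []
    def subscribe(self, callback):
        """Register a real-time consumer that sees each event as it lands."""
        self._subscribers.append(callback)
    def append(self, event):
        self._events.append(event)       # durable, readable by batch jobs
        for cb in self._subscribers:     # pushed immediately, real time
            cb(event)
    def scan(self):
        """Full history for batch/BI reporting over the same store."""
        return list(self._events)

alerts = []
log = EventLog()
log.subscribe(lambda e: alerts.append(e) if e["temp"] > 100 else None)

for temp in (95, 103, 98):
    log.append({"temp": temp})

print(len(alerts))       # the real-time path fired on the one hot reading
print(len(log.scan()))   # the batch path still sees all three events
```

With separate real-time and batch deployments, the `append` above would instead be a copy job between systems, which is exactly the heavy lift described.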

Gardner: Of course, we can’t talk about end-to-end data without thinking about end-to-end security. So, how do we think about the HPE Data Fabric approach helping when it comes to security from the edge to the core?

Secure data from edge to core

Smykay: This is near-and-dear to my heart because everyone always talks about these great solutions out there to do edge computing. But I always ask, “Well, how do you secure it? How do you authorize it? How does my application authorization happen all the way back from the edge application to the data store in the core or in the cloud somewhere?”

That’s what I call off-sprawl, where those issues just add up. If we don’t have one way to secure and manage all of our different data types, then what happens is, “Okay, well, I have this object-based system out there, and it has its own authorization techniques. It has its own authentication techniques. And, by the way, it has its own way of enforcing security in terms of who has access to what.” And I haven’t even talked about monitoring yet, right? How do we monitor this solution?

So, now imagine doing that for each type of data that you have in your organization — whether it’s a SQL database, because that application is just a driving requirement for that, or a file-based workload, or a block-based workload. You can see where this starts to steamroll and build up to be a huge problem within an organization, and we see that all the time.


And, by the way, when it comes to your application developers, that becomes the biggest annoyance for them. Why? Because when they want to go and create an application, they have to go and say, “Okay, wait. How do I access this data? Oh, it’s different. Okay. I’ll use a different key.” And then, “Oh, that’s a different authorization system. It’s a completely different way to authenticate with my app.”

I honestly think that’s why we’re seeing a ton of issues today in the security space. It’s why we’re seeing people get hacked. It happens all the way down to the application layer, as you often have this security sprawl that makes it very hard to manage all of these different systems.

Gardner: This word sprawl has come up several times now. We’re sprawling with this, we’re sprawling with that; there’s complexity, and then there’s going to be even more scale demanded.

The bad news is there is quite a bit to consider when you want end-to-end data management that takes the edge into consideration and has all these other anti-sprawl requirements. The good news is a platform and standards approach with a Data Fabric forms the best, single way to satisfy these many requirements.

So let’s talk about the solutions. How does HPE Ezmeral generally — and the Ezmeral Data Fabric specifically — provide a common means to solve many of these thorny problems?

Smykay: We were just talking about security. We provide the same security domain across all deployments. That means having one web-based user interface (UI), or one REST API call, to manage all of those different data types.


We can be deployed across any x86 system. And having that multi-API access — we have more than 10 — allows for multi-data access. It includes everything from storing data into files to storing data in blocks — we’re soon going to be able to support blocks in our solution — as well as storing data into event streams such as Kafka, and then into a NoSQL database as well.
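To make the multi-API idea concrete, here is a minimal sketch in Python. Everything in it — the `FabricClient` class and its methods — is a hypothetical stand-in, not the actual Ezmeral API; it only illustrates the concept of one authenticated session fronting file, stream, and NoSQL-style access:

```python
# Illustrative only: a hypothetical facade over one data store that
# exposes file, stream, and key-value (NoSQL) access with one identity.

class FabricClient:
    """Hypothetical client: one authenticated session, several data APIs."""

    def __init__(self, token: str):
        self.token = token     # single credential for every data type
        self._files = {}       # stands in for a file API
        self._streams = {}     # stands in for a Kafka-style stream API
        self._tables = {}      # stands in for a NoSQL table API

    def write_file(self, path: str, data: bytes) -> None:
        self._files[path] = data

    def publish(self, topic: str, message: dict) -> None:
        self._streams.setdefault(topic, []).append(message)

    def put(self, table: str, key: str, value: dict) -> None:
        self._tables.setdefault(table, {})[key] = value


# One session, three data types -- no separate auth system per store.
client = FabricClient(token="single-sign-on-token")
client.write_file("/sensors/raw.csv", b"ts,temp\n0,21.5\n")
client.publish("sensor-events", {"ts": 0, "temp": 21.5})
client.put("sensor-latest", "sensor-1", {"temp": 21.5})
```

The point of the sketch is the shape, not the calls: every data type is reached through the same session and the same credential, which is what removes the per-system auth sprawl described above.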

Gardner: It’s important for people to understand that HPE Ezmeral is a larger family and that the Data Fabric is a subset. But the whole seems to be greater than the sum of the parts. Why is that the case? How has what HPE is doing in architecting Ezmeral been a lot more than data management?

Smykay: Whenever you have this “whole is greater than the sum of the parts,” you start reducing so many things across the chain. When we talk about deploying a solution, that includes, “How do I manage it? How do I update it? How do I monitor it?” And then back to securing it.

Honestly, there is a great report from IDC that says it best. We show a 567-percent, five-year return on investment (ROI). That’s not from us, that’s IDC talking to our customers. I don’t know of a better business value from a solution than that. The report speaks for itself, but it comes down to these paper cuts of managing a solution. When you start to have multiple paper cuts, across multiple arms, it starts to add up in an organization.

Gardner: Chad, what is it about the HPE Ezmeral portfolio and the way the Data Fabric fits in that provides a catalyst to more improvement?

All data put to future use

Smykay: One, the HPE Data Fabric can be deployed anywhere. It can be deployed independently. We have hundreds and hundreds of customers, and we have to continue supporting them on their journey of separating compute and storage. But today we are already shipping a solution where we can containerize the Data Fabric as part of our HPE Ezmeral Container Platform and also provide persistent storage for your containers.

The HPE Ezmeral Container Platform comes with the Data Fabric; it’s part of the persistent storage. That gives you full end-to-end management of the containers — not only the application APIs, but the management and the data portability.

So, now imagine being able to ship the data by containers from your location, as it makes sense for your use case. That’s the powerful message. We have already been down the road of separating out compute and storage; that road is not going away, we have many customers for it, and it makes sense for many use cases. And we’re in general availability today. Some other solutions out there are still on a roadmap as far as we know, but at HPE we’re there today. Customers have this deployed, and they’re going down their compute-and-storage separation journey with us.

Gardner: One of the things that gets me excited about the potential for Ezmeral is when you do this right, it puts you in a position to be able to do advanced analytics in ways that hadn’t been done before. Where do you see the HPE Ezmeral Data Fabric helping when it comes to broader use of analytics across global operations?

Smykay: One of our CMOs, Jack Morris, said it best: “If it’s going to be about the data, it better be all about the data.”


When you automate data management across multiple deployments — managing it, monitoring it, keeping it secure — you can then focus on the actual use cases. You can focus on the data itself, right? That’s what’s living in the HPE Data Fabric. That is the higher-level takeaway: our users are not spending all their time and money worrying about the data lifecycle. Instead, they can now use that data for their organizations and for future use cases.

HPE Ezmeral sets your organization up to use your data instead of worrying about your data. We are set up to support newer use cases — separating out compute and storage, and running in containers — and we’ve been doing that for years.

Gardner: How about some of the technical ways that you’re doing this? Things like global namespaces, analytics-ready fabrics, and native multi-temperature management. Why are they important specifically for getting to where we can capitalize on those new use cases?

Smykay: The global namespace is probably the top feature we hear back from our customers on. It allows them to gain one view of the data with the same common security model. Imagine you’re a lawyer sitting at your computer: you double-click on a Data Fabric drive, and you can literally see all of your deployments globally. That helps with discovery. That helps with onboarding your data engineers and data scientists. Over the years, one of the biggest challenges has been that organizations spend a lot of time building up their data science and data engineering groups — and a lot of that time on just discovering the data.

A global namespace means I’m reducing my discovery time — the time it takes to figure out where the data is. As for being analytics-ready, we’ve been supporting that in the open source community for more than 10 years. There are a ton of Apache open source projects out there, like Presto, Hive, and Drill. We’re also Spark-ready, and we have been supporting Spark for many years. That’s pretty much the de facto standard we’re seeing when it comes to doing any kind of real-time processing or analytics on data.
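Mechanically, a global namespace behaves like a single mount point, so data discovery reduces to an ordinary directory walk rather than a per-cluster inventory. A small illustration — the mount path in the comment is hypothetical:

```python
import os

def discover_datasets(fabric_mount: str, suffix: str = ".parquet") -> list:
    """Walk a single mounted namespace and list matching datasets,
    regardless of which physical deployment actually holds them."""
    found = []
    for root, _dirs, files in os.walk(fabric_mount):
        for name in files:
            if name.endswith(suffix):
                found.append(os.path.join(root, name))
    return sorted(found)

# Usage (the path is a hypothetical Data Fabric mount point):
# discover_datasets("/mapr/prod")
```

Because every deployment is reachable under one tree, the same walk covers edge, core, and cloud data without any per-system credentials or discovery tooling.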

As for multi-temperature, that feature allows you to decrease the cost of your deployment while still managing all your data in one location. There are a lot of different ways we do that: we use erasure coding, and we can tier off to Amazon S3-compliant devices to reduce the overall cost of deployment.
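A multi-temperature policy can be pictured as a simple rule that maps data age and access frequency to a storage tier. The thresholds below are invented for illustration; real Data Fabric tiering is configured as policy in the product, not coded like this:

```python
from datetime import timedelta

def choose_tier(age: timedelta, reads_last_30d: int) -> str:
    """Illustrative multi-temperature policy (thresholds are invented):
    hot data stays fully replicated for performance, warm data is
    erasure-coded to cut raw capacity, and cold data is offloaded to
    an S3-compatible object store."""
    if age < timedelta(days=30) or reads_last_30d > 100:
        return "hot: full replication"
    if age < timedelta(days=365):
        return "warm: erasure coding"
    return "cold: S3-compatible object tier"
```

The design point is that the data stays in one logical location and under one security model; only its physical placement (and therefore its cost) changes as it cools.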

These features contribute to making it still easier. You gain a common Data Fabric, common security layer, and common API layer.

Gardner: Chad, we talked about much more data at the edge, how that’s created a number of requirements, and the benefits of a comprehensive approach to data management. We talked about the HPE Data Fabric solution, what it brings, and how it works. But we’ve been talking in the abstract.

What about on the ground? Do you have any examples of organizations that have bitten off and made Data Fabric core for them? As an adopter, what do they get? What are the business outcomes?

Central view benefits businesses 

Smykay: We’ve been talking a lot about edge-to-core-to-cloud, and the one example that’s just top-of-mind is a big, tier-1 telecoms provider. This provider makes the equipment for your AT&Ts and your Vodafones. That equipment sits out on the cell towers. And they have many Data Fabric use cases, more than 30 with us. 

But the one I love most is real-time antenna tuning. They’re able to improve customer satisfaction in real time and reduce the need to physically return to hotspots on an antenna. They do it via real-time data collection on the antennas and then aggregating that across all of the different layers that they have in their deployments.


They gain a central view of all of the data using a modern API for the DevOps needs. They still centrally process data, but they also process it at the edge today. We replicate all of that data for them. We manage that for them and take a lot of the traditional data management tasks off the table for them, so they can focus on the use case of the best way to tune antennas.

Gardner: They have the local benefit of tuning the antenna. But what’s the global payback? Do we have a business quantitative or qualitative returns for them in doing that?

Smykay: Yes, but they’re pretty secretive. We’ve heard that they’ve gotten a payback in the millions of dollars, but an immediate, direct payback for them is in reducing application development spend across every layer. That reduction is because they can use the same type of API to publish the data as a stream, and then use the same API semantics to secure and manage it all. They can then take that same application, which is deployed in a container today, and easily deploy it to any remote location around the world.

Gardner: There’s that key aspect of the application portability that we’ve danced around a bit. Any other examples that demonstrate the adoption of the HPE Data Fabric and the business pay-offs?

Smykay: Another one off the top of my head is a midstream oil and gas customer in the Houston area. This one’s not so much about edge-to-core-to-cloud. This is more about consolidation of use cases.

We discussed earlier that we can support both rearview reporting analytics and real-time analytics use cases. In this case, they actually have multiple use cases — up to about five or six right now. Among them, they are able to do predictive failure reports for heat exchangers. These heat exchangers are deployed regionally, and they are really temperamental: you have to monitor them all the time.

But now they have a proactive model: they can do predictive failure monitoring on those heat exchangers just by checking the temperatures on the floor cameras. They bring in all of the real-time camera data, and they can predict, “We think we’re having an issue with this heat exchanger at this time on this day.” That decreases management cost for them.
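The kind of predictive check being described can be as simple as comparing a recent average of temperature readings against a known baseline. This is an illustrative sketch under that assumption, not the customer's actual model:

```python
def temperature_alert(readings: list, baseline: float, limit: float = 5.0) -> bool:
    """Flag a heat exchanger when the average of its most recent
    readings drifts more than `limit` degrees from its baseline.
    Window size and limit are invented for illustration."""
    recent = readings[-10:]          # last 10 readings from the camera feed
    avg = sum(recent) / len(recent)
    return abs(avg - baseline) > limit
```

In practice a production model would be trained on historical failures, but even this toy version shows why streaming camera data into one fabric matters: the check runs continuously instead of waiting for a technician's rounds.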

They also gain a dynamic parts-management capability for all of the inventory in their warehouses. They can deliver parts faster and reduce their capital expenditure (CapEx) costs, too. And they have gained material measurement balances: when they push oil across a pipeline, they can detect where that balance is off along the pipeline and where they’re losing money — because if they are not pushing oil across the pipe at x amount of psi, they’re losing money.
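The material-balance idea is likewise straightforward to sketch: compare what enters a pipeline segment with what leaves it, and flag any discrepancy beyond a tolerance. The function and tolerance below are illustrative only:

```python
from typing import Optional

def balance_loss(in_flow: float, out_flow: float,
                 tolerance: float = 0.01) -> Optional[float]:
    """Return the fractional loss across a pipeline segment when it
    exceeds the tolerance, else None. Tolerance is an invented figure."""
    loss = (in_flow - out_flow) / in_flow
    return loss if loss > tolerance else None
```

Running this per segment over streaming meter data is what lets the operator localize where along the pipe the balance goes off, rather than only noticing a shortfall at the terminal.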

So they’re able to dynamically detect that and fix it along the pipe. They also have a pipeline leak detection that they have been working on, which is modeled to detect corrosion and decay.

The point is there are multiple use cases. But because they’re able to start putting those data types together and continue to build off of it, every use case gets stronger and stronger.

Gardner: It becomes a virtuous adoption cycle; the more you can use the data generally, then the more value, then the more you invest in getting a standard fabric approach, and then the more use cases pop up. It can become very powerful.

This last example also shows the intersection of operational technology (OT) and IT. Together they can start to discover high-level, end-to-end business operational efficiencies. Is that what you’re seeing?

Data science teams work together

Smykay: Yes, absolutely. A Data Fabric is kind of the Kumbaya moment among these different groups. If they’re able to standardize on the IT and developer side, it makes it easier for them to talk the same language. I’ve seen this with the oil and gas customer: now those data science and data engineering teams work hand in hand, which is where you want to get in your organization. You want those IT teams working with the teams managing your solutions today. That’s what I’m seeing. As you get a better, more common data model or fabric, you get faster — and you get management savings by having your people working better together.

Gardner: And, of course, when you’re able to do data-driven operations, procurement, logistics, and transportation, you get to what we’re generally referring to as digital business transformation.

Chad, how does a Data Fabric approach then contribute to the larger goal of business transformation?

Smykay: Depending on the size of the organization, you’re talking to three to five different groups — and sometimes 10 different people — just to put a use case together. But as you create a common data access method, you see an organization where it’s easier and easier not only for your use cases, but for your businesses to work together on the goal of whatever you’re trying to do with your data.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


How COVID-19 teaches higher education institutes to embrace latest IT to advance remote learning

Like many businesses, innovators in higher education have been transforming themselves for the digital age for years, but the COVID-19 pandemic nearly overnight accelerated the need for flexible new learning models.

As a result, colleges and universities must rapidly redefine and implement a new and dynamic balance between in-person and remote interactions. This new normal amounts to more than a repaving of centuries-old, in-class traditions of higher education with a digital wrapper. It requires re-invention — and perhaps new ways of redefining — of the very act of learning itself.

The next BriefingsDirect panel discussion explores how such innovation today in remote learning may also hold lessons for how businesses and governments interact with and enlighten their workers, customers, and ultimately citizens.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to share recent experiences in finding new ways to learn and work during a global pandemic are Chris Foward, Head of Services for IT Services at The University of Northampton in the UK; Tim Minahan, Executive Vice President of Business Strategy and Chief Marketing Officer at Citrix; and Dr. Scott Ralls, President of Wake Tech Community College in Raleigh, North Carolina. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Scott, tell us about Wake Tech Community College and why you’ve been able to accelerate your path to broader remote learning?

Ralls: Wake Tech is the largest college in North Carolina, one of the largest community colleges in the United States. We have 75,000 total students across all of our different program areas spread over six different campuses.

In mid-March, we took an early step in moving completely online because of the COVID-19 pandemic. But if we had just started our planning at that point, I think we would have been in trouble; it would have been a big challenge for us, as it has been for much of higher education.

The journey really began six years earlier with a plan to move to a more online-supported, virtual-blended world. For us, the last six months have been about sprinting. We are on a journey that hasn’t been so much about changing our direction or changing our efficacy, but really sprinting the last one-fourth of the race. And that’s been difficult and challenging.

But it’s not been as challenging as if you were trying to figure out the directions from the very beginning. I’ve been very proud of our team, and I think things are going remarkably well here despite a very challenging situation.

Education sprints online

Gardner: Chris, please tell us about The University of Northampton and how the pandemic has accelerated change for you.

Foward: The University of Northampton has invested very heavily in its campus. A number of years ago we built a new one, the Waterside campus. The Waterside campus was designed to work with active blended learning (ABL) as an approach to delivering all coursework, and — similar to Wake Tech — we’ve faced challenges around how we deliver online teaching.

We were in a fortunate position because during the building of our new campus we implemented all-new technology from the ground up — from our plant-based systems right through to our backend infrastructure. We aimed at taking on new technologies that were either cloud-based or that allowed us to deliver teaching in a remote manner. That was done predominantly to support our ABL approach to delivery of education. But certainly the COVID-19 pandemic has sped up the uptake of those services.

Gardner: Chris, what was the impetus to the pre-pandemic blended learning? Why were you doing it? How did technology help support it?

Foward: The University of Northampton has been moving toward its current institutional approach to learning and teaching since 2014. We never conceived of this as a large-scale online learning or distance learning solution. But ABL does rely on fluent and thoughtful use of technologies for learning.


And this has stood the university in good stead in terms of how we actually deliver to our students. What our lecturers and teachers found is that the work they’ve done since 2014 meant we were able to very quickly change from an on-campus-taught environment to a digital experience for our students.

Gardner: Scott, has technology enabled you to seek remote learning, or was remote learning the goal and then you had to go find the technology? What’s the relationship between remote learning and technology?

Ralls: For us, particularly in community colleges, it was more the second: remote learning is an important priority for us because a majority of our students work. The convenience of remote learning started community colleges in the United States down that path much more quickly than other forms of higher education. And that helped us, years ago, to start thinking about what technologies are required.

Our college has been very thoughtful about the equity issues in remote learning. Some students succeed on remote learning platforms, while others struggle. It was much more about the need for remote learning to give working students capacity and convenience, and then looking at the technologies and best practices to achieve those goals.

Businesses learn from schools’ success

Gardner: Tim, when you hear Chris and Scott describing what they are doing in higher education, does it strike you that they are leaders and innovators compared generally to businesses? Should businesses pay attention to what’s going on in higher education these days, particularly around remote, balanced, and blended interactions?

Minahan: Yes, I certainly think they are leading, Dana. That leadership comes from having been prepared for this in advance. If there’s any silver lining to this global crisis we are all living through, it’s that it’s caused organizations and participants in all industries to rethink how they work, school, and live.

Employers, having seen that work can now actually happen outside of an office, are catching up similarly. They’re rethinking their long-term workforce strategies and work models. They’re embracing more flexible and hybrid work approaches for the long-term.

And at lower costs, with improved productivity and engagement, they gain access to new pools of talent that were previously inaccessible to them in the traditional work-hub model, where you build a big office or call center and then hire folks to fill them. Now, they can remotely reach talent in any location — including retirees, stay-at-home parents, and caretakers, who can be reactivated into the workforce.


Similar to the diverse student body you’re seeing at Wake Tech, to do this they need a foundation — a digital workspace platform — that allows them to deliver consistent and secure access to the resources that employees, staff, or in this case students need to do their very best work across any channel or location. That can be in the classroom, on the road, or, as we’ve seen recently, in the home.

I think going forward, you’re going to see not just higher education, which we are hearing about here, but all industries begin to embrace this blended model for some very real benefits, both to their employees and their constituents, but to their own organizations as well.

Gardner: Chris, because Northampton put an emphasis on technology to accomplish blended learning, was the typical technology of a few years back — traditional, stack-based enterprise IT — a hindrance? Did you need to rethink technology as you were trying to accomplish your education goals?

Tech learning advances agility

Foward: Yes, we did. When we built our new campus, we looked at what new technologies were coming onto the market. We then moved toward a couple of key suppliers to ensure that we received best-in-class services as well as easy-to-use products. We chose partners like Microsoft for our software programs, like Office, and those sorts of productivity products.

We chose Cisco for networking and servers, and we also pulled in Citrix for delivery of our virtual applications and desktops from any location, anywhere, anytime. That gives our students the flexibility to access the systems from a smartphone and view specific CAD-type models through the solutions we have. It allows our Faculty of Business and Law to present some of the bespoke software that they use. We can tailor the solutions that students see within these environments to meet the educational needs of the courses they are attending.

Gardner: Scott, at Wake Tech, as president of the college, you’re probably not a technologist by trade. But how can you not be a technologist nowadays, when you’re delivering everything as remote learning? How has your relationship with technology evolved? Have you had to learn a lot more tech?

Ralls: Oh, absolutely, yes. And even my own use of technology has evolved quite a bit. I was always aware and had broad goals. But, as I mentioned, we started sprinting very quickly, and when you are sprinting you want to know what’s happening.

We are very fortunate to have a great IT team that is both thoughtful in its direction and very urgent in its movement. Those two things gave me a lot of confidence. It’s also allowed us to sprint to places that we wouldn’t have been able to had these circumstances not come along.


I will use an example. We have six campuses. I used to do face-to-face forums with faculty, staff, and students — three meetings on every campus, but only once a semester. Now, I do those kinds of forums most days with students, faculty, or staff using the technology. Many of us have found that there are greater efficiencies to be achieved in ways we would not have tried had it not been for the [pandemic] circumstances.

And I think that after we get past the issues we are facing with the pandemic, our world will be completely changed, because this has accelerated our movement in this direction — and accelerated our fluency with the technology as well.

Gardner: Tim, we have seen over the years that the intersection between business and technology is not always the easiest relationship. Is what we’re seeing now as a result of the pandemic helping organizations attain the agility that they perhaps struggled to find before?

Minahan: Yes, indeed, Dana. As you just heard, another thing the pandemic has taught us is that agility is key. Fixed infrastructure — whether it’s real estate, work-hub-centric models, data centers with loads of servers, or on-premises applications — has proven to be an anchor during the pandemic. Organizations that rely heavily on such fixed infrastructure have had a much more difficult time shifting to a remote work or remote learning model to keep their employees and students safe and productive.

In fact, as an anecdote, we had one financial services customer, a CIO, recently say, “Hey, we can’t add servers and capacity fast enough.” And so, like Scott and Chris, we’re seeing an increasing number of our customers adopt more variable operating models in everything they do. They are rethinking their real estate, staffing, and IT infrastructure. As a result, we’re seeing customers take their measured plans for a one- to three-year transition to the cloud and accelerate that to three months, or even a few weeks.

They’re also increasing adoption of digital workspaces so that they can provide a consistent and secure work or learning experience for employees or students across any channel or location. It really boils down to organizations building agility into their operations so they can scale up quickly in the face of the next inevitable, unplanned crisis — or opportunity.

Gardner: We’ve been talking about this through the lens of the higher education institute and the technology provider. But what’s been the experience over the past several months for the user? How are your students at Northampton adjusting to this, Chris? Is this rapid shift a burden or is there a silver lining to more blended and remote learning?

Easy-to-use options for student adoption

Foward: I’ll be honest, I think our students have yet to adopt it fully.

There are always challenges with new technology when it comes in. The uptake will be mainly driven in October when we see our mainstream student cohorts come onboard. I do think the types of technologies we have chosen are key, because making technology simple to use and easy to access will drive further adoption of those products.

What we have seen is that our staff’s uptake on our Citrix environment was phenomenal. And if there’s one positive to take from the COVID-19 situation it is the adoption of technology. Our staff has taken to it like ducks to water. Our IT team has delivered something exceptional, and I think our students will also see a massive benefit from these products, and especially the ease of use of these products.

So, yes, the key thing is making the products easily accessible and easy to use. If we overcomplicate it, you won’t get adoption and you won’t get an experience that customers need when they come to our education institutions.

Gardner: Dr. Ralls, have the students adjusted to these changes in a way that gives them agility as they absorb education?

Ralls: They have. All of us — whether we work, teach, or are students at Wake Tech — have gained more confidence in these environments than we had before. I have regular conversations with these students. There was a lot of uncertainty, just like for many of us working remotely. How would that all work?

And we’ve now seen that we can do it. Things will still change as we make the adjustments we need to. And for many of our students, it isn’t just how things change in the class, but in all of the things that they need around that class. For example, we have tutoring centers in our libraries. How do we make those work remotely and by appointment? We all wondered how that would work. And now we’ve seen that it can work, and it does work; and there’s an ease of doing that.


Because we are a community college, we’re an open-admissions college. Many of our students haven’t had the level of academic preparation or opportunity that others have had. And so for some of our students who have a sense of uncertainty or anxiety, we have found that there is a challenge for them to move to remote learning and to have confidence initially.

Sometimes we can see that in withdrawals, but we’ve also found that we can rally around our students using different tools. We have found the value of different types of remote learning that are effective. For example, we’re doing a lot of the HyFlex model now, which is a combination of hybrid and remote, online-based education.

Over time we have seen in many of our classes that where classes started as hybrid, students then shifted to more fully remote and online. So you see the confidence grow over time.

Gardner: Scott, another benefit of doing more online is that you gain a data trail. When it comes to retention, and seeing how your programs are working, you have a better sense of participation — and many other metrics. Does the data that comes along with remote learning help you identify students at risk, and are there other benefits?

Remote learning delivers data

Ralls: We’re a very data-focused college. For instance, even before we moved to more remote learning, every one of our courses had an online shell. We had already moved to where every course was available online. So we knew when our students were interacting.

One of the shifts we’ve seen at Wake Tech with more remote services is the expansion of those hours, as well as the ability to access counseling — and all of our services remotely — and through answer centers and other things.

But that means we had to change our way of thinking. Before, we knew when students took our courses, because they took them when we scheduled the courses. Now, as they work remotely, we can also tell when they are working. And we know from many of our students that they are more likely to be online and engaged in our coursework between the hours of 5 pm and 10 pm, as opposed to 8 am and noon. Most of our operations, when we only had physical sites, ran from 8 am to 5 pm. Consequently, we have had to move our hours, and I think that’s something that will always be different about us. So the data does give us that indication.
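Spotting that 5 pm-to-10 pm engagement window from course-activity timestamps is a simple aggregation. A hedged sketch — the data shape and window length are illustrative, not Wake Tech's actual analytics:

```python
from collections import Counter

def peak_window(activity_hours: list, window: int = 5) -> tuple:
    """Given the hour-of-day (0-23) of each student interaction,
    return the contiguous block of `window` hours with the most
    activity, as (start_hour, end_hour)."""
    per_hour = Counter(activity_hours)
    best_start = max(
        range(24 - window + 1),
        key=lambda h: sum(per_hour[h + i] for i in range(window)),
    )
    return best_start, best_start + window
```

Feeding the login hours from an online course shell into a function like this is all it takes to discover that support services scheduled for 8 am to 5 pm miss the hours when students are actually working.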

We had to change our way of thinking. Before, we knew when students took our courses because they took them when you scheduled the courses. Now, remotely we can also tell when they are working. We have had to move the hours to when they are actually operating.
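The kind of engagement-hour analysis Dr. Ralls describes can be sketched in a few lines. This is a hypothetical illustration, not Wake Tech’s actual tooling; the function name and sample timestamps are invented:

```python
from collections import Counter
from datetime import datetime

def peak_engagement_hours(timestamps, top_n=3):
    """Return the top_n hours of the day (0-23) with the most activity events."""
    hour_counts = Counter(datetime.fromisoformat(ts).hour for ts in timestamps)
    return [hour for hour, _ in hour_counts.most_common(top_n)]

# Invented LMS activity timestamps; a real system would pull these from course logs
events = [
    "2020-09-01T17:05:00", "2020-09-01T18:30:00", "2020-09-01T18:45:00",
    "2020-09-02T21:10:00", "2020-09-02T18:02:00", "2020-09-02T09:15:00",
]
print(peak_engagement_hours(events, top_n=1))  # [18] -> most activity in the 6 pm hour
```

An analysis along these lines is what lets a college discover that students engage mostly between 5 pm and 10 pm, and shift its service hours accordingly.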

One other thing that has been unique about us is that, because of who we are, we do so much technical education — that’s why we are called Wake Tech — and much of that is hands-on. You can’t do it fully remotely. But every one of our programs has found the value of remote-based access and support.

For example, we have a remarkable baking and pastry program. They have figured out how to help the students get all of their hands-on resources at home in their own kitchens. They no longer have to come into the labs for what they do. Every program has found that value, the best aspects of their program being remote, even if their full program cannot be remote because of its hands-on requirements.

Gardner: Chris, is the capability to use the data that you get along the way at Northampton a benefit to you, and how?

Foward: Data is key for us in IT Services. We like to try and understand how people are using our systems and which applications they are using. It allows us to then fix the delivery of our applications more effectively. Our courses are also very data-driven. In our games art courses, for example, data allows us to design the materials more effectively for our students.

Gardner: Tim, when you are providing more value back through your technology, the data seems to be key as well. It’s about optimization and even reducing costs with better business and education outcomes. How does the data equation benefit Citrix’s customers, and how do you expect to improve on that?

Data enhances experiences

Minahan: Dana, data plays a major role in every aspect of what we do. When you think about the need to deliver digital workspaces by providing consistent and secure access to the resources — whether it’s employees or students — they need to be able to perform at their best wherever that work needs to get done. The data that we are gathering is applied in a number of different ways.

Number one is around the security model. I use the analogy of not just having security access in — the bouncer at the front door to make sure you have authenticated and are on the list to access the resources you need — but also having the bodyguard that follows you around the club, if you will, to constantly monitor your behavior and apply additional security policies.

The data is valuable for that because we understand the behavior of the individual user: whether they typically access from a particular device or location, and the types of information or applications they access.
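Minahan’s “bouncer and bodyguard” analogy amounts to continuous, behavior-based risk scoring. Here is a minimal sketch of the idea, assuming a per-user profile of previously seen devices, locations, and apps; the names and data are hypothetical, not Citrix’s actual API:

```python
def session_risk(profile, session):
    """Count session attributes (device, location, app) never seen
    in this user's historical profile; higher means more suspicious."""
    return sum(
        1 for key in ("device", "location", "app")
        if session.get(key) not in profile.get(key, set())
    )

# Hypothetical historical profile for one user
profile = {
    "device": {"laptop-123"},
    "location": {"Raleigh"},
    "app": {"lms", "email"},
}

usual = {"device": "laptop-123", "location": "Raleigh", "app": "lms"}
odd = {"device": "unknown-tablet", "location": "overseas", "app": "admin-console"}

print(session_risk(profile, usual))  # 0 -> proceed normally (the bouncer already let you in)
print(session_risk(profile, odd))    # 3 -> the bodyguard steps in with extra policies
```

A production system would weight and decay these signals, but the principle of continuing to watch behavior after authentication is the same.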

The second area is around performance. If we move to a much more distributed model, or a flexible or a blended model, vital to that is ensuring that those employees or students have reliable access to the applications and information they need to perform at their best. Being able to constantly monitor that environment allows for increasing bandwidth, or moving to a different channel as needed, so they get the best experience.

And then the last one gets very exciting. It is literally about productivity. Being able to push the right information or the right tasks, or even automate a particular task or remove it from their work stream in real time is vital to ensuring that we are not drowning in this cacophony of different apps and alerts — and all the noise that gets in the way of us actually doing our best work or learning. And so data is actually vital to our overall digital workspace strategy at Citrix.

Gardner: Chris, to attain an improved posture around ABL, that can mean helping students pick up wherever they left off — whether in a classroom, their workplace, at a bakery or in a kitchen at home. It requires a seamless transition regardless of their network and end device. How important is it to allow students to not have to start from scratch or find themselves lost in this collaboration environment? How is Citrix an important part of that?

Foward: With our ABL approach, we have small collaborative groups that work together to deliver or gain their learning.

We also ensure that the students have face-to-face contact with tutors, whether through distance learning or while on campus. And with the technology, we store all of the academic materials in one location, called our mail site, which allows students to access them and learn as and when they need to.

Citrix plays a key part in that because we can deliver applications into that state quickly and seamlessly. It allows students to always be able to understand and see the applications they need for their specific courses. It allows them to experiment, discuss ideas, and get more feedback from our lecturers because they understand what materials are being stored and how to access them.

Gardner: Dr. Ralls, how do you at Wake Tech prevent learning gaps from occurring? How does the technology help students move seamlessly throughout their education process, regardless of the location or device?

Seamless tracking lets students thrive

Ralls: There are different types of gaps. In terms of courses, one of the things we found recently is our students are looking for different types of access. Many of our students are looking for additional types of access — perhaps replicating our seated courses to gain the value of synchronous experiences. We have had to make sure that all of our courses have that capacity, and that it works well.

Then, because many of our students are also in a work environment, they want an asynchronous capability. And so we are now working on making sure students know the difference and how to match those expectations.

Also, because we are an open access college — and as I like to say, we take the top 100 percent of our applicant students — for many of our students, gaps come not just within a course, but between courses or toward their goals. For many of our students who are first-generation students, higher education is new. They may have also been away from education for a period of time.

We have to be much more intrusive, helping students and monitoring them to make sure they are making it from one place to the next. We need to make sure that learning makes sense to them.

So we have to be much more intrusive, helping students and monitoring them to make sure they are making it from one place to the next. We need to make sure that learning makes sense to them and that they are making it to whatever their ultimate goals are.

We use technology to track that and to know when our students are getting close to leaving. We call that being like rumble strips on the side of the road. There are gaps that we are looking at, not just within courses, but between courses, on the way to our students’ academic goals.

Gardner: Tim, when I hear Chris and Scott describe these challenges in education, I think how impactful this can be for other businesses in general as they increasingly have blended workforces. They are going to face similar gaps too. What, from Citrix’s point of view, should businesses be learning from the experiences at University of Northampton and Wake Tech?

Minahan: I think Winston Churchill summed it up best: “Never let a good crisis go to waste.” Smart organizations are using the current crisis — not just to survive, but to thrive. They are using the opportunity to accelerate their digital transformation and rethink long-held work and operating models in ways they probably hadn’t before.

So as demonstrated both at Wake Tech and Northampton, and as Scott and Chris both said, for both school and work the future is definitely going to be blended.

We have, for example, another higher education customer, the University of Sydney, that was able to get 20,000 students and faculty transitioned to an online learning environment last March, literally within a week. But that’s not the real story; it’s where they are going next with this.

As they entered the new school year in Sydney, they now have 100 core and software-as-a-service (SaaS) applications that students can access through the digital workspace regardless of the type of device or their location. And they can ensure students have a consistent, secure, and reliable experience with those apps. They say the student experience is as good, and sometimes even better, than what a student would have when using a locally installed app on a physical computer.

And now the university, most importantly, has used this remote learning model as an opportunity to reach new students — and even new faculty — in locations that they couldn’t have supported before due to geographic limitations of largely classroom-based models.

These are the types of things that businesses also have to think through. And as we hear from Wake Tech and Northampton, businesses can take a page from the courseware from many forward-thinking higher education organizations that are already in a blended learning model and see how that applies to their own business.

Gardner: Dr. Ralls, when you look to the future, what comes next? What would you like to see happen around remote learning, and what can the technologists like Citrix do to help?

Blended learning without walls

Ralls: Right now, there is so much greater efficiency than we had before. I think there is a way to bring that greater efficiency even more into our classrooms. For years we have talked about a flipped classroom, which really means taking those things that are better accomplished in a lab or in a shop and doing them outside of the classroom.

We have to all get to a place where the learning process just doesn’t happen within the walls of the classrooms. So the ability for students to go back and review work, to pick up on work, to use multiple different tools to add and supplement what they are getting through a classroom-based experience, a shop-based experience — I think that’s what we are moving to.

See How Leading Institutes Use Technology to Transform Education Delivery

For Wake Tech, this really hit us about March 15, 2020 when we went fully remote. We don’t want to go back to the way we were in April. We don’t want to be a fully remote, online college. But we also don’t want to be where we were in February.

This pandemic crisis has presented to us a greater acceleration of where we want to be, of where we can be. It’s what we aspire to be in terms of better education — not just more convenient access of education — but better educational opportunities through the multiple different opportunities that are brought to us by technology to supplement the core work that we have always done through our seat-based environment.

Gardner: Chris, at Northampton, what’s the next step for the technology enabling these higher goals that Dr. Ralls just described? Where would you like to see the technology take Northampton students next?

Foward: The technology is definitely key to what we are trying to do as education providers: to provide the right skill sets as students move from higher education into business. Certainly, by bringing in the likes of Citrix, which was originally a commercially focused application, we have allowed our students to gain access to it, understand how the system works, and understand how to use it.

And that’s similar with most of our technologies that we have brought in. It gives students more of a commercial feel for how operations should be running, how systems should be accessed, and the ways to use those systems.

Gardner: Tim, graduates from Wake Tech and from University of Northampton a year or two from now, they are going to be well-versed in these technologies, and this level of collaboration and seamless transitions between blended approaches. How are the companies they go to going to anticipate these new mindsets? What should businesses be doing to take full advantage of what these students have already been doing in these universities?

Students become empowered employees

Minahan: That’s a great point, and it is certainly something that business is grappling with now as we move beyond hiring Millennials to the next generation of highly educated, grown-up-on-the-Internet students with high expectations who are coming out of universities today.

For the next few years, it all boils down to the need to deliver a superior employee experience, to empower employees to perform at their best, and to do the jobs they were hired to do. We should not burden them, as we have in a lot of corporate America, with a host of different distractions, apps, and rules and regulations that keep them away from doing their core jobs.

We need to deliver a superior employee experience. We should not burden them with a host of different distractions, apps, and rules that keep them from doing their core jobs.

And key to that, not surprisingly, is a digital workspace environment that empowers and provides unified access to all of the resources and information that the employee needs to perform at their best across any work channel or location. They need a behind-the-scenes security model that ensures the security of the corporate assets, applications, and information — as well as the privacy of individuals — without getting in the way of work.

And then, at a higher level, as we talked about earlier, we need an intelligence model with more analytics built into that environment. It will then not just offer up a launch pad to access the resources you need, but will actually guide you through your day, presenting the right tasks and insights as you need them, and allowing you to get the noise out of your day so you can really create, innovate, and do your best work. And that will be whether work is in an office, on the road, or work as we have seen recently, in the home.

Gardner: I wouldn’t be surprised if the students coming out of these innovative institutes of higher learning are going to be the instigators of change and innovation in their employment environments. So a point on the arrow from education into the business realm.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Citrix.


The path to a digital-first enterprise is paved with an Emergence Model and Digital Transformation Playbook

The next BriefingsDirect digital business optimization discussion explores how open standards help support a playbook approach for organizations to improve and accelerate their digital transformation.

As companies chart a critical journey to become digital-first enterprises, they need new forms of structure to make rapid adaptation a regular recurring core competency. Stay with us as we explore how standards, resources, and playbooks around digital best practices can guide organizations through unprecedented challenges — and allow them to emerge even stronger as a result. 

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.  

Here to explain how to architect for ongoing disruptive innovation is our panel: Jim Doss, Managing Director at IT Management and Governance, LLC, and Vice Chair of the Digital Practitioner Work Group (DPWG) at The Open Group; Mike Fulton, Associate Vice President of Technology Strategy and Innovation at Nationwide and Academic Director of Digital Education at The Ohio State University; and Dave Lounsbury, Chief Technical Officer at The Open Group. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts: 

Gardner: Dave, the pressure from the COVID-19 pandemic response has focused a large, 40-year gathering of knowledge into a new digitization need. What is that new digitization need, and why are standards a crucial part of it?

Lounsbury: It’s not just digitization, but also the need to move to digital. That’s what we’re seeing here. The sad fact of this terrible pandemic is that it has forced us all to live a more no-contact, touch-free, and virtual life.

We’ve all experienced having to be on Zoom, not going into work, or even doing take-out at a restaurant. You don’t sign a piece of paper anymore; you scan something on your phone. And all of that is based on having the skills and the business processes to actually deliver some part of your business’s value digitally.

This was always an evolution, and we’ve been working on it for years. But now, this pandemic has forced us to face the reality that you have to adopt digital in order to survive. And there’s a lot of evidence for that. I can cite McKinsey studies where the companies that realized this early and pivoted to digital delivery are reaping the business benefits. And, of course, that means you have to have both the technology, the digitization part, but also embrace the idea that you have to conduct some part of your business, or deliver your value, digitally. This has now become crystal clear in the focus of everyone’s mind.

Gardner: And what is the value in adopting standards? How do they help organizations from going off the rails or succumbing to complexity and chaos?

Lounsbury: There’s classically been a split between information technology (IT) in an organization and the people who are in the business. And something I picked up at one of the Center for Information Research meetings was that the minute an IT person talks about “the business,” you’ve gone off the rails.

If you’re going to deliver your business value digitally — even if it’s something simple like contactless payments or an integrated take-out order system — that knowledge might have been previously in an IT shop or something that you outsourced. Now it has to be in the line of business. 

Pandemic survival is digital

There has to be some awareness of these digital fundamentals at almost all levels of the business. And, of course, to do that quickly, people need a structure and a guidebook for what digital skills they need at different points of their organizational evolution. And that is where standards, complemented by education and training, play a big role.

Fulton: I want to hit on this idea of digitization versus digital. Dave made that point and I think it’s a good one. But in the context of the pandemic, it’s incredibly critical that we understand the value that digitization brings — as well as the value that digital brings.

When we talk about digitization, typically what we’re talking about is the application of technology inside of a company to drive productivity and improve the operating model of the company. In the context of the pandemic, that value becomes much more important. Driving internal productivity is absolutely critical.

We’re seeing that here at Nationwide. We are taking steps to apply digitization internally to increase the productivity of our organization and help us drive down the cost of the insurance that we provide to our customers. This is very specifically in response to the adjustment in the value equation in the context of the pandemic.

But then, the digital context is more about looking externally. Digital in this context is applying those technologies to the customer experience and to the business model. And that’s where contactless interaction, as Dave was talking about, is so critically important.

There are so many ways now to interact with our customers, and in ways that don’t involve human beings. How to get things done in this pandemic, or to involve human beings in a different way — in a digital fashion — that’s where both digitization and digital are so critically important in this current context.

Gardner: Jim Doss, as organizations face a survival-of-the-fittest environment, how do we keep this a business transformation with technology pieces — and not the other way around?

Project management to product journey

Doss: The days of architecting IT and the business separately, or as a pure cascade or top-down thing, are going. Instead of those “inside-out” approaches, “outside-in” architectural thinking now keenly focuses on customer experiences and the value streams aligned to those experiences. Agile Architecture promotes enterprise segmentation to facilitate concurrent development and architecture refactoring, guided by architectural guardrails, a kind of lightweight governance structure that facilitates interoperability and keeps people from straying into dangerous territory.

If you read books like Team Topologies, or standards like The Open Group Digital Practitioner Body of Knowledge™ (DPBoK) and the Open Agile Architecture™ Standard, they are designed around team cognitive load, whether those are IT teams or business teams. And doing things like the Inverse Conway Maneuver segments the organization into teams that deliver a product, a product feature, a journey, or a sub-journey.

Those are some really huge trends and the project-to-product shift is going on in business and IT. These trends have been going on for a few years. But when it comes to undoing 30 or 40 years of project management mentality in IT — we’re still at the beginning of the project-to-product shift. It’s massive. 

To summarize what David was saying, the business can no longer outsource digital transformation. As a matter of fact, by definition, you can’t outsource digital transformation to IT anymore. This is a joint effort going forward.

Gardner: Dave, as we’re further defining digital transformation, this goes beyond just improving IT operations and systems optimization. Isn’t digital transformation also about redefining their total value proposition?

Lounsbury: Yes, that’s a very good point. We may have brushed over this point, but when we say and use the word digital, at The Open Group we really mean a change in the mindset of how you deliver your business.

This is not something that the technology team does. It’s a reorientation of your business focus and how you think about your interactions with the customer, as well as how you deliver value to the customer. How do you give them more ways of interacting with you? How do you give them more ways of personalizing their experience and doing what they want to do?

This goes very deep into the organization, to how you think about your value chains, your business model leverage, and things like that.

One of the things we see a lot of is people trying to do old processes faster. We have been doing that incremental improvement and efficiency forever, applying machines to do part of the value-delivery job. But the essential decision now is thinking about the customer’s view as being primarily a digital interaction: give them customization and web access, and let them do the whole value chain in digital. That goes right to the top of the company and to how you structure your business model or value delivery.

Balanced structure for flexibility

Gardner: Mike Fulton, more structure comes with great value in that you can manage complexity and keep things from going off the rails. But some people think that too much structure slows you down. How do you reach the right balance? And does that balance vary from company to company, or are there general rules about finding that Nirvana between enough structure and too little?

Fulton: If we want to provide flexibility and speed, we have to move away from rules and start thinking more about guardrails, guidelines, and about driving things from a principled perspective.

That’s one of the biggest shifts we’re seeing in the digital space related to enterprise architecture (EA). Whereas, historically, architecture played a directional, governance role, what we’re seeing now is that architecture in a digital context provides guardrails for development teams to work within. And that way, it provides more room for flexibility and for choice at the lower levels of an organization as you’re building out your new digital products.

Historically, architecture played a directional, governance role. Now architecture in a digital context provides guardrails for development teams to work within. It provides more room for flexibility and for choice at the lower levels of an organization as you’re building out your new digital products.

Those digital products still need to work in the context of a broader EA, and an architecture that’s been developed leveraging potentially new techniques, like what’s coming out of The Open Group with the Open Agile Architecture standard. That’s new, different, and critically important for thinking about architecture in a different way. But, I think, that’s where we provide flexibility — through the creation of guardrails.
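The guardrail idea Fulton describes can be made concrete as a lightweight, automatable policy check that product teams run themselves instead of waiting for central sign-off. This is an illustrative sketch only; the specific guardrails and manifest fields are invented:

```python
# Hypothetical guardrails a platform team might publish for product teams
GUARDRAILS = {
    "allowed_datastores": {"postgres", "s3"},
    "require_tls": True,
    "max_public_endpoints": 1,
}

def check_guardrails(manifest, guardrails=GUARDRAILS):
    """Return a list of violations; an empty list means the design is within guardrails."""
    violations = []
    if manifest.get("datastore") not in guardrails["allowed_datastores"]:
        violations.append(f"datastore {manifest.get('datastore')!r} not approved")
    if guardrails["require_tls"] and not manifest.get("tls", False):
        violations.append("TLS is required on all endpoints")
    if len(manifest.get("public_endpoints", [])) > guardrails["max_public_endpoints"]:
        violations.append("too many public endpoints")
    return violations

# A team's proposed service design, expressed as a simple manifest
team_design = {"datastore": "postgres", "tls": True, "public_endpoints": ["/api"]}
print(check_guardrails(team_design))  # [] -> within guardrails, no central review needed
```

The point of the design is that governance becomes a fast local check rather than a slow directional gate, which is exactly the flexibility-within-boundaries trade-off described above.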

Doss: The days are over for “Ivory Tower” EA – the top-down, highly centralized EA. Today, EA is responding to right-to-left and outside-in versus inside-out pressures. It has to be more about responding, as Mike said, to the customer-centric needs using market data, customer data, and continuous feedback.

EA is really different now. It responds to product needs, market needs, and all of the domain-driven design and other things that go along with that. 

Lounsbury: Sometimes we use the term agile, and it’s almost like a religious term. But agile essentially means you’re structured to respond to changes quickly and you learn from your mistakes through repeatedly refining your concepts. That’s actually a key part of what’s in the Open Agile Architecture Standard that Mike referred to.

The reason for this is fundamental to why people need to worry about digital right now. With digital, your customer interface is no longer your fancy storefront. It’s that black mirror on your phone, right? You have exactly the same six-by-two-and-a-half-inch screen that everybody else has to get your message across.

And so, the side effect of that, is that the customer has much more power to select among competitors than they did in the past. There’s been plenty of evidence that customers will pick convenience or safety over brand loyalty in a heartbeat these days.

Internally that means, as a business, you have to have your teams structured so they can quickly respond to the marketplace, and not have to go all the way up the management chain for some big decision and then bring it all the way back down again. You’ll be out-competed if you do it that way. There is a hyper-acceleration to “survival of the fittest” in business and IT; this has been called the Red Queen effect.

That’s why it’s essential to have agile not as a religion, but as the organizational agility to respond to outside-in customer pressures as a competitive factor in how you run your business. And, of course, that then pulls along the need to be agile in your business practices and in how you empower your agile teams. How do you give them the guardrails? How do you give them the infrastructure they need to succeed at all of those things?

It’s almost as if the pyramid has been turned on its head. It’s not a pyramid that comes down from the top of some high-level business decisions, but the pyramid grows backward from a point of interaction with the customers.

Gardner: Before we drill down on how to attain that organizational agility, let’s dwell on the challenges. What’s holding up organizations from attaining digital transformation now that they face an existential need for it?

Digital delivers agile advantage

Doss: We see a lot of companies try to bring in digital technologies without employing the needed digital practices to realize the full intended value, so there’s a cultural lag.

The digital technologies are often used in combination and mashed up in amazing ways to bring out new products and business models. But you need digital practices along with those digital technologies. There’s a growing body of evidence that companies that actually get this are not just outperforming their industry peers by percentages; the difference is almost exponential.

The findings from the “State of DevOps” reports over the last few years give us clear evidence on this. Product teams are really driving a lot of the work and functionality across the silos, and increasingly into operations.

And this is why the standards and bodies of knowledge are so important — because you need these ideas. With The Open Group DPBoK, we’ve woven all of this together in one Emergence Model and kept these digital practices connected. That’s the “P” in DPBoK, the practitioner. It’s those digital practices that bring in the value.

Fulton: Jim makes a great point here. But in my context with Digital Executive Education at Ohio State, when we look at that journey to a digital enterprise we think of it in three parts: The vision, the transformation, and the execution.

The piece that Jim was just talking about speaks to execution. Once you’re in a digital enterprise, how do you have the right capabilities and practices to create new digital products day to day? And that’s absolutely critical.

But you also have to set the vision upfront. You have to be able to envision, as a leadership team of an organization, what a digital enterprise looks like. What is your blueprint for that digital enterprise? And so, you have to be able to figure that out. Then, once you have aligned that blueprint with your leadership team, you have to lead that digital transformation journey.

You have to be able to envision, as a leadership team of an organization, what a digital enterprise looks like. What is your blueprint for that digital enterprise? Once you have aligned that blueprint with your leadership team, you have to lead that digital transformation journey.

And that transformation takes you from the vision to the execution. And that’s what I really love about The Open Group and the new direction around an open digital portfolio: a portfolio of digital standards that work together in concert to take you across that entire journey.

These are the standards that help you envision the future, standards that help you drive that digital transformation, like the Open Agile Architecture Standard, and standards that help you with digital delivery, such as IT4IT. A critically important part of this journey is rethinking your digital delivery, because the vast majority of products that companies produce today are digital products.

But then, how do you actually deliver the capabilities and practices, and uplift the organization with the new skills to function in this digital enterprise once you get there? And you can’t wait. You have to bring people along that journey from the very start. The entire organization needs to think differently, and it needs to act differently, once you become a digital enterprise.

Lounsbury: Right. And that’s an important point, Mike, and one that’s come out of the digital thinking going on at The Open Group. A part of the digital portfolio is understanding the difference between “what a company is” and “what a company does” — that vision that you talked about — and then how we operate to deliver on that vision.

Dana, you began this with a question about the barriers and what’s slowing progress down. Those things used to be vertically aligned. What the business is and does used to be decomposed through some top-down, reductionist process of decomposing and delegating all of the responsibilities. And if everybody does their job at the edge, then the vision will be realized. That’s not true anymore because of the outside-in digital reality.

A big part of the challenge for most organizations is the old idea that, “Well, if we do that all faster, we’ll somehow be able to compete.” That is gone, right? That fundamental change and challenge for top- and middle-management is, “How do we make the transition to the structure that matches the new competitive environment of outside-in?”

“What does it mean to empower our team? What is the culture we need in our company to actually have a productive team at the edge?” Things like, “Are you escalating every decision up to a higher level of management?” You just don’t have time for that anymore.

Are people free to choose the tools and interfaces with the customers that they believe will maximize the customer experience? And if it doesn’t work out, how do you move on to the next step without being punished for the failure of your experiment? If it reflects negatively on you, that’s going to inhibit your ability to respond, too.

All of these techniques, all of these digital ways of working, to use Jim’s term, have to be brought into the organization. And, as Mike said, that’s where the power of standards comes in. That’s where the playbooks that The Open Group has created in the DPBoK Standard, the Open Agile Architecture Standard, and the IT4IT Reference Architecture actually give you the guidance on how to do that.

Part of the Emergence Model is knowing when to do what, at the right stage in your organization’s growth or transformation.

Gardner: And leading up to the Emergence Model, we’ve been talking about standards and playbooks. But what is a “playbook” when it comes to standards?  And why is The Open Group ahead of the curve to extend the value when you have multiple open standards and playbooks?

Teams need a playbook to win

Lounsbury: I’ll be honest, Dana, The Open Group is at a very exciting time. We’re in a bit of a transition. When there was a clear division between IT and business, there were different standards and different bodies of knowledge for how you adapt to each of those. A big part of the role of the enterprise architect was in bridging those two worlds.

The world has changed, and The Open Group is in the process of adapting to that. We’re looking to build on the robust and proven standards and build those into a much more coherent and unified digital playbook, where there is easy discoverability and navigability between the different standards. 

People today want to have quick access. They want to say, “Oh, what does it mean to have an agile team? What does it mean to have an outside-in mindset?” They want to quickly discover that and then drill in deeper. And that’s what we pioneered with the DPBoK, with the architecture of the document called the Emergence Model, and that’s been picked up by other standards of The Open Group. It’s clearly the direction we need to do more in.

Gardner: Mike, why are multiple standards acting in concert good?

Fulton: For me, when I think about why you need multiple standards, it’s because if you were to try to create a single standard that covered everything, that standard would become incomprehensible.

If you want an industry standard, you need to bring the right subject matter experts together, the best of the best, the right thought leaders — and that’s what The Open Group does. It brings thought leaders from across the world together to talk about specific topics to develop the best information that we have as an industry and to put that into our standards.


But it’s a rare bird, indeed, that can do that across multiple parts of an organization, or multiple capabilities, or multiple practices. And so by building these standards up individually, it allows us to tap into the right subject matter experts, the right passions, and the right areas of expertise.

But then, what The Open Group is now doing with the digital portfolio is intentionally bringing those standards together to make sure that they align. It brings the standards together to make sure that they have the same messaging, that we’re all working with the same definitions, and that we’re all thinking about big, broad concepts in the same way, and then allows us to dig down into the details with the right subject matter experts at the level of granularity needed to provide the appropriate benefits for industry.

Gardner: And how does the Emergence Model help harmonize multiple standards, particularly around the Digital Practitioner’s Workgroup?

Emergence Model scales

Lounsbury: We talked about outside-in, and there are a couple of ways you can approach how you organize such a topic. As Mike just said, there’s a lot of detail that you need to understand to fully grasp it.

But you don’t always have to fully grasp everything at the start. And there are different ways you can look at organizations. You can look at the typical stack, decomposition, and the top-down view. You can look at lifecycles: when you start at the left and go to the right, what are all the steps in between?

And the third dimension, which we’re picking up on inside The Open Group, is the concept of scale through the Emergence Model. And that’s what we’ve tried to do, particularly in the DPBoK Standard. It’s the best example we have right now. And that approach is coming into other parts of our standards. The idea comes out of lean startup thinking, which comes out of lean manufacturing.

When you’re a startup, or starting a new initiative, there are a few critical things you have to know. What is your concept of digital value? What do you need to deliver that value? Things like that.

Then you ideally succeed and grow and then, “Wow, I need more people.” So now you have a team. Well, that brings in the idea of, “What does team management mean? What do I have to do to make a team productive? What infrastructure does it need?”

And then, with that, the success goes on because of the steps you’ve taken from the beginning. As you get into more complexity, you get into multiple teams, which brings in budgeting. You soon have large-scale enterprises, which means you have all sorts of compliance, accounting, and auditing. These things go on and on.

But you don’t know those things at the start. You do have to know them at the end. What you need to know at the start is that you have a map as to how to get there. And that’s the architecture, and the process to that is what we call the Emergence Model.

It is how you map to scale. And I should say, people think of this quite often in terms of, “Oh it’s just for a startup. I’m not a startup, I’m in a big company.” But many big companies — Mike, I think you’ve had some experience with this – have many internal innovation centers. You do entrepreneurial funding for a small group of people and, depending on their success, feed them more resources. 

So you have the need for an Emergence Model even inside of big companies. And, by the way, there are many use cases for using a pattern for success in how to do digital transformation. Don’t start from the top-down; start with some experiments and grow from the inside-out.

Doss: I refer to that as downscale digital piloting. You may be a massive enterprise, but if you’re going to adapt and adopt new business models, like your competitors and smaller innovators who are in your space, you need to think more like them.

Though I’m in a huge enterprise, I’m going to start some smaller initiatives and fence them off from governance and other things that slow those teams down. I’m going to bring in only lean aspects for those initiatives.


And then, you amplify what works and scale that to the enterprise. As David said, the smaller organizations now have a great guidebook for what’s right around the corner. They’re growing; they don’t have just one product anymore, they have two or three products, so the original product owner can’t be in every product meeting.

So, all of those things are happening as a company grows and the DPBoK and Emergence Model is great for, “Hey, this is what’s around the corner.”

With a lot of other frameworks, you’d have to spend a lot of time extracting scale-specific guidance on digital practices. You’d have to extract all that scale-specific stuff, and it’s a lot of work, to be honest, and it’s hard to get right. So, in the DPBoK, we built in the guidance so it’s much easier to move in either direction, going up-scale as well as down-scale in digital piloting.

Gardner: Mike, you’re on the pointy end of this, I think, in one of your jobs. 

Intentional innovation

Fulton: Yes, at Nationwide, in our technology innovation team, we are doing exactly what Dave and Jim have described. We create new digital products for the organization and we leverage a combination of lean startup methodologies, agile methodologies, and the Emergence Model from The Open Group DPBoK to help us think about what we need at different points in time in that lifecycle of a digital product.

And that’s been really effective for us as we have brought new products to market. I shared the full story at The Open Group presentation about six months ago. But it is something that I believe is a really valuable tool for big enterprises trying to innovate. It helps you be very intentional about what you are using. What capabilities and components are you using that are lean versus more robust? What capabilities are you using that are implicit versus explicit, and at what point in time do you actually need to start writing things down?

At what point in time do you absolutely need to start leveraging those slightly bigger, more robust enterprise processes to be able to effectively bring a digital product to market versus using processes that might be more appropriate in a startup world? And I found the DPBoK to be incredibly helpful and instructive as we went through that process at Nationwide. 

Gardner: Are there any other examples of what’s working, perhaps even in the public sector? This is not just for private sector corporations. A lot of organizations of all stripes are trying to align, become more agile, more digital, and be more responsive to their end-users through digital channels. Any examples of what is working when it comes to the Emergence Model, rapid digitization, and leveraging of multiple standards appropriately?

Good governance digitally 

Doss: We’re really still in the early days with digital in the US federal government. I do a lot of work in the federal space, and I’ve done a lot of commercial work as well. 

They’re still struggling in the federal space with the project-to-product shift. 

There is still a huge focus on the legacy project management mentality. When you think about the legacy definition of a deliverable, the project is done at the deliverable. So, that supports “throw it over the wall and run the other way.”

Various forms of the plan-build-operate (PBO) IT organization structure still dominate in the federal space. Orgs that are PBO-aligned tend to push work from left to right across the P, B, and O silos, and the spaces between these silos are heavily stage-gated. So, this inside-out thinking and the stage-gating also support “throw it over the wall and run the other way.” In the federal space, waterfall is baked into nearly everything.

These are two huge digital anti-patterns that the federal space is really struggling with. 

Product management, for example, employs a single persistent team that remains with the work across the lifecycle and ties together those dysfunctional silos. Such “full product lifecycle teams” eliminate a lot of the communication and hand-off problems associated with such legacy structures.

The other problem in the federal space with the PBO IT org structure is that the real power resides in the silos, and silo management focuses downward into its own silo, not so much across the silos. So, cross-functional initiatives such as EA, service ownership, product ownership, or digital initiatives might get some traction for a while, but such initiatives or functions have no real buying power or go/no-go decision authority. They eventually get squashed by the silo heads, where the real power resides in such organizations.

In the US, I look over time for Congress, via new laws, or the Office of Management and Budget (OMB), via policy, to bring in some needed changes and governance about how IT orgs get structured and governed.

Ironically, these two digital anti-patterns have also led to the creation of lots of over-baked governance over decades to try to assure that the intended value was still captured, which is like throwing more bad money after bad.

This is not just true in the federal space; it’s also true in the commercial world. Such over-baked governance just happens to be really, really bad in the federal space.

For federal IT, you have laws like Clinger-Cohen and the Federal Information Technology Acquisition Reform Act (FITARA); policies and required checks by the OMB; Capital Planning and Investment Control; acquisition regulations; the DoD Architecture Framework; and I could go on. All of these require tons of artifacts and evidence of sound decision-making.

The problem is nobody is rationalizing these together, like figuring out what supersedes what when something new comes out. So, the governance just gets more and more un-lean and bloated, and what you have at the end is agencies that are either misguided by out-of-date guidance or overburdened by bloated governance.

Fulton: I don’t have nearly the level of depth in the government space that Jim does, but I do have a couple of examples I want to point people to if they are looking for more government-related examples. I’d point you to a couple here in Ohio, starting with Doug McCollough and his work with the City of Dublin in Ohio. He’s done a lot of work with digital technologies and digital transformation at the city level.

And then again here in Ohio, and I’m just using Ohio references because I live in Ohio and know a little more intimately what some of these folks are doing, Ervan Rodgers, CIO of the State of Ohio, has done a really nice job of focusing on building up digital capabilities and practices across state employees.

The third I’ll point to is the work going on in India. There’s been a tremendous amount of really great work in India related to government, architecture, and getting to the digital transformation conversation at the government level. So, if folks are interested in more examples, more stories, I’d recommend you look into those three as places to start.

Lounsbury: The thing, I think, you’re referring to there, Mike, is the IndEA (India Enterprise Architecture) initiative and the pivot to digital that several of the Indian states are making. We can certainly talk about that more on a different podcast.


I will toss in one ray of light to what Jim said. Transformation is almost always driven by an almost Darwinian force. Something has changed in your environment that causes you to evolve, and we’ve seen that in the federal sector, and the defense sector in particular, where, in areas like avionics, the cost of software is becoming unaffordable. They turned to modular, decomposable systems based on standards in order to achieve the cost savings needed to just stay in business.

Similarly, in India, the utter need to deliver to a very diverse, large rural population drove the needed digitization. And certainly, the U.S. federal sector and the defense sector are very aware of the disparity. I think things like defense budget changes or changes in mission will drive some of the changes we’ve talked about, just as the pandemic is urgently driving them in the commercial sector.

So, it will happen, but I’ll agree with Jim that it is probably the most challenging, ultimately top-down environment you could possibly imagine for doing a transformation.

Gardner: In closing, what’s coming next from The Open Group, particularly around digital practitioner resources? How can organizations best exploit these resources?

Harmony on the horizon

Lounsbury: We’ve talked about the evolution The Open Group is going through, about the digital portfolio and the digital playbooks having all of our standards speak common language and working together.

A first step in that is to develop a set of principles by which we’re going to do that evolution, and the document is called Principles for Open Digital Standards. You can get that from The Open Group bookstore, and if you want to find it quickly, you can go to The Open Group’s Digital-First Enterprise page, which links to all of these standards.

Looking forward, there are activities going on in all of the forums of The Open Group (the forums are voluntary organizations). Certainly, the IT4IT Forum, the Digital Practitioner Workgroup, and large swaths of our architecture activity are working on how we can harmonize the language and bring common knowledge to our standards.

And then, to look beyond that, I think we need to address the problems of discoverability and navigability that I mentioned earlier to give that coherent and an easy-to-access picture of where a person can find out what they need when they need it.

Fulton: Dave, I think probably one of the most important pieces of work that will be delivered soon by The Open Group is putting a stake in the ground around what it means to be a digital product. And that’s something that I don’t think we’ve seen anywhere else in the industry. I think it will really move the ball forward and be a unifying document for the entire open digital portfolio.

And so, we have some great work that’s already gone on in the DPBoK and the Open Agile Architecture standard, but I think that digital product will be a rallying cry that will make all of the standards even more cohesive going forward.

Doss: And I’ll just add my final two cents here. I think a lot of it, Dana, is just awareness. People need to just understand that there’s a DPBoK Standard out there for digital practitioners. 

If you’re in IT, you’re not just an IT practitioner anymore; you’re using digital technology and digital practices to bring lean, user-centric value to your business or mission. Digital is the new best practice. There’s now a framework and a body of knowledge out there that supports and helps people transform in their careers. The same goes for Agile Architecture. So it’s just the awareness that these things are out there.

The most powerful thing to me is that both of the works I just mentioned have more than 500 references from the last 10 years of leading digital thinkers. So, again, the way these are structured and built, bringing in just the scale-specific guidance, is hugely powerful. There needs to be increasing awareness that this stuff is out there.

Lounsbury: And if I can pick up on that awareness point, I do want to mention, as always, that The Open Group publishes its standards as freely available to all. You can go to that digital enterprise page or The Open Group Library to find them. We also have an active training ecosystem these days; everybody does digital training.

There are ways of learning the standards in depth and getting certified as proficient in that knowledge. But I should also mention, we have at least two U.S. universities, and more interest internationally, doing graduate and executive-level education. Mike has mentioned his executive teaching at Ohio State, and there are others as well.

Gardner: Right, and many of these resources are available at The Open Group website. There are also many events, many of them now virtual, as well as certification processes and resources. There’s always something new, it’s a very active place.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: The Open Group.


How The Open Group enterprise architecture portfolio enables an agile digital enterprise

The next BriefingsDirect agile business enablement discussion explores how a portfolio approach to standards has emerged as a key way to grapple with digital transformation.

As businesses seek to make agility a key differentiator in a rapidly changing world, applying enterprise architecture (EA) in concert with many other standards has never been more powerful. Stay with us here to explore how to define and corral a comprehensive standards resources approach for making businesses intrinsically agile and competitive. 

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about attaining agility via an embrace of a broad toolkit of standards, we are joined by our panel: Chris Frost, Principal Enterprise Architect and Distinguished Engineer, Application Technology Consulting Division, at Fujitsu; Sonia Gonzalez, The Open Group TOGAF® Product Manager; and Paul Homan, Distinguished Engineer and Chief Technology Officer, Industrial, at IBM Services. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Sonia, why is it critical to modernize businesses in a more comprehensive and structured fashion? How do standards help best compete in this digital-intensive era?

Gonzalez: The question is more important than ever. We need to be very quickly responding to changes in the market.

It’s not only that we have more technology trends and competitors. Organizations are also changing their business models — the way they offer products and services. And there’s much more uncertainty in the business environment.

The current situation with COVID-19 has made for a very unpredictable environment. So we need to be faster in the ways we respond. We need to make better use of our resources and to be able to innovate in how we offer our products and services. And since everybody else is also doing that, we must be agile and respond quickly. 

Gardner: Chris, how are things different now than a year ago? Is speed all that we’re dealing with when it comes to agility? Or is there something more to it?

Frost: Speed is clearly a very important part of it, and market trends are driving that need for speed and agility. But this has been building for a lot more than a year.

We now have, with some of the hyperscale cloud providers, the capability to deploy new systems and new business processes more quickly than ever before. And with some of the new technologies — like artificial intelligence (AI), data analytics, and 5G – there are new technological innovations that enable us to do things that we couldn’t do before.

Faster, better, more agile

A combination of these things has come together in the last few years that has produced a unique need now for speed. That’s what I see in the market, Dana.

Gardner: Paul, when it comes to manufacturing and industrial organizations, how do things change for them in particular? Is there something about the data, the complexity? Why are standards more important than ever in certain verticals?

Homan: The industrial world in particular, focusing on engineering and manufacturing, has brought together the physical and digital worlds. And whilst these industries have not been as quick to embrace the technologies as other sectors have, we can now see how they are connected. That means connected products, connected factories and places of work, and connected ecosystems.

There are still so many more things that need to be integrated, and fundamentally EA comes back to the how – how do you integrate all of these things? A great deal of the connectivity we’re now seeing around the world needs a higher level of integration.

Gardner: Sonia, to follow this point on broader integration, does applying standards across different parts of any organization now make more sense than in the past? Why does one part of the business need to be in concert with the others? And how does The Open Group portfolio help produce a more comprehensive and coordinated approach to integration?

Integrate with standards

Gonzalez: Yes, what Paul mentioned about being able to integrate and interconnect is paramount for us. Our portfolio of standards, which is more than just the TOGAF® (The Open Group Architecture Framework) Standard, is like having a toolkit of different open standards that you can use to address different needs, depending upon your particular situation.

For example, there may be cases in which we need to build physical products across an extended industrial environment. In that case, certain kinds of standards will apply. Also critical is how the different standards will be used together to pursue interoperability. Boundaryless Information Flow is, after all, one of our trademarks at The Open Group.

Other, more intangible cases, such as digital services, need standards too. For example, the Digital Practitioner Body of Knowledge (DPBoK™) offers a scaling model to support the digital enterprise.

Other standards are coming around agile enterprises and best practices. They support making interconnections and interoperability faster, while at the same time keeping the consistency and integration needed to align with the overall strategy. At the end of the day, it’s not enough to integrate from just a technical point of view. You need to bring new value to your businesses. You need to be aligned with your business model, with your business view, and with your strategy.

Therefore, the change is not only to integrate technical platforms, even though that is paramount, but also to change your business and operational model and to go deeper to cover your partners and the way your company is put together.

So, therefore, we have different standards that cover all of those different areas. As I said at the beginning, these form a toolkit from which you can choose different standards and make them work together, forming a portfolio of standards.

Gardner: So, whether we look to standards individually or together as a toolkit, it’s important that they have a real-world applicability and benefits. I’m curious, Paul and Chris, what’s holding organizations back from using more standards to help them?

Homan: When we use the term traditional enterprise architecture, it always needs to be adapted to suit the environment and the context. TOGAF, for example, has to be tailored to the organization and for the individual assignment.

But I’ve been around the industry long enough to be familiar with a number of what I call anti-patterns that have grown up around EA practices and that are not helping with the need for agility. These come from the idea that EA means heavy governance.

We have all witnessed such practices, and I will confess to having been part of some of them. These obviously fly in the face of agility and flexibility, of being able to push decisions out to the edge and pivot quickly, and to make mistakes and be allowed to learn from them. So, it’s kind of an experimental attitude.

And so gaining such adaptation is about more than just promoting good architectural decision-making within a set of guide rails; it allows decision-making to happen at the point of need. So that’s the needed adaptation that I see.

Gardner: Chris, what challenges do you see organizations dealing with, and why are standards so important to helping them attain a higher level of agility?

Frost: The standards are important, not so much because they are standards but because they represent industry best practices. The way standards are developed in The Open Group is not some sort of theoretical exercise. It’s very much member-driven, brought together by the members drawing on their practical experiences.


To me, the point is more about industry best practice, and not so much the standard. There are good things about standard ways of working, being able to share things, and everybody having a common understanding about what things mean. But that aspect of the standard that represents industry best practices — that’s the real value right now.

Coming back to what Paul said, there is a certain historical perspective here that we have to acknowledge. EA projects in the past — and certainly things I have been personally involved in — were often delivered in a very waterfall fashion. That created a certain perception that somehow EA means big-design-upfront-waterfall-style projects — and that absolutely isn’t the case.

That is one of the reasons why a certain adaptation is needed. Guidance about how to adapt is needed. The word adapt is very important because it’s not as if all of the knowledge and fundamental techniques that we have learned over the past few years are being thrown away. It’s a question of how we adapt to agile delivery, and the things we have been doing recently in The Open Group demonstrate exactly how to do that.

Gardner: And does this concept of a minimum viable architecture fit in to that? Does that help people move past the notion of the older waterfall structure to EA?

Reach minimum viable architecture

Frost: Yes, very much it does. In architectural terms, a minimum viable architecture is like reaching first base: it emphasizes rapidly getting to something that you can take forward to the next stage. You can get feedback, along with an acknowledgment that you will improve and iterate in the future. Those things are fundamental to agile working. So, yes, that minimum viable architecture concept is a really important one.

Gardner: Sonia, if we are thinking about a minimum viable architecture we are probably also working toward a maximum value standards portfolio. How do standards like TOGAF work in concert with other open standards, standards not in The Open Group? How do we get to that maximum value when it comes to a portfolio of standards?

Gonzalez: That’s very important. First, it has to do with adapting the practice, and not only the standard. In order to face new challenges, especially around agile and digital, the practices need to evolve, and therefore so do the standards, including the whole portfolio of The Open Group standards, which is constantly evolving and improving. Our members are the ones contributing the content that follows new trends, best practices, and uses for all of those practices.

The standards need to evolve to cover areas like digital and agile. And with the concept of minimal viable architecture, the standards are evolving to provide guidance on how EA as a practice supports agile. Actually, nothing in the standard says it has to be used in the waterfall way, even though some people may say that.

TOGAF is now building guidance for how people can use the standards to support the agile enterprise, delivering that guidance in an agile way, and also supporting an agile approach, which means having a different view of how the practice is applied following this new shift and adaptation.

Adapt to sector-specific needs

The practice needs to be adapted, the standards need to evolve to support that, and both need to be applied to specific situations. For example, architecting an organization’s back-office processes is not the same as architecting its customer-facing ones. Back-office processes are heavier and don’t need to be as agile; agile architecture is for customer-facing services that must support a faster pace.

So, you might have cases in which you need to mix different ways of applying the practices and standards: a less agile approach for the back office and a more agile approach for customer-facing applications such as, for example, online banking.

Adaptation also depends on the nature of companies. The healthcare industry is one example. We cannot experiment that much in that area, because it is driven more by risk assessment and is less open to experimentation. For these kinds of organizations a different approach is needed.

There is work in progress in different sectors. For example, we have a very good guide and case study about how to use the TOGAF standard along with the ArchiMate® modeling notation in the banking industry using the BIAN® Reference Model. That’s a very good use case in The Open Group library. We also have work in progress in the forum around how governments architect; the IndEA Reference Model is another example, a reference model for government that has been put together based on open standards.

We also have work in progress around security, such as with SABSA [a framework for business security architecture]. We have developed guidance about standards and security along with SABSA. We also have a partnership with the Object Management Group (OMG), in which we are pioneers, with a liaison to build products that will go to market to help practitioners use external standards along with our own portfolio.

Gardner: When we look at standards as promoting greater business agility, there might be people who look to the past and say, “Well, yes, but it was associated with a structured waterfall approach for so long.”

But what happens if you don’t have architecture and you try to be agile? What’s the downside if you don’t have enough structure; you don’t put in these best practices? What can happen if you try to be agile without a necessary amount of architectural integrity?

Guardrails required

Homan: I’m glad you asked, because a number of organizations I have worked with have experienced the results of diminishing their architectural governance. I won’t name them for obvious reasons, but I know of organizations that embraced agility. They had great results from being able to do things quickly, find things out, and move fleet-of-foot, combined with cloud computing capabilities. They had great freedom to source commodity cloud services wherever they chose.

And, as an enterprise architect looking in, that freedom created a massive number of mini-silos. As soon as those needed to come together and scale — and scale is the big word — that’s where the problems started. I’ve seen problems, for example, around common use of information and standards: processes and workflows that don’t cross between one cloud vendor and another. And these are end-customer-facing services and deliveries that frankly clash, from the same organization, from the same brand.

And those sorts of things came about because they weren’t using common reference architectures. There wasn’t a common understanding of the value propositions that were being worked toward, and they manifested because you could rapidly spin stuff out.

When you have a small, agile model with everybody co-located in a relatively contained space — where they can readily connect and communicate — great. But as soon as you disperse that model, go through rounds of additional development, and distribute to more geographies and markets with lots of different products, you behave like a large organization. It’s inevitable that people will plough their own furrow and go in different directions. And so, you need a way of bringing it back together again.

And that’s typically where people come in and start asking how to reintegrate. They love the freedom and want to keep the freedom, but they need to combine it with some gentle guardrails that allow them to exercise speed without diverging too much.

Frost: The word guardrails is really important because that is very much the emphasis of how agile architectures need to work. My observation is that, without some amount of architecture and planning, what tends to go wrong is some of the foundational things – such as using common descriptions of data or common underlying platforms. If you don’t get those right, different aspects of an overall solution can diverge and fail to integrate. 

Some of those things include what we generally refer to as non-functional requirements: capacity, performance, and possibly safety or regulatory compliance. These requirements easily get overlooked unless there is some degree of planning, with architecture definitions that think through how to incorporate those really important features.

A really important judgment point is what’s just enough architecture upfront to set down those important guardrails without going too far, back into the big-design-upfront approach, which we want to avoid in order to preserve the most freedom we can.

Gardner: Sonia, a big part of the COVID-19 response has been rapidly reorganizing or refactoring supply chains. This requires extended enterprise cooperation and ultimately integration. How are standards like TOGAF and the toolkit from The Open Group important to allow organizations to enjoy agility across organizational boundaries, perhaps under dire circumstances?

COVID-19 necessitates holistic view

Gonzalez: That is precisely when more architecture is needed, because you need to be able to put together a landscape, a whole view of your organization, which is now an extended organization. Your partners, customers, and alliances are all part of your value chain, and you need to have visibility over this.

You mentioned suppliers and providers. These are changing due to the current situation. The way they work, everything is going more digital and virtual, with less face-to-face. So we need to change processes. We need to change value streams. And we need to be sure that we have the right capabilities. Having standards is spot-on here, because one of the advantages of standards, and open standards especially, is that they facilitate communication with other parties. If you are talking the same language, it will be easier to integrate and get people together.

Now that most people are working virtually, that implies the need for very good management of your whole portfolio of products and their lifecycle. To address all this complexity and gain a holistic view of your capabilities, you need an architecture focus. Therefore, there are different standards that can fit together in those different areas.

For example, you may need to deliver more digital capabilities to work virtually. You may need to change your whole process view to become more efficient and allow such remote work, and to do that you use standards. In the TOGAF standard we have a set of very good guidance on business architecture, business models, business capabilities, and value streams; all of it provides guidance on how to do that.

Another very good guide under the TOGAF standard umbrella is the Organization Map Guide. It’s about much more than having a formal organizational chart for your company; it’s about how you map different resources to respond quickly to changes in your landscape. Having a more dynamic view, with a cross-cutting view of your working teams, is required to be agile and to have interdisciplinary teams work together. So you need to have architecture, and you need open standards, to address those challenges.

Gardner: And, of course, The Open Group is not standing still, along with many other organizations, in trying to react to the environment and help organizations become more digital and enhance their customer and end-user experiences. What are some of the latest developments at The Open Group?

Standards evolve steadily

Gonzalez: First, we are constantly evolving our standards. The TOGAF standard is evolving to address more of these agile-digital trends: how to adopt new technology trends in a way that accords with your business model, your strategy, and your organizational culture. That’s an improvement that is coming. Also, the structure of the standard has evolved to be easier to use and more agile. It has been designed to evolve through new and improved versions more frequently than in the past.

We also have other components coming into the portfolio. One of them is the Agile Architecture Standard, which is going to be released soon. That one goes straight into the agile space, proposing a holistic view of the organization. This coupling between agile and digital is addressed in that standard. It is also suitable to be used along with the TOGAF standard; the two complement each other. The DPBoK [Digital Practitioner Body of Knowledge] standard is also evolving to address new trends in the market.

We also have other standards. The Microservices Architecture working group is very active, delivering guidance on microservices architecture using the TOGAF standard. Another important one is the Zero Trust Architecture in the security space. Now more than ever, as we go virtual and rely on platforms, we need to be sure that we have proper consistency in security and compliance. We have, for example, the General Data Protection Regulation (GDPR) considerations, which are stronger than ever. Those kinds of security concerns are addressed in that specific context.

The IT4IT standard, which is another reference architecture, is evolving toward becoming more oriented to a digital product concept to precisely address all of those changes.

All of these standards, all the pieces, are moving together. There are other things coming, for example, delivering standards to serve specific areas like oil, gas, and electricity, which are more facility-oriented, more physically-oriented. We are also working toward those to be sure that we are addressing all of the different possibilities.

Another very important thing here is that we are aiming for every standard we deliver into the market to have a certification program along with it. We have that for the TOGAF standard, the ArchiMate standard, IT4IT, and DPBoK. So the idea is to keep growing our portfolio of certifications along with the portfolio of standards.

Furthermore, we have more credentials as part of TOGAF certification to allow people to pursue specializations. For example, I’m TOGAF-certified, but I might also want to go for a Solution Architect Practitioner or a Digital Architect credential. So, we are combining the different products and standards we have into building blocks for this learning path of certifications, which is an important part of our offering.

Gardner: I think it’s important to illustrate where these standards are put to work and how organizations find the right balance between a minimum viable architecture and a maximum value portfolio for agility.

So let’s go through our panel for some examples. Are there organizations you are working with that come to mind that have found and struck the right balance? Are they using a portfolio to gain agility and integration across organizational boundaries?

More tools in the toolkit

Homan: The key part for me is do these resources help people do architecture? And in some of the organizations I’ve worked with, some of the greatest successes have been where they have been able to pick and choose – cherry pick, if you like — bits of different things and create a toolkit. It’s not about following just one thing. It’s about having a kit.

The reason I mentioned that is because one of the examples I want to reference has to do with development of ecosystems. In ecosystems, it’s about how organizations work with each other to deliver some kind of customer-centric propositions. I’ve seen this in the construction industry in particular, where lots of organizations historically have had to come together to undertake large construction efforts.

And we’re now seeing what I consider to be an architected approach across those ecosystems. That helps build a digital thread, a digital-twin equivalent of what is being designed and constructed. It matters for safety reasons, both for the people building a structure and for the people who then occupy or use it, and for being able to share common standards and interoperate across processes end-to-end, so these things can be done in a more agile way, a business-agile way, of course.

So that’s one industry that always had ecosystems, but IT has come in and therefore architects have had to better collaborate and find ways to integrate beyond the boundary of their organization, coming back to the whole mission of boundaryless information flow, if you will.

Gardner: Chris, any examples that put a spotlight on some of the best uses of standards and the best applicability of them particularly for fostering agility?

Frost: Yes, a number of customers in both the private and public sector are going through this transition to using agile processes. Some have been there for quite some time; some are just starting on that journey. We shouldn’t be surprised by this in the public and private sectors because everybody is reacting to the same market fundamentals driving the need for agile delivery.

We’ve certainly worked with a few customers that have been very much at the forefront of developing new agile practices and how that integrates with EA and benefits from all of the good architectural skills and experiences that are in frameworks like the TOGAF standard.

Paul talked about developing ecosystems. We’ve seen things such as organizations embarking on large-scale internal re-engineering where they are adjusting their own internal IT portfolios to react to the changing marketplace that they are confronted by.

I am seeing a lot of common problems about fitting together agile techniques and architecture and needing to work in these iterative styles. But overwhelmingly, these problems are being solved. We are seeing the benefits of this iterative way of working, with rapid feedback and more rapid response to changing market conditions.

I would say even inside The Open Group we’re seeing some of the effects of that. We’ve been talking about the development of some of the agile guidance for the TOGAF standard within The Open Group, and even within the working group itself we’ve seen adoption of more agile styles of working using some of the tools that are common in agile activities, things like GitLab and Slack. So it really is quite a pervasive thing we are seeing in the marketplace.

Gardner: Sonia, are there any examples that come to mind that illustrate where organizations will be in the coming few years when it comes to the right intersection of agile, architecture, and the use of open standard? Any early adopters, if you will, or trendsetters that come to mind that illustrate where we should be expecting more organizations to be in the near future?

Steering wheel for agility

Gonzalez: Things are evolving rapidly. In order to be an agile, digital enterprise, different things need to change across the organization. It’s a business issue, not something related only to technology platforms or technology adoption. It goes beyond that, to the business models.

For example, we now see more-and-more the need to have an outside-in view of the market and trends. Being efficient and effective is not enough anymore. We need to innovate to figure out what the market is asking for. And sometimes to even generate that demand and generate new digital offerings for your market.

That means more experimentation and more innovation, keeping in mind that in order to really deliver that digital offering you must have the right capabilities, so changes in your business and operational models, your backbone, need to be identified and then of course connected and delivered through technical platforms.

Data is also another key component. We have several working groups and Forums working around data management and data science. If you don’t have the information, you won’t be able to understand your customers. That’s another trend, having a more customer journey-oriented view. At the end, you need to give your value to your end users and of course also internally to your company.

That’s why even internally, at The Open Group, we are having our own standards take a closer view of the customer. That is something companies need to be addressing. And for them to do that, practitioners need to develop new skills and evolve rapidly. They will need to study not only new technology trends but also how to communicate them to the business: so more communications, more marketing, and a more aggressive approach to innovation.

Sustainability is another area we are considering at The Open Group: being able to deliver standards that support organizations in making better use of resources internally and externally, and in selecting the tools to be sustainable within their environments.

Those are some of the things we see for the coming years. As we have all learned this year, we should be able to shift very quickly. I was recently reading a very good blog that said agile is not only having a good engine, but also having a good steering wheel to be able to change direction quickly. That’s a very good metaphor for how you should evolve. It’s great to have a good engine, but you need a good sense of direction, and that direction is precisely what organizations need to pay attention to, not being agile only for the sake of being agile.

So, that’s the direction we are taking with our whole portfolio. We are also considering other areas. For example, we are trying to improve our offering in vertical industry areas. We have other things in motion, like Open Communities, especially for the ArchiMate standard, which is one of our executable standards and easier to implement using architecture tools.

So, those are the kinds of things in our strategy at The Open Group as we work to serve our customers.

Gardner: And what’s next when it comes to The Open Group events? How are you helping people become the types of architects who reach that balance between agility and structure in the next wave of digital innovation?

New virtual direction for events

Gonzalez: We have many different kinds of customers. We have our members, of course. We have our trainers. We have people who are not members but are using our standards, and they are very important; they might eventually become members. So, we have been identifying those different markets and building customer journeys for all of them in order to serve them properly.

Serving them, for example, means providing better ways for them to find information on our website and to get access to our resources. All of our publications are free to download and use if you are an end-user organization. You only need a commercial license if you will apply them to deliver services to others.

In terms of events, we have had a very good experience with virtual events. The good thing about a crisis is that you can use it for learning, and we have learned that virtual events work very well. First, we get broader coverage. For example, if you organize a face-to-face event in Europe, people from Europe will probably attend, but it’s very unlikely that people from Asia, or even the U.S. or Latin America, will attend. But virtual events, which are also free, are attracting people from different countries and geographies.

We have had very good attendance at those virtual events. This year, all of our events except the one we had in San Antonio have been virtual. Besides the big ones that we have every three months, we have also organized smaller ones. We had a very good one in Brazil, we had another one from the Latin American community in Spanish, and we’re organizing more of these events.

For next year, we will probably have some mix of virtual and face-to-face, because, of course, face-to-face is very important. For our members, for example, sharing experiences as a network is a value you can only get by being physically there. So, depending on how the situation evolves, next year will likely be a mix of virtual and face-to-face events.

We are trying to get a closer view of what the market is demanding from us, not only in the architecture space but in general.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: The Open Group.

Copyright Interarbor Solutions, LLC and The Open Group, 2005-2020. All rights reserved.


The IT intelligence foundation for digital business transformation rests on HPE InfoSight AIOps

The next BriefingsDirect podcast explores how artificial intelligence (AI) increasingly supports IT operations.

One of the most successful uses of machine learning (ML) and AI for IT efficiency has been the InfoSight technology developed at Nimble Storage, now part of Hewlett Packard Enterprise (HPE).

Initially targeting storage optimization, HPE InfoSight has emerged as a broad and inclusive capability for AIOps across an expanding array of HPE products and services.

Please welcome a Nimble Storage founder, along with a cutting-edge machine learning architect, to examine the expanding role and impact of HPE InfoSight in making IT resiliency better than ever.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about the latest IT operations solutions that help companies deliver agility and edge-to-cloud business continuity, we’re joined by Varun Mehta, Vice President and General Manager for InfoSight at HPE and founder of Nimble Storage, and David Adamson, Machine Learning Architect at HPE InfoSight. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Varun, what was the primary motivation for creating HPE InfoSight? What did you have in mind when you built this technology?

Mehta: Various forms of call home were already in place when we started Nimble, and that’s what we had set up to do. But then we realized that the call home data was used to do very simple actions. It was basically to look at the data one time and try and find problems that the machine was having right then. These were very obvious issues, like a crash. If you had had any kind of software crash, that’s what call home data would identify.

We found that if, instead of just scanning the data one time, we could store it in a database and look for problems over time and across areas wider than a single use, we could come up with something very interesting. Part of the problem until then was that a database that could store this amount of data cheaply was just not available, which is why people would do just the one-time scan.

The enabler was that a new database became available. We found that rather than just scan once, we could put everyone’s data into one place, look at it, and discover issues across the entire population. That was very powerful. And then we could do other interesting things using data science such as workload planning from all of that data. So the realization was that if the databases became available, we could do a lot more with that data.
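The shift Mehta describes, from a one-time scan of each device to queries across the whole installed base, can be sketched with a toy example. The schema, metric names, and threshold below are invented for illustration; they are not HPE's actual call-home format.

```python
import sqlite3

# Store simulated call-home telemetry in a database instead of
# scanning each payload once and discarding it.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE callhome (
    device_id TEXT, ts INTEGER, metric TEXT, value REAL)""")

rows = [
    ("dev-a", 1, "cache_hit_pct", 97.0),
    ("dev-a", 2, "cache_hit_pct", 96.5),
    ("dev-b", 1, "cache_hit_pct", 61.0),  # persistently low on one device
    ("dev-b", 2, "cache_hit_pct", 58.0),
    ("dev-c", 1, "cache_hit_pct", 95.0),
]
conn.executemany("INSERT INTO callhome VALUES (?, ?, ?, ?)", rows)

# One query over the population finds outlier devices over time,
# something a single-device, one-time scan cannot see.
flagged = conn.execute("""
    SELECT device_id, AVG(value) AS avg_hit
    FROM callhome WHERE metric = 'cache_hit_pct'
    GROUP BY device_id HAVING avg_hit < 80
""").fetchall()
print(flagged)  # [('dev-b', 59.5)]
```

The same query shape scales from this toy table to fleet-wide telemetry once the data lives in one place.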

Gardner: And by taking advantage of that large data capability and the distribution of analytics through a cloud model, did the scope and relevancy of what HPE InfoSight did exceed your expectations? How far has this now come?

Mehta: It turned out that this model was really successful. They say that, “imitation is the sincerest form of flattery.” And that was proven true, too. Our customers loved it, our competitors found out that our customers loved it, and it basically spawned an entire set of features across all of our competitors.

The reason our customers loved it — followed by our competitors — was that it gave people a much broader idea of the issues they were facing. We then found that people wanted to expand this envelope of understanding that we had created beyond just storage.

Data delivers more than a quick fix

And that led to people wanting to understand how their hypervisor was doing, for example. And so, we expanded the capability to look into that. People loved the solution and wanted us to expand the scope into far more than just storage optimization.

Gardner: David, you hear Varun describing what this was originally intended for. As a machine learning architect, how has HPE InfoSight provided you with a foundation to do increasingly more when it comes to AIOps, dependability, and reliability of platforms and systems?


Adamson: As Varun was describing, the database is full of data that not only tracks everything longitudinally across the installed base, but also over time. The richness of that data set gives us an opportunity to come up with features that we otherwise wouldn’t have conceived of if we hadn’t been looking through the data. Also very powerful from InfoSight’s early days was the proactive nature of the IT support because so many simple issues had now been automated away. 

That allowed us to spend time investigating more interesting and advanced problems, which demanded ML solutions. Once you’ve cleaned up the Pareto curve of all the simple tasks that can be automated with simple rules or SQL statements, you uncover problems that take longer to solve and require a look at time series and telemetry that’s quantitative in nature and multidimensional. That data opens up the requirement to use more sophisticated techniques in order to make actionable recommendations.

Gardner: Speaking of actionable, something that really impressed me when I first learned about HPE InfoSight, Varun, was how quickly you can take the analytics and apply them. Why has that rapid capability to dynamically impact what’s going on from the data proved so successful? 

Support to succeed

Mehta: It turned out to be one of the key points of our success. I really have to compliment the deep partnership that our support organization has had with the HPE InfoSight team.

The support team, right from the beginning, prided themselves on providing outstanding service. Part of the proof of that was an incredible Net Promoter Score (NPS), an independent measurement of how satisfied customers are with a product. Nimble’s NPS score was 86, which is even higher than Apple’s. We prided ourselves on providing a really strong support experience to the customer.

Whenever a problem would surface, we would work with the support team. Our goal was for a customer to see a problem only once. And then we would rapidly fix that problem for every other customer. In fact, we would fix it preemptively so customers would never have to see it. So, we evolved this culture of identifying problems, creating signatures for these problems, and then running everybody’s data through the signatures so that customers would be preemptively inoculated from these problems. That’s why it became very successful.
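The signature workflow described above can be sketched roughly as follows. The signature name, telemetry fields, and firmware versions are hypothetical, not actual Nimble or HPE signatures: the point is only that a problem diagnosed once becomes a predicate that every customer's data is run through.

```python
# A "signature" encodes a problem support has diagnosed once,
# as a predicate over a device's telemetry.
def sig_fw_bug(telemetry):
    # Hypothetical known issue: firmware 2.1.x combined with NVRAM errors.
    return telemetry["firmware"].startswith("2.1") and telemetry["nvram_errors"] > 0

SIGNATURES = {"FW-2.1-NVRAM": sig_fw_bug}

# Simulated telemetry for the installed base.
fleet = {
    "customer-1": {"firmware": "2.1.3", "nvram_errors": 4},
    "customer-2": {"firmware": "2.2.0", "nvram_errors": 0},
    "customer-3": {"firmware": "2.1.1", "nvram_errors": 1},
}

# Run everyone's data through every known signature, so customers
# who have not yet hit the problem are flagged preemptively.
matches = [(cust, name)
           for cust, t in fleet.items()
           for name, sig in SIGNATURES.items() if sig(t)]
print(matches)  # [('customer-1', 'FW-2.1-NVRAM'), ('customer-3', 'FW-2.1-NVRAM')]
```

Each new diagnosis adds one entry to the signature set, which is why customers other than the first reporter never need to see the problem.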

Gardner: It hasn’t been that long since we were dealing with red light-green light types of IT support scenarios, but we’ve come a long way. We’re not all the way to fully automated, lights-out, machines running machines operations.

David, where do you think we are on that automated support spectrum? How has HPE InfoSight helped change the nature of systems’ dependability, getting closer to that point where they are more automated and more intelligent?

Adamson: The challenge with fully automated infrastructure stems from the variety of different components in the environments — and all of the interoperability among those components. If you look at just a simple IT stack, there are typically applications on top of virtual machines (VMs), on top of hosts — which may or may not have independent storage attached — and then the networking of all these components. That’s discounting all the different applications and various software components required to run them.

There are just so many opportunities for things to break down. In that context, you need a holistic perspective to begin to realize a world in which the management of that entire unit is managed in a comprehensive way. And so we strive for observability models and services that collect all the data from all of those sources. If we can get that data in one place to look at the interoperability issues, we can follow the dependency chains.

But then you need to add intelligence on top of that, and that intelligence needs to not only understand all of the components and their dependencies, but also what kinds of exceptions can arise and what is important to the end users.

With HPE InfoSight, we go so far as to pull all of our subject-matter expertise into the models and exception-handling automation. We may not necessarily have upfront information about what the most important parts of your environment are. Instead, we can stop and let the user provide some judgment. It’s truly about messaging to the user the different alternative approaches they can take. As we see exceptions happening, we can provide those recommendations in a clean and interpretable way, so [the end user] can bring context to bear that we don’t necessarily have ourselves.

Gardner: And the timing for these advanced IT operations services is very auspicious. Just as we’re now able to extend intelligence, we’re also at the point where we have end-to-end requirements – from the edge, to the cloud, and back to the data center.

And under such a hybrid IT approach, we are also facing a great need for general digital transformation in businesses, especially as they seek to be agile and best react to the COVID-19 pandemic. Are we able yet to apply HPE InfoSight across such a horizontal architecture problem? How far can it go?

Seeing the future: End-to-end visibility 

Mehta: Just to continue from where David started, part of our limitation so far has been from where we began. We started out in storage, and then as Nimble became part of HPE, we expanded it to compute resources. We targeted hypervisors; we are expanding it now to applications. To really fix problems, you need to have end-to-end visibility. And so that is our goal, to analyze, identify, and fix problems end-to-end.

That is one of the axes of development we’re pursuing. The other is that things are just becoming more-and-more complex. As businesses require their IT infrastructure to become highly adaptable, they also need scalability, self-healing, and enhanced performance. To achieve this, there is greater-and-greater complexity. And part of that complexity has been driven by really poor utilization of resources.

Go back 20 years and we had standalone compute and storage machines that were not individually very well-utilized. Then you had virtualization come along, and virtualization gave you much higher utilization — but it added a whole layer of complexity. You had one machine, but now you could have 10 VMs in that one place.

Now, we have containers coming out, and that’s going to further increase complexity by a factor of 10. And right on the horizon, we have serverless computing, which will increase the complexity another order of magnitude.


So, the complexity is increasing, the interconnectedness is increasing, and yet the demands on businesses to stay agile, competitive, and scalable are also increasing. It's really hard for IT administrators to stay on top of this. That's why you need end-to-end automation and to collect all of the data to actually figure out what is going on. We have a lot of work cut out for us.

There is another area of research, and David spends a lot of time working on this: you really want to avoid false positives. That is a big problem with lots of tools. They provide so many false positives that people just turn them off. Instead, we need to work through all of your data to be able to say, "Hey, this is a recommendation that you really should pay attention to." That requires a lot of technology, a lot of ML, and a lot of data science experience to separate the wheat from the chaff.

One of the things that’s happened with the COVID-19 pandemic response is the need for very quick turnaround. For example, people have had to quickly set up web sites for contact tracing, reporting on the disease, and for vaccine use. That shows the accelerated manner in which people need digital solutions — and it’s just not possible without serious automation.

Gardner: Varun just laid out the complexity and the demands for both the business and the technology. It sounds like a problem that mere mortals cannot solve. So how are we helping those mere mortals to bring AI to bear in a way that allows them to benefit – but, as Varun also pointed out, allows them to trust that technology and use it to its full potential?

Complexity requires automated assistance

Adamson: The point Varun is making is key. If you are talking about complexity, we’re well beyond the point where people could realistically expect to log in to each machine to find, analyze, or manage exceptions that happen across this ever-growing, complex regime.

Even if you’re at a place where you have the observability solved, and you’re monitoring all of these moving parts together in one place — even then, it easily becomes overwhelming, with pages and pages of dashboards. You couldn’t employ enough people to monitor and act to spot everything that you need to be spotting.

You need to be able to trust automated exception [finding] methods to handle the scope and complexity of what people are dealing with now. So that means doing a few things.

People will often start with naïve thresholds. They create manual thresholds to give alerts to handle really critical issues, such as all the servers went down.

But there are often more subtle issues that show up that you wouldn’t necessarily have anticipated setting a threshold for. Or maybe your threshold isn’t right. It depends on context. Maybe the metrics that you’re looking at are just the raw metrics you’re pulling out of the system and aren’t even the metrics that give a reliable signal.

What we see from the data science side is that a lot of these problems are multi-dimensional. There isn’t just one metric that you could set a threshold on to get a good, reliable alert. So how do you do that right?
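As a concrete illustration of that multi-dimensional point, consider two metrics that normally move together. Everything below — the metrics, numbers, and thresholds — is invented for illustration and is not HPE InfoSight's actual model:

```python
# Sketch (not HPE's model): why a single-metric threshold can miss a
# multi-dimensional anomaly. Each sample is (latency_ms, queue_depth);
# normally the two rise and fall together.
from statistics import mean, stdev

history = [(5, 2), (6, 3), (7, 4), (5, 3), (6, 2), (8, 5), (7, 3), (6, 4)]
sample = (8, 2)  # high latency with a *low* queue depth -- jointly odd

def zscores(points, x):
    cols = list(zip(*points))
    return [(x[i] - mean(c)) / stdev(c) for i, c in enumerate(cols)]

zs = zscores(history, sample)

# Per-metric thresholds: neither metric alone crosses 3 sigma, so no alert.
single_metric_alert = any(abs(z) > 3 for z in zs)

# Joint check: because the metrics normally move together, a large gap
# between their z-scores signals a broken correlation worth flagging.
joint_alert = abs(zs[0] - zs[1]) > 2.5
```

The single-metric rule stays silent on this sample while the joint check fires, which is the sense in which "there isn't just one metric that you could set a threshold on."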

For the problems that IT support provides to us, we apply automation and we move down the Pareto chart to solve things in priority of importance. We also turn to ML models. In some of these cases, we can train a model from the installed base and use a peer-learning approach, where we understand the correlations between problem states and indicator variables well enough so that we can identify a root cause for different customers and different issues.

Sometimes though, if the issue is rare enough, scanning the installed base isn’t going to give us a high enough signal-to-noise ratio. Then we can take some of these curated examples from support and do a semi-supervised loop. We basically say, “We have three examples that are known. We’re going to train a model on them.” Maybe it’s a few tens of thousands of data points, but it still comes from those three examples, so there is co-correlation that we are worried about. 

In that case we say: “Let me go fishing in that installed base with these examples and pull back what else gets flagged.” Then we can turn those back over to our support subject matter experts and say, “Which of these really look right?” And in that way, you can move past the fact that your starting data set of examples is very small and you can use semi-supervised training to develop a more robust model to identify the issues.
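That "go fishing, then hand candidates to the experts" loop can be sketched in a few lines. The feature values, similarity measure, and cutoff below are all invented for illustration, not HPE's implementation:

```python
# Hedged sketch of a semi-supervised "fishing" loop: start from a handful
# of support-confirmed examples, score the wider installed base by
# similarity, and surface lookalikes for expert review.
from math import dist

known_bad = [(0.9, 0.8), (0.85, 0.9), (0.95, 0.85)]   # curated examples
installed_base = {
    "sys-a": (0.1, 0.2), "sys-b": (0.88, 0.84),
    "sys-c": (0.5, 0.4), "sys-d": (0.92, 0.9),
}

def score(x):
    # Distance to the nearest known-bad example (lower = more similar).
    return min(dist(x, bad) for bad in known_bad)

# "Go fishing" in the installed base for systems resembling known cases...
candidates = [name for name, x in installed_base.items() if score(x) < 0.1]

# ...then hand `candidates` to support SMEs; confirmed hits get appended
# to known_bad, and the loop repeats with a larger training set.
```

Each pass through the loop grows the labeled set, which is how a model trained on three examples becomes robust over time.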

Gardner: As you are refining and improving these models, one of the benefits in being a part of HPE is to access growing data sets across entire industries, regions, and in fact the globe. So, Varun, what is the advantage of being part of HPE and extending those datasets to allow for the budding models to become even more accurate and powerful over time?

Gain a global point of view

Mehta: Being part of HPE has enabled us to leapfrog our competition. As I said, our roots are in storage, but really storage is just the foundation of where things are located in an organization. There is compute, networking, hypervisors, operating systems, and applications. With HPE, we certainly now cover the base infrastructure, which is storage followed by compute. At some point we will bring in networking. We already have hypervisor monitoring, and we are actively working on application monitoring.

HPE has allowed us to radically increase the scope of what we can look at, which also means we can radically improve the quality of the solutions we offer to our customers. And so it’s been a win-win solution, both for HPE where we can offer a lot of different insights into our products, and for our customers where we can offer them faster solutions to more kinds of problems.

Gardner: David, anything more to offer on the depth, breadth, and scope of data as it’s helping you improve the models?

Adamson: I certainly agree with everything that Varun said. The one thing I might add is in the feedback we’ve received over time. And that is, one of the key things in making the notifications possible is getting us as close as possible to the customer experience of the applications and services running on the infrastructure.


We’ve done a lot of work to make sure we identify what look like meaningful problems. But we’re fundamentally limited if the scope of what we measure is only at the storage or hypervisor layer. So gaining additional measurements from the applications themselves is going to give us the ability to differentiate ourselves, to find the important exceptions to the end user, what they really want to take action on. That’s critical for us — not sending people alerts they are not interested in but making sure we find the events that are truly business-critical. 

Gardner: And as we think about the extensibility of the solution — extending past storage into compute, ultimately networking, and applications — there is the need to deal with the heterogeneity of architecture. So multicloud, hybrid cloud, edge-to-cloud, and many edges to cloud. Has HPE InfoSight been designed in a way to extend it across different IT topologies?

Across all architecture

Mehta: At heart, we are building a big data warehouse. You know, part of the challenge is that we’ve had this explosion in the amount of data that we can bring home. For the last 10 years, since InfoSight was first developed, the tools have gotten a lot more powerful. What we now want to do is take advantage of those tools so we can bring in more data and provide even better analytics.

The first step is to deal with all of these use cases. Beyond that, there will probably be custom solutions. For example, you talked about edge-to-cloud. There will be locations where you have good bandwidth, such as a colocation center, and you can send back large amounts of data. But if you’re sitting as the only compute in a large retail store like a Home Depot, for example, or a McDonald’s, then the bandwidth back is going to be limited. You have to live within that and still provide effective monitoring. So I’m sure we will have to make some adjustments as we widen our scope, but the key is having a really strong foundation and that’s what we’re working on right now.
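The bandwidth-constrained edge case Mehta describes is commonly handled by summarizing telemetry locally and shipping only compact aggregates. This is a generic sketch with assumed sampling rates, not a description of HPE InfoSight's collectors:

```python
# Illustrative edge-telemetry summarization: instead of sending 600 raw
# data points over a thin uplink, send one aggregate record per window.
raw_samples = list(range(600))  # e.g., one metric sampled every 100 ms for a minute

def summarize(window):
    """Collapse a window of raw samples into a small aggregate record."""
    w = sorted(window)
    return {
        "min": w[0],
        "max": w[-1],
        "mean": sum(w) / len(w),
        "p95": w[int(0.95 * (len(w) - 1))],  # nearest-rank 95th percentile
    }

# Ten aggregate records go back to the analytics platform instead of
# 600 raw points -- a 60x reduction in uplink traffic.
records = [summarize(raw_samples[i:i + 60]) for i in range(0, 600, 60)]
```

The trade-off is losing per-sample detail, so in practice the window size and statistics would be tuned to the available bandwidth at each site.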

Gardner: David, anything more to offer on the extensibility across different types of architecture, of analyzing the different sources of analytics?

Adamson: Yes, originally, when we were storage-focused and grew to the hypervisor level, we discovered some things about the way we keep our data organized. If we made it more modular, we could make it easier to write simple rules and build complex models to keep turnaround time fast. We developed some experience and so we’ve taken that and applied it in the most recent release of recommendations into our customer portal.

We’ve modularized our data model even further to help us support more use cases from environments that may or may not have specific components. Historically, we’ve relied on having Nimble Storage there as a hub for everything to be collected. But we can’t rely on that anymore. We want to be able to monitor environments that don’t necessarily have that particular storage device, and we may have to support various combinations of HPE products and other non-HPE applications.

Modularizing our data model to truly accommodate that is something we have started down the path toward, and I think we’re making good strides.

The other piece is in terms of the data science. We’re trying to leverage longitudinal data as much as possible, but we want to make sure we have a sufficient set of meaningful ML offerings. So we’re looking at unsupervised learning capabilities that we can apply to environments for which we don’t have a critical mass of data yet, especially as we onboard monitoring for new applications. That’s been quite exciting to work on.

Gardner: We’ve been talking a lot about the HPE InfoSight technology, but there also has to be considerations for culture. A big part of digital transformation is getting silos between people broken down.

Is there a cultural silo between the data scientists and the IT operations people? Are we able to get the IT operations people to better understand what data science can do for them and their jobs? And perhaps, also allow the data scientists to understand the requirements of a modern, complex IT operations organization? How is it going between these two groups, and how well are they melding?

IT support and data science team up

Adamson: One of the things that Nimble did well from the get-go was have tight coupling between the IT support engineers and the data science team. The support engineers were fielding the calls from the IT operations guys. They had their fingers on the pulse of what was most important. That meant not only building features that would help our support engineers solve their escalations more quickly, but also things that we can productize for our customers to get value from directly.

Gardner: One of the great ways for people to better understand a solution approach like HPE InfoSight is through examples.  Do we have any instances that help people understand what it can do, but also the paybacks? Do we have metrics of success when it comes to employing HPE InfoSight in a complex IT operations environment? 

Mehta: One of the examples I like to refer to was fairly early in our history but had a big impact. It was at the University Hospital of Basel in Switzerland. They had installed a new version of VMware, and a few weeks afterward things started going horribly wrong with their implementation, which included a Nimble Storage device. They called VMware, and VMware couldn’t figure it out. Eventually they called our support team, and using InfoSight, our support team was able to figure it out really quickly. The problem turned out to be a result of the new version of VMware. If there was a hold-up in the networking, some sort of bottleneck in their networking infrastructure, this VMware version would try really hard to get the data through.


So instead of submitting each write to the storage array once, it would try 64 times. Suddenly, their traffic went up by 64 times. There was a lot of pounding on the network, pounding on the storage system, and we were able to tell with our analytics that, “Hey, this traffic is going up by a huge amount.” As we tracked it back, it pointed to the new version of VMware that had been loaded. We then connected with the VMware support team and worked very closely with all of our partners to identify this bug, which VMware very promptly fixed. But, as you know, it takes time for these fixes to roll out to the field.
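A traffic-amplification exception like this one can, in principle, be caught by comparing current throughput to a rolling baseline. The numbers and threshold below are illustrative only, not the actual InfoSight signature:

```python
# Toy sketch: flag a retry storm by the ratio of current write throughput
# to a rolling baseline, rather than by any fixed absolute threshold.
from statistics import median

baseline_writes = [1000, 1100, 950, 1050, 1000, 980, 1020]  # writes/sec history
current_writes = 64_000                                      # during the retry storm

# Median is robust to the occasional spiky sample in the baseline window.
amplification = current_writes / median(baseline_writes)

# Any order-of-magnitude jump is worth a recommendation to the user;
# the 64x storm here clears the bar easily.
alert = amplification > 10
```

A ratio-based check like this adapts to each customer's normal traffic level, which matters when the same model has to run across a diverse installed base.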

We were able to preemptively alert other people who had the same combination of VMware on Nimble Storage and say, “Guys, you should either upgrade to this new patch that VMware has made or just be aware that you are susceptible to this problem.”

So that’s a great example of how our analytics was able to find a problem, get it fixed very quickly — quicker than any other means possible — and then prevent others from seeing the same problem.

Gardner: David, what are some of your favorite examples of demonstrating the power and versatility of HPE InfoSight?

Adamson: One that comes to mind was the first time we turned to an exception-based model that we had to train. We had been building infrastructure designed to learn across our installed base to find common resource bottlenecks and identify and rank those very well. We had that in place, but we came across a problem that support was trying to write a signature for. It was basically a drive bandwidth issue.

But we were having trouble writing a signature that would identify the issue reliably. We had to turn to an ML approach because it was fundamentally a multidimensional problem. We tracked probably 10 to 20 different metrics per drive per minute on each system. From those metrics, we needed to come up with a good understanding of the probability that this was the biggest bottleneck on the system. This was not a problem we could solve by just setting a threshold.

So we had to really go in and say, “We’re going to label known examples of these situations. We’re going to build the sort of tooling to allow us to do that, and we’re going to put ourselves in a regime where we can train on these examples and initiate that semi-supervised loop.”

We actually had two to three customers that hit that specific issue. By the time we wanted to put that in place, we were able to find a few more just through modeling. But that set us up to start identifying other exceptions in the same way.

We’ve been able to redeploy that pattern now several times to several different problems and solve those issues in an automated way, so we don’t have to keep diagnosing the same known flavors of problems repeatedly in the future.

Gardner: What comes next? How will AI impact IT operations over time? Varun, why are you optimistic about the future?

Software eats the world 

Mehta: I think having a machine in the loop is going to be required. As I pointed out earlier, complexity is increasing by leaps and bounds. We are going from virtualization to containers to serverless. The number of applications keeps increasing and demand on every industry keeps increasing. 

Marc Andreessen, co-founder of the venture capital firm Andreessen Horowitz, famously said that “software is eating the world,” and really, it is true. Everything is becoming tied to a piece of software. The complexity of that is just huge. The only way to manage this and make sure everything keeps working is to use machines.

That’s where the challenge and opportunity is. Because there is so much to keep track of, one of the fundamental challenges is to make sure you don’t have too many false positives. You want to make sure you alert only when there is a need to alert. It is an ongoing area of research.

There’s a big future in terms of the need for our solutions. There’s plenty of work to keep us busy to make sure we provide the appropriate solutions. So I’m really looking forward to it.

There’s also another axis to this. So far, people have stayed in the monitoring and analytics loop and it’s like self-driving cars. We’re not yet ready for machines to take over control of our cars. We get plenty of analytics from the machines. We have backup cameras. We have radars in front that alert us if the car in front is braking too quickly, but the cars aren’t yet driving themselves.

It’s all about analytics; we haven’t yet graduated from analytics to control. That too is something you can expect to see in the future of AIOps. Once the analytics get really good, and once the false positives go away, you will see things moving from analytics to control. So there’s a lot of really cool stuff ahead of us in this space.

Gardner: David, where do you see HPE InfoSight becoming more of a game changer and even transforming the end-to-end customer experience where people will see a dramatic improvement in how they interact with businesses?

Adamson: Our guiding light in terms of exception handling is making sure that not only are we providing ML models that have good precision and recall, but we’re making recommendations and statements in a timely manner that come only when they’re needed — regardless of the complexity.

A lot of hard work is being put into making sure we make those recommendation statements as actionable and standalone as possible. We’re building a differentiator through the fact that we maintain a focus on delivering a clean narrative, a very clear-cut, “human readable text” set of recommendations. 

And that has the potential to save a lot of people a lot of time in terms of hunting, pecking, and worrying about what’s unseen and going on in their environments.

Gardner: Varun, how should enterprise IT organizations prepare now for what’s coming with AIOps and automation? What might they do to be in a better position to leverage and exploit these technologies even as they evolve?

Pick up new tools

Mehta: My advice to organizations is to buy into this. Automation is coming. Too often we see people stuck in the old ways of doing things. They could potentially save themselves a lot of time and effort by moving to more modern tools. I recommend that IT organizations make use of the new tools that are available.

HPE InfoSight is generally available for free when you buy an HPE product, sometimes with only a support contract. So make use of the resources, and look at the literature on HPE InfoSight. It is one of those tools that can be fire-and-forget: you turn it on and then you don’t have to worry about it anymore.

It’s the best kind of tool because we will come back to you and tell you if there’s anything you need to be aware of. So that would be the primary advice I would have, which is to get familiar with these automation tools and analytics tools and start using them.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


How Unisys ClearPath mainframe apps now seamlessly transition to Azure Cloud without code changes

When applications are mission-critical, where they are hosted matters far less than keeping them operating smoothly.

As many organizations face a ticking time bomb to modernize mainframe applications, one solution is to find a dependable, repeatable way to transition to a public cloud without degrading these vulnerable and essential systems of record.

The next BriefingsDirect cloud adoption discussion explores the long-awaited means to solve the mainframe to cloud transition for essential but aging applications and data. We’re going to learn how Unisys and Microsoft can deliver ClearPath Forward assets to Microsoft Azure cloud without risky code changes.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To learn more about the latest on-ramps to secure and agile public cloud adoption, we welcome Chuck Lefebvre, Senior Director of Product Management for ClearPath Forward at Unisys, and Bob Ellsworth, Worldwide Director of Mainframe Transformation at Microsoft. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts: 

Gardner: Bob, what’s driving the demand nowadays for more organizations to run more of their legacy apps in the public cloud?

Ellsworth: We see that more and more customers are embracing digital transformation, and they are finding the cloud an integral part of their digital transformation journey. And when we think of digital transformation, at first it might begin with optimizing operations, which is a way of reducing costs by taking on-premises workloads and moving them to the cloud. 

But the journey just starts there. Customers now want to further empower employees to access the applications they need to be more efficient and effective, to engage with their customers in different ways, and to find ways of using cloud technologies such as machine learning (ML), artificial intelligence (AI), and business intelligence (BI) to transform products.

Gardner: And it’s not enough to just have some services or data in the cloud. It seems there’s a whole greater than the sum of the parts for organizations seeking digital transformation — to get more of their IT assets into a cloud or digitally agile environment.

Destination of choice: Cloud

Ellsworth: Yes, that’s absolutely correct. The beauty is that you don’t have to throw away what you have. You can take legacy workloads such as ClearPath workloads and move those into the cloud, but then continue the journey by embracing new digital capabilities, such as the advanced services of ML, AI, and BI, so you can extend the usefulness and benefits of those legacy applications. 

Gardner: And, of course, this has been a cloud adoption journey for well over 10 years. Do you sense that something is different now? Are there more means available to get more assets into a cloud environment? Is this a tipping point?

Ellsworth: It is a tipping point. We’ve seen — especially around the mainframe, which is what I focus on — a huge increase in customer interest and selection of the cloud in the last 1.5 years as the preferred destination. And one of the reasons is that Azure has absolutely demonstrated its capability to run these mission- and business-critical workloads.

Gardner: Are these cloud transitions emerging differently across the globe? Is there a regional bias of some sort? Is the public sector lagging or leading? How about vertical industries? Where is this cropping up first and foremost?

Ellsworth: We’re seeing it occur in all industries; in particular, financial services. We find there are more mainframes in financial services, banking capital markets, and insurance than in any other industries.

So we see a propensity there where, again, the cloud has become a destination of choice because of its capability to run mission- and business-critical workloads. But in addition, we’re seeing this in state and local governments, and in the US Federal Government. The challenge in the government sector is the long cycle it takes to get funding for these projects. So, it’s not a lack of desire, it’s more the time it takes to move through the funding process.

Gardner: Chuck, I’m still surprised all these years into the cloud journey that there’s still such a significant portion of data and applications that are not in the cloud environment. What’s holding things back? What’s preventing enterprises from taking advantage of cloud benefits?

Lefebvre: A lot of it is inertia, and in some cases incorrect assumptions about what would be involved in moving. That’s what’s so attractive about our Unisys ClearPath solution. We can help clients move their ClearPath workloads without change. We take that ClearPath software stack from MCP initially and move it and re-platform it on Microsoft Azure. And that application and data comes across with no re-compilation, no refactoring of the data; it’s a straightforward transition. So, I think now that we have that in place, that transition is going to go a lot smoother and really enable that move to occur. 

I also second what Bob said earlier. We see a lot of interest from our financial partners. We have a large number of banking application partners running on our ClearPath MCP environment, and those partners are ready to go and help their clients as an option to move their workloads into the Azure public cloud.

Pandemic puts focus on agility

Gardner: Has the disruption from the coronavirus and COVID-19 been influencing this transition? Is it speeding it up? Slowing it down? Maybe some other form of impact?

Lefebvre: I haven’t seen it affecting any, either acceleration or deceleration. In our client-base most of the people were primarily interested initially in ensuring their people could work from home with the environments they have in place.

I think now that that’s settled in, and they’ve sorted out their virtual private networks (VPNs) and their always-on access processes, perhaps we’ll see some initiatives evolving. Initially, it was just businesses supporting their employees working from home. 

My perspective is that that should be enabled equally as well, whether they are running their backend systems of record in a public cloud or on-premises. Either way would work for them.

Gardner: Bob, at Microsoft, are you seeing any impact from the pandemic in terms of how people are adopting cloud services?

Ellsworth: We’re actually seeing an increase in customer interest and adoption of cloud services because of COVID-19. We’re seeing that in particular in some of our solutions such as Teams for doing collaboration and webinars, and connecting with others remotely. We’re seeing a big increase there.

With Office 365, we’ve seen a huge increase in deployments of the technology. In addition, we’ve seen a big increase in Azure consumption as customers deal with application growth and the requirements of running those applications.

As far as new customers that are considering moving to the cloud, I had thought originally, back in March when this was starting to hit, that our conversations would slow down as people dealt with more immediate needs. But, in fact, it was about a two-to-three-week slow down. But now, we’re seeing a dramatic increase in interest in having conversations about what are the right solutions and methods to move the workloads to the cloud.

So, the adoption is accelerating as customers look for ways to reduce cost, increase agility, and find new ways of running the workloads that they have today.

Gardner: Chuck, another area of impact in the market is around skills. There is often either a lack of programmers for some of these older languages or the skills needed to run your own data centers. Is there a skill factor that’s moving the transition to cloud?

Lefebvre: Oh, there certainly is. One of the attractive elements of a public cloud is the infrastructure layer of the IT environment is managed by that cloud provider. So as we see our clients showing interest in moving to the public cloud — first with things like, as Bob said, Office 365 and maybe file shares with SharePoint – they are now looking at doing that for mainframe applications. And when they do that, they no longer have to be worried about that talent to do the care and feeding of that infrastructure. As we move those clients in that direction, we’re going to take care of that ClearPath infrastructure, the day-to-day management of that environment, and that will be included as part of our offering. 

We expect most clients – rather than managing it themselves in the cloud – will defer to us, and that will free up their staff to do other things. They will have retirements, but less risk.

Gardner: Bob, another issue that’s been top-of-mind for people is security. One of the things we found is that security can be a tough problem when you are transitioning, when you change a development environment, go from development to production, or move from on-premises to cloud. 

How are we helping people remain secure during a cloud transition, and also perhaps benefit from a better security posture once they make the transition?

Time to transition safely, securely

Ellsworth: We always recommend making security part of the planning process. When you’re thinking of transforming from a datacenter solution to the cloud, part of that planning is for the security elements. We always look to collaborate with our partners, such as Unisys, to help define that security infrastructure and deployment.

What’s great about the Azure solutions is we’ve focused on hybrid as the way of deploying customers’ workloads. Most customers aren’t ready to move everything to the cloud all at the same time. For that reason, and with the fact that we focus on hybrid, we allow a customer to deploy portions of the workload to the cloud and the other portions in their data center. Then, over time, they can transition to the cloud.

But during that process, supporting high levels of security for user access, identity management, and even controls on access to the right applications and data — that’s all done through planning and the use of technologies such as Microsoft Active Directory and synchronization with Azure Active Directory. That planning is essential to ensure successful deployments and the high levels of security that customers require.

Gardner: Chuck, anything to offer on the security?

Lefebvre: Yes, we’ll be complementing everything Bob just described with our Unisys Stealth technology. It allows always-on access and isolation capabilities for deployment of any of our applications from Unisys, but in particular the ClearPath environment. And that can be completely deployed in Azure or, as Bob said, in a hybrid environment across an enterprise. So we are excited about that deployment of Stealth to complement the rigor that Microsoft applies to the security planning process.

Gardner: We’ve described what’s driving organizations to the cloud, the fact that it’s accelerating, that there’s a tipping point in what adoption can be accomplished safely and reliably. We’ve also talked about what’s held people back and their challenges.

Let’s now look at what’s different about the latest solutions for the cloud transition journey. For Unisys, Chuck, how are your customers reconciling the mainframe past with the cloud future?

No change in digital footprint

Lefebvre: We are able to transition ClearPath applications with no change. It’s been roughly 10 years since we’ve been deploying these systems on Intel platforms, in the case of MCP hosting it on a Microsoft Windows Server kernel. That’s been in place under our Unisys Libra brand for more than 10 years now.

In the last couple of years, we’ve also been allowing clients to deploy that software stack on virtualized servers of their choice: on Microsoft Hyper-V and the VMware virtualization platforms. So it’s a natural transition for us to move that stack and offer it in the Azure cloud. We can do that because of the layered approach in our technology. It’s allowed us to present an approach to our clients that is very low-risk, very straightforward.

Learn How to Transition ClearPath Workloads
To the Cloud

The ClearPath software stack sits on a Windows kernel, which is also at the foundation level offered by the Azure hybrid infrastructure. The applications therefore don’t change a bit, literally. The digital footprint is the same. It’s just running in a different place, initially as platform-as-a-service (PaaS).

The cloud adoption transition is really a very low-risk, safe, and efficient journey to the public cloud for those existing solutions that our clients have on ClearPath.

Gardner: And you described this as an ongoing logical, cascading transition — standing on the shoulders of your accomplishments — and then continuing forward. How was that different from earlier migrations, or a lift-and-shift, approach? Why is today’s transition significantly different from past migrations?

Lefebvre: Well, a migration often involves third parties doing a recompilation or a refactoring of the application — taking the COBOL code, recompiling it, refactoring it into Java, breaking it up, and moving the data out of our data formats into a different data structure. All of those steps have risk and disruption associated with them. I’m sure there are third parties that have proven that it can work. It just takes a long time and introduces risk.

For Unisys ClearPath clients who have invested years and years in those systems of record, that entire stack can now run in a public cloud using our approach — as I said before — with absolutely not a single bit of change to the application or the data.

Gardner: Bob, does that jibe with what you are seeing? Is the transition approach as Chuck described it an advantage over a migration process?

Ellsworth: Yes, Chuck described it very well. We see the very same thing. What I have found — and I’ve been working with Unisys clients since I joined Microsoft in 2001, early on going to the Unisys UNITE conference — is that Unisys clients are very committed and dedicated to their platform. They like the solutions they are using. They are used to those developer tools. They have built up business-critical, mission-critical applications and workloads.

For those customers that continue to be committed to the platform, absolutely, this kind of what I call “re-platforming” could easily be called a “transition.” You are taking what you currently have and simply moving it onto the cloud. It is absolutely the lowest risk, the least cost, and the quickest time-to-deployment approach.

For those customers, just like with every platform, when you have an interest in transforming to a different platform, there are other methods available. But I would say the vast majority of committed Unisys customers want to stay on the platform, and this provides the fastest way to get to the cloud — with the least risk and the quickest benefits.

Gardner: Chuck, the process around cloud adoption has been going on for a while. For those of us advocating for cloud 10 or 12 years ago, we were hoping that it would get to the point where it would be a smooth transition. Tell me about the history and the benefits of how ClearPath Forward and Azure have come together specifically. How long have Microsoft and Unisys been at this? Why is now, as we mentioned earlier, a tipping point?

Lefebvre: We’ve been working on this for a little over a year. We did some initial work with two of our financial application partners, North American banking partners, and the initial testing was very positive. Then, as we were finishing our engineering work to do the validation, our internal Unisys IT organization, which operates about 25 production applications to run the business, went ahead in parallel with us and deployed half of those on MCP in Azure, using the very process that I described earlier.

Today, they are running 25 production applications. About half of them have been there for nine months and the other half for the last two months. They are supporting things like invoicing our customers and tracking our supply chain status, among a number of critical applications.

We have taken that journey not just from an engineering point of view, but we’ve proven it to ourselves. We drank our own champagne, so to speak, and that’s given us a lot of confidence. It’s the right way to go, and we expect our clients will see those benefits as well.

Gardner: We haven’t talked about the economics too much. Are you finding, now that you’ve been doing this for a while, that there is a compelling economic story? A lot of people fear that a transition or migration would be very costly and that they won’t necessarily save anything by doing it, and so they may be resistant. But what’s the dollars-and-cents impact you have been seeing now that you’ve been transitioning ClearPath to Azure for a while?

Rapid returns

Lefebvre: Yes, there are tangible financial benefits that our IT organization has measured. In these small isolated applications, they calculated about a half-a-million dollars in savings across three years in their return on investment (ROI) analysis. And that return was nearly immediate because the transition for them was mostly about planning the outage period to ensure a non-stop operation and make sure we always supported the business. There wasn’t actually a lot of labor, just more planning time. So that return was almost immediate.

Gardner: Bob, anything to offer on the economics of making a smooth transition to cloud?

Ellsworth: Yes, absolutely. I have found a couple of catalysts for customers as far as cost savings. If a customer is faced with a potential hardware upgrade — perhaps the server they are running on is near end-of-life — by moving the workload to the cloud and only paying for the consumption of what you use, it allows you to avoid the hardware upgrade costs. So you get some nice and rapid benefits in cost avoidance.

In addition, for workloads such as test and development environments or user acceptance testing environments, beyond production use, the beauty of cloud pricing is that you only pay for what you are consuming.

So for those test and development systems, you don’t need to have hardware sitting in the corner waiting to be used during peak periods. You can spin up an environment in the cloud, do all of your testing, and then spin it back down. You get some very nice cost savings by not having dedicated hardware for those test and development environments.
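The pay-for-consumption point Bob makes can be made concrete with some back-of-the-envelope arithmetic. All figures below are hypothetical, not Azure list prices; the sketch only illustrates the shape of the comparison between amortized dedicated hardware and an on-demand VM that runs only during test cycles.

```python
# Illustrative cost comparison (hypothetical rates): a dedicated test/dev
# server that mostly sits idle versus a cloud VM billed only while running.

def dedicated_monthly_cost(hardware_price: float, amortize_months: int = 36,
                           upkeep_per_month: float = 0.0) -> float:
    """Amortized monthly cost of owned hardware, plus fixed upkeep."""
    return hardware_price / amortize_months + upkeep_per_month

def on_demand_monthly_cost(rate_per_hour: float, hours_used: float) -> float:
    """Cloud billing: pay only for the hours the VM is actually running."""
    return rate_per_hour * hours_used

# Hypothetical: $18,000 server amortized over 3 years plus $150/mo upkeep,
# versus a VM at $0.75/h used 8 hours a day, 20 working days a month.
dedicated = dedicated_monthly_cost(18_000, 36, 150)
elastic = on_demand_monthly_cost(0.75, 8 * 20)
print(f"dedicated: ${dedicated:.2f}/mo, on-demand: ${elastic:.2f}/mo")
```

With these invented numbers the on-demand environment is a fraction of the dedicated cost, which is the general effect being described; real savings depend entirely on actual rates and utilization.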

Gardner: Let’s dig into the technology. What’s under the hood that’s allowing this seamless cloud transition, Chuck?

Secret sauce

Lefebvre: Underneath the hood is the architecture that we have transformed to over the last 10 years where we are already running our ClearPath systems on Intel-based hardware on a Microsoft Windows Server kernel. That allows that environment to be used and re-platformed in the same manner.

To accomplish that, originally, we had some very clever technology that allows the unique instructions generated by the Unisys compilers to be emulated on an Intel-based, Windows-based server.

That’s really the fundamental underpinning that first allowed those clients to run on Intel servers instead of on proprietary Unisys-designed chips. Once that’s been completed, we’re able to be much more flexible on where it’s deployed. The rigor to which Microsoft has ensured that Windows is Windows — no matter if it’s running on a server you buy, whether it’s virtualized on Hyper-V, or virtualized in Azure — really allows us to achieve that seamless operation of running in any of those three different models and environments.
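The emulation idea Chuck describes, foreign instructions interpreted by software on commodity hardware, can be sketched in miniature. This toy interpreter is purely illustrative: the opcodes and register model are invented and bear no relation to the actual ClearPath instruction set.

```python
# Toy instruction-set emulator: a host loop fetches, decodes, and executes
# "foreign" instructions so the original program's semantics run unchanged
# on different hardware. Opcodes here are invented for illustration.

def emulate(program, registers=None):
    """Interpret a list of (opcode, *operands) tuples on a dict of registers."""
    regs = dict(registers or {})
    pc = 0  # program counter
    while pc < len(program):
        op, *args = program[pc]
        if op == "LOAD":      # LOAD reg, constant
            regs[args[0]] = args[1]
        elif op == "ADD":     # ADD dst, src  (dst += src)
            regs[args[0]] += regs[args[1]]
        elif op == "HALT":
            break
        else:
            raise ValueError(f"unknown opcode {op!r}")
        pc += 1
    return regs

result = emulate([("LOAD", "A", 40), ("LOAD", "B", 2), ("ADD", "A", "B"), ("HALT",)])
print(result["A"])  # 42
```

A production emulator adds translation caches and hardware-assisted techniques, but the fetch-decode-execute loop is the core idea that lets the same binary behavior run on any host the interpreter supports.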

Gardner: Where do you see the secret sauce, Bob? Is it the capability to have Windows be pure, if you will, across the hybrid spectrum of deployment models?


Ellsworth: Absolutely, the secret sauce as Chuck described was that transformation from proprietary instruction sets to standard Intel instruction sets for their systems, and then the beauty of running today on-premises on Hyper-V or VMware as a virtual machine (VM). 

And then the great thing is with the technologies available, it’s very, very easy to take VMs running in the data center and migrate them to infrastructure as a service (IaaS) VMs running in the cloud. So, seamless transformation and making that migration.

You’re taking everything that’s running in your production system, or test and development systems, and simply deploying them in VMs in the cloud instead of on-premises. So, a great process. Definitely, the earlier investment that was made is what allows that capability to utilize the cloud.

Gardner: Do you have early adopters who have gone through this? How do they benefit? 

Private- and public-sector success stories

Lefebvre: As I indicated earlier, our own Unisys IT operation has many production applications running our business on MCP. Those have all been moved from our own data center on an MCP Libra system to now running in the Microsoft Azure cloud. Our Unisys IT organization has been a long-time partner and user of Microsoft Office 365 and SharePoint in the cloud. Everything has now moved. This, in fact, was one of the last remaining Unisys IT operations that was not in the public cloud. That was part of our driver, and they are achieving the benefits that we had hoped for.

We also have two external clients. A banking partner is about to deploy a disaster recovery (DR) instance of their on-premises MCP banking application. That’s coming from our partner, Fiserv. Fiserv’s premier banking application is now available to run in Azure on our MCP systems. One of the clients is choosing to host a DR instance in Azure to support their on-premises production workload. They like that because, as Bob said, they only have to pay for it when they fire it up, if they need to use that DR environment.

We have another large state government project that we’re just about to sign, where that client will transition some of their ClearPath MCP workload to the Azure public cloud and manage it there.

Once that contract is signed and we get agreement from that organization, we will be using that as one of our early use case studies.

Gardner: The public sector, with all of their mainframe apps, seems like a no-brainer to me for these transitions to the cloud. Any examples from the public sector that illustrate that opportunity?

Ellsworth: We have a number of customers, specifically on the Unisys MCP platform, that are evaluating moving their workloads from their data centers into the cloud. We don’t have a production system as far as I know yet, but they’re in the late stages of making that decision. 

There are so many ways of utilizing the cloud, for things like DR, at a very low cost, instead of having to have a separate data center or failover system. Customers can even leave their production on-premises in the short term and stand up their test and development in the cloud, running the MCP system that way.

And then, once they’re in the cloud, they gain the capability to set up a high-availability DR system or high-availability production system, either within the same Azure data center, or failover from one system to another if they have an outage, and all at a very low cost. So, there are great benefits.

One other benefit is elasticity. When I talk with customers, they say, “Well, gee, I have this end-of-month process and I need a larger mainframe then because of those occasional higher capacity requirements.” Well, the beauty of the cloud is the capability to grow and shrink those VMs when you need more capacity for such an end-of-month process, for example.

Again, you don’t have to pre-purchase the hardware. You really only pay for the consumption of the capacity when you need it. So, there are great advantages and that’s what we talk to customers about. They can get benefits from considering deploying new systems in the cloud. Those are some great examples of why we’re in late-stage conversations with several customers about deploying the solution.

Increased data analytics

Gardner: I suppose it’s a little bit early to address this, but are there higher-order benefits when these customers do make the cloud transition? You mentioned earlier AI, ML, and getting more of your data into an executable environment where you can take advantage of analytics across more and larger data sets.

Is there another shoe to drop when it comes to the ROI? Will they be able to do things with their data that just couldn’t have been done before, once you make a transition to cloud?

Ellsworth: Yes, that’s absolutely correct. When you move the systems up to the cloud, you’re now closer to all the new workloads and the advanced cloud services you can utilize to, for example, analyze all that data. It’s really about turning more data into intelligent action.

Now, if you think back to the 1980s and 1990s, or even the 2000s, when you were building custom applications, you had to pretty much code everything yourself. Today, the way you build an application is to consume services. There’s no reason for a customer to build an ML application from scratch. Instead, you consume ML services from the cloud. So, once you’re in the cloud, it opens up a world of possibilities for continuing that digital business transformation journey.

Lefebvre: And I can confirm that that’s a key element of our product proposition from a ClearPath point of view as well. We have some existing technology, a particular component called Data Exchange, that provides an outstanding change data capture model. We can take the data coming into that back-end system of record and, using Kafka, for example, feed that data directly into an AI or ML application that’s already in place.
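The change-data-capture pattern described here can be sketched in a few lines. The event shape, field names, and tables below are invented for illustration and are not the actual Data Exchange or Kafka message format; in production the events would arrive from a Kafka topic rather than a Python list.

```python
# Minimal change-data-capture sketch: row-level changes from a system of
# record arrive as events and are mapped to flat records an analytics or
# ML consumer can use. Event schema is hypothetical.

def cdc_to_feature(event):
    """Map one CDC event to a flat feature record; skip deletes."""
    if event["op"] == "delete":
        return None
    row = event["after"]
    return {"key": event["table"] + ":" + str(row["id"]), **row}

events = [
    {"op": "insert", "table": "orders", "after": {"id": 1, "amount": 250.0}},
    {"op": "delete", "table": "orders", "before": {"id": 9}},
]
features = [f for e in events if (f := cdc_to_feature(e)) is not None]
print(features)  # the surviving insert, keyed as "orders:1"
```

The same mapping function would sit inside a Kafka consumer loop in a real pipeline, so the system of record keeps running untouched while downstream analytics receive a live feed of changes.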

One of the key areas for future investment — now that we have done the re-platforming to PaaS and IaaS – is extending our ePortal technology and other enabling software to ensure that these ClearPath applications really fit in well and leverage that cloud architecture. That’s the direction we see a lot of benefit in as we bring these applications into the public Azure cloud.

The cloud journey begins

Gardner: Chuck, if you are a ClearPath Forward client, you have these apps and data, what steps should you be taking now in order to put yourself in an advantageous position to make the cloud transition? Are there pre-steps before the journey? Or how should you be thinking in order to take advantage of these newer technologies?

Lefebvre: First of all, they should engage with the Unisys and Microsoft contacts who work with their organization to begin consultation on that journey. Data backup, data replication, DR: those elements around data, and your policy with respect to data, are the things that are likely going to change the most as you move to a different platform — whether that’s from a Libra system to an on-premises virtualized infrastructure or to Azure.

What you’ve done for replication with a certain disk subsystem probably won’t be there any longer. It’ll be done in a different way, and likely it’ll be done in a better way. The way you do your backups will be done differently.

Now, we have partnered with Dynamic Solutions International (DSI), which offers a virtual tape solution so that you can still use your backup scripts on MCP to do backups in exactly the same way in Azure. But you may choose to alter the way you do backups.

So your strategy for data, and how you handle it, is so very important to these enterprise-class mainframe applications. That’s probably where you’ll need to do the most thinking and planning.

Gardner: For those familiar with BriefingsDirect, we like to end our discussions with a forward-looking vision, an idea of what’s coming next. So when it comes to migrating, transitioning, getting more into the cloud environments — be that hybrid or pure public cloud — what’s going to come next in terms of helping people make the transition but also giving them the best payoff when they get there?

The cloud journey continues

Ellsworth: It’s a great question, because you should think of the world of opportunity, of possibility. I look back at my 47 years in the industry and it’s been incredible to see the transformations that have occurred, the technology advancements that have occurred, and they are coming fast and furious. There’s nothing slowing it down.

And so, when we see the cloud today, a lot of customers have a cloud-first strategy for building any new solutions. You go into the cloud first and have to justify staying on-premises. Then customers move to a cloud-only strategy, where they not only deploy new solutions but migrate their existing workloads, such as ClearPath, up to the cloud. They get to a point where they can shut down most of what they run in their data centers and get out of the business of operating IT infrastructure, with operational support provided for them as a service.


Next, they move into transforming through cultural change in their own staff. Today the people that are managing, maintaining, and running new systems will have an opportunity to learn new skills and new ways of doing things, such as cloud technology. What I see over the next two to three years is a continuation of that journey, the transformation not just of the solutions the customers use, but also the culture of the people that operate and run those solutions.

Gardner: Chuck, for your installed base across the world, why should they be optimistic about the next two or three years? What’s your vision for how their situation is only going to improve?

Lefebvre: Everything that we’ve talked about today is focused on our ClearPath MCP market and the technology that those clients use. As we go forward into 2021, we’ll be providing similar capabilities for our ClearPath OS 2200 client base, and we’ll be growing the offering.

Today, we’re starting with the low-end of the customer base: development, test, DR, and the smaller images. But as the Microsoft Azure cloud matures, as it scales up to handle our scaling needs for our larger clients, we’ll see that maturing. We’ll be offering the full range of our products in the Azure cloud, right on up to our largest systems.

That builds confidence across the board in our client base, in Microsoft and in Unisys. We want to crawl, then walk, and then run. That journey, we believe, is the safest way to go. And as I mentioned earlier, this initial workload transformation is occurring through a re-platforming approach. The really exciting work is bringing cloud-native capabilities to better integrate those systems of record with the systems of engagement that cloud-native technology offers.

And we have some really interesting pieces under development now that will make that additional transformation straightforward. Our clients will be able to leverage that – and continue to extend that back-end investment in those systems. So we’re really excited about the future.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsors: Unisys and Microsoft.

A discussion on how many organizations face a reckoning to move mainframe applications to a cloud model without degrading the venerable and essential systems of record. Copyright Interarbor Solutions, LLC, 2005-2020. All rights reserved.


Digital transformation enables an unyielding and renewable value differentiation capability

The next edition of the BriefingsDirect Voice of Innovation podcast series explores architecting businesses for managing ongoing disruption.

As enterprises move past crisis mode in response to the COVID-19 pandemic, they require a systemic capability to better manage shifting market trends.

Stay with us to examine how Hewlett Packard Enterprise (HPE) Pointnext Services advises organizations on using digital transformation to take advantage of new and emerging opportunities.

Listen to the podcast. See the video. Find it on iTunes. Read a full transcript or download a copy. 

To share the Pointnext view on transforming businesses to effectively innovate in the new era of pervasive digital business, BriefingsDirect welcomes Craig Partridge, Senior Director Worldwide, Digital Advisory and Transformation Practice Lead, at HPE Pointnext Services. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Craig, how has the response to the pandemic accelerated the need for comprehensive digital transformation?

Partridge: We speak to a lot of customers around the world. And the one thing that we are picking up very commonly is a little bit counter-intuitive.

At the beginning of the pandemic — in fact, at the beginning of any major disruption — there is a sense that companies will put the brakes on and slow everything down. And that happened as we went through this initial period. Preserving cash and liquidity kicked in and a minimum viable operating model emerged. People were reluctant to invest.

But as they now begin to see the shifting landscape in industries, we are beginning to see a recognition that those pivoting out of these disruptive moments the quickest — with sustained, long-term viability built behind how they accelerate — those organizations are the ones driving new experiences and new insights. They are pushing hard on the digital agenda. In other words, digitally active companies seem to be the ones pivoting quicker out of these disruptions — and coming out stronger as well.

So although there was an initial pause as people pivoted to the new normal, we are now seeing an acceleration of initiatives and projects, underpinned by technology, that are fundamentally about reshaping the customer experience. If you can do that through digital engagement models, you can continue to drive revenue and customer loyalty because you are executing those valued transactions through digital platforms.

Gardner: Has the pandemic and response made digital transformation more attractive? If you have to do more business digitally, if your consumers and your supply chain have become more digital, is this a larger opportunity?

Partridge: Yes, it’s not only more attractive – it’s more essential. That’s what we are learning.

A good example here in the UK, where I am based, is that big retailers have traditionally been deeply invested in the brick-and-mortar experience of walking into a retail store or supermarket, those kinds of big, physical spaces. They figured out during this period of disruption that the only way to continue to drive revenue and take orders was on digital platforms. Well, guess what? Those digital platforms were only scaled and sized for a certain kind of demand, and that demand was based on a pre-pandemic normal.

Now, they have to double or treble the capacity of their transactions across those digital platforms. They are having to increase massively their capability to not only buy online, but to get deliveries out to those customers as well.

So this transformation is not just an attractive thing to do. For many organizations pivoting hard to digital engagement and digital revenue streams is their new normal. That’s what they have to focus on — and not just to survive but for beyond that. It’s the direction to their new normal as well.

Gardner: It certainly seems that the behavior patterns of consumers, as well as employees, have changed for the longer term when it comes to things like working at home, using virtual collaboration, bypassing movie theaters for online releases, virtual museums, and so forth.

For those organizations that now have to cater to those online issues and factor in the support of their employees online, it seems to me that this shift in user behavior has accelerated what was already under way. Do companies therefore need to pick up the pace of what they are doing for their own internal digital transformation, recognizing that the behaviors in the market have shifted so dramatically?

Safety first 

Partridge: Yes, in the past digital transformation focused on the customer experience, the digital engagement channel, and building out that experience. You can relate that in large part to the shift toward e-commerce. But increasingly people are aware of the need to integrate information about the physical space as well. And if this pandemic taught us anything, it’s that they need to not only create great experiences – they must create safe, great experiences.

What does that mean? I need to understand my physical space so I can augment my service offerings in a way that’s safe. We are looking at scenarios where using video recognition and artificial intelligence (AI) will begin to work out whether that space is being used safely. Are there measurements we can put in place to protect people better? Are people keeping to certain social distancing rules?

All of that is triggering the next wave of customer experience, which isn’t just the online digital platform and digital interactions, but — as we get back out into the world and as we start to occupy those spaces again — how do I use the insight about the physical space to augment that experience and make sure that we can emerge safer, better, and enjoy those digital experiences in a way that’s also physically safe.

Beyond just the digital transactions side, now it’s much more about starting to address the movement that was already long under way — the digitization of the physical world and how that plays into making these experiences more beneficial.

Gardner: So if the move to digitally transform your organization is an imperative, if those who did it earlier have an advantage, if those who haven’t done it want to do it more rapidly — what holds organizations back? What is it about legacy IT architectures that are perhaps a handicap?

Pivoting from the cloud 

Partridge: It’s a great question because when I talk to customers about moving into the digital era, that triggers the question, “Well, what was there before this digital era?” And we might argue it was the cloud era that preceded it.

Now, don’t get me wrong. These aren’t sequential. I’m not saying that the cloud era is over and the digital era has replaced it. As you know, these are waves, and they rise on top of each other. And organizations that are able to go fast and accelerate on the digital agenda are often the same organizations that executed well in the cloud era.

The biggest constraint we see as organizations try to stress-test their digital age adoption is to see if they actually have agility in the back end. Are the systems set up to be able to scale on-demand as they start to pivot toward digital channels to engage their customers? Does a recalibration of the supply chain mean applications and data are placed in the right part of on- or off-premises cloud architecture supply chains?

If you haven’t gone through a modernization agenda, if you haven’t tackled that core innovation issue, if you haven’t embraced cloud architectures, cloud-scale, and software-defined – and, increasingly, by the way, the shift to things like containerization, microservices, and decomposing big monolithic applications into manageable, application programming interface (API)-connected chunks — if you haven’t gone through that cloud-enabled exploration prior to the digital era, well, it looks like you still have some work to do before you can get the gains that some of those other modern organizations are now able to express.
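The decomposition being described, breaking a monolith into small services connected through narrow interfaces, can be illustrated with a toy example. The service names, fields, and logic here are invented; in a real system each function would sit behind its own API endpoint and scale independently.

```python
# Toy decomposition sketch: a monolithic checkout routine split into two
# small "services" composed through narrow, API-like interfaces.

def pricing_service(items):
    """Owns pricing logic only; could be deployed and scaled on its own."""
    return sum(qty * unit_price for qty, unit_price in items)

def tax_service(subtotal, rate=0.20):
    """Owns tax rules only; callers depend on its interface, not its internals."""
    return round(subtotal * rate, 2)

def checkout(items):
    """Thin orchestration layer: composes services instead of inlining their logic."""
    subtotal = pricing_service(items)
    return subtotal + tax_service(subtotal)

print(checkout([(2, 10.0), (1, 5.0)]))  # 25.0 subtotal + 5.0 tax = 30.0
```

The point of the exercise is that either piece can now be replaced, re-hosted, or scaled without touching the other, which is exactly the agility monolithic applications lack.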

There’s another constraint, which is really key. For most of the customers we speak to, it tends to be in and around the operating model. In a lot of the conversations I have, customers have over-invested in technology. They are on every cloud platform available. They are using every kind of digital technology to gain a level of competitive advantage.

Yet, at the heart of any organization are the people. It’s the culture of the people and the innovation of your people that really makes the difference. So, not least of all, supply chain agility sits right at the heart of this conversation. It is the fundamental operating model — not just of IT, but the operating model of the entire organization.

So have they unpicked their value chain? Have they looked at the key activities? Have they thought about how, when they implement new technology, it might replace or augment those activities? And what does that mean to the staff? Can you bring them with you, and have you empowered them? Have you re-skilled them along the way? Have you driven those cultural change programs to instill that digital-first mindset, which is really the key to success in all of this?

Gardner: So many interdependencies, so much complexity, frankly, when we’re thinking about transacting across the external edge to cloud, to consumer, and to data center. And we’re talking about business processes that need to extend into new supply chains or new markets.

Given that complexity, tell us how to progress beyond understanding how difficult this all can be and to adopt proven ways that actually work.

Universal model has the edge 

Partridge: For everything that we’ve talked about, we have figured out that there is a universal model that organizations can use to go after this space methodically.

We found out that organizations are very quickly pivoting to exploring their digital edge. I think the digital agenda is an edge-in conversation. Again, I think that marks it out from the preceding cloud era, which was much more core-out. That was about getting scale, efficiency, and cost optimization out of in-place service delivery models — a very core-out conversation. When you think digital, you have to begin to think about the use case of where value is created or exchanged. And that’s an edge-in conversation.

And we managed to find that there are two journeys behind that discussion. The first one is about deciding to whom you are looking to deliver that digital experience. So when you think about digital engagement, really care passionately about who the beneficiary persona is behind that experience. You need to describe that person in terms of what their day-in-the-life looks like. What pains do they face today? What gains could you develop that would deliver better outcomes for them? How can you walk in their shoes, and how do you describe that?

We found that is a key journey, typically led by a chief digital officer-type character who is responsible for driving new digital engagement with customers. If the persona is external to the customer, if it’s a revenue-generating persona, we might think of revenue as the essential key performance indicator (KPI). But you can apply similar techniques to drive internal personas’ productivity. In that case, productivity becomes the KPI.

That journey is inspired by initiatives that are trying to use digital to connect to people in new, innovative, and differentiated ways. And you’ll find different stakeholders behind that journey.

And we found another journey, which is reshaping the edge. And that’s much more about using technology to digitize the physical world. So it’s less about the experience and more about business efficiency and effectiveness at the edge — using the insights from instrumenting and digitizing the physical world to give you a sense of how that space is being used. How is my manufacturing floor performing? In the manufacturing space, the KPI of overall equipment effectiveness (OEE) becomes key. Behind this journey you’ll see big Industry 4.0-type and Internet of Things (IoT)-type initiatives under way.

If organizations are able to stitch these two journeys together — rather than treat them as siloed sandpits for innovation — and if they can connect them together, they tend to get compound benefits.

You asked about where the constraint comes in. As we said, it is about getting agility into the supply chain. And again, we’ve actually found that there are two connected journeys, but with very different stakeholders behind them, which drive that agenda. 

One journey describes a core renovation agenda that will occupy 70 to 80 percent of every IT budget every year: the constant need to challenge the price-performance of legacy environments and continually optimize and move the workloads and data into the right part of the supply chain for strategic advantage.

That is coupled with yet another journey, the cloud-enablement journey, and that’s very much developer-led more than it is led by IT. IT is typically holding the legacy footprint, the technical-debt footprint, but the developer is out there looking to exploit cloud-native architectures to write the next wave of applications and experiences. And they are just as impactful when it comes to equipping the organization with the cloud scale that’s necessary to mine those opportunities at the edge.

So, there is a balance in this equation, to your point. There is innovation at the edge, very much line-of-business-driven, very much about business efficiency and effectiveness, or revenue and productivity, the real tangible dollar-value outcomes. And on the other side, it’s more about agility in the supply chain. It’s getting that balance right, so that I have the agility that allows me to go and explore the world digitally at the edge.

So they sort of overlap. And the implication there is that there are three core enablers and they are true no matter which of the big four agenda items customers are trying to drive through their initiative programs. 

In digital, data is everything 

Two of those enablers very much relate to data. Again, Dana, I know in the digital era data is everything. It is the glue that holds this new digital engagement model together. In there we found two key enablers that constantly come up, no matter which agenda you are driving. 

The first one is surely you need intelligence from that data; data for its own sake is of no use, it’s about getting intelligence from that dataset. And that’s not just to make better decisions, but actually to innovate, to create differentiated value propositions in your market. That’s really the key agenda behind that intelligence enabler. 

And the second thing, because we are dealing with data, is a huge impact and emphasis on being trusted with that data. And that doesn’t just mean being compliant with regulatory standards or having the right kind of resiliency and cybersecurity approach, it means going beyond that.

You need to gain intelligence from the data; data for its own sake is of no use, it’s about getting intelligence from the datasets. And that’s not just to make better decisions, it’s to innovate and create differentiated value propositions in your market.

In this digitally enabled world, we want to trust brands with our data because often that data is now extremely personal. So beyond just General Data Protection Regulation (GDPR) compliance, trust here means, “Am I being ethical? Am I being transparent about how I use that data?” We all saw the Cambridge Analytica-type of impact and what happens when you are not transparent and you are not ethical about how you use data.

Now, one thing we haven’t touched on and I will just throw it up as a bit of context, Dana. There is a consideration, a kind of global consideration behind all of this agenda and that’s the shift toward everything-as-a-service (EaaS).

A couple of key attributes define that consideration. The most obvious is financial flexibility. For sure, as you reassemble your supply chain — as you continue to press on that cloud-enabled side of the map — paying only for what you consume, and doing so in a strategic way, helps you get the right mix in that supply chain.

But I think the more important thing to understand is that our customers are being equally innovative at the edge. So they are using that everything-as-a-service momentum to change their industry, their market, and the relationship they have with their customers. It helps especially as they pivot into a digital customer experience. Can that experience be constructed around a different business model?

We found that that’s a really useful way of deconstructing and simplifying what is actually quite a complex landscape. And if you can abstract — if you can use a model to abstract away the chaos and create some simplicity — that’s a really powerful thing. We all know that good models that abstract away complexity and create simplicity are hugely valuable in helping organizations reconstruct themselves. 

Gardner: Clearly, before the pandemic, some organizations dragged their feet on digital transformation as you’ve described it. They had a bit of inertia. But the pandemic has spurred a lot of organizations, both public and private, on. 

Hopefully, in a matter of some months or even a few years, the pandemic will be in the rearview mirror. But we will be left with the legacy of it, which is an emerging business paradigm of being flexible, agile, and more productive. 

Are we going to get a new mode of business agility where the payoff is commensurate with all the work?

Agility augurs well post-pandemic 

Partridge: That’s the $6 million question, Dana. I would love to crystal ball gaze with you on that one because agility is key to any organization. We all know that there are constraints in traditional customer experiences — making widgets, selling products, transactional relationships, relationships that don’t lend themselves to having digital value added to them. I wonder how long that model goes on for as we are experiencing this shift toward digital value. And that means not just selling the widget or the product, but augmenting that with digital capabilities, with digital insights, and with new ways of adding value to the customer’s experience beyond just the capital asset. 

I think that was being fast-tracked before this global pandemic. And it’s the organizations now that are in the midst of doubling down on that — getting that digital experience right, ahead of product and prices — that’s the key differentiator when you go to market.

And, for me, that customer experience increasingly now is the digital customer experience. I think that move was well under way before we hit this big crisis. And I can see customers now doubling down, so that if they didn’t get it right pre-pandemic, they are getting it right as they accelerate out of the pandemic. They recognize that that platform is the only way forward. 

You will hear a lot of commentators talk about the digital agenda as being driven by what they call the platform-driven economy. Can you create a platform in which your customers are willing to participate, maybe even your ecosystem of partners who are willing to participate and create that kind of shared experience and shared value? Again, that’s something that HPE is very much invested in. As we pivot our business model, to EaaS outcomes, we are having to double down on our customer experience and increasingly that means digitizing that experience through that digital platform agenda. 

Gardner: I would like to explore some examples of how this is manifesting itself. How are organizations adjusting to the new normal and leveraging that to a higher level of business capability?

Also, why is a third-party organization like HPE Pointnext Services working within an ecosystem model with many years of experience behind it? How are you specifically gearing up to help organizations manage the process we have been describing? 

HPE digital partnerships 

Partridge: This whole revolution requires different engagement models. The relationship HPE shares with its customers is becoming a technologically enabled partnership. Whenever you partner with a customer to help advance their business outcomes, you need a different way to engage with them.

We can continue to have our product-led engagement with customers, because many of them enjoy that relationship. But as we continue to move up the value stack we are going to need to swing to more of an advisory-led engagement model, Dana, where we are as co-invested in the customers’ outcomes as they are. 

We understand what they are trying to drive from a business perspective. We understand how technology is opening up and enabling those kinds of outcomes to be materialized, for the value to be realized. 

A year ago, we set out to reshape the way we engage with customers around this conversation. Driving that kind of digital partnership means sitting down with a customer to co-innovate, going through workshops where we as technologists bring our expertise to the customer as the expert in their industry. Those two minds can meld to create more than one plus one equals two. By using design thinking and co-design techniques, we can analyze the customer’s business problem and shape solutions that deliver really, really big outcomes for our customers.

For 15 years I have been a consultant inside of HP and HPE and we have always had that strong consulting engine. But now with HPE Pointnext Services we are gearing it around making sure that we are able to address the customers’ business outcomes, enabled through technology. 

Never has there been a time when technology has been so welded into a customer’s underlying value proposition. … There has never been a more open-door policy from our partners and customers.

And the timing is right-on. Never has there been a time when technology has been so welded into a customer’s underlying value proposition. I have been 25 years in IT. In the past, we could have gotten away with being a good partner to IT inside of our customer accounts. We could have gotten away with constantly challenging that price and performance ratio and renovating that agenda so that it delivers better productivity to the organization. 

But as technology makes its way into the underlying business model — as it becomes the differentiating business model — it’s no longer just a productivity question. Now it’s about how partners work to unlock new digital revenue streams. Well, that needs a new engagement model. 

And so that’s the work that we have been doing in my team, the Digital Advisory and Transformation Practice, to engage customers in that value-based discussion. Technology has made its way into that value proposition. There has never been a more open-door policy from our partners and customers who want to engage in that dialogue. They genuinely want to get the benefit of a large tech company applying itself to the customers’ underlying business challenges. That’s the partnership that they want, and there is no excuse for us not to walk through that door very confidently. 

Gardner: Craig, specifically at HPE Pointnext Services, what’s the secret sauce that allows you to take on this large undertaking of digital transformation? 

Mapping businesses’ DX ambition

Partridge: The development of this model has led to a series of unique pieces of intellectual property (IP) we use to help advance the customer ambition. I don’t think there has ever been a moment in time quite like this with the digital conversation. 

Customers recognize that technology is the fundamental weapon to transform and differentiate themselves in the market. They are reaching out to technology partners to say, “Come and participate with me using technology to fundamentally change my value proposition.” So we are being invited in now as a tech company to help organizations move that value proposition forward in a way that we never were before. 

In the past, HPE’s pedigree has been constantly challenging the optimization of assets and the price-performance, making sure that platform services are delivered in a very efficient and effective way. But now customers are looking to HPE to uniquely go underneath the covers of their business model — not just their operating model, but their business model. 

Now, we are not writing the board-level strategy for digital ambition. The great sweet spot for us, rather, is where customers have a digital North Star, some digital ambition, but are struggling to realize it. They are struggling to land those initiatives that are, by definition, technology-enabled. That’s where tech companies like HPE are at the forefront of driving digital ambition.

So we have this unique IP, this model we developed inside of HPE Pointnext Services, and the methodology of how to apply it. We can use it as a visualization tool and as a storytelling tool to better communicate the business’s digital ambition across the organization.

We can use it to map out the initiatives and look at where overlaps and duplications occur inside organizations. We are truly looking at this from edge to cloud and as-a-service — that holistic side of the map helps us unpick the risks, dependencies, and prerequisites. We can use the map to inspire new ideas and advance a customer’s thinking about what technology might enable.

We can also deploy the map with our building blocks behind each of the journeys, knowing what digital capabilities need to be brought on-stream and in what sequence. Then we can de-risk a customer’s path to value. That’s a great moment in time for us and it’s uniquely ours. Certainly, the model is uniquely ours and the way we apply it is uniquely ours.

But it’s also a timing thing, Dana. There has never been a better time in the industry where customers are seeking advice from a technology giant like HPE. So it’s a mixture of having the right IP, having the right opportunity, and the right moment as well. 

Gardner: So how should such organizations approach this? We talked about the methodology but initiating something like this map and digital ambition narrative can be daunting. How do we start the process?

How to pose the right questions 

Partridge: It begins by understanding a description of this complex landscape, as we have explored in this discussion. Begin to visualize your own digital ambition. See if you can take two or three top initiatives that you are driving and explore them across the map. So what’s the overriding KPI? Where does it start? 

Then ask yourself the questions in the middle of the map. What are the key enablers? Am I addressing a shared intelligence backbone? How am I handling trust, security, and resiliency? What am I doing to look at the operating model and the people? How is the culture central to all of this? How am I going to provide it as-a-service? Am I going to consume component parts of the service? How does it stretch over into the supply chain? How is it addressing the experience?

HPE Pointnext Services’ map is a beautiful tool to help any customer today start to plot their own initiatives and say, “Well, am I thinking of this initiative in a fully 360° way.”

If you are stuck, come and ask HPE. A lot of my advisors around the world map their customers’ initiatives onto this framework. And we start to ask questions. We start to unveil some of the risks, dependencies, and prerequisites. As you put in more and more initiatives and programs, you begin to see duplication in the middle of the model play out. That enables customers to de-risk and find a quicker path to value, because they can deduplicate what they can now see as a common, shared digital backbone. Often customers run those in isolation, but seeing it through this lens helps them deduplicate that effort. That’s a quicker path to value.

We engage customers around one- to two-day ideation workshops. Those are very structured ways of having creative, outside-of-the-box type thinking and putting in enough of a value proposition behind the idea to excite people.

We do a lot around ideation and design thinking. If customers have yet to figure out a digital initiative, what’s their North Star, where should they start? We engage customers around one- to two-day ideation workshops. Those are very structured ways of having creative, outside-of-the-box-type thinking and putting in enough of a value proposition behind the idea to excite people.

We had a customer in Italy come to us and say, “Well, we think we need to do something with AI, but we are not quite sure where the value is.”

Then we have a way of engaging to help you accelerate, and that’s really about identifying what the critical digital capabilities are. Think of it at the functional level first. What digital functions do I need to be able to achieve some level of outcome? And then get that into some kind of backlog so you know how to sequence it. And again, we work with customers to help do that as well. 

There are lots of ways to slice this, but, ultimately, dive in, get an initiative on the map, and begin to look at the risks and dependencies as you map it through the framework. Are you asking the right questions? Is there a connection to another part of the map that you haven’t examined yet that you should be examining? Is there a part of the initiative that you have missed? That is the immediate get-go start point. 

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. See the video. Sponsor: Hewlett Packard Enterprise.


How an SAP ecosystem partnership reduces risk and increases cost-efficiency around taxes management

The next BriefingsDirect data-driven tax optimization discussion focuses on reducing risk and increasing cost efficiency as businesses grapple with complex and often global spend management challenges.

We’ll now explore how end-to-end visibility of almost any business tax, compliance, and audit functions allows for rapid adherence to changing requirements — thanks to powerful new tools. And we’ll learn directly from businesses how they are pursuing and benefiting from advances in intelligent spend and procurement management.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To uncover how such solutions work best, we welcome Sean Thompson, Executive Vice-President of Network and Ecosystem at SAP Procurement Solutions; Chris Carlstead, Head of Strategic Accounts and Partnerships and Alliances at Thomson Reuters; and Poornima Sadanandan, P2P IT Business Systems Lead at Stanley Black and Decker. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts: 

Gardner: Sean, what’s driving the need for end-to-end visibility when it comes to the nitty-gritty details around managing taxes? How can businesses reduce risk and increase cost efficiency — particularly in difficult, unprecedented times like these — when it comes to taxation?

Thompson: It’s a near-and-dear topic for me because I started off my career in the early ‘90s as a tax auditor, so I was doing tax accounting before I went into installing SAP ERP systems. And now here I am at SAP at the confluence of accounting systems and tax.

We used to talk about managing risk as making sure you’re compliant with the various regulatory agencies in terms of tax. But now, in the age of COVID-19, compliance is also about helping governments. Governments more than ever need companies to be compliant. They need solutions that drive compliance, because taxes these days are not only needed to fund governments in the future, but also to support the dynamic changes now in reacting to COVID-19 and encouraging economic incentives.

There’s also a dynamic nature to changes in tax laws. The cost-efficiency now being driven by data-driven systems helps ensure compliance across accounting systems with all of the tax authorities. It’s a fascinating time because digitization brings together business processes thanks to the systems and data that feed the continuing efficiency.

It’s a great time to be talking about tax, not only from a compliance perspective but also from a cost perspective. Now that we are in the cloud era — driving data and business process efficiency through software and cloud solutions — we’re able to drive efficiencies unlike ever before because of artificial intelligence (AI) and the advancements we’ve made in open systems and the cloud.

Gardner: Chris, tax requirements have always been with us, but what’s added stress to the equation nowadays?

Carlstead: Sean hit on a really important note with respect to balance. Oftentimes people think of taxation as a burden. It’s often overlooked that, on the other side, governments use that money to fund programs, conduct social welfare, and help economies run. You need both sides to operate effectively. In moments like COVID-19 — and Dana used the word “unprecedented” — I might say that’s an understatement.

I don’t know in the history of our time if we have ever had an event that affected the world so quickly, so instantly, and uniformly like we have had in the past few months. When you have impacts like that, they generally drive government reaction, whether it was 9/11, the dot-com bubble, or the 2008 financial crisis. And, of course, there are also other instances all over the globe when governments need to react.

But, again, this latest crisis is unprecedented because almost every government in the world is acting at the same time and has moved to change the way we interact in our economies to help support the economy itself. And so while pace of change has been increasing, we have never seen such a moment like we have in the last few months.

Think of all the folks working at home, and the empathy we have for them dealing with this crisis. And while the cause was uniform, the impact from country to country — or region to region — is not equal. To that end, anything we can do to help make things easier in the transition, we’re looking to do.

While taxes may not be the most important thing in people’s lives, it’s one less thing they have to worry about when they are able to take advantage of a system such as SAP Ariba and Thomson Reuters have to help them deal with that part of their businesses.

Gardner: Poornima, what was driving the need for Stanley Black and Decker to gain better visibility into their tax issues even before the pandemic?

Visibility improves taxation compliance

Sadanandan: At Stanley Black and Decker, SAP Ariba procurement applications are primarily used for all indirect purchases. The user base spans across buyers who do procurement activities based on organizational requirements and on up to the C-level executives who look into the applications to validate and approve transactions based on specific thresholds.

So providing them with accurate data is of utmost importance for us. We were already facing a lot of challenges with our legacy applications: numerous purchasing categories, federated process-controlled versions of the application integrated with multiple SAP instances, and a combination of solutions including tax rate files, invoice parking, and manual processing of invoices.

There were a lot of points where manual touch was necessary before an invoice could even get posted to the backend ERP application, and these situations brought consequences including payback on returns, tax penalties, supplier frustrations, and so on.

So we needed to have end-to-end visibility with accuracy and precision to the granular accounting and tax details for these indirect procurement transactions without causing any delay due to the manual involvement in this whole procurement transaction process.

Gardner: Poornima, when you do this right, when you get that visibility and you can be detail-oriented, what does that get for you? How does that improve your situation?

Sadanandan: There are many benefits out of these automated transactions and due to the visibility of data, but I’d like to highlight a few.

Basically, it helps us ensure we can validate the tax that suppliers charge, that suppliers are adhering to their local tax jurisdiction rules, and that any tax exemptions are, in fact, applicable under tax policies at Stanley Black and Decker.

Secondly, there is a big reduction in manual processes. That happened because of automation, web services, and the integration framework we adopted. Tax calculation and determination became automated, and the backend ERP application, which is SAP at our company, receives accurate posting information. That then helps the accounting team capture accounting details in real time. They gain good visibility on financial reconciliations as well.

Tax calculations became automated, and the backend ERP, which is SAP, receives accurate posting information. That helps the accounting team capture details in real-time. They gain good visibility on financial reconciliations as well.

We also achieved better exception handling. Basically, any exceptions that happen due to tax mismatches are now handled promptly based on thresholds set up in the system. Exception reports are also available to provide better visibility, not just to the end users but also to the technical team who validate any issues, which helps them in the whole analysis process.

Finally, tax calls now happen twice in the application, whereas in our legacy application that only happened at the invoicing stage. Now a call also happens during the requisition phase of the procurement transaction process, so it provides more visibility to the requisitioners. They don’t have to wait until the invoice phase to gain visibility on what’s being passed from the source system. Essentially, requesters as well as the accounts payable team gain good visibility into the accuracy and precision of the data.
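The two-stage flow Poornima describes (an early tax determination at requisition, then a validating call at invoicing with threshold-based exception handling) can be sketched roughly as follows. This is a hypothetical illustration only: the function names, jurisdictions, rates, and threshold are invented for the example and do not reflect the actual SAP Ariba or Thomson Reuters APIs.

```python
from dataclasses import dataclass

@dataclass
class TaxResult:
    jurisdiction: str
    rate: float
    amount: float

def determine_tax(amount: float, jurisdiction: str) -> TaxResult:
    """Stand-in for a call to an external tax-determination service.

    Rates here are illustrative placeholders, not real tax content."""
    rates = {"US-CT": 0.0635, "US-NY": 0.04}
    rate = rates.get(jurisdiction, 0.0)
    return TaxResult(jurisdiction, rate, round(amount * rate, 2))

def validate_invoice_tax(amount: float, jurisdiction: str,
                         supplier_charged: float,
                         threshold: float = 0.50) -> str:
    """Second call, at the invoicing stage: compare the supplier's
    charged tax against the determined tax; mismatches beyond the
    configured threshold are routed to an exception report instead
    of being posted to the backend ERP."""
    expected = determine_tax(amount, jurisdiction).amount
    if abs(expected - supplier_charged) > threshold:
        return "exception"
    return "post"

# First call at requisition gives the requester early visibility...
req_tax = determine_tax(1000.00, "US-CT")
# ...second call at invoicing validates what the supplier charged.
status = validate_invoice_tax(1000.00, "US-CT", supplier_charged=80.00)
```

In a real deployment the determination step would be a web-service call to the tax engine, and the mismatch threshold would come from system configuration rather than a default argument.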

Gardner: Sean, as Poornima pointed out, there are many visibility benefits to using the latest tools. But around the world, are there other incentives or benefits?

Thompson: One of the things the pandemic has shown is that whether you are a small, medium-size, or large company, your supply chains are global. That’s the way we went into the pandemic, with the complexity of having to manage all of that compliance and drive efficiency so you can make accounting easy and remain compliant.

The regional nature of it is both a cost statement and a statement regarding regional incentives.  Being able to manage that complexity is what software and data make possible.

Gardner: And does managing that complexity scale down as well as up, based on the size of the companies?

Thompson: Absolutely. Small- to medium-sized businesses (SMBs) need to save money. And oftentimes SMBs don’t have dedicated departments that can handle all the complexity.

And so from a people perspective, where there are fewer people, you have to think about the end-to-end nature of compliance, accounting, and efficiency. When you think about SMBs, if you make it easy there, you can make it easy all the way up to the largest enterprises. So the benefits are really size-agnostic, if you will.

Gardner: Chris, as we unpack the tax visibility solution, what are the global challenges for tax calculation and compliance? What biggest pain points are people grappling with?

Challenges span globe, businesses

Carlstead: If I may just take a second and compliment Poornima. I always love it when I hear customers speak about our applications better than we can speak about them ourselves; so, Poornima, thank you for that.

And to your question, because the impact is the same for SMBs and large companies, the pain centers around the volume of change and the pace of that change. This affects domestic companies, large and small, as well as multinationals. And so I thought I’d share a couple of data points we pulled together at Thomson Reuters.

There are more than 15,000 jurisdictions that impact just this area of tax alone. Within those 15,000 jurisdictions, in 2019 there were more than 50,000 changes to the tax rules companies need to comply with. Now extrapolate that internationally to about 190 countries. Within the 190 countries that we cover, we had more than two million changes to tax laws and regulations.

At that scale, it’s just impossible to maintain manual processes. Many companies try to do so, whether decentralized or otherwise, and it’s just impossible to keep pace with that.

With the COVID-19 pandemic impact, we expect that supply chains are going to be reevaluated. You’re changing processes, moving to new jurisdictions, and into new supply routes. And that has huge tax implications.

And now you introduce the COVID-19 pandemic, for which we haven’t yet seen the full impact. But the impact, along the lines where Sean was heading, is that we also expect that supply chains are going to get reevaluated. And when you start to reevaluate your supply chains you don’t need government regulation to change, you are changing. You’re moving into new jurisdictions. You are moving into new supply routes. And that has huge tax implications.

And not just in the area of indirect tax, which is what we’re talking about here today on the purchase and sale of goods. But when you start moving those goods across borders in a different route than you have historically done, you bring in global trade, imports, duties, and tariffs. The problem just magnifies and extrapolates around the globe.

Gardner: How does the Thomson Reuters and SAP Ariba relationship come together to help people tackle this?

Thompson: Well, it’s been a team sport all along. One of the things we believe in is the power of the ecosystem and the power of partnerships. When it comes down to it, we at SAP are not tax data-centric in the way we operate. We need that data to power our software. We’re about procurement, and in those procurement, procure-to-pay, and sales processes we need tax data to help our customers manage the complexity. It’s like Chris said, an amazing 50,000 changes in that dynamic within just one country.

And so, at SAP Ariba, we have the largest business network of suppliers driving about $3 trillion of commerce on a global basis, and that is a statement regarding just the complexity that you can imagine in terms of a global company operating on a global basis in that trade footprint.

Now, when the power of the ecosystem and Thomson Reuters come together we can become the tax-centric authorities. We do tax solutions and help companies manage their tax data complexity. When you can combine that with our software, that’s a beautiful interaction because it’s the best of both worlds.

It’s a win, win, win. It’s not only a win for our two companies, Thomson Reuters and SAP, but also for the end customer because they get the power of the ecosystem. We like to think you choose SAP Ariba for its ecosystem, and Thomson Reuters is one of our most successful extensions, if not the most successful one we have.

Gardner: Chris, if we have two plus two equaling five, tell us about your two. What does Thomson Reuters bring in terms of open APIs, for example? Why is this tag team so powerful?

Partner to prioritize the customer

Carlstead: A partnership doesn’t always work. It requires two different parties that complement each other. It only works when they have similar goals, such as the way they look at the world, and the way they look at their customers. I can, without a doubt, say that when Thomson Reuters and SAP Ariba came together, the first and most important focus was the customer. That relentless focus on the customer really helped keep things focused and drive forward to where we are today.

And that doesn’t mean that we are perfect by any means. I’m sure we have made mistakes along the way, but it’s that focus that allowed us to keep the patience and drive to ultimately bring forth a solution that helps solve a customer’s challenges. That seems simple in its concept, but when you bring two large organizations together to help try to solve a large organization’s problems, it’s a very complex relationship and takes a lot of hard work.

And I’m really proud of the work that the two organizations have done. SAP Ariba has been amazing along the way to help us solve problems for customers like Stanley Black and Decker.

Gardner: Poornima, you are the beneficiary here, the client. What’s been powerful and effective for you in this combination of elements that both SAP Ariba and Thomson Reuters bring to the table?

Sadanandan: With our history of around 175 years, Stanley Black and Decker has always moved ahead with pioneering projects and a strong vision of adopting intelligent solutions for society. As part of this, adopting advanced technologies that help us fulfill the company’s objectives has always been at the forefront.

As part of that tradition, we have been leveraging an integration framework consisting of the SAP Ariba tax API communicating with the Thomson Reuters ONESOURCE tax solution in real-time using web services. The SAP Ariba tax API is designed to make a web service call to the external tax service provider for tax calculations, and in turn it receives a response to update the transactional documents.

During procurement transactions, the API makes an external tax calculation call. Once the tax is determined, the response is converted back into the SAP Ariba message format, in XML, by the ONESOURCE integration and sent over to the SAP application.

The SAP Ariba tax API receives the response and updates the transactional documents in real time and that provides a seamless integration between the SAP Ariba procurement solution and the global tax. That’s exactly what helps us in automating our procurement transactions.
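To make the round trip Poornima describes a little more concrete, here is a minimal Python sketch of the flow: build an outbound request from a line item, get a determination back from the external tax engine, and apply it to the transactional document. All of the names here (`build_tax_request`, `determine_tax`, the mock rate table) are illustrative assumptions, not the actual SAP Ariba tax API or ONESOURCE schema, and the real exchange happens over HTTPS web services in XML rather than in-process.

```python
from dataclasses import dataclass

@dataclass
class LineItem:
    description: str
    amount: float         # taxable base in document currency
    ship_to_country: str  # jurisdiction driving the rate lookup

def build_tax_request(item: LineItem) -> dict:
    """Shape the outbound payload (a real call would be an XML web-service request)."""
    return {"amount": item.amount, "country": item.ship_to_country}

# Stand-in for the external determination engine; in production this is a
# real-time call to the tax service, not a local lookup table.
MOCK_RATES = {"US": 0.07, "DE": 0.19}

def determine_tax(request: dict) -> dict:
    rate = MOCK_RATES.get(request["country"], 0.0)
    return {"rate": rate, "tax_amount": round(request["amount"] * rate, 2)}

def apply_tax_response(item: LineItem, response: dict) -> dict:
    """Update the transactional document with the returned tax figures."""
    return {
        "description": item.description,
        "net": item.amount,
        "tax": response["tax_amount"],
        "gross": round(item.amount + response["tax_amount"], 2),
    }

item = LineItem("power drill", 100.00, "DE")
doc = apply_tax_response(item, determine_tax(build_tax_request(item)))
print(doc)
```

The point of the pattern is that the buyer’s document is never finalized with a guessed rate: the jurisdictional logic lives entirely in the external determination service, so rule changes land there rather than in the procurement system.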

Gardner: Sean, this is such a great use case of what you can do when you have cloud services and the right data available through open APIs to do real-time calculations. It takes such a burden off of the end user and the consumer. How is technology a fundamental underpinning of what ONESOURCE is capable of?

Cloud power boosts business outcomes

Thompson: It’s wonderful to hear Poornima as a customer. It’s music to my ears to hear the real-life use case of what we have been able to do in the cloud. And when you look at the architecture and how we are able to drive, not only a software solution in the cloud, but power that with real-time data to drive efficiencies, it’s what we used to dream of back in the days of on-premises systems and even, God bless us, paper reconciliations and calculations.

It’s an amazing time to be alive because of where we are and the efficiencies that we can drive on a global basis, to handle the kind of complexity that a global company like Stanley Black and Decker has to deal with. It’s an amazing time. 

And it’s still the early days of what we will be doing in the future around predictive analytics, of helping companies understand where there is more risk or where there are compliance issues ahead.

That’s what’s really cool. We are going into an era now of data-driven intelligence, machine learning (ML), applying those to business processes that combine data and software in the cloud and automate the things that we used to have to do manually in the past.

And so it’s a really amazing time for us.

Gardner: Chris, anything more to offer on the making invisible the technology but giving advanced business outcomes a boost?

Carlstead: What’s amazing about where we are right now is a term I often use, I certainly don’t believe I coined it, but best-of-breed suite. In the past, you used to have to choose. You had to go best-of-breed or you could go with the suite, and there were pros and cons to both approaches.

Now, with the proliferation of APIs, cloud, and the adoption of API technology across software vendors, there’s more free flow of information between systems, applications, and platforms. You have the ability as a customer to be greedy — and I think that’s a great thing.

As a consumer, you are used to downloading an app and it just works. And we are a little bit behind on the business side of the house, but we are moving there very quickly so that now customers like Stanley Black and Decker can go with the number-one spend management system in the world. And they can also go with the number-one tax content player in the world. And they can have the expectation that those two applications will work seamlessly together without spending a lot of time and effort on their end to force those companies together, which is what we would have done in an on-premises environment over the last several decades.

From an outcome standpoint, and as I think about customers like Stanley Black and Decker, getting tax right, in and of itself is not a value-add. But getting it wrong can be very material to the bottom line of your business. So for us and with the partnership with SAP Ariba, our goal is to make sure that customers like Stanley Black and Decker get it right the first time so that they can focus on what they do best.

Gardner: Poornima, back to you for the proof. Do you have any anecdotes, qualitative or quantitative measurements, of how you have achieved more of what you have wanted to do around tax processing, accounts payable, and procurement?

Accuracy spells no delayed payments

Sadanandan: Yes, the challenges we had with our earlier processes and legacy applications, such as incorrect VAT returns, wrong payments, and delayed payments, have all diminished. It has also strengthened the relationship between our business and our suppliers. Above all, troubleshooting any issues became so much easier for us because of the profound transparency into what’s being passed from the source system.

And, as I mentioned, this improves the supplier relationship in that payments are not getting delayed and there is improvement in the tax calculation. If there are any mismatches, we are able to understand easily how they happened, as the integration layer provides us with the logs for accurate analysis. And the businesses themselves can answer supplier queries in a timely manner, as they have profound visibility into the data as well.

From a project perspective, we believe that the objective is fulfilled. Since we started and completed the initial project in 2018, Stanley Black and Decker has been moving ahead with transforming the source-to-pay process by establishing a core model, leveraging the leading practices in the new SAP Ariba realm, and integrating with the central finance core model utilizing SAP S/4HANA.

So the source-to-pay core model includes the leading practices of the tax solution, leveraging ONESOURCE Determination integrated with the SAP Ariba source-to-pay cloud application. With the completion of the project, we were able to achieve that core model, and future roadmaps are now being laid out to have this model adopted for the rest of the Stanley Black and Decker entities.

Gardner: Poornima, has the capability to do integrated tax functionality had a higher-level benefit? Have you been able to see automation in your business processes or more strategic flexibility and agility?

Sadanandan: It has particularly helped us in these uncertain times. Having an automated tax solution was the primary objective of the project, but in these uncertain times this automated solution is also helping us ensure business continuity.

Having real-time calls that facilitate the tax calculation with accuracy and precision without manual intervention helped the year-end accounts payable transactions to occur without any interruptions.

And above all, as I was mentioning, even in this pandemic time we are able to go ahead with future projects already on the roadmap; they are not at a standstill. We are able to leverage the standard functionalities provided by ONESOURCE, and that’s easier to adopt in our environment.

Gardner: Chris, when you hear how Stanley Black and Decker has been able to get these higher-order risk-reduction benefits, do you see that more generally? What are some of the higher-order business benefits you see across your clientele?

Risk-reduction for both humans and IT

Carlstead: There are two broad categories. I will touch on the one that Poornima just referenced, which is more the human capital, and then also the IT side of the house. 

The experience that Stanley Black and Decker is having is fairly uniform across our customer base. We are in a situation where in almost every single tax department, procurement department, and all the associated departments, nobody has extra capacity walking around. We are all constrained. So, when you can bring in applications that work together like SAP Ariba and Thomson Reuters, it helps to free up capacities. You can then shift those resources into higher-value-add activities such as the ones Poornima referenced. We see it across the board. 

We also see that we are able to help consolidate resourcing from a hardware and a technology standpoint, so that’s a benefit. 

And the third benefit on the resource side is that as you are better able to track your taxation, not only do you get it right the first time, when it comes to areas of taxation like VAT recovery, you have to show very stringent documentation in order to receive your money back from governments, so there is a cash benefit as well.

And then on the other side, more on the business side of the relationship, there is a benefit we have just started to better understand in the last couple of years. Historically, folks either chose not to move forward with an application like this because they felt they could handle it manually, or, even worse, they would say, “We will just have it audited, and we will pay the fine because the cost to fix the problem is greater than the penalties or fines I might pay.”

But they didn’t take into consideration the impact on the business relationships they have with their vendors and suppliers. Think about every time you have had a tax issue between you. In many European countries and around the world, a supplier may not be able to recover their VAT because of a challenge they had with their buyer, and that hurts your relationship. It ultimately hurts your ability to do commerce with that partner, and in general with any partner around the world.

So, the top-line impact is something we have really started to focus on as a value and it’s something that really drives business for companies.

Gardner: Poornima, what would you like to see next? Is there a level of more intelligence, more automation? 

Post-pandemic possibilities and progress

Sadanandan: Stanley Black and Decker is a global company spanning more than 60 countries. We have a wide range of products, including tools, hardware, security, and so on. Irrespective of these challenging times, our priorities, namely the safety of our employees and their families, keeping the momentum of business continuity, and responding to the needs of the community, all remain the top consideration.

We feel that we are already equipped technology-wise to keep the business up and running. What we are looking forward to is, as the world tries to come back to the earlier normal life, continuing to provide pioneering products with intelligent solutions.

Gardner: Chris, where do you see technology and the use of data going next in helping people reach a new normal or create entirely new markets?

Carlstead: From a Thomson Reuters standpoint, we largely focus on helping businesses work with governments at the intersection of regulation and commerce. As a result, we have, for decades, amassed an extensive amount of content in categories around risk, legal, tax, and several other functional areas as well. We are relentlessly focused on how to best open up that content and free it, if you will, from even our own applications.

What we are finding is that when we can leverage ecosystems such as SAP Ariba, we can leverage APIs and provide a more free-flowing path for our content to reach our customers; and when they are able to use it in the way they would like, the number of use cases and possibilities is infinite. 

We see now all the time our content being used in ways we would have never imagined. Our customers are benefiting from that, and that’s a direct result of the corporations coming together and suppliers and software companies freeing up their platforms and making things more open. The customer is benefiting, and I think it’s great.

Gardner: Sean, when you hear your partner and your customer describing what they want to come next, how can we project a new vision of differentiation when you combine network and ecosystem and data?

Thompson: Well, let me pick up where Chris said, “free and open.” Now that we are in the cloud and able to digitize on a global basis, the power for us is that we know that we can’t do it all ourselves. 

We also know that we have an amazing opportunity because we have grown our network across the globe, to 192 countries and four million registered buyers or suppliers, all conducting a tremendous amount of commerce and data flow. Being able to open up and be an ecosystem, a platform way of thinking, that is the power.

Like Chris said, it’s amazing the number of things that you never realized were possible. But once you open up and once you unleash a great developer experience, to be able to extend our solutions, to provide more data — the use cases are immense. It’s an incredible thing to see.

That’s what it’s really about — unleashing the power of the ecosystem, not only to help drive innovation but ultimately to help drive growth, and for the end customer a better end-to-end process and end-to-end solution. So, it’s an amazing time.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: SAP Ariba.


AWS and Unisys join forces to accelerate and secure the burgeoning move to cloud

A powerful and unique set of circumstances are combining in mid-2020 to make safe and rapid cloud adoption more urgent and easier than ever.

Dealing with the novel coronavirus pandemic has pushed businesses to not only seek flexible IT hosting models, but to accommodate flexible work, hasten applications’ transformation, and improve overall security while doing so.

This next BriefingsDirect cloud adoption best practices discussion examines how businesses plan to further use cloud models to cut costs, manage operations remotely, and gain added capability to scale their operations up and down.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To learn more about the latest on-ramps to secure an agile cloud adoption, please welcome  Anupam Sahai, Vice President and Cloud Chief Technology Officer at Unisys, and Ryan Vanderwerf, Partner Solutions Architect at Amazon Web Services (AWS). The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Anupam, why is going to the public cloud an attractive option more now than ever? 

Sahai: There are multiple driving factors leading to these tectonic shifts. One is that the whole IT infrastructure is moving to the cloud for a variety of business and technology reasons. And then, as a result, the entire application infrastructure — along with the underlying application services infrastructure — is also moving to the cloud.

The reason is very simple because of what cloud brings to the table. It brings a lot of capabilities, such as providing scalability in a cost-effective manner. It makes IT and applications behave as a utility and obviates the need for every company to host local infrastructure, which otherwise becomes a huge operations and management challenge.

So, a number of business and technological factors are at work, along with the COVID-19 pandemic, which essentially makes us work remotely. Having cloud-based services and applications available as a utility makes them easy to consume and use.

Public cloud on everyone’s horizon 

Gardner: Ryan, have you seen in your practice over the past several months more willingness to bring more apps into the public cloud? Are we seeing more migration to the cloud?

Vanderwerf: We’ve definitely had a huge uptick in migration. As people can’t be in an office, things like workspaces and remote desktops have also seen a huge increase. People are trying to find ways to be elastic and cost-efficient, and to make sure they’re not spending too much money.

Following up on what Anupam said, the reasons people are moving to the cloud haven’t changed. They have just been accelerated, because people need agility and speedier access to the resources they need. They need cost savings by not having to maintain data centers themselves.

By being more elastic, they can provision only for what they’re using and not have resources running and costing money when they don’t need them. They can also deploy globally in minutes, which is a big deal across many regions and allows people to innovate faster.

And right now, there’s a need to innovate faster, get more revenue, and cut costs – especially in times where fluctuation in demand goes up and down. You have to be ready for it.

Gardner: Yes, I recently spoke with a CIO who said that when the pandemic hit, they had to adjust workloads and move many from a certain set of apps that they weren’t going to be using as much to a whole other set that they were going to be using a lot more. And if it weren’t for the cloud, they just never would have been able to do that. So agility saved them a tremendous amount of hurt.

Anupam, why when we seek such cloud agility do we also have to think about lower risk and better security?

Sahai: Risk and security are critical because you’re talking about commercial, mission-critical workloads that have potentially sensitive data. As we move to the cloud, you should think of three different trajectories. And some of this, of course, is being accelerated because of the COVID-19 pandemic.

Learn More About 

Unisys CloudForte 

One of the cloud-migration trajectories, as Ryan said earlier, is the need for elastic computing, cost savings, performance, and efficiencies when building, deploying, and managing applications. But as we move applications and infrastructure to the cloud, there is a need to ensure that the infrastructure falls under what is called the shared responsibility model, where the cloud service provider protects and secures infrastructure up to a certain level and then the customers have their responsibility, a shared responsibility, to ensure that they’re protecting their workloads, applications, and critical data. They also have to comply with the regulations that those customers need to adhere to.  

In such a shared responsibility model, customers need to work very closely with the service providers, such as AWS, to ensure they are taking care of all security and compliance-related issues.

You know, security breaches in the cloud, while fewer than in on-premises deployments, are still pretty rampant. That’s because some of the cloud security hygiene-related issues are still not being taken care of. That’s why solutions have to manage security and compliance for both the infrastructure and the apps as they move from on-premises to the cloud.

Gardner: Ryan, shared responsibility in practice can be complex when it’s hard to know where one party’s responsibility begins and ends. It cuts across people, process, and even culture.

When doing cloud migrations, how should we make sure there are no cracks for things to fall through? How do we make sure that we segue from on-premises to cloud in a way that the security issues are maintained throughout?

Stay safe with best-practices

Vanderwerf: Anupam is exactly right about the shared responsibility model. AWS manages and controls the components from the host operating system and virtualization layer down to physically securing the facilities. But it is up to AWS customers to build secure applications and manage their hygiene.

We have programs to help customers make sure they’re using those best practices. We have a well-architected program. It’s available on the AWS Management Console, and we have several lenses if you’re doing specific things like serverless, Internet of things (IoT), or analytics, for example.

Things like that have to be focused toward the business, but solutions architects can help the customer review all of their best practices and do a deep-dive examination with their teams to raise any flags that people might not be aware of and help them find solutions to remedy them.

We also have an AWS Technical Baseline Review that we do for partners. In it we make sure that partners are also following best practices around security and make sure that the correct things are in place for a good experience for their customers as well. 

Gardner: Anupam, how do we ensure security-as-a-culture from the beginning and throughout the lifecycle of an application, regardless of where it’s hosted or resides? DevSecOps has become part of what people are grappling with. Does the security posture need to be continuous?

Sahai: That’s a very critical point. But first I want to double-click on what Ryan mentioned about the shared responsibility model. If you look at the overall challenges that customers face in migrating or moving to the cloud, there is certainly the security and compliance part of it that we mentioned.

There is also the cost governance issue and making sure it’s a well-architected framework architecture. The AWS Well-Architected Framework (WAF), for example, is supported by Unisys.

Additionally, there are a number of ongoing issues around cost governance, security and compliance governance, and the optimization of workloads that are critical for our customers. Unisys does a Cloud Success Barometer study every year, and what we find is very interesting.

One thing is clear: about 90 percent of organizations have transitioned to the cloud. So no surprise there. But in the journey to the cloud, we also found that 60 percent of organizations are unable to move to the cloud, or stall in their cloud migrations, because of some of these unexpected roadblocks. That’s where partners like Unisys and AWS come together to offer visibility and solutions to address them.

Coming back to the DevSecOps question, let’s take a step back and understand why DevOps came into being. It was basically because of the migration to the cloud that we needed to break down the silos between development and operations to deploy infrastructure-as-code. DevOps essentially brings about faster, shorter development cycles, faster deployment, and faster innovation.

Studies have shown that DevOps leads to at least 60 percent faster innovation and turnaround time compared to traditional approaches, not to mention the cost savings and the IT headcount savings when you merge the dev and ops organizations.

But as DevOps goes mainstream, and as cloud-centric applications become mainstream, there is a need to inject security into the DevOps cycle. So, having DevSecOps is key. You want to enable developers, operations, and security professionals to work together rather than in yet another silo; you want to break those silos down and merge security with the DevOps team.

But we also need to provide tools that are amenable to DevOps processes: continuous integration/continuous delivery (CI/CD) tools that enable the speed and agility needed for DevOps while also injecting security, without slowing teams down. It is a challenge, and that’s why the all-new field of DevSecOps, which enables security and compliance injection into the DevOps cycle, is very, very critical.
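One common way to inject security into the pipeline without slowing developers down is a fail-fast gate stage that runs automated checks on each build. The sketch below is a hypothetical Python illustration of that idea; the check names, the `artifact` dictionary shape, and the findings format are invented for the example and do not correspond to any particular scanner or to CloudForte’s actual tooling.

```python
def run_checks(artifact: dict) -> list:
    """Return a list of findings; an empty list means the stage passes."""
    findings = []
    if artifact.get("secrets_in_code"):
        findings.append("hard-coded credential detected")
    for dep in artifact.get("dependencies", []):
        if dep.get("known_cve"):
            findings.append(f"vulnerable dependency: {dep['name']}")
    if not artifact.get("encryption_at_rest", False):
        findings.append("storage not encrypted at rest")
    return findings

def security_gate(artifact: dict) -> bool:
    """Fail the build on any finding, keeping feedback inside the CI loop."""
    findings = run_checks(artifact)
    for finding in findings:
        print("BLOCKED:", finding)
    return not findings

build = {
    "secrets_in_code": False,
    "dependencies": [{"name": "libfoo", "known_cve": True}],
    "encryption_at_rest": True,
}
print("pass" if security_gate(build) else "fail")
```

Because the gate runs on every commit rather than in a periodic audit, security feedback arrives at the same speed as test feedback, which is the core DevSecOps argument made above.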

Gardner: Right, you want to have security but without giving up agility and speed. How have Unisys and AWS come together to ease and reduce the risk of cloud adoption while greasing the skids to the on-ramps to cloud adoption?

Smart support on the cloud journey

Sahai: Unisys in December 2019 announced CloudForte capabilities with the AWS cloud. A number of capabilities were announced that help customers adopt cloud without worrying about security and compliance.

CloudForte today provides a comprehensive solution to help customers manage their cloud journeys, whether greenfield or brownfield; and there is hybrid cloud support, of course, for the AWS cloud, along with multi-cloud support from a deployment perspective.

The solution combines products and services that enable three primary use cases. The first is cloud migration, as we talked about, and the second is apps migration using DevSecOps. We’ve codified that in terms of best practices, reference architectures, and well-architected principles, and we have wrapped that in advisory and deployment services as well.

The third use case is around cloud posture management, which is understanding and optimizing existing deployments, including hybrid cloud deployments, to ensure you’re managing costs, managing security and compliance, and also taking care of any other IT-related issues around governance of resources to make sure that you migrate to the cloud in a smart and secure manner.

Gardner: Ryan, why did AWS get on-board with CloudForte? What was it about it that was attractive to you in helping your customers?

Vanderwerf: We are all about finding solutions that help our customers and enabling our partners to help their customers. With the shared responsibility model, that’s on the customer, and CloudForte has really good risk management and a portfolio of applications and services to help people get ahold of that responsibility themselves.

Instead of customers trying to go it alone, or just following general best practices, Unisys has the tooling in place to help them. That’s pretty important because, without DevSecOps, people suffer from a lack of business agility and security agility, and they face the risks around change to their businesses. People fear that.

These tools have really helped customers manage that journey. We have a good feeling about being secure and being compliant, and the dashboards they have inside of it are very informative, as a matter of fact.

Gardner: Of course, Unisys has been around for quite a while. They have had a very large and consistent installed base over the years. Are the tooling, services, and value in CloudForte bringing in a different class of organization, or different parts of organizations, into AWS?

Vanderwerf: I think so, especially in the enterprise area where they have a lot of things to wrangle on the journey to the cloud — and it’s not easy. When you’re migrating as much as you can to a cloud setting – seeking to keep control over assets and making sure there are no rogue things running — it’s a lot for an enterprise IT manager to handle. And so, the more tools they have in their tool-belt to manage that is way better than them trying to cook up their own stuff.

Gardner: Anupam, did you have a certain type of organization, or part of an organization, in mind when you crafted CloudForte for AWS?

Sahai: Let’s take a step back and understand the kind of services we offer. Our services are tailored for, and applicable to, both enterprises and the public sector. We offer advisory services to begin with, supported by our products. There is the CloudForte Navigator product, which allows us to assess the current posture of the customer and understand the application capabilities the customer has and whether there is a need for transformation. And, of course, this is all driven by the business outcomes that the customer desires.

Second, through CloudForte we bring best practices, reference architectures, and blueprints for the various customer journeys that I mentioned earlier. Greenfield or brownfield opportunities, whatever the stage of adoption, we have created a template to help with the specific migration and customer journey.

Once customers are able and ready to get on-boarded, we enable DevSecOps using CI/CD tools, best practices, and tools to ensure the customers use a well-architected framework. We also have a set of accelerators provided by Unisys that enable customers to get on-boarded with guardrails provided. So, in short, the security policies, compliance policies, organizational framework, and the organizational architectures are all reflected in the deployment. 

Then, once it’s up and running, we manage and operate the hybrid cloud security and compliance posture to ensure that any deviations, any drifts, are monitored and remediated, so that an acceptable posture is continuously maintained.
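
In minimal form, the drift monitoring Sahai describes amounts to diffing live settings against an approved baseline and emitting the remediations to apply. The sketch below uses invented setting names and is not CloudForte's actual mechanism:

```python
# Illustrative drift check: compare a deployment's live settings against its
# approved baseline. All field names and values here are hypothetical.

BASELINE = {"encryption": "on", "public_access": "off", "logging": "on"}

def detect_drift(live: dict) -> dict:
    """Return the settings that deviate from baseline, mapped to their desired values."""
    return {k: v for k, v in BASELINE.items() if live.get(k) != v}

live_config = {"encryption": "on", "public_access": "on", "logging": "off"}
print(detect_drift(live_config))  # the remediations to apply
```

A real posture-management loop would run a check like this continuously and feed the diff into an automated remediation step.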

Finally, we also have AIOps capabilities, which include the AI-enabled outcomes that the customer is looking for. We use artificial intelligence and machine learning (AI/ML) technologies to optimize resources, and we drive cost savings through resource optimization. We also have an incident management capability that brings down costs dramatically using some of those analytics and AIOps capabilities.
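
A toy illustration of the resource-optimization idea: flag instances whose observed peaks stay well under their allocation as candidates for downsizing. The field names and thresholds here are invented for illustration, not Unisys's actual AIOps logic:

```python
# Hypothetical rightsizing pass of the kind an AIOps layer might apply.

def rightsizing_recommendations(instances, cpu_floor=20.0, mem_floor=25.0):
    """Flag instances whose peak CPU and memory both stay under the floors."""
    recs = []
    for inst in instances:
        if inst["peak_cpu_pct"] < cpu_floor and inst["peak_mem_pct"] < mem_floor:
            recs.append({"id": inst["id"], "action": "downsize"})
    return recs

fleet = [
    {"id": "web-1", "peak_cpu_pct": 72.0, "peak_mem_pct": 60.0},
    {"id": "batch-3", "peak_cpu_pct": 9.5, "peak_mem_pct": 14.0},
]
print(rightsizing_recommendations(fleet))  # only the underutilized instance is flagged
```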

So our objective is to drive digital transformation for customers using the combination of products and services that CloudForte has, working in close conjunction with what AWS offers, so that we create offerings that are complementary to each other and very compelling from a business outcomes perspective.

Gardner: The way you describe them, it sounds like these services would be applicable to almost any organization, regardless of where they are on their journey to the cloud. Tell us about some of the secret sauce under the hood. The Unisys Stealth technology, in particular, is unique in how it maintains cloud security.

Stealth solutions for hybrid security 

Sahai: The Unisys Stealth technology is very compelling, especially in the hybrid cloud security sense. As we discussed earlier, the shared responsibility model requires customers to take care of and share the responsibility to make sure that workloads in the cloud infrastructure are compliant and secure. 

And we have a number of tools in that regard. One is the CloudForte Cloud Compliance Director solution, which allows you to assess and manage your security and compliance posture for the cloud infrastructure. So it’s a cloud security posture management solution. 

Then we also have the Stealth solution, essentially a zero trust micro-segmentation capability that leverages identity, or user roles, in an organization to establish a community that’s trusted and capable of performing certain actions. It creates communities of interest that allow and secure traffic through a combination of micro-segmentation and identity management.

Think of that as a policy management and enforcement solution that essentially manipulates the OS native stacks to enforce policies and rules that otherwise are very hard to manage. 
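
The communities-of-interest model Sahai describes can be pictured as a default-deny check in which two endpoints may communicate only if they share a community. This is a conceptual sketch with made-up names, not the actual Stealth policy format:

```python
# Default-deny micro-segmentation model: traffic is allowed only between
# endpoints that share a community of interest. Names are hypothetical.

COMMUNITIES = {
    "payments": {"app-01", "db-01"},
    "analytics": {"app-01", "etl-01"},
}

def allowed(src: str, dst: str) -> bool:
    """Deny by default; allow only when both endpoints share a community."""
    return any(src in members and dst in members for members in COMMUNITIES.values())

print(allowed("app-01", "db-01"))   # endpoints share the "payments" community
print(allowed("db-01", "etl-01"))   # no shared community, so denied
```

The design point is that policy follows identity and role membership rather than network addresses, which is what makes it enforceable across on-premises and cloud segments alike.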

If you take Stealth and marry that with CloudForte compliance, some of the accelerators, and Navigator, you have a comprehensive Unisys solution for hybrid cloud security, both on-premises and in the AWS cloud infrastructure and workloads environment. 

Gardner: Ryan, it sounds like zero trust and micro-segmentation augment the many services that AWS already provides around identity and policy management. Do you agree that the zero trust and micro-segmentation aspects of something like Stealth dovetail very well with AWS services? 

Vanderwerf: Oh, yes, absolutely. And in addition to that, we have a lot of other security tools, like AWS WAF, AWS Shield, Security Hub, Macie, IAM Access Analyzer, and Inspector. And I am sure under the hood they are using some of these services directly.

The more power you have, the better. And it’s tough to manage. Some people are just getting into cloud and they have challenges. It’s not always technical; sometimes it’s just communication issues at a company, or a lack of sponsorship, resource allocation, or undefined key performance indicators (KPIs). All of these things, even just timing, are important for a security situation.

Gardner: All those spinning parts, those services, that’s where the professional services come in so that organizations don’t have to feel like they are doing it alone. How does the professional services and technical support fit into helping organizations go about these cloud journeys? 

Sahai: Unisys is trusted by our customers to get things right. So we say that we do cloud right, and that includes a combination of trusted advisory services. That means everything from identifying legacy assets, to billing, to governance, and then using a combination of products and services to help customers transform as they move to the cloud.

Our cloud-trained people and expertise speed up migrations, give visibility, and provide operational improvements. We are thereby able to do cloud right, and in a secure fashion, by establishing security practices and establishing trust through a combination of micro-segmentation, security and compliance operations, and AIOps. That certainly is the combination of products and services that we offer today.

And our customers tell us we are rated very highly, with 95 percent-plus customer satisfaction. It’s a testament to the fact that our professional services — along with our products — complement the AWS services and products that customers need to deliver their business outcomes.

Gardner: Anupam, do you have any examples of organizations that leveraged both AWS and Unisys CloudForte? What have they been doing and what did they get from it? 

Student success supported 

Sahai: I have a number of examples where a combination of CloudForte and AWS deployments are happening. One is right here where I live in the San Francisco Bay Area. The business challenge they faced was to enhance the student learning experience and deliver technology services critical to student success and graduation initiatives. And given the COVID-19 scenario, you can understand why cloud becomes an important factor in that. 

Unisys cloud and infrastructure services, using CloudForte, helped them deploy a hybrid cloud model with AWS. We had Ansible for automation, ServiceNow for IT service management (ITSM), and AIOps, and we deployed LogRhythm and a portfolio of tools and services.

They were then able to accelerate their capability to offer critical administrative services, such as student scheduling and registration, to about half-a-million students and 52,000 faculty and staff members across 23 campuses. It delivered 30 percent better performance while realizing about 33 percent cost savings and 40 percent growth in usage of these services. So, great outcomes and great cost savings: a reduction of about $4.5 million in compute and storage costs and about $3 million in cost avoidance.

So this is an example of a customer who leveraged the power of the AWS Cloud and the CloudForte products and services to deliver these business outcomes, which is a win-win situation for us. So that’s one example.

Gardner: Ryan, what do you expect for the next level of cloud adoption benefits? Is the AIOps something that we are going to be doubling-down on? Or are there other services? How do you see the future of cloud adoption improving?

The future is integrated 

Vanderwerf: It’s making sure everything is able to integrate. For example, in a hybrid cloud situation we now have AWS Outposts. Now people can run a rack of servers in their data center and be connected directly to the cloud.

Some things don’t always make sense to move to the cloud. Machinery running analytics, for example, may have very low latency requirements. You can still write native applications to work with the cloud in AWS and run those apps locally.

Also, AIOps is huge because so many people are doing AI/ML in their workloads, from deciding security posture threats, to finding whether machines are breaking down. There are so many options in data analytics and then wrangling all these things together with data lakes. Definitely, the future is about better integrating all of these things.

AI/MLOps is really popular now because there are so many data scientists and people integrating ML into things. They need some sort of organizational structure to keep that organized, just like CI/CD did for DevOps. And all of those areas continue to grow. At AWS, we have 175-plus services, and they are always coming up with new ones every day. I don’t see that slowing down anytime soon.

Gardner: Anupam, for your future outlook, to this point that Ryan raised about integration, how do you see organizations like Unisys helping to manage the still growing complexity around the adoption and operations in the cloud and hybrid cloud environments?

Sahai: Yes, that is a huge challenge. As Ryan mentioned, hybrid cloud is here to stay. Not everything will move to the cloud. And while cloud migration trends will continue, there will be some core set of apps that will be staying on-premises. So leveraging AWS Outposts, as he said, to help with the hybrid cloud journeys will be important. And Unisys offers hybrid cloud and multi-cloud offerings that we are certainly committed to.

The other thing is that security and compliance issues are not going away, unfortunately. Cloud breaches are out there, and so there is a need to actively manage and be proactive about managing your security and compliance posture. That’s another area where our customers are going to work together with AWS and Unisys to fortify not just their defenses, but also their offense — to be proactive in dealing with these threats and breaches and preventing them.

The third area is around AIOps and this whole notion of AI-enabled CloudForte. We see AI and ML permeating every part of the customer journey — not just AIOps, which is the operations and management piece and a critical part of what we do, but AI enabling the customer journeys through prediction.

So, let’s say a customer is trying to move to the cloud. We want to be able to predict what their journey will look like as they move to the cloud, and to be proactive about predicting and remediating issues that might come up.

And, of course, AI is fueled by the data revolution — the data lakes, the data buses — that we have today to transport data seamlessly across applications, across hybrid cloud infrastructures, and to tie all of this together. You have the app migration, the CI/CD, and the DevSecOps capabilities that are part of the CloudForte advisory and product services. 

We are enabling customers to move to the cloud without compromising speed, agility, and security and compliance, whether they are moving infrastructure to the cloud, using infrastructure as code, or moving applications to the cloud using applications as code by leveraging the micro-services infrastructure, the cloud native infrastructure that AWS provides — and Kubernetes included. 

We have support for a lot of these capabilities today, and we will continue to evolve them to make sure that no matter where the customer is in their journey to the cloud — whatever the stage of evolution — we have a compelling set of products and services that customers can use to get to the cloud and stay there with the help of Unisys and AWS.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Unisys and Amazon Web Services.


How REI used automation to cloudify infrastructure and rapidly adjust its digital pandemic response

Like many retailers, Recreational Equipment, Inc. (REI) was faced with drastic and rapid change when the COVID-19 pandemic struck. REI’s marketing leaders wanted to make sure that their online e-commerce capabilities would rise to the challenge. They expected a nearly overnight 150 percent jump in REI’s purely digital business.

Fortunately REI’s IT leadership had already advanced their systems to heightened automation, which allowed the Seattle-based merchandiser to turn on a dime and devote much more of its private cloud to the new e-commerce workload demands.

The next BriefingsDirect Voice of Innovation interview uncovers how REI kept its digital customers and business leadership happy, even as the world around them was suddenly shifting.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To explore what works for making IT agile and responsive enough to re-factor a private cloud at breakneck speed, we’re joined by Bryan Sullins, Senior Cloud Systems Engineer at REI in Seattle. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: When the pandemic required you to hop-to, how did REI manage to have the IT infrastructure to actually move at the true pace of business? What put you in a position to be able to act as you did?

Digital retail demands rise 

Sullins: In addition to the pandemic stay-at-home orders a couple months ago, we also had a large sale previously scheduled for the middle of May. It’s the largest sale of the year, our anniversary sale.

And ramping up to that, our marketing and sales department realized that we would have a huge uptick in online sales. People really wanted to get outside, because they could do that without breaking any of the social distancing rules.

For example, bicycle sales were up 310 percent compared to the same time last year. So in ramping up for that, we anticipated our online traffic was going to go up by 150 percent, but we wanted to scale up by 200 percent to be sure. In order to do that, we had to reallocate a bunch of ESXi hosts in VMware vSphere. We either had to stand up new ones or reallocate hosts from other clusters and put them into what we call our digital retail presence.

As a result of our fully automated process, using Hewlett Packard Enterprise (HPE) OneView, Synergy, and Image Streamer, we were able to reallocate 6 of the 17 total hosts needed. We were able to do that in 18 minutes, all at once — and that’s single touch: launching the automation, pulling them from one cluster, decommissioning them, and placing them all the way into the digital retail clusters.

We also had to move some hosts from our legacy platform — they aren’t on HPE Synergy yet — and those took an additional three days. But those are in transition; we are moving to that fully automated platform all around.

Gardner: That’s amazing because just a few years ago that sort of rapid and automated transition would have been unheard of. Even at a slow pace you weren’t guaranteed to have the performance and operations you wanted.

If you were not able to do this using automation – if the pandemic had hit, heaven forbid, five or seven years ago – what would have been the outcome?

Sullins: There were actually two outcomes from this. The first is the fairly obvious issue of not being able to handle the online traffic on our retail presence. It could have been that people weren’t able to put stuff into a shopping cart, or inventory wouldn’t decrement, and so on. It could have been a very broad range of things. We needed to make sure we had the infrastructure capacity so that none of that failed under a heavy load. That was the first part.

Gardner: Right, and when you have people in the heat of a purchasing moment, if you’re not there and it’s not working, they have other options. Not only would you lose that sale, you might lose that customer, and your brand suffers as well.

Sullins: Oh, without a doubt, without a doubt.

The other issue, of course, would have been if we did not meet our deadline. We had just under a week to get this accomplished. And if we had to do this without a fully automated approach, we would have had to return to our managers and say, “Yeah, so like we can’t do it that quickly.” But with our approach, we were able to do it all in the time frame — and be able to get some sleep in the interim. So it was a win-win.

Gardner: So digital transformation pays off after all?

Sullins: Without a doubt.

Gardner: Before we learn more about your journey to IT infrastructure automation, tell us about REI, your investments in advanced automation, and why you consider yourself a data-driven digital business?

Automation all the way 

Sullins: Well, a lot of that precedes me by quite a bit. Going back to the early 2000s, based on what my managers tell me, there was a huge push for REI to become an IT organization that just happens to do retail. The priority is on IT being a driving force behind everything we do, and that is something that, at the time, REI really needed to do. There are other competitors, which we won’t name, but you probably know who they are. REI needed to stay ahead of that curve.

So since then there have been constant sweeping and cyclical changes for that digital transformation. The most recent one is the push for automating all things. So that’s the priority we have. It’s our marching orders.

Gardner: In addition to your company, culture, and technology, tell us about yourself, Bryan. What is it about your background and personal development that led you to be in a position to act so forthrightly and swiftly?

Sullins: I got my start in IT back in 1999. I was a public school teacher before that, and then I made the transition to doing IT training. I did IT training from 1999 to about 2012. During those years, I got a lot of technology certifications, because in the IT training world you have to.

I began with what was, at the time, called the Microsoft Certified Solutions Expert (MCSE) certification. Then I also did the Linux Professional Institute. I really glommed on to Linux. I wanted to set myself apart from the rest of the field back then, so I went all-in on Linux.

And then, 2008-2009-ish, I jumped on the VMware train and went all-in on VMware and did the official VMware curriculum. I taught that for about three years. Then, in 2012, I made the transition from IT training into actually doing this for real as an engineer working at Dell. At the time, Dell had an infrastructure-as-a-service (IaaS) healthcare cloud that was fairly large – 1,200-plus ESXi hosts. We were also responsible for the storage and for the 90-plus storage area network (SAN) arrays as well.

In an environment that large, you really have to automate. I cut my teeth on automating through PowerCLI and Ansible. Since then, about 2015, it’s been the focus of my career. I’m not saying I’m a guru, by any means, but it’s been a focus of my career.

Then, in 2018, REI came calling. I jumped on that opportunity because they are a super-awesome company, and right off the bat I got free rein: if you want to automate it, then you automate it. And I have been doing that ever since August of 2018.

Gardner: What helped you make the transition from training to cloud engineer? 

Sullins: I typically jump right into new technology. I don’t know if that comes from the training or if that’s just me as a person. But one of the positives I’ve gotten from the training world is that you learn 100 percent of the feature base that’s available with a given technology. I was able to take what I learned and knew from VMware and then say, “Okay, well, now I am going to get the real-world experience to back that up as well.” So it was a good transition.

Gardner: Let’s look at how other organizations can anticipate the shift to automation. What are some of the challenges that organizations typically face when it comes to being agile with their infrastructure?

Manage resistance to cloud 

Sullins: The challenges that I have seen aren’t usually technical. Usually the technologies that people use to automate things are ready at hand, and many are free: Ansible, for example, is free. PowerCLI is free. Jenkins is free.

So, people can start doing that tomorrow. But the real challenge is in changing people’s mindset toward a more automated approach, and I think that’s tough to overcome. It’s what I call provisioning by council. More traditional on-premises approaches have application owners who want to roll out x number of virtual machines (VMs), with all their particular specs and whatnot. And then a council of people typically looks at that and kind of scratches their chin and says, “Okay, we approve.” But if you need to scale up, that council approach becomes a sort of gate-keeping process.

With a more automated approach, like we have at REI, we use a cloud management platform to automate the processes. We use that to enable self-service VMs instead of having a roll out by council, where some of the VMs can take days or weeks to roll out because you have a lot of human beings touching them along the way. We have a lot of that process pre-approved, so everybody has already said, “Okay, we are okay with the roll out. We are okay with the way it’s done.” And then we can roll that out in 7 to 10 minutes rather than having a ticket-based model where somebody gets to it when they can. Self-service models are able to do that much better.
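
The pre-approved envelope Sullins describes can be reduced to a simple gate: requests inside the agreed limits provision immediately, and anything outside goes to human review. The limits and field names below are hypothetical:

```python
# Sketch of a pre-approval gate for self-service VM requests. The envelope
# (max sizes, allowed images) stands in for whatever the council pre-approved.

PREAPPROVED = {"max_vcpu": 8, "max_ram_gb": 32, "allowed_os": {"rhel8", "ubuntu20"}}

def triage(request: dict) -> str:
    """Auto-provision requests inside the envelope; escalate everything else."""
    if (request["vcpu"] <= PREAPPROVED["max_vcpu"]
            and request["ram_gb"] <= PREAPPROVED["max_ram_gb"]
            and request["os"] in PREAPPROVED["allowed_os"]):
        return "auto-provision"
    return "needs-review"

print(triage({"vcpu": 4, "ram_gb": 16, "os": "rhel8"}))    # inside the envelope
print(triage({"vcpu": 16, "ram_gb": 64, "os": "rhel8"}))   # outside, so escalated
```

The point of the pattern is that the council's judgment is encoded once, up front, instead of being re-litigated per ticket, which is what collapses a days-long roll-out into minutes.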

But that all takes a pretty big shift in psychology. A lot of people are used to being the gatekeeper. It can make them uncomfortable to change. Fortunately for me, a lot of the people at REI are on-board with this sort of approach. But I think that resistance can be something a lot of people run into.

Gardner: You can’t just buy automation in a box off of a shelf. You have to deal with an accumulation of manual processes and habits. Why is moving beyond the manual processes culture so important?

Sullins: I call it a private cloud because that means there is a healthy level of competition between what’s going on in the public cloud and what we do in that data center.

The public cloud team has the capability of “selling” their solution side-by-side with ours. When you have application owners who are technically adept — and pretty much all of them are at REI — they can be tempted to say, “Well, I don’t want to wait a week or two to get a VM. I want to create one right now out on the public cloud.”

That’s a big challenge for us. So what we are trying to accomplish — and we have had success so far through the transition — is to offer our customers a spectrum of services. So that’s great.

The stakeholders consuming that now gain flexibility. They can say, “Okay, yeah, I have this application. I want to run it in the public cloud, but I can’t based on the needs for that application. We have to run it on-premises.” And now they can do that in an automated way. That’s a big win, and that’s what people expect now, quite honestly.

Gardner: They want the look and feel of a public cloud but with all the benefits of the private cloud. It’s up to you to provide that. Let’s find out how you did.

How did you overcome the challenges that we talked about and what are the investments that you made in tools, platforms, and an ecosystem of players that accomplished it?

Sullins: As I mentioned previously, a lot of our utilities are “free,” the Ansibles of the world, PowerCLI, and whatnot. We also use Morpheus for self-service, which has implications for automating things on what I call the front end, the customer-facing side. The issue you have there is that you don’t get control of scaling up before you provision the VM. You have to monitor usage and then scale up on the backend, seamlessly. The end users aren’t supposed to know that you are scaling up. I don’t want them to know; it’s not their job to know. I want to remain out of their way.

In order to do that, we’ve used a combination of technologies. HPE actually has a GitHub repository with a lot of Ansible playbooks that plug right in. And then the underlying hardware and adjacent management ecosystem platform is HPE OneView with HPE Synergy and Image Streamer. With a combination of all of those technologies, we were able to accomplish that 18-minute roll-out.

Gardner: Even though you have an integrated platform and solutions approach, it sounds like you have also made the leap from ushering pets through the process into herding cattle. If you understand my metaphor, what has allowed you to stop treating each instance as a pet into being able to herd this stuff through on an automated basis?

From brittle pets to agile cattle 

Sullins: There is a psychological challenge with that. In the more traditional approach – and the VMware shop listeners are going to be very well aware of this — I may need to have a four-node cluster with a number of CPUs, a certain amount of RAM, and so on. And that four-node cluster is static. Yes, if I need to add a fifth down the line I can do that, but for that four-node cluster, that’s its home, sometimes for the entire lifecycle of that particular host.

With our approach, we treat our ESXi hosts as cattle. The HPE OneView-Synergy-Image Streamer technology allows us to do that in conjunction with those tools we mentioned previously, for the end point in particular.

So rather than have a cluster, and it’s static and it stays that way — it might have a naming convention that indicates what cluster it’s in and where — in reality we have cattle-based DNS names for ESXi hosts. At any time, the understanding throughout the organization, or at least for the people who need to know, is that any host can be pulled from one cluster automatically and placed into another, particularly when it comes to resource usage on that cluster. My dream is that the robots will do this automatically.

So if you had a cluster that goes into the yellow, with its capacity usage based on a threshold, the robot would interpret that and say, “Oh, well, I have another cluster over here with a host that is underutilized. I’m going to pull it into the cluster that’s in the yellow and then bring it back into the green again.” This would happen all while we sleep. When we wake up in the morning, we’d say, “Oh, hey, look at that. The robots moved that over.”
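
The "robot" Sullins imagines could be sketched as a threshold-driven rebalancer: when a cluster's average utilization crosses yellow, borrow the least-loaded host from the least-loaded cluster. This is a toy model with invented thresholds; in reality load redistributes across the cluster after a host joins, and the decision engine would be far richer:

```python
# Toy cluster rebalancer: move hosts from green clusters into yellow ones.

YELLOW = 0.80  # hypothetical average-utilization threshold

def rebalance(clusters: dict) -> list:
    """clusters maps name -> list of per-host utilization fractions.
    Returns the (donor, receiver) moves made."""
    moves = []
    for name, hosts in clusters.items():
        while hosts and sum(hosts) / len(hosts) > YELLOW:
            # Pick the donor cluster with the lowest average utilization,
            # never draining a cluster below one host.
            donor = min((c for c in clusters if c != name and len(clusters[c]) > 1),
                        key=lambda c: sum(clusters[c]) / len(clusters[c]), default=None)
            if donor is None:
                break
            host = min(clusters[donor])   # take its least-loaded host
            clusters[donor].remove(host)
            hosts.append(host)            # simplification: load doesn't re-spread
            moves.append((donor, name))
    return moves

state = {"digital-retail": [0.95, 0.92], "batch": [0.30, 0.25, 0.20]}
print(rebalance(state))  # one host moves from "batch" into "digital-retail"
```

Run on a schedule overnight, a loop like this is the "wake up and the robots moved it" behavior he describes.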

Gardner: Algorithmic operations. It sounds very exciting.

Automation begets more automation 

Sullins: Yes, we have the push-button automation in place for that. It’s the next level of what that engine is that’s going to make those decisions and do all of those things.

Gardner: And that raises another issue. When you take the plunge into IT automation and make your way down the Chisholm Trail with your cattle, all of a sudden it becomes easier along the way. The automation begets more automation. As you learn and grow, does it become more automated along the way?

Sullins: Yes. Just to put an exclamation point on this topic, imagine the situation we opened the podcast with, which is, “Okay, we have to reallocate a bunch of hosts for the sale.” If it’s fully automated, and we have robots making those decisions, the response is instantaneous. “Oh, hey, we want to scale up by 200 percent.” We can say, “Okay, go ahead, roll out your VM. The system will react accordingly. It will add physical hosts as needed, and we don’t have to do anything; we have already done the work with the automation.” Right?

But to the automation begetting automation, which is a great way of putting it, by the way, there are always opportunities for more automation. And on a career side note, I want to dispel the myth that you automate your way out of a job. That is a complete and total myth. I’m not saying it never happens, that people get laid off as a result of automation, but that’s relatively rare, because when you automate something, that automation is going to need to be maintained as things change over time.

The other piece of that is that a lot of times you have different organizations at various states of automation. Once you get your head above water, to where it’s, “Okay, we have this process and now it’s become trivial because it’s been automated,” you can concentrate on automating more things, or the new things that need to be automated. And whether that’s the provisioning process for VMs, a new feature base, monitoring, or auto-scaling — whatever it is — you have the capability from day one to further automate these processes.

Gardner: What was it specifically about the HPE OneView and Synergy that allowed you to move past the manual processes, firefighting, and culture of gatekeeping into more herding of cattle and being progressively automated?

Sullins: It was two things. The Image Streamer was number one. To date, we don’t run a PXE boot infrastructure — not that we can’t, it’s just not something that we have traditionally done. We needed a more standard process for doing that, and Image Streamer fit and solved that problem.

The second piece is the provided Ansible playbooks that HPE has to kick off the entire process. If you are somewhat versed in how HPE does things through OneView, you have a server profile that you can impose on a blade, and that can be fully automated through Ansible.

And, by the way, you don’t have to use Image Streamer to use Ansible automation. This is really more of an HPE OneView approach, whereby you can actually use it to do automated profiles and whatnot. But the Image Streamer is really what allows us to say, “Okay, we build a gold image. We can apply that gold image to any frame in the cluster.” That’s the first part of it, and the rest is configuring the other side.

Gardner: Bryan, it sounds like the HPE Composable Infrastructure approach works well with others. You are able to have it your way because you like Ansible, and you have a history of certain products and skills in your organization. Does the HPE Composable Infrastructure fit well into an ecosystem? Is it flexible enough to integrate with a variety of different approaches and partners?

Sullins: It has been so far, yes. We have anticipated leveraging HPE for our bare metal Linux infrastructure. One of the additional driving forces and big initiatives right now is Kubernetes. We are going all-in on Kubernetes in our private cloud, as well as in some of our worker nodes. We eventually plan on running those as bare metal. And HPE OneView, along with Image Streamer, is something that we can leverage for that as well. So there is flexibility, absolutely, yes.

Coordinating containers 

Gardner: It’s interesting, you have seen the transition from having VMware and other hypervisor sprawl to finding a way to manage and automate all of that. Do you see the same thing playing out for containers, with the powerful endgame of being able to automate containers, too?

Sullins: Right. We have been utilizing Rancher as our coordination tool for our Kubernetes infrastructure, and utilizing vSphere for that.

As far as the containerization approach, REI was doing containers before containers were a big thing. Our containerization platform has been around since at least 2015. So REI has been pretty cutting-edge as far as that is concerned.

And now that Kubernetes has won the orchestration wars, as it were, we are looking to standardize on it for people who want to do things online — which brings us back to the digital transformation journey.

Basically, the industry has caught up with what our super-awesome developers have done with containerization. But we are looking to transition the heavy lifting of maintaining a platform away from the developers. Now that we have a standard approach with Kubernetes, they don’t have to worry so much about it. They can just develop what they need to develop. It will be a big win for us.

Gardner: As you look back at your automation journey, have you developed a philosophy about automation? How should this best work in the future?

Trust as foundation of automation 

Sullins: Right. Have you read Gene Kim’s The Unicorn Project? Well, there is also his The Phoenix Project. My take from those is the whole idea of trust, of trusting other people. And I think that is big.

I see that quite a bit in multiple organizations. For REI, we are going to work as a team and we trust each other. So we have a pretty good culture. But I would imagine that in some places that is still a big challenge.

And if you take a look at The Unicorn Project, a lot of the issues have to do with trusting other human beings. Something happened, somebody made a mistake, and it caused an outage. So they lock it up and lock it away and say only certain people can do that. And if you multiply that happening multiple times — with different individuals locking things down — it leads to not being able to automate processes without somebody approving them, right?

Gardner: I can’t imagine you would have been capable, when you had to transition your private cloud for more online activity, if you didn’t have that trust built into your culture.

Sullins: Yes, and the big challenge that might still come up is the idea of trusting your end users, too. Once you go into the realm of self-service, you come up against the typical what-ifs. What if somebody adds a zero, and they meant to roll out only 4 VMs but they roll out 40? That’s possible. How do you create guardrails that are seamless? If you can, then you can trust your users. You decrease the risk and can take that leap of faith that bad things won’t happen.
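The guardrail idea can be made concrete with a small pre-provisioning check. This is an illustrative sketch, not REI’s implementation: the limits and the function name are assumptions, but it shows how a self-service portal can catch the extra-zero mistake before anything is provisioned.

```python
def check_vm_request(requested, quota_remaining, max_per_request=10):
    """Guardrail for self-service VM requests: reject obvious fat-finger
    errors (e.g., 40 instead of 4) and quota overruns before provisioning."""
    if requested < 1:
        return (False, "request at least one VM")
    if requested > max_per_request:
        return (False, f"{requested} exceeds the per-request limit of "
                       f"{max_per_request}; open a ticket if this is intentional")
    if requested > quota_remaining:
        return (False, f"only {quota_remaining} VMs left in this team's quota")
    return (True, "ok")

print(check_vm_request(4, quota_remaining=25))      # → (True, 'ok')
print(check_vm_request(40, quota_remaining=25)[0])  # → False
```

A seamless guardrail like this lets users stay self-service for the common case while routing unusual requests to a human.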

Gardner: Tell us about your wish list for what comes next. What you would like HPE to be doing? 

Small steps and teamwork rewards 

Sullins: My approach is to first automate one thing and then work out from there. You don’t have to boil the ocean. Start with something small and work your way up.

As far as next steps, we want auto-scaling at the physical layer, with the robots doing all of that. The robots will scale our resources up and down while we sleep.

We will continue to do application programming interface (API)-capable automation with anything that has a REST API. If we can connect to that and manipulate it, we can do pretty much whatever automation we want. 

We are also containerizing all the things. If an application can be containerized properly, we containerize it.

As far as the decision-making engine to do the auto-scaling on the physical layer, we haven’t really decided what that is. We have some ideas, but we are still looking.
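Whatever engine is chosen, its core is a policy that maps observed utilization to a scale-out or scale-in action. The sketch below is a toy threshold policy, not any product’s algorithm; the target band and step sizes are arbitrary assumptions for illustration.

```python
def scale_decision(utilization, nodes, target=0.60, step=1,
                   min_nodes=2, max_nodes=16):
    """Toy decision engine for physical-layer auto-scaling: compare cluster
    utilization (0.0-1.0) to a band around the target and return how many
    nodes to add (positive) or remove (negative)."""
    high, low = target + 0.15, target - 0.25
    if utilization > high and nodes < max_nodes:
        return min(step, max_nodes - nodes)    # scale out, capped at max
    if utilization < low and nodes > min_nodes:
        return -min(step, nodes - min_nodes)   # scale in, capped at min
    return 0                                   # within the band: stay put

print(scale_decision(0.90, nodes=4))  # → 1
print(scale_decision(0.20, nodes=4))  # → -1
```

A real engine would add hysteresis, cool-down timers, and a forecast of demand, but the decision surface is the same shape.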

Gardner: How about more predictive analytics using artificial intelligence (AI) with the data that you have emanating from your data center? Maybe AIOps?

Sullins: Well, without a doubt. I, for one, haven’t done any sort of deep dive into that, but I know it’s all the rage right now. I would be open to pretty much anything that will encompass what I just talked about. If that’s HPE InfoSight, then that’s what it is. I don’t have a lot of experience quite honestly with InfoSight as of yet. We do have it installed in a proof of concept (POC) form, although a lot of the priorities for that have been shifted due to COVID-19. We hope to revisit that pretty soon, so absolutely.

Gardner: To close out, you were ahead of the curve on digital transformation. That allowed you to be very agile when it came time to react to the COVID-19 pandemic.  What did that get you? Do you have any results?

Sullins: Yes, as a matter of fact, our boss’s boss, his boss — so three bosses up from me — he actually sits in on our load testing. It was an all-hands-on-deck situation during that May online sale. He said that it was the most seamless one that he had ever seen. There were almost no issues with this one.

We had done what we needed on the infrastructure side to make sure that we met dynamic demands. It was very successful. We went past our goals, so it was a win-win all the way around.

What I attribute that to is, yes, we had done what we needed on the infrastructure side to make sure that we met dynamic demands. Also, everybody worked as a team, all the way up the stack: from our infrastructure contribution at the hypervisor and hardware layer, on up to the application layer and the containers, and all of our DevOps work. It was very successful. We went past our goals for the sale, so it was a win-win all the way around.

Gardner: Even though you were going through this terrible period of adjustment, that’s very impressive.

Sullins: Yes.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


How the right data and AI deliver insights and reassurance on the path to a new normal

The next BriefingsDirect Voice of AI Innovation podcast explores how businesses and IT strategists are planning their path to a new normal throughout the COVID-19 pandemic and recovery.

By leveraging the latest tools and gaining data-driven inferences, architects and analysts are effectively managing the pandemic response — and giving more people better ways to improve their path to the new normal. Artificial intelligence (AI) and data science are proving increasingly impactful and indispensable.

Stay with us as we examine how AI forms the indispensable pandemic response team member for helping businesses reduce risk of failure and innovate with confidence. To learn more about the analytics, solutions, and methods that support advantageous reactivity — amid unprecedented change — we are joined by two experts.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

Please welcome Arti Garg, Head of Advanced AI Solutions and Technologies, at Hewlett Packard Enterprise (HPE), and Glyn Bowden, Chief Technologist for AI and Data, at HPE Pointnext Services. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: We’re in uncharted waters in dealing with the complexities of the novel coronavirus pandemic. Arti, why should we look to data science and AI to help when there’s not much of a historical record to rely on?  

Garg: Because we don’t have a historical record, I think data science and AI are proving to be particularly useful right now in understanding this new disease and how we might potentially better treat it, manage it, and find a vaccine for it. And that’s because at this moment in time, raw data that are being collected from medical offices and through research labs are the foundation of what we know about the pandemic.

This is an interesting time because, when you know a disease, medical studies and medical research are often conducted in a very controlled way. You try to control the environment in which you gather data, but unfortunately, right now, we can’t do that. We don’t have the time to wait.

And so instead, AI — particularly some of the more advanced AI techniques — can be helpful in dealing with unstructured data or data of multiple different formats. It’s therefore becoming very important in the medical research community to use AI to better understand the disease. It’s enabling some unexpected and very fruitful collaborations, from what I’ve seen.

Gardner: Glyn, do you also see AI delivering more, even though we’re in uncharted waters?

Bowden: Machine learning (ML), for example, which is a subset of AI, is very good at handling many, many features. A human being approaching these projects can only keep so many things in their head at once in terms of the variables to consider when building a model to understand something.

But when you apply ML, you are able to cope with millions or billions of features simultaneously — and then simulate models using that information. So it really does add the power of a million scientists to the same problem we were trying to face alone before.

Gardner: And is this AI benefit something that we can apply in many different avenues? Are we also modeling better planning around operations, or is this more research and development? Is it both?

Data scientists are collaborating directly with medical science researchers and learning how to incorporate subject matter expertise into data science models. 

Garg: There are two ways to answer the question of what’s happening with the use of AI in response to the pandemic. One relates to the practice of data science itself.

Right now, data scientists are collaborating directly with medical science researchers and learning how to incorporate subject matter expertise into data science models. This has been one of the challenges preventing businesses from adopting AI in more complex applications. But now we’re developing some of the best practices that will help us use AI in a lot of domains.

In addition, businesses are considering the use of AI to help them manage their businesses and operations going forward. That includes things such as using computer vision (CV) to ensure that social distancing happens with their workforce, or other types of compliance we might be asked to do in the future.

Gardner: Are the pressures of the current environment allowing AI and data science benefits to impact more people? We’ve been talking about the democratization of AI for some time. Is this happening more now?

More data, opinions, options 

Bowden: Absolutely, and that’s both a positive and a negative. The data around the pandemic has been made available to the general public. Anyone looking at news sites or newspapers and consuming information from public channels — accessing the disease incidence reports from Johns Hopkins University, for example — we have a steady stream of it. But those data sources are all over the place and are being thrown to a public that is only just now becoming data-savvy and data-literate.

As they consume this information, add their context, and get a personal point of view, that is then pushed back into the community again — because as you get data-centric you want to share it.

So we have a wide public feed — not only from universities and scholars, but from the general public, who are now acting as public data scientists. I think that’s creating a huge movement. 

Garg: I agree. Making such data available exposes pretty much anyone to these amazing data portals, like Johns Hopkins University has made available. This is great because it allows a lot of people to participate.

It can also be a challenge because, as I mentioned, when you’re dealing with complex problems you need to be able to incorporate subject matter expertise into the models you’re building and in how you interpret the data you are analyzing.

And so, unfortunately, we’ve already seen some cases — blog posts or other types of analysis — that get a lot of attention in social media but are later found to be not taking into account things that people who had spent their careers studying epidemiology, for example, might know and understand.

Gardner: Recently, I’ve seen articles where people now are calling this a misinformation pandemic. Yet businesses and governments need good, hard inference information and data to operate responsibly, to make the best decisions, and to reduce risk.

What obstacles should people overcome to make data science and AI useful and integral in a crisis situation?

Garg: One of the things that’s underappreciated is that a foundation, a data platform, makes data managed and accessible so you can contextualize and make stronger decisions based on it. That’s going to be critical. It’s always critical in leveraging data to make better decisions. And it can mean a larger investment than people might expect, but it really pays off if you want to be a data-driven organization.

Know where data comes from 

Bowden: There are a plethora of obstacles. The kind that Arti is referring to, and that is being made more obvious in the pandemic, is the way we don’t focus on the provenance of the data. So, where does the data come from? That doesn’t always get examined, and as we were talking about a second ago, the context might not be there.

All of that can be gleaned from knowing the source of the data. The source of the data tends to come from the metadata that surrounds it. So the metadata is the data that describes the data. It could be about when the data was generated, who generated it, what it was generated for, and who the intended consumer is. All of that could be part of the metadata.

Organizations need to look at these data sources because that’s ultimately how you determine the trustworthiness and value of that data.

We don’t focus on the provenance of the data. Where does the data come from? That doesn’t always get examined, and the context might not be there.

Now it could be that you are taking data from external sources to aggregate with internal sources. And so the data platform piece that Arti was referring to applies to properly bringing those data pieces together. It shouldn’t just be you running data silos and treating them as you always treated them. It’s about aggregation of those data pieces. But you need to be able to trust those sources in order to be able to bring them together in a meaningful way.

So understanding the provenance of the data, understanding where it came from or where it was produced — that’s key to knowing how to bring it together in that data platform.

Gardner: Along the lines of necessity being the mother of invention, it seems to me that a crisis is also an opportunity to change culture in ways that are difficult otherwise. Are we seeing accelerants given the current environment to the use of AI and data?

AI adoption on the rise 

Garg: I will answer that question from two different perspectives. One is certainly the research community. Many medical researchers, for example, are doing a lot of work that is becoming more prominent in people’s eyes right now.

I can tell you from working with researchers in this community and knowing many of them, that the medical research community has been interested and excited to adopt advanced AI techniques, big data techniques, into their research. 

It’s not that they are doing it for the first time, but definitely I see an acceleration of the desire and necessity to make use of non-traditional techniques for analyzing their data. I think it’s unlikely that they are going to go back to not using those for other types of studies as well.

In addition, you are definitely going to see AI utilized and become part of our new normal in the future, if you will. We are already hearing from customers and vendors about wanting to use things such as CV to monitor social distancing in places like airports where thermal scanning might already be used. We’re also seeing more interest in using that in retail.

So some AI solutions will become a common part of our day-to-day lives.

Gardner: Glyn, a more receptive environment to AI now?

Bowden: I think so, yes. The general public are particularly becoming used to AI playing a huge role. The mystery around it is beginning to fade and it is becoming far more accepted that AI is something that can be trusted.

It does have its limitations. It’s not going to turn into Terminator and take over the world.

The fact that we are seeing AI more in our day-to-day lives means people are beginning to depend on the results of AI, at least in understanding the pandemic, and that drives that acceptance.

The general public are particularly becoming used to AI playing a huge role. The mystery around it is beginning to fade and it is becoming far more accepted that AI is something that can be trusted.

When you start looking at how it will enable people to get back to somewhat of a normal existence — to go to the store more often, to be able to start traveling again, and to be able to return to the office — there is that dependency that Arti mentioned around video analytics to ensure social distancing or temperatures of people using thermal detection. All of that will allow people to move on with their lives and so AI will become more accepted.

I think AI softens the blow of what some people might see as an erosion of civil liberties. It softens the blow in ways that say, “This is the benefit already, and this is as far as it goes.” So it at least informs discussions in ways it never did before.

Garg: One of the really valuable things happening right now is how major news publications have been publishing amazing infographics — very informative, both in terms of the analysis they provide of the data and in very specific things, like how restaurants are recovering in areas that have stay-in-place orders.

In addition to providing nice visualizations of the data, some of the major news publications have been very responsible by providing captions and context. It’s very heartening in some cases to look at the comments sections associated with some of these infographics as the general public really starts to grapple with the benefits and limitations of AI, how to contextualize it and use it to make informed decisions while also recognizing that you can go too far and over-interpret the information.

Gardner: Speaking of informed decisions, to what degree are you seeing the C-suite — the top executives in many businesses — look to their dashboards and query datasets in new ways? Are we seeing data-driven innovation at the top of decision-making as well?

Data inspires C-suite innovation 

Bowden: The C-suite is definitely taking a lot of notice of what’s happening in the sense that they are seeing how valuable the aggregation of data is and how it’s forwarding responses to things like this.

So they are beginning to look internally at what data sources are available within their own organizations, and how to bring those together to get a better view of not only the tactical decisions they have to make but also, using the macro environmental data, how to start making strategic decisions. I think the value is being demonstrated for them in plain sight.

So rather than having to experiment to see if there is going to be value, there is a full expectation that value will be delivered, and now the experiment is about how much they can draw from this data.

Garg: It’s a little early to see how much this is going to change their decision-making, especially because, frankly, we are in a moment when a lot of the C-suite was already exploring AI and opening up to its possibilities in a way they hadn’t even a year ago.

And so there is an issue of timing here. It’s hard to know which is the cause and which is just a coincidence. But, for sure, to Glyn’s point, they are dealing with more change.

Gardner: For IT organizations, many of them are going to be facing some decisions about where to put their resources. They are going to be facing budget pressures. For IT to rise and provide the foundation needed to enable what we have been talking about in terms of AI in different sectors and in different ways, what should they be thinking about?

How can IT make sure they are accelerating the benefits of data science at a time when they need to be even more choosy about how they spend their dollars?

IT wields the sword to deliver DX 

Bowden: With IT particularly, they have never had so much focus as right now, and probably budgets are responding in a similar way. This is because everyone has to now look at their digital strategy and their digital presence — and move as much as they can online to be able to be resistant to pandemics and at-risk situations that are like this.

So IT has to have the sword, if you like, in that battle. They have to fix the digital strategy. They have to deliver on that digital promise. And there is an immediate expectation of customers that things just will be available online.

With the pandemic, there is now an AI movement that will get driven purely from the fact that so much more commerce and business are going to be digitized. We need to enable that digital strategy. 

If you look at students in universities, for example, they assume that it will be a very quick fix to start joining Zoom calls and to be able to meet that issue right away. Well, actually there is a much bigger infrastructure that has to sit behind those things in order to be able to enable that digital strategy.

So, there is now an AI movement that will get driven purely from the fact that so much more commerce and business is going to be digitized.

Gardner: Let’s look to some more examples and associated metrics. Where do you see AI and data science really shining? Are there some poster children, if you will, of how organizations — either named or unnamed — are putting AI and data science to use in the pandemic to mitigate the crisis or foster a new normal?

Garg: It’s hard to say how the different types of video analytics and CV techniques are going to facilitate reopening in a safe manner. But that’s what I have heard about the most at this time in terms of customers adopting AI.

In general, we are at very early stages of how an organization is going to decide to adopt AI. And so, for sure, the research community is scrambling to take advantage of this, but for organizations it’s going to take time to further adopt AI into any organization. If you do it right, it can be transformational. Yet transformational usually means that a lot of things need to change — not just the solution that you have deployed.

Bowden: There’s a plethora of examples from the medical side, such as how we have been able to do gene analysis, and those sorts of things, to understand the virus very quickly. That’s well-known and well-covered.

The bit that’s less well covered is AI supporting decision-making by governments, councils, and civil bodies. They are taking not only the data on how many people are getting sick and how many people are in the hospital, which is very important for understanding where the disease is, but augmenting that with socioeconomic data. That means you can understand, for example, where an aging population might live, or where a poorer population might live because there’s less employment in that area.

The impact of what will happen to their jobs, what will happen if they lose transport links, and the impact if they lose access to healthcare — all of that is being better understood by the AI models.

As we focus on not just the health data but also the economic data and social data, we have a much better understanding of how society will react, which has been guiding the principles that the governments have been using to respond.

So when people look at the government and say, “Well, they have come out with one thing and now they are changing their minds,” that’s normally a data-driven decision and people aren’t necessarily seeing it that way.

So AI is playing a massive role in getting society to understand the impact of the virus — not just from a medical perspective, but from everything else and to help the people.

Gardner: Glyn, this might be more apparent to the Pointnext organization, but how is AI benefiting the operational services side? Service and support providers have been put under tremendous additional strain and demand, and enterprises are looking for efficiency and adaptability.

Are they pointing the AI focus at their IT systems? How does the data they use for running their own operations come to their aid? Is there an AIOps part to this story? 

AI needs people, processes 

Bowden: Absolutely, and there has definitely become a drive toward AIOps.

When you look at an operational organization within an IT group today, it’s surprising how much of it is still human-based. It’s a personal eyeball looking at a graph and then determining a trend from that graph. Or it’s the gut feeling a storage administrator has when they know their system is getting full, and an idea in the back of their head that something happened seasonally last year — the organization makes decisions that way.

We are therefore seeing systems such as HPE’s InfoSight start to be more prominent in the way people make those decisions. That allows plugging into an ecosystem whereby you can see the trend of your systems over a long time, where you can use AI modeling as well as advanced analytics to understand the behavior of a system over time, and what the impact of things — like everybody suddenly starting to work remotely — does to the systems from a data perspective.

So the models, too, need to catch up in that sense. But absolutely, AIOps is desirable. If it’s not there today, it’s certainly something people are pursuing a lot more aggressively than they were before the pandemic.
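The eyeball-on-a-graph baseline that AIOps tooling replaces can be sketched in a few lines. This is a deliberately simple moving-average check, not InfoSight’s actual analytics; the window, threshold, and sample data are assumptions chosen for illustration.

```python
def seasonal_anomaly(history, window=7, threshold=1.5):
    """Flag the latest reading if it exceeds the trailing moving average by
    more than `threshold` times: a stand-in for the trend-spotting a human
    admin does by eyeballing a utilization graph."""
    if len(history) <= window:
        return False  # not enough history to establish a baseline
    baseline = sum(history[-window - 1:-1]) / window
    return history[-1] > baseline * threshold

# Hypothetical daily utilization, with a remote-work spike on the last day
usage = [50, 52, 51, 53, 52, 54, 53, 95]
print(seasonal_anomaly(usage))  # → True
```

Real AIOps systems layer seasonality models, fleet-wide comparisons, and predictive forecasting on top of this kind of baseline, but the core idea of comparing the present to a learned past is the same.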

Gardner: As we look to the future, for those organizations that want to be more data-driven and do it quickly, any words of wisdom with 20/20 hindsight? How do you encourage enterprises — and small businesses as well — to better prepare themselves to use AI and data science?

Garg: Whenever I think about an organization adopting AI, it’s not just the AI solution itself but all of the organizational processes — and most importantly the people in an organization and preparing them for the adoption of AI. 

I advise organizations that want to use AI and corporate data-driven decision-making to, first of all, make sure you are solving a really important problem for your organization. Sometimes the goal of adopting AI becomes more important than the goal of solving some kind of problem. So I always encourage any AI initiative to be focused on really high-value efforts. 

Use your AI initiative to do something really valuable to your organization and spend a lot of time thinking about how to make it fit into the way your organization currently works. Make it enhance the day-to-day experience of your employees because, at the end of the day, your people are your most valuable assets. 


Those are important non-technical things that are non-specific to the AI solution itself that organizations should think about if they want the shift to being AI-driven and data-driven to be successful. 

For the AI itself, I suggest using the simplest-possible model, solution, and method of analyzing your data that you can. I cannot tell you the number of times where I have heard an organization come in saying that they want to use a very complex AI technique to solve a problem that if you look at it sideways you realize could be solved with a checklist or a simple spreadsheet. So the other rule of thumb with AI is to keep it as simple as possible. That will prevent you from incurring a lot of overhead. 

Gardner: Glyn, how should organizations prepare to integrate data science and AI into more parts of their overall planning, management, and operations? 

Bowden: You have to have a use case with an outcome in mind. It’s very important that you have a metric to determine whether it’s successful or not, and for the amount of value you add by bringing in AI. Because, as Arti said, a lot of these problems can be solved in multiple ways; AI isn’t the only way and often isn’t the best way. Just because it exists in that domain doesn’t necessarily mean it should be used.

AI isn’t an on/off switch; it’s an iteration. You can start with something small and then build into bigger and bigger components that bring more data to bear on the problem, and then add new features that lead to new functions and outcomes.

The second part is AI isn’t an on/off switch; it’s an iteration. You can start with something small and then build into bigger and bigger components that bring more and more data to bear on the problem, as well as then adding new features that lead to new functions and outcomes.

The other part of it is: AI is part of an ecosystem; it never exists in isolation. You don’t just drop in an AI system on its own and it solves a problem. You have to plug it into other existing systems around the business. It has data sources that feed it so that it can come to some decision.

Unless you think about what happens beyond that — whether it’s visualizing something to a human being who will make a decision or automating a decision – it could really just be hiring the smartest person you can find and locking them in a room.

Pandemic’s positive impact

Gardner: I would like to close out our discussion with a riff on the adage of, “You can bring a horse to water but you can’t make them drink.” And that means trust in the data outcomes and people who are thirsty for more analytics and who want to use it.

How can we look with reassurance at the pandemic as having a positive impact on AI in that people want more data-driven analytics and will trust it? How do we encourage the perception to use AI? How is this current environment impacting that? 

Garg: So many people are checking the trackers of how the pandemic is spreading, and a lot of major news publications are doing a great job of explaining it. People are learning through the tracking to see how stay-in-place orders affect the spread of the disease in their community. You are seeing that already.

We are seeing growth and trust in how analyzing data can help make better decisions. As I mentioned earlier, this leads to a better understanding of the limitations of data and a willingness to engage with that data output as not just black or white types of things. 

As Glyn mentioned, it’s an iterative process, understanding how to make sense of data and how to build models to interpret the information that’s locked in the data. And I think we are seeing that.

We are seeing a growing desire to not only view this as some kind of black box that sits in some data center — and I don’t even know where it is — that someone is going to program, and it’s going to give me a result that will affect me. For some people that might be a positive thing, but for other people it might be a scary thing.

People are now much more willing to engage with the complexities of data science. I think that’s generally a positive thing for people wanting to incorporate it in their lives more because it becomes familiar and less other, if you will. 

Gardner: Glyn, perceptions of trust as an accelerant to the use of yet more analytics and more AI?

Bowden: The trust comes from the fact that so many different data sources are out there. So many different organizations have made the data available that there is a consistent view of where the data works and where it doesn’t. And that’s built up the capability of people to accept that not all models work the first time, that experimentation does happen, and it is an iterative approach that gets to the end goal. 

I have worked with customers who, when they saw a first experiment fall flat because it didn’t quite hit the accuracy or targets they were looking for, they ended the experiment. Whereas now I think we are seeing in real time on a massive scale that it’s all about iteration. It doesn’t necessarily work the first time. You need to recalibrate, move on, and do refinement. You bring in new data sources to get the extra value.

What we are seeing throughout this pandemic is that the more expertise and data science you bring to a problem, the better the outcome at the end. It’s not about that first result. It’s about the direction of the results, and the upward trend of success.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


Data science helps hospitals improve patient payments and experiences while boosting revenue

The next BriefingsDirect healthcare finance insights discussion explores new ways of analyzing healthcare revenue trends to improve both patient billing and services.

Stay with us as we explore new approaches to healthcare revenue cycle management and outcomes that give patients more options and providers more revenue clarity.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about the next generation of data-driven patient payments process improvements, we’re joined by Jake Intrator, Managing Consultant for Data and Services at Mastercard, and Julie Gerdeman, CEO of HealthPay24. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Julie, what’s driving healthcare providers to seek new and better ways of analyzing data to better manage patient billing? What’s wrong with the status quo?

Gerdeman: Dana, we are in such an interesting time, particularly in the US, with this being an election time. There is such a high level of visibility — really a spotlight on healthcare. There is a lot of change happening, such as in regulations, that highlights interoperability of data and price transparency for patients.

And there’s ongoing change on the insurance reimbursement side, with payer plans that seem to change and evolve every year. There are also trends changing provider compensation, including value-based care and pay-for-performance.


On the consumer-patient side, there is significant pressure in the market. Statistics show that 62 percent of patients say knowing their out-of-pocket costs in advance will impact their likelihood of pursuing care. So the visibility and transparency of costs — that price expectation — is very, very important and is driving consumerism into healthcare like we have never seen before due to rising costs to patients.

Finally, there is more competition. Where I live in Pennsylvania, I can drive a five-mile radius and access a multitude of different health providers in different systems. That level of competition is unlike anything we have seen before.

Healthcare’s sea change

Gardner: Jake, why is healthcare revenue management difficult? Is it different from other industries? Do they lag in their use of technology? Why is the healthcare industry in the spotlight, as Julie pointed out?

Intrator: The word that Julie used that was really meaningful to me was consumerism. There is a shift across healthcare where patients are responsible for a much larger proportion of their bills than they ever used to be.

And so, as things shift away from hospitals working with payers to receive dollars in an efficient, easy process — now the revenue is coming from patients. That means there needs to be new processes and new solutions to make it a more pleasant experience for patients to be able to pay. We need to enable people to pay when they want to pay, in the ways that they want to pay.


That’s something we have keyed on to, as a payments organization. That’s also what led us to work with HealthPay24. 

Gardner: It’s fascinating. If we are going to a consumer-type model for healthcare, why not take advantage of what consumers have been doing with their other financing, such as getting reports every month on their bills? It seems like there is a great lesson to be learned from what we all do with our credit cards. Julie, is that what’s going to happen?

Consumer in driver’s seat 

Gerdeman: Yes, definitely. It’s interesting that healthcare has been sitting in a time warp. Historically, there remain many manual processes and functions in the health revenue cycle. That’s attributable to a piecemeal approach: different segments of the revenue cycle were tackled at different times, or were affected by acquisitions. I read recently that there are still eight billion faxes happening in healthcare.

So that consumer-level experience, as Jake indicated, is where it’s going — and where we need to go even faster.

Technology provides the transparency and interoperability of data. Investment in IT is happening, but it needs to happen even more.

Gardner: Wherever there is waste, inefficiency, and a lack of clarity is an opportunity to fix that for all involved. But what are the stakes? How much waste or mismanagement are we talking about? 

Intrator: The one statistic that sticks out to me is that care providers aren’t collecting as much as 80 percent of balances from older bills. So that’s a pretty substantial amount — and a large opportunity. Julie, do you have more? 

Gerdeman: I actually have a statistic that’s staggering. There is $265 billion of waste attributed to administrative complexity. And another $230 billion to $240 billion is attributed to what’s termed pricing failure, which means price increases that aren’t in line with the current market. The stakes are very high and the opportunity is very large.

We have data that shows more than 50 percent of chief financial officers (CFOs) want better access to data and better dashboards to understand the scope of the problem. As we were talking about consumerism, Mastercard is just phenomenal in understanding consumer behavior. Think about the personalized experiences that organizations like Mastercard provide — or Google, Amazon, Disney, and Netflix. Everything is becoming so personalized in our consumer lives.

But healthcare? We are not there yet. It’s not a personalized experience where providers know in advance what a consumer or patient wants. HealthPay24 and Mastercard are coming together to get us much closer to that. But, truly, it’s a big opportunity.

Intrator: I agree. Payers and providers haven’t figured out how they enable personalized experiences. It’s something that patients are starting to expect from the way they interact with companies like Netflix, Disney, and Mastercard. It’s becoming table-stakes. It’s really exciting that we are partnering to figure out how to bring that to healthcare payers and providers alike.

Gardner: Julie, you mentioned that patients want upfront information about what their procedures are going to cost. They want to know their obligation before they go through a medical event. But oftentimes the providers don’t know in advance what those costs are going to be.

So we have ambiguity. And one of the things that’s always worked great for ambiguity in other industries is to look at the data, extrapolate, and get analytics involved. So, how are data-driven analytics coming to the rescue? How will that help?

Data to the rescue 

Gerdeman: Historical data allows for a forward-looking view. For HealthPay24, for example, we have been involved in patient payments for 20 years. It makes us a pioneer in the space. It gives us 20 years of data, information, and trends that we can look at. To me, data is absolutely critical.

Having come out of the spend management technology industry, I know that in the categories of direct and indirect materials there have long been well-defined goods and services that are priced and purchased accordingly.

But the ambiguity of patient healthcare payments and patient responsibility presents a new challenge. What artificial intelligence (AI) and algorithms provide is the capability to help anticipate and predict. That offers something much more applicable to a patient at a consumer level.

Gardner: Jake, when you have the data you can use it. Are we still at the point of putting the data together? Or are we now already able to deliver those AI- and machine learning (ML)-driven outcomes?

Intrator: Hospitals still don’t feel like they are making the best use of data. They tie that both to not having access to the data and not yet having the talent, resources, and tools to leverage it effectively. This is top of mind for many people in healthcare.

In seeking to help them, there are two places where I divide the use of analytics. The first is ahead of time. By using patient estimator tools, can you understand what somebody might owe? That’s a really tricky question. We are grappling with it at Mastercard.

By working with HealthPay24, we have developed a solution that is ready and working today on the other half of the process. For example, somebody comes to the hospital. They know that they have some amount of patient payment responsibility. What’s the right way for a hospital to interact with that person? What are the payment options that should be available to them? Are they paying upfront? Are they paying over a period of time? What channels are you using to communicate? What options are you giving to them? Answering those questions gets a lot smarter when you incorporate data and analytics. And that’s exactly what we are doing today.

Gardner: Well, we have been dancing around and alluding to the joint solution. Let’s learn more about what’s going on between HealthPay24 and Mastercard. Tell us about your approach. Are we in a proof of concept (POC) or is this generally available?

Win-win for patients and providers 

Gerdeman: We are currently in a POC phase, working with initial customers on a predictive analytics capability that marries the Mastercard Test and Learn platform with the HealthPay24 platform, and executing what’s recommended through the analytics in our platform.

Jake, go ahead and give an overview of Test and Learn, and then we can talk about how we have come together to do some great work for our customers.

Intrator: Sure. Test and Learn is a platform that Mastercard uses with a large number of partner clients to measure the impact of business decisions. We approach that through in-market experiments. You can do it in a retail context where you are changing prices or you can do it in the healthcare context where you are trying different initiatives to focus on patient payments. 

That’s how we brought it to bear within the HealthPay24 context. We are working together, along with their provider partners, to understand the tactics they are using to drive payments. What’s working, for which patients, and at what time?
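The test-versus-control logic of an in-market experiment like the one Jake describes can be sketched minimally. This is a hypothetical illustration: the `measure_lift` function and all of the amounts below are invented for the example, not drawn from Test and Learn itself.

```python
# Hypothetical sketch of in-market experiment measurement: compare payment
# outcomes for patients offered a new initiative (test group) against
# similar patients who were not (control group). All figures are invented.

def measure_lift(test_payments, control_payments):
    """Return the relative lift in average payment for the test group."""
    avg_test = sum(test_payments) / len(test_payments)
    avg_control = sum(control_payments) / len(control_payments)
    return (avg_test - avg_control) / avg_control

# Illustrative amounts collected per patient, in dollars
test_group = [120, 0, 250, 80, 310, 95]      # offered a payment plan
control_group = [100, 0, 150, 0, 220, 60]    # standard billing

lift = measure_lift(test_group, control_group)
print(f"Relative lift in collections: {lift:.1%}")
```

A real platform would also control for patient similarity and statistical significance; the point here is only the shape of the comparison.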

Gerdeman: It’s important for the audience to understand that the end-goal is revenue collection and the big opportunity providers have to collect more. The marriage of Test and Learn with HealthPay24 provides the intelligence to allow providers to collect more, but it also offers more options to patients based on that intelligence and creates a better patient experience in the end.

If a particular patient will consistently take a payment plan and make those payments, whereas they wouldn’t pay off one big amount presented all at once, the intelligence through the platform will say, “This patient should always be offered a payment plan,” and the provider ends up collecting all of the revenue.

That’s what we are super-excited about. The POC is showing greater revenue collection by offering flexibility in the options that patients truly want and need.

Gardner: Let’s unpack this a little bit. So we have HealthPay24 as chocolate and Mastercard’s Test and Learn platform as peanut butter, and we are putting them together to make a whole greater than the sum of the parts. What’s the chocolate? What’s the peanut butter? And what’s the greater whole?

Like peanut butter and chocolate 

Intrator: One of the things that’s made working with HealthPay24 so exciting for us is that they sit in the center of all of the data and the payment flows. They have the capability to directly guide the patient to the best possible experience.

They are hands-on with the patients. They can implement all of these great learnings through our analytics. We can’t do that on our own. We can do the analytics, but we are not the infrastructure that enables what’s happening in the real world.

That’s HealthPay24. They are in the real world. When you have the data flowing back and forth, we can help measure what’s working and come up with new ideas and hypotheses about how to try different payment programs. 

It’s been a really important chocolate and peanut butter combination where you have HealthPay24 interacting with patients and us providing the analytics in the background to inform how that’s happening.

Gerdeman: Jake said it really well. It is a beautiful combination, because years ago the hot thing was propensity to pay. And, yes, providers still talk about that. The best practice many years ago was pulling a soft, or even hard, credit check on a patient to determine their propensity to pay and potentially offer financial assistance, even charity, given the needs of the patient.

But this takes it to a whole other level. That’s why the combination is magical. What makes it so different is there doesn’t need to be that old way of thinking. It’s truly proactive through the data we have in working with providers and the unique capabilities of Mastercard Test and Learn. We bring those together and offer proactively the right option for that specific patient-consumer.

It’s super exciting because payment plans are just one example. The platform is phenomenal and the capabilities are broad. The next financial application is discounts.

Through HealthPay24, providers can configure discounts based on their own policies and thresholds. And if you know that a particular patient will pay the amount when offered the discount through the platform, that discount should be offered every time. The intelligence gives us the capability to know that, to offer it, and for the provider to collect the discounted amount, which is better than the full amount going to bad debt and never being collected.

Intrator: If you are able to drive behavior with those discounts, is it 10 percent or 20 percent? If you give away an additional 10 percent, how does that change the number of people reacting to it? If you give away more, you had better hope that you are getting more people to pay more quickly.

Those are exactly the sorts of analytical questions we can answer with Test and Learn and with HealthPay24 leading the charge on implementing those solutions. I am really excited to see how this continues to solve more problems going forward.
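The discount trade-off Jake raises can be sketched numerically: a deeper discount only pays off if it lifts the share of patients who pay enough to offset the revenue given away. This is a hypothetical illustration; the functions, pay rates, and balances below are invented for the example, not results from the POC.

```python
# Hypothetical sketch of the discount trade-off. A deeper discount must
# raise the pay rate past a break-even point to increase net collections.

def expected_collection(balance, pay_rate, discount):
    """Expected dollars collected per account at a given discount level."""
    return balance * pay_rate * (1 - discount)

def breakeven_pay_rate(current_rate, current_discount, new_discount):
    """Minimum pay rate at the new discount to match current collections."""
    return current_rate * (1 - current_discount) / (1 - new_discount)

balance = 1000.0
# Suppose at a 10 percent discount, 40 percent of patients pay in full.
base = expected_collection(balance, pay_rate=0.40, discount=0.10)
# Moving to a 20 percent discount must lift the pay rate to at least:
needed = breakeven_pay_rate(0.40, 0.10, 0.20)
print(f"Current expected collection: ${base:.0f}")
print(f"Pay rate needed at 20% off:  {needed:.0%}")
```

In this invented scenario, a 20 percent discount only beats the status quo if more than 45 percent of patients then pay, which is exactly the kind of question an in-market experiment can answer.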

Gardner: It’s interesting because in the state of healthcare now, more and more people, at least in the United States, have larger bills regardless of their coverage. There are more co-pays, more often there are large deductibles, with different deductibles for each member of a family, for example, and varying deductibles depending on the type of procedures. So, it seems like many more people will be facing more out-of-pocket items when it comes to healthcare. This impacts literally tens of millions of people. 

So we have created this new chocolate confection, which is wonderful, but the proof is in the eating. When are patient-consumers going to get more options, not only for discounts but perhaps for financing? Does it work both ways, with discounts as well as payment plans with interest over time? 

Flexibility plus privacy

Gerdeman: In HealthPay24, we currently have all of the above — depending on what the provider wants to offer, their patient base, and the needs and demographics. Yes, they can offer payment plans, discounts, and lines of credit. That’s already embedded in the platform. It creates an opportunity for all the different options and the flexibility we talked about. 

Earlier I mentioned personalization, and this gets us much closer to personalization of the financial experience in healthcare. There is so much happening on the clinical side, with great advances around clinical care and how to personalize it. This combination gets us to the personalization of offers and options for patients and payments like we have never seen in the past.

Gardner: Jake, for those listening and reading who may be starting to feel a little concerned that all this information — about not just their healthcare, but now their finances — is being bandied about among payers, providers, and insurers: are we going to protect that financial information? How should people feel about this in terms of privacy or comfort level?

Intrator: That is a question and a problem near and dear to Mastercard. We aspire and really do put a lot of work and effort into being a leader in data privacy and allowing people to have ownership of their data and to feel comfortable. I think that’s something that we deeply believe in. It’s been a focus throughout our conversations with HealthPay24 to make sure that we are doing it right on both sides.

Gardner: Now that you have this POC in progress, what have been some of the outcomes? It seems to me that over time, the more data you deal with, the more benefits, and then the more people adopt it, and so on. Where are we now, and do we have some insight into how powerful this is?

Gerdeman: We do. In fact, one example is a 400-bed hospital in the Northeast US that, through the combination of Mastercard Test and Learn and HealthPay24, was able to identify 25,000 unpaid accounts. Just by targeting 5,000 of the 25,000, it was able to identify an incremental $1 million in collections for the hospital.

That is very significant, given they targeted only the top 5,000 accounts in a conservative approach. They now know they have the capability, through this intelligence and by offering the right plans to the right people, to add $1 million more to their bottom line.

Intrator: That certainly captures the big picture and the big story. I can also zoom in on a couple of specific numbers that we saw in the POC. As we tackled it, we wanted to understand a couple of different metrics, such as increases in payments. We saw substantial increases from payment plans. As a result, people are paying more than 60 percent more on their bills compared to similar patients who haven’t received payment plans. 

Then we zoomed in a step farther. We wanted to understand the specific types of patients who benefited most from receiving a payment plan, and how that could potentially guide us going forward. We were able to dig in and build a predictive model, which is exactly what Julie was talking about. For those top 25,000 accounts, we predict how much each is going to pay and their relative prioritization. Hospitals have limited resources, so how do you make sure you are focusing most appropriately?
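The prioritization Jake describes, focusing limited outreach resources on the accounts with the highest expected recovery, can be sketched as a simple scoring step. This is a hypothetical illustration: the `prioritize` function, the pay probabilities, and the balances are invented stand-ins for the real predictive model.

```python
# Hypothetical sketch of account prioritization: score each unpaid account
# by its expected recovery (predicted pay probability times balance) and
# target the top N with limited outreach resources. Data is illustrative.

def prioritize(accounts, top_n):
    """Rank accounts by expected recovery and return the top_n to target."""
    ranked = sorted(
        accounts,
        key=lambda a: a["pay_probability"] * a["balance"],
        reverse=True,
    )
    return ranked[:top_n]

accounts = [
    {"id": 1, "balance": 500.0, "pay_probability": 0.9},
    {"id": 2, "balance": 2000.0, "pay_probability": 0.1},
    {"id": 3, "balance": 1200.0, "pay_probability": 0.5},
    {"id": 4, "balance": 300.0, "pay_probability": 0.8},
]

top = prioritize(accounts, top_n=2)
print([a["id"] for a in top])  # accounts with highest expected recovery
```

Note that the ranking can differ from sorting by balance alone: a large balance with a low predicted pay probability may rank below a smaller, more collectible one.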

Gardner: Now that we have gotten through this trial period, does this scale? Is this something you can apply to almost any provider organization? If I am a provider organization, how might I start to take advantage of this? How does this go to market?

Personalized patient experiences 

Gerdeman: It absolutely does scale. It applies across all providers; actually, it applies across many industries as well. For any provider who wants to collect more and wants additional intelligence around patient behavior, patient payments, and collection behavior, it really is a terrific solution. And it scales as we integrate the technologies. I am a huge believer in best-of-breed ecosystems. This technology integrates into the HealthPay24 solution, and the intelligent recommendations are already in the platform for providers.

Gardner: And how about that grassroots demand? Should people start going into their clinics and emergency departments and say, “Hey, I want the plan that I heard about. I want to have financing. I want you to give me all my options.” Should people be advocating for that level of consumerism now when they go into a healthcare environment?

Gerdeman: You know, Dana, they already are. We are at a tipping point in the disruption of healthcare. This kind of grassroots demand of consumerism and a consumer personalized experience — it’s only a matter of time. You mentioned data privacy earlier. There is a very interesting debate happening in healthcare around the balance between sharing data, which is so important for care, billing, and payment, with the protection of privacy. We take all of that very seriously. 

Nonetheless, I feel the demand from providers as well as patients will only get greater.

Gardner: Before we close out let’s extrapolate on the data we have. How will things be different in two or three years from now when more organizations embrace these processes and platforms?

Intrator: The industry is going to be a lot smarter in a couple of years. The more we learn from these analytics, and the more we incorporate them into the decisions happening every day, the more it’s going to feel like it fits you as a patient. It’s going to improve the patient experience substantially.

Personally, I am really excited to see where it goes. There are going to be new solutions that we haven’t heard about yet. I am closely following everything that goes on.

Gerdeman: This is heading toward an experience where, from the moment patients seek and research care, they are known. They are presented with a curated, personalized experience, from the clinical aspect of their encounter all the way through billing and payment. They will be presented with recommendations based on who they are, what they need, and what their expectations are. 

That’s the excitement around AI and ML and how it’s going to be leveraged in the future. I am with Jake. It’s going to look very different in healthcare experiences for consumers over the next few years.

Gardner: And for those interested in learning more about this pilot program, about the Mastercard Test and Learn platform and HealthPay24’s platform, where might they go? Are there any press releases, white papers? What sort of information is available?

Gerdeman: We have a great case study from the POC that we are currently running. We are happy to work with anyone who is interested, just contact us via our website at HealthPay24 or through Mastercard.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HealthPay24.


How modern IT operational services enable self-managing, self-healing, and self-optimizing

Digital business transformation in general, and managing the new normal around the COVID-19 pandemic, have hugely impacted how businesses and IT operate. Faced with mounting complexity, rapid change, and shrinking budgets, IT operational services must be smarter and more efficient than ever.

The next BriefingsDirect Voice of Innovation podcast examines how Hewlett Packard Enterprise (HPE) Pointnext Services is reinventing the experience of IT support to increasingly rely on automation and analytics to help enable continued customer success as IT enters a new era. 

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to share the HPE Pointnext Services vision for the future of IT operational services are Gerry Nolan, Director of Portfolio Product Management, Operational Service Portfolio, at HPE Pointnext Services, and Ronaldo Pinto, Director of Portfolio Product Management, Operational Service Portfolio, at HPE Pointnext Services. The discussion is moderated by Dana Gardner, P