How the composable approach to IT aligns automation and intelligence to overcome mounting complexity

The next edition of the BriefingsDirect Voice of the Innovator podcast series explores the latest developments in hybrid IT and datacenter composability.

Bringing higher levels of automation to data center infrastructure has long been a priority for IT operators, but it’s only been in the past few years that they have actually enjoyed truly workable solutions for composability.

The growing complexity of hybrid cloud, the pressing need to conserve IT spend, and the difficulty of finding high-level IT skills mean there is no going back. Indeed, there is little time for even a plateau on innovation around composability.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

Stay with us now as we explore how pervasive increasingly intelligent IT automation and composability can be with Gary Thome, Vice President and Chief Technology Officer for Composable Cloud at Hewlett Packard Enterprise (HPE). The interview is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Gary, what are the top drivers making composability top-of-mind and something we’re going to see more of?

Thome: It’s the same drivers for businesses as a whole, and certainly for IT. First, almost every business is going through some sort of digital transformation. And that digital transformation is really about leveraging IT to connect with customers — making IT the primary way a business interacts with customers and generates revenue.

Digital transformation drives composability 


With that, there’s a desire to go very fast — to connect with customers much more quickly and to add software features for them faster.

The whole idea of digital transformation and becoming a digital business is driving a whole new set of behaviors in the way enterprises run – and as a result – in the way that IT needs to support them.

From the IT standpoint, there is this huge driver to say, “Okay, I need to be able to go faster to keep up with the speed of the business.” That is a huge motivator. 

But at the same time, there’s the constant desire to keep IT cost in line, which requires higher levels of automation. That automation — along with a desire to flexibly align with the needs of the business — drives what we call composability. It combines the flexibility of being able to configure and choose what you need to meet the business needs — and ultimately customer needs — and do it in a highly automated manner.

Gardner: Has the adoption of cloud computing models changed the understanding of how innovation takes place in an IT organization? There used to be long periods between upgrades or a new revision. Cloud has given us constant iterative improvements. Does composability help support that in more ways?

Thome: Yes, it does. There has been a general change in the way of thinking, of shifting from occasional, large changes to frequent, smaller changes. This came out of an Agile mindset and a DevOps environment. Interestingly enough, it’s permeated to lots of other places outside of IT. More companies are looking at how to behave that way in general.

How to Achieve Composability Across Your Datacenter

On the technology side, the desire for rapid, smaller changes means a need for higher levels of automation. It means automating the changes to the next desired state as quickly as possible. All of those things lend themselves toward composability.

Gardner: At the same time, businesses are seeking economic benefits via reducing unutilized IT capacity. It’s become about “fit-for-purpose” and “minimum viable” infrastructure. Does composability fit into that, making an economic efficiency play?

Thome: Absolutely. Along with the small, iterative changes – of changing just what you need when you need it – comes a new mindset with how you approach capacity. Rather than buying massive amounts of capacity in bulk and then consuming it over time, you use capacity as you need it. No longer are there large amounts of stranded capacity.

Composability is key to this because it allows you, through technical means, to create an environment that delivers the desired economic result. You are simply using what you need when you need it, and releasing it when it’s not needed — versus pre-purchasing large amounts of capacity upfront.

Innovation building blocks 

Gardner: As an innovator yourself, Gary, you must have had to rethink a lot of foundational premises when it comes to designing these systems. How did you change your thinking as an innovator to create new systems that accommodate these new and difficult requirements?

Thome: Anyone in an innovation role has to always challenge their own thinking, and say, “Okay, how do I want to think differently about this?” You can’t necessarily look to the normal sources for inspiration because that’s exactly where you don’t want to be. You want to be somewhere else.

For myself it may mean looking at any other walk of life – from what I do, read, and learn as possible sources of inspiration for rethinking the problem.

Interestingly enough, there is a parallel in the IT world of taking applications and decomposing them into smaller chunks. We talk about microservices that can be quickly assembled into larger applications — or composed, if you want to think of it that way. And now we’re able to disaggregate the infrastructure into elements, too, and then rapidly compose them into what’s needed. 

Those are really parallel ideas, going after the same goal. How do I just use what I need when I need it — not more, not less? And then automate the connections between all of those services.

That, in turn, requires an interface that makes it very easy to assemble and disassemble things — and therefore very easy to produce the results you want.

When you look at things outside of the IT world, you can see patterns of it being easy to assemble and disassemble things, like children’s building blocks. Before, IT tended to be too complex. How do you make the IT building blocks easier to assemble and disassemble such that it can be done more rapidly and more reliably?

Gardner: It sounds as if innovations from 30 years ago are finding their way into IT. Things such as simultaneous engineering, fit-for-purpose design and manufacturing, even sustainability issues of using no more than you need. Were any of those inspirations to you?

Cultivate the Agile mindset

Thome: There are a variety of sources, everything from engineering practices, to art, to business practices. They all start swiveling around in your head. How do I look at the patterns in other places and say, “Is that the right kind of pattern that we need to apply to an IT problem or not?”

The historical IT perspective of elongated steps and long development cycles led to the end-place of very complex integrations to get all the piece-parts put together. Now, the different, Agile mindset says, “Why don’t you create what you need iteratively but make sure it integrates together rapidly?”

Can you imagine trying to write a symphony by having 20 different people each develop their own parts — a separate trombone part, a timpani part, a violin part — and then saying, “Okay, play it together once, and we will start debugging when it doesn’t sound right”? Of course that would be a disaster. If you don’t think about integration upfront, you end up debugging it as you go.

The same thing needs to go into how we develop IT — with both the infrastructure and applications. That’s where the Agile and the DevOps mindsets have evolved to. It’s also very much the mindset we have in how we develop composability within HPE.

Gardner: At HPE, you began bringing composability to servers and the data center stack, trying to make hardware behave more like software, essentially. But it’s grown past that. Could you give us a level-set of where we are right now when it comes to the capability to compose the support for doing digital business?

Intelligent, rapid, template-driven assembly 

Thome: Within the general category of composability, we have this new thing called Composable Infrastructure, and we have a product called HPE Synergy. Rather than treat the physical resources in the data center as discrete servers, storage arrays, and switches, it looks at them as pools of compute capacity, storage capacity, fabric capacity, and even software capacity — images of what you want to use.

Each of those things can be assembled rapidly through what we call software-defined intelligence. It knows how to assemble the building blocks — compute, storage, and networking — into something interesting. And that is template-driven. You have a template, which is a description of what you want the end-state to look like — what you want your infrastructure to look like when you are done.

And the templates say, “Well, I need a compute block of this size, this much storage, and this kind of network.” Whatever you want. “And then, by the way, I want this software loaded on it.” And so forth. You describe the whole thing as a template, and then we can assemble it based on that description.
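As a rough illustration of that template-driven model, the sketch below shows how a desired end-state description can be satisfied by drawing capacity from shared pools and released back when no longer needed. The class names and fields here are purely illustrative assumptions — they are not the actual HPE Synergy template schema:

```python
from dataclasses import dataclass

@dataclass
class Template:
    """A description of the desired end-state for one composed system."""
    name: str
    compute_cores: int
    storage_gb: int
    network_gbps: int
    software_image: str

@dataclass
class ResourcePools:
    """Shared capacity pools that composed systems draw from."""
    compute_cores: int
    storage_gb: int
    network_gbps: int

RESOURCES = ("compute_cores", "storage_gb", "network_gbps")

def compose(template: Template, pools: ResourcePools) -> dict:
    """Allocate capacity from the pools to satisfy a template."""
    for attr in RESOURCES:
        if getattr(pools, attr) < getattr(template, attr):
            raise RuntimeError(f"insufficient {attr} in pool")
    # Draw down the pools and return the composed system description.
    for attr in RESOURCES:
        setattr(pools, attr, getattr(pools, attr) - getattr(template, attr))
    return {"name": template.name,
            "image": template.software_image,
            **{attr: getattr(template, attr) for attr in RESOURCES}}

def release(system: dict, pools: ResourcePools) -> None:
    """Return a composed system's capacity to the pools."""
    for attr in RESOURCES:
        setattr(pools, attr, getattr(pools, attr) + system[attr])
```

The point of the sketch is the lifecycle: `compose` uses only what the template asks for, and `release` returns it, so no capacity sits stranded.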

That approach is one we’ve innovated on in a lab from the infrastructure’s standpoint. But what’s very interesting about it is, if you look at a modern cloud made up of applications, it uses a very similar philosophical approach to the assembling. In fact, just like with modern applications, you say, “Well, I’m assembling a group of services or elements. I am going to create it all via APIs.” Well, guess what? Our hardware is driven by APIs also. It’s an API-level assembly of the hardware to compose the hardware into whatever you want. It’s the same idea of composing that applies everywhere.
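To make the API-level assembly idea concrete, here is a minimal sketch of the kind of request an automation script might send to a composable management endpoint. The endpoint URL, field names, and payload shape are assumptions for illustration — they are not the actual HPE OneView REST schema:

```python
import json

def build_compose_request(template_name: str, overrides: dict) -> str:
    """Serialize a hypothetical 'compose from template' API request body."""
    body = {
        "templateName": template_name,   # which desired-state template to apply
        "overrides": overrides,          # per-instance tweaks, e.g. hostname
        "powerOnAfterApply": True,
    }
    return json.dumps(body)

# An automation framework would POST this body to the management
# appliance, for example (endpoint is hypothetical):
#   requests.post("https://appliance.example.com/rest/compose",
#                 data=build_compose_request("web-tier", {"hostName": "web01"}))
```

Because the whole assembly is expressed as data sent over an API, any tool that can issue HTTP requests — a script, a CI pipeline, an orchestration framework — can compose hardware the same way it composes application services.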

Millennials lead the way

Gardner: The timing for this is auspicious on many levels. Just as you’re making the crafting of hardware solutions possible, we’re dealing with an IT labor shortage. If, like many Millennials, you are of a cloud-first mentality, you will find kinship with composability — even though you’re not necessarily composing a cloud. Is that right?

Thome: Absolutely. That cloud mindset — a services mindset, an as-a-service mindset, whatever you want to call it — is one where this is a natural way of thinking. Younger people may have grown up with this mindset; it wouldn’t occur to them to think any differently. Others may have to shift to a new way of thinking.

This is one of the challenges for organizations. How do they shift — not just the technologies or the tools — but the mindset within the culture in a different direction?

How to Remove Complexity From Multicloud and Hybrid IT

You have to start with changing the way you think. It’s a mindset change to ask, “How do I think about this problem differently?” That’s the key first thing that needs to happen, and then everything falls behind that mindset.

It’s a challenge for any company doing transformation, but it’s also true for innovation — shifting the mindset.

Gardner: The wide applicability of composability is impressive. You could take this composable mindset, use these methods and tools, and you could compose a bare-metal, traditional, on-premises data center. You could compose a highly virtualized on-premises data center. You could compose a hybrid cloud, where you take advantage of private cloud and public cloud resources. You can compose across multiple types of private and public clouds. 

Cross-cloud composability

Thome: We think composability is a very broad, useful idea. When we talk to customers they are like, “Okay, well, I’ll have my own kind of legacy estate, my legacy applications. Then I have my new applications, and new way of thinking that are being developed. How do I apply principles and technologies that are universal across them?”

The idea of being able to say, “Well, I can compose the infrastructure for my legacy apps and also compose my new cloud-native apps, and I get the right infrastructure underneath.” That is a very appealing idea.

But we also take the same ideas of composability and say, “Well, I would ultimately even want to compose across multiple clouds.” More and more enterprises are leveraging clouds in various shapes and forms, and they are increasing the number of clouds they use. We are trending to hybrid cloud, where people use different clouds for different reasons. They may actually have a single application that spans multiple clouds, including on-premises clouds.

When you get to that level, you start thinking, “Well, how do I compose my environment or my applications across all of those areas?” Not everybody is necessarily thinking about it that way yet, but we certainly are. It’s definitely something that’s coming.

Gardner: Providers are telling people that they can find automation and simplicity but the quid pro quo is that you have to do it all within a single stack, or you have to line up behind one particular technology or framework. Or, you have to put it all into one particular public cloud.

It seems to me that you may want to keep all of your options open and be future-proof in terms of what might be coming in a couple of years. What is it about composability that helps keep one’s options open?

Thome: With automation, people wind up at one of two extremes. One is a great automation framework that promises you can automate anything. The operative word is can — meaning, we don’t do it, but you can, if you are willing to invest all of the hard work. That’s one approach. The good news is that there are multiple vendors supplying actual parts of the total automation technology. But it can be a very large amount of work to develop and maintain systems across that kind of environment.

On the other hand, there are automation environments where, “Hey, it works great. It’s really simple. Oh, by the way, you have to completely stay within our environment.” And so you are stuck within the confines of their rules for doing things.

Both of these approaches, obviously, have a very significant downside because any one particular environment is not going to be the sum of everything that you do as a business. We see both of them as wrong.

Real composability shines when it spans the best of both of those extremes. On the one hand, composability makes it very easy to automate the composable infrastructure, and it also automates everything within it. 

In the case of HPE Synergy, composable management (HPE OneView) makes it easy to automate the compute, storage, and networking — and even the software stacks that run on it — through a trivial interface. And at the same time, you want to integrate into the broader, multivendor automation environments so you can automate across all things.

You need that because, guaranteed, no one vendor is going to provide everything you want, which is the failing of the second approach I mentioned. Instead, what you want is to have a very easy way to integrate into all of those automation environments and automation frameworks without throwing a whole lot of work to the customer to do.

We see composability’s strength in being API-driven. It makes it easy to integrate into automation frameworks, and it completely automates the things underneath that composable environment. You don’t have to do a lot of work to get things operating.

So we see that as the best of those two extremes that have historically been pushed on the market by various vendors.

Gardner: Gary, you have innovated and created broad composability. In a market full of other innovators, have there been surprises in what people have done with composability? Has there been follow-on innovation in how people use composability that is worth mentioning and was impressive to you? 

Success stories 

Thome: One of my goals for composability was that, in the end, people would use it in ways I never imagined. I figured, “If you do it right, if you create a great idea and a great toolset, then people can do things with it you can’t imagine.” That was the exciting thing for me.

One customer created an environment where they used the HPE composable API with Terraform. They were able to rapidly spin up a variety of environments based on self-service mechanisms. Their scientist users actually created the IT environments they needed nearly instantly.

It was cool because it was not something that we set out specifically to do. Yet they were saying it solves business needs and their researchers’ needs in a very rapid manner.

Another customer recently said, “Well, we just need to roll out really large virtualization clusters.” In their case, it’s a 36-node cluster. It used to take them 21 days. But when they shifted to HPE composability, they got it down to just six hours.
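The arithmetic behind that improvement is worth spelling out — 21 days of elapsed time versus six hours for the same 36-node cluster:

```python
# Elapsed provisioning time before and after the shift to
# composability, using the figures quoted by the customer.
days_before = 21
hours_after = 6

hours_before = days_before * 24        # 504 hours
speedup = hours_before / hours_after   # 84x faster

print(f"{hours_before} h -> {hours_after} h: {speedup:.0f}x faster")
```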

How to Tame Multicloud Complexity

Obviously it’s very exciting to see such real benefits to customers — putting IT resources to use faster and minimizing the burden on the people involved in getting things done.

When I hear those kinds of stories come back from customers — directly or through other people — it’s really exciting. It says that we are bringing real value to people to help them solve both their IT needs and their business needs.

Gardner: You know you’re doing composable right when you have non-IT people able to create the environments they need to support their requirements, their apps, and their data. That’s really impressive.

Gary, what else did you learn in the field from how people are employing composability? Any insights that you could share?

Thome: It’s in varying degrees. Some people get very creative in doing things that we never dreamed of. For others, the mindset shift can be challenging, and they are just not ready to shift to a different way of thinking, for whatever reasons.

Gardner: Is it possible to consume composability in different ways? Can you buy into this at a tactical level and a strategic level?

Thome: That’s one of the beautiful things about the HPE composability approach. The answer is absolutely, “Yes.” You can start by saying, “I’m going to use composability to do what I always did before.” And the great news is it’s easier than what you had done before. We built it with the idea of assembling things together very easily. That’s exactly what you need.

Then, maybe later, some of the more creative things that you may want to do with composability come to mind. The great news is it’s a way to get started, even if you haven’t yet shifted your thinking. It still gives you a platform to grow from should you need to in the future.

Gardner: We have often seen that those proof-points tactically can start the process to change peoples’ mindsets, which allows for larger, strategic value to come about.

Thome: Absolutely. Exactly right. Yes.

Gardner: There’s also now at HPE, and with others, a shift in thinking about how to buy and pay for IT. The older ways of IT, with long revision cycles and forklift upgrades, meant paying for it was capital-intensive.

What is it about the new IT economics, such as HPE GreenLake Flex Capacity purchasing, that align well with composability in terms of making it predictable and able to spread out costs as operating expenses?

Thome: These two approaches are perfect together; they really are. They are hand-in-glove and best buddies. You can move to the new mindset of, “Let me just use what I need and then stop using it when I don’t need it.”

That mindset — and being able to do rapid, small changes in capacity or code or whatever you are doing, it doesn’t matter – also allows a new economic perspective. And that is, “I only pay for what I need, when I need it; and I don’t pay for the things I am not using.”

Our HPE GreenLake Flex Capacity service brings that mindset to the economic side as well. We see many customers choose composability technology and then marry it with GreenLake Flex Capacity as the economic model. They can bring together that mindset of making minor changes when needed, and only consuming what is needed, to both the technical and the economic side.

We see this as a very compelling and complementary set of capabilities — and our customers do as well.

Gardner: We are also mindful nowadays, Gary, about the edge computing and the Internet of Things (IoT), with more data points and more sensors. We also are thinking about how to make better architectural decisions about edge-to-core relationships. How do we position the right amount of workload in the right place for the right requirements?

How does composability fit into the edge? Can there also be an intelligent fabric network impact here? Unpack for us how the edge and the intelligent network foster more composability.

Composability on the fly, give it a try 

Thome: I will start with the fabric. So the fabric wants to be composable. From a technology side, you want a fabric that allows you to say, “Okay, I want to very dynamically and easily assemble the network connections I want and the bandwidth I want between two endpoints — when I want them. And then I want to reconfigure or compose, if you will, on the fly.”

We have put this technology together, and we call it Composable Fabric. I find this super exciting because you can create a mesh simply by connecting the endpoints together. After that, you can reconfigure it on the fly, and the network meets the needs of the applications the instant you need them.
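As a toy model of that behavior — composing, recomposing, and releasing connections on the fly from a shared bandwidth pool — consider the following sketch. The class and method names are illustrative assumptions, not the HPE Composable Fabric API:

```python
class Fabric:
    """A mesh whose links can be composed and recomposed on demand,
    drawing bandwidth from one shared pool."""

    def __init__(self, total_gbps: int):
        self.free_gbps = total_gbps
        self.links = {}          # (src, dst) -> allocated Gb/s

    def connect(self, src: str, dst: str, gbps: int) -> None:
        """Compose a link between two endpoints with the given bandwidth."""
        if gbps > self.free_gbps:
            raise RuntimeError("fabric bandwidth exhausted")
        self.free_gbps -= gbps
        self.links[(src, dst)] = self.links.get((src, dst), 0) + gbps

    def disconnect(self, src: str, dst: str) -> None:
        """Release a link's bandwidth back to the pool."""
        self.free_gbps += self.links.pop((src, dst), 0)

    def recompose(self, src: str, dst: str, gbps: int) -> None:
        """Change a link's bandwidth in place as application needs shift."""
        self.disconnect(src, dst)
        self.connect(src, dst, gbps)
```

The key property the sketch captures is that reconfiguration is an ordinary, cheap operation driven by the application's current need, not a lengthy change process.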

This is the ultimate in composability, brought to the network. It also simplifies the management and operation of the network, because it is driven entirely by the needs of the application. The application directly drives and controls the behavior of the network, rather than a long list of complex changes having to be implemented in the network — which tends to be cumbersome, winds up being unresponsive to the real needs of the business, and takes too long. This is driven completely from the needs of the application down into the fabric. It’s a super exciting idea, and we are really big on it, obviously.

Now, the edge is also interesting because we have been talking about conserving resources. There are even fewer resources at the edge, so conservation can be even more important. You only want to use what you need, when you need it. Being able to make those changes incrementally, when you need them, is the same idea as the composability we have been talking about. It applies to the edge as well. We see the edge as ultimately an important part of what we do from a composable standpoint.

Gardner: For those folks interested in exploring more about composability — methodologies, technologies, and getting some APIs to experiment with — what advice do you have for them? What are some good ways to unpack this and move into a proof-of-concept project?

Thome: We have a lot of information on our website, obviously, about composability. There is a lot you can read up on, and we encourage anybody to learn about composability through those materials.

They can also try composability, because it is completely software-defined and API-driven; you can go in and play with the composable concepts through software. We suggest people try it directly. But they can also connect it to their automation tools and see how they can compose things via the automation tools they might already be using for other purposes. It can then extend into all things composable as well.

I definitely encourage people to learn more, but especially to move into the “doing phase.” Just try it out and see how easy it is to get things done.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

SAP Ariba COO James Lee on the best path to an intelligent and talented enterprise

The next BriefingsDirect enterprise management innovations discussion explores the role of the modern chief operating officer (COO) and how they are tasked with creating new people-first strategies in an age of increased automation and data-driven intelligence.

We will now examine how new approaches to spend management, process automation, and integrated procurement align with developing talent, diversity, and sustainability.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To learn more about the leadership trends behind making globally dispersed and complex organizations behave in harmony, please welcome James Lee, Chief Operating Officer at SAP Ariba and SAP Fieldglass. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: James, why has there never been a better time to bring efficiency and intelligence to business operations? Why are we in an auspicious era for bridging organizational and cultural gaps that have plagued businesses in the past?

Lee: If you look at the role of the modern COO, or anyone who is the head of operations, you are increasingly asked to be the jack-of-all-trades. If you think about the COO, they are responsible for budgeting and planning, for investment decisions, organizational and people topics, and generally orchestrating across all aspects of the business. To do this at scale, you really need to drive standardization and best practices, and this is why efficiency is so critical. 


Now, in terms of the second part of your question, which has to do with intelligence, the business increasingly is asking us not just to report the news but to make the news. What does that mean? It means you have to offer insights to different parts of the business and help them make the right decisions — things they wouldn’t know otherwise. That requires leveraging all the available data to do thorough analysis and provide data that all the functional leaders can use to make the best-possible decisions.

Gardner: It seems that the COO is a major consumer of such intelligence. Do you feel like you are getting better tools?

Make sense of data

Lee: Yes, absolutely. We talk about being in the era of big data, so the information you can get from systems — even from a lot of devices, be it mobile devices or sensors – amounts to an abundance and explosion of data. But how to make sense of this data is very tricky.

As a COO, a big part of what I do is not only collect the data from different areas, but then to make sense of it, to help the business understand the insights behind this data. So I absolutely believe that we are in the age where we have the tools and the processes to exploit data to the fullest.

Gardner: You mentioned the COO needs to be a jack-of-all-trades. What in your background allows you to bring that level of Renaissance man, if you will, to the job?

Lee: As COO of SAP Ariba and now SAP Fieldglass, too, I have operational responsibilities across our entire, end-to-end business. I’m responsible for helping with our portfolio strategy and investments, sales excellence, our commercial model, data analytics, reporting, and then also our learning and talent development. So that is quite a broad purview, if you will. 

I feel like the things I have done before at SAP have equipped me with the tools and the mindset to be successful in this position. Before I took this on, I was COO and general manager of sales for the SAP Greater China business. In that position, I doubled the size of SAP’s business in China, and we were also involved in some of the largest product launches in China, including SAP S/4HANA.

Before that, having been with SAP for 11 years, I had the opportunity to work across North America, Europe, and Asia in product and operating roles, in investment roles, and also sales roles.

Before joining SAP, I was a management consultant by training. I had worked at Hewlett Packard and then McKinsey and Company.

Gardner: Clearly most COOs of large companies nowadays are tasked with helping extend efficiency into a global environment, and your global background certainly suits you for that. But there’s another element of your background that you didn’t mention – which is having studied and been a concert pianist. What do you think it is about your discipline and work toward a high level of musical accomplishment that also has a role in your being a COO?

The COO as conductor 

Lee: That’s a really interesting question. You have obviously done your research and know my background. I grew up studying classical music seriously, as a concert pianist, and it was always something that was very, very important to me. I feel even to this day — I obviously have pursued a different profession — that it is still a very key and critical part of who I am.

If I think about the two roles — as a COO and as a musician — there are actually quite a few parallels. To start, as a musician, you have to really be in tune with your surroundings and listen very carefully to the voices around you. And I see the COO team ultimately as a service provider — a shared services team — so it’s really critical for me to listen to and understand the requirements of my internal and external constituents. So that’s one area where I see similarities.

Secondly, the COO role in my mind is to orchestrate across the various parts of the business, to produce a strong and coherent whole. And again, this is similar to my experiences as a musician, in playing in ensembles, and especially in large symphonies, where the conductor must always know how to bring out and balance various musical voices and instruments to create a magical performance. And again, that’s very similar to what a COO must do.

Gardner: I think it’s even more appropriate now — given that digital transformation is a stated goal for so many enterprises – to pursue orchestration and harmony and organize across multiple silos.

Does digital transformation require companies to think differently to attain that better orchestrated whole?

Lee: Yes, absolutely. From the customers I have spoken to, digital transformation, to be successful, has to be a top-down movement. It has to be an end-to-end movement. It’s no longer a case where management just says, “Hey, we want to do this,” without the full support and empowerment of people at the working level. Conversely, you can have people at the project-team level who are very well-intentioned, but without senior executive support, it doesn’t work.

In cases where I have seen a lot of success, companies have broken down those silos, painted an overarching vision and mission for the company, gotten everyone on board, empowered and equipped them with the tools to succeed, and then driven with ruthless execution. And that requires a lot of collaboration, a lot of synergy across the full organization.

Gardner: Another lens through which to view this all is a people-centric view, with talent cultivation. Why do you think that that might even be more germane now, particularly with younger people? Many observers say Millennials have a different view of things in many ways. What is it about cultivating a people-first approach, particularly to the younger workers today, that is top of mind for you?

People-first innovation 

Lee: We just talked about digital transformation. If we think about technology, no matter how much technology is advancing, you always need people to be driving the innovation. This is a constant, no matter what industry you are in or what you are trying to do.

And it’s because of that, I believe, that the top priority is to build a sustainable team and to nurture talent. There are a couple of principles I really adhere to as I think about building a “people-first team.”

First and foremost, it’s very important to go beyond just seeking work-life balance. In this day and age, you have to look beyond that and think about how you help the people on your team derive meaning from what they do.

This goes beyond just work-life balance; it has to do with social responsibility, personal enrichment, personal aspiration, and finding commonality and community among your peers. And I find that now — especially with the younger generation — a lot of what they do is virtual. We are not necessarily in the office all together at the same time. So it becomes even more important to build a sense of connectivity, especially when people are not all present in the same room. And this is something that Millennials really care about.

Also, for Millennials it’s important, at the beginning of their careers, to have a strong true north, meaning that they need great mentors who can coach them through the process, work with them, develop them, and give them a good sense of belonging. That’s something I always try to do on my team: ensure that young people get mentorship early in their careers and have one-on-one dedicated time. There should always be a sounding board for them to air their concerns or questions.

Gardner: Being a COO, in your case, means orchestrating a team of other operations professionals. What do you look for in them, in their background, that gives you a sense of them being able to fulfill the jack-of-all-trades approach?

Growth mindset drives success

Lee: I tend to think about successful individuals, or teams, along two metrics. One is domain expertise. Obviously if you are in charge of, say, data analytics then your background as a data scientist is very important. Likewise, if you are running a sales operation, a strong acumen in sales tools and processes is very important. So there is obviously a domain expertise aspect of it.

But equally, if not more, important is another mentality. I tend to believe in people who have a growth mindset as opposed to a closed mindset; they tend to achieve more. What I mean are people who want to explore more, want to learn more, and who are open to new suggestions and new ways of doing things. The world is constantly changing. Technology is changing. The only way to keep up with it is if you have a growth mindset.

It’s also important for a COO team to have a service mentality, of understanding who your ultimate customer is — be it internal or external. You must listen to them, understand what the requirements are, and then work backward and look at what you can create or what insights you can bring to them. That is very critical to me.

Gardner: I would like to take advantage of the fact that you travel quite a bit, because SAP Ariba and SAP Fieldglass are global in nature. What you are seeing in the field? What are your customers telling you?

Lee: As I travel the globe, I have the privilege of supporting our business across the Americas, Europe, the Middle East, and Asia, and it’s fascinating to see that there are a lot of differences and nuances — but there are a lot of commonalities. At the end of the day, what people expect from procurement or digital transformation are more or less very similar.

There are a couple of trends I would like to share with you and your listeners. One is, when we talk about procurement, end users are increasingly looking for a marketplace-like experience. Even though they are in a business-to-business (B2B) environment, they are used to the business-to-consumer (B2C) user experience. It’s like what they get on Amazon, where they can shop, they have a choice, and it’s easy to compare value and features — but at the same time you have all of the policies and compliance that come with B2B. And that’s beginning to be the lowest common denominator.

Secondly, when we talk about Millennials, I think the Millennial experience is pushing everyone to think differently about the user experience. And not just for SAP Ariba and SAP Fieldglass, but for any software. How do we ensure that there is easy data access across different platforms — be it your laptop, your desktop, your iPad, your mobile devices? They expect easy, seamless access across all their different platforms. So that is something I call the Millennial experience.

Contingent, consistent labor

Thirdly, I have learned about the rise of contingent labor in a lot of regions. We, obviously, are very honored to now have Fieldglass as part of the SAP Ariba family. And I have spent more and more time with the Fieldglass team.

In the future, there may even be a situation where there are few permanent, contracted employees. Instead, you may have a lot of project-based, or function-based, contingent laborers. We hear a lot about that, and we are focused on how to provide them with the tools and systems to manage the entire process with contingent labor.

Gardner: It strikes me as an interesting challenge for COOs — how do you best optimize and organize workers who work with you, but not for you?

Lee: Right! It’s very different because when you look at the difference between indirect and direct procurement, you are talking about goods and materials. But when you are talking about contingent labor, you are talking about people. And when you talk about people, there is a lot more complexity than if you are buying a paper cup, pen, or pencil.

You have to think about what the end-to-end cycle looks like to the [contingent workers]. It extends from how you recruit them, to on-boarding, enabling, and measuring their success. Then, you have to ensure that they have a good transition out of the project they are working on.

SAP Fieldglass is one of the few solutions in the market that really understands that process and can adapt to the needs of contingent laborers. 

Gardner: One more area from your observations around the globe: the definition and concept of the intelligent enterprise. That must vary somewhat; certain cultures or business environments might embrace information, data, and analytics differently than others. Do you see that? Does it mean different things to different people?

Intelligent enterprise on the rise

Lee: At its core, if you look at the evolution of enterprise software and solutions, we have gone from a very transactional system — a system of bookings and record, just tracking what is being done — to automation, and now to what we call the intelligent enterprise. That means making sense of all the information and data to create insight.

A lot of companies are looking to transform into an intelligent enterprise. That means you need to access an abundance of data around you. We talked about the different sources — through sensors, equipment, customers, suppliers, sometimes even from the market and your competitors — a 360-degree view of data. 

Then how do you have a seamless system that analyzes all of this data and actually makes sense of it? The intelligent enterprise takes it to the next level, which is leveraging artificial intelligence (AI). There is no longer a person or a team sitting in front of a computer and doing Excel modeling. This is the birth of the age of AI.

Now we are looking at predictive analytics, where, for example, at SAP Ariba, we look for patterns and trends on how you conduct procurement, how you contract, and how you do sourcing. We then suggest actions for the business to take. And that, to me, is an intelligent enterprise.

Gardner: How do you view the maturity of AI, in a few years, as an accelerant to the COO’s job? How important will AI be for COOs specifically?

Lee: AI is absolutely a critical, critical topic as it relates to not just procurement transformation, but any transformation. There are four main areas addressed by AI, especially the advanced AI that we are seeing today.

Number one, it allows you to drive deeper engagement and adoption of your solution and what you are doing. If you think about how we interact with systems through conversations, sometimes even through gestures, that’s a different level of engagement than we had before. You are involving the end user in a way that was never done before. It’s interactive, it’s intuitive, and it avoids a lot of cost when it comes to training.

Secondly, we talk a lot about decision-making. AI gives you access to a broad array of data and you can uncover hidden insights and patterns while leveraging it.

Thirdly, we talked about talent, and I believe that having AI helps you attract and retain talent with state-of-the-art technology. We have self-learning systems that help you institutionalize a lot of knowledge.

And last, but not least, it’s all about improving business outcomes. Think about how you increase efficiencies with personalized, context-specific information. In the context of procurement, you can improve approvals and accuracy, especially when you are dealing with contracts. An AI robot is a lot less prone to error than a human working on a contract. We have the statistics to prove it.

At the end of the day, we look at procurement and we see an opportunity to transform it from a very tactical, transactional function into a very strategic function. And what that means is AI can help you automate a lot of the repetitive tasks, so that procurement professionals can focus on what is truly value-additive to the organization.

Gardner: We seem to be on the cusp of an age where we are going to determine what it is that the people do best, and then also determine what the machines do best — and let them do it.

This whole topic of bots and robotic process automation (RPA) is prevalent now across the globe. Do you have any observations about what bots and RPA are doing to your customers of SAP Fieldglass and SAP Ariba?

Sophisticated bot benefits

Lee: When we talk about bots, there are two types that come to mind. One is on the shop floor, in a manufacturing setting, where you have physical bots replacing humans in what they do.

Secondly, you have virtual bots, if you will. For example, at SAP Ariba, we have bots that analyze data, make sense of the patterns, and provide insights and decision-making support to our end users.

In the first case, I absolutely believe that the bots are getting more sophisticated. The kinds of tasks that they can take on, on the shop floor, are a lot more than before — and that drives a lot of efficiency, cuts costs, and allows employees to be redeployed to more strategic, higher value-added roles. So I absolutely see that as a positive trend going forward.

When it comes to the artificial, virtual bots, we see a lot of advancement now, not just in procurement, but in the way they are being used across sales and human resources systems. I was talking to a company just last week and they are utilizing virtual bots to do the recruiting and interviewing process. Can you imagine that?

The next time that you are submitting your résumé to a company, on the other end of the line might not be a human that you are talking to, but actually a robot that’s screening you. And it’s now to the level of sophistication where it’s hard for you to tell the difference.

Gardner: I might feel better that there is less subjectivity. If the person interviewing me didn’t get a good night’s sleep, for example, I might be okay with that. So it’s like the Turing test, right? Do you know whether it’s real bodies or virtual bots?

Before we close out, James, what advice do you have for other COOs who are seeking to take advantage of all the ways that digital transformation is manifesting itself, and to up their game?

It’s up to you to up your COO game

Lee: Fundamentally, the COO role is what you make of it. A lot of companies don’t even have a COO. It’s a unique role. There is no predefined job scope or job definition.

For me, a successful COO — at least in the way I measure myself — is measured by the kind of business impact you have when you look at the profit and loss (P&L). Everything that you do should have a direct impact on your top line, as well as your bottom line. And if you feel like the things that you are doing are not directly impacting the P&L, then it’s probably time to reconsider some of those things.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: SAP Ariba.


How healthcare providers are taking cues from consumer expectations to improve patient experiences

The next BriefingsDirect healthcare insights discussion explores the shift medical services providers are making to improve the overall patient experience.

Taking a page from modern, data-driven industries that emphasize consumer satisfaction and ease, a major hospital in the New York metro area has embarked on a journey to transform healthcare-as-a-service.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To learn more about the surging importance and relevance for improving patient experiences in the healthcare sector using the many tools available to other types of businesses, we are joined by Laura Semlies, Vice President of Digital Patient Experience, at Northwell Health in metro New York, and Julie Gerdeman, President at HealthPay24 in Mechanicsburg, Penn. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What’s driving a makeover in the overall medical patient experience?

Semlies: The trend we’re watching is recognizing the patient as a consumer. Now, healthcare systems are even calling patients “consumers” — and that is truly critical.

In our organization we look at [Amazon founder and CEO] Jeff Bezos’ very popular comment about having “customer obsession” — and not “competition obsession.” In doing so, you better understand what the patient needs and what the patient wants as a consumer. Then you can begin to deliver a new experience. 

Gardner: This is a departure. It wasn’t that long ago when a patient was typically on the receiving end of information and care and was almost expected to be passive. They were just off on their way after receiving treatment. Now, there’s more information and transparency up-front. What is it about the emphasis on information sharing that’s changed, and why?

Power to the patients

Semlies: A lot of it has to do with what patients experience in other industries. Almost every industry has fundamentally changed over the course of the last decade, and patients are bringing those changes and expectations into healthcare.

In a digital world, patients expect that their data is out there, and they expect us to use it to be more transparent and more personalized, and to deliver more curated experiences. But in healthcare we haven’t figured it out just yet — and that’s what digital transformation in healthcare means.

How do you take information and translate it into more meaningful and personalized services to get to the point where patients have experiences that drive better clinical outcomes?

Gardner: Healthcare then becomes more of a marketplace. Do you feel like you’re in competition? Could other providers of healthcare come in with a better patient experience and draw the patients away?

Semlies: For sure. I don’t know if that’s true in every market, but it certainly is in the market that I operate in. We live in a very competitive market in New York. The reality is if the patient is not getting the experience they want, they have choices, and they will opt for those choices. 

A recent study concluded that 2019 will be the year that patients choose who renders their care based on things that they do or do not get. Those things can range from the capability to book appointments online, to having virtual visits, to access to a patient portal with medical record information — or all of the above. 

And those patients are going to be making those choices tomorrow. If you don’t have those capabilities to treat the patient and meet their needs — you won’t get that patient after tomorrow.

Gardner: Julie, we’re seeing a transition to the healthcare patient experience similar to what we have seen in retail, where the emphasis is on an awesome experience. Where do you see the patient experience expanding next? What needs to happen to make it a more complete experience?

Gerdeman: Laura is exactly right. Patients are doing research upfront before providers interact with them, before they even call and book an appointment. Some 70 percent of patients spend that time looking at something online or making a phone call.

Competitive, clinical customer services

We’re now talking about addressing a complete experience. That means everything from up-front research, through the clinical experience, and including the financial and billing experiences. It means end-to-end, from pre-service through post-service.

And that financial experience needs to be at or above the level of the experience they had clinically. Patients are judging their experience end-to-end, and it is competitive. We hear from healthcare providers who want to keep patients out of their competitors’ waiting rooms. Part of that is driving an improved experience, where the patient-as-consumer is informed and engaged throughout the process.

Financially speaking, what does that mean? It means digital engagement — something simple, beautiful, and mobile that’s delivered via email or text. We have to meet the consumer, whenever, and wherever they are. That could be in the evening or early in the morning on their devices. 

That’s how people live today. They have personalized and curated experiences with Google or Alexa, and they want that same experience in healthcare.

Gardner: You don’t want to walk into a time machine and go back 30 to 40 years just because you go to the hospital. The same experience you can get in your living room should be there when you go to see your doctor.

Laura, patient-centric care is complicated enough in just trying to understand the medical issues. But now we have a growing level of complexity about the finances. There are co-pays, deductibles, different kinds of insurance, and supplemental insurance. There are upfront cost estimates versus who knows what the bill is going to be in six months.

How do we fulfill the need for complete patient-centric services when we now need to include these complex financial issues, too?

Semlies: One way is to segment patients based on who they are at any moment. Patients can move very quickly from a healthy state to a state of chronic disease management. Or they can go from an episode where they need very intense care to quickly being at home. 

First, you need to understand where the patients’ pain points are across those different patient journeys.

Second is studying your data, looking back and analyzing it to understand what those ranges of responsibility look like. Then you can start to articulate and package those things, and you have norms to support early and targeted financial counseling.

The final part is being able to communicate as things change in a person’s course of treatment and affect their financial responsibility. That kind of dialogue in our industry is almost non-existent right now.

Sharing data and dialogue

Among the first things patients do is search based on their insurance carrier. Well, insurance isn’t enough. It’s not enough to know you are going to see doctor so-and-so for x with insurance plan B. You need to know far more than that to really get an accurate sense of what’s going on. Our job is to figure out how to do that for patients.

We have to get really good at figuring out how to deliver the right level of detail on information about you and what you are seeking. We need to know enough about our healthcare system, what the costs are and what the options are, so that we can engage in dialogue.

It could be a digital dialogue, but we have to engage in a dialogue. The reality is we know, even in a digital situation, that patients only want to share a certain amount of information. But they also want accurate information. So what’s the balance? How do you achieve that? I think the next 12 to 18 months are going to be about figuring that out.

Transparency isn’t only posting a set of hospital charges; it’s just not. That’s a step in the right direction. There is now a mandate saying that transparency is important, and we all agree with that. Yet we still need meaningful transparency, which includes the ability to start to control your options and make decisions in association with a patient’s financial health goals, too.

Gardner: So, the right information, to the right person, in the right context, at the right time. To me, that means a conversation based on shared data, because without data all along the way you can’t get the context.

What is the data sharing and access story behind the patient-centric experience story?

Semlies: If we look at the back-end of the journey, one of the biggest problems right now is the difference between an explanation of benefits and a statement. They don’t say the same thing, and they are coming from two different places. It’s very difficult to explain everything to a patient when you don’t have that explanation of benefits (EOB) in front of you. 

What we’re going to see in the next months and years — as more collaboration is needed between payers and health systems and providers — is a new standard around how to communicate. Then we can perhaps have an independent dialogue with a patient about their responsibilities.

But we don’t own the benefits structure. There are a lot of moving parts in there. If we tried to independently control that conversation across health systems, we couldn’t possibly get it right.

So one of the strategies we are pursuing is working with each and every one of our health systems to drive innovation around data sharing and collaboration so that we can get the right answer for a shared patient.

That “consumer” is shared between us as providers and the payer plan that hosts the patient. Then you need to add another layer of complexity around the employer population. Those three players need to work carefully together to be able to solve this problem. It’s not going to be a single conversation.

Gardner: This need to share collaborative data across multiple organizations is a big problem. Julie, how do you see this drive for a customer-centric shared data equation playing out?

Healthy interoperability 

Gerdeman: Technology and innovation are going to drive the future of this. It’s an opportunity for companies to come together. That means interoperability, whether you’re a payments provider like HealthPay24 or you’re providing statement or estimate information. Across those fronts, all of that data relates to one patient. Technology and innovation can help solve these problems.

We view interoperability as the key, and we hear it all the time. Northwell and our other provider customers are asking for that transparency and interoperability. We, as part of that community, need to be interoperable and integrate in order to present data in a simple way that a consumer can understand. 

When you’re a consumer, you want the information that you need at that moment to make a decision. If you can get it proactively — all the better. Underlying all of this, though, is trust. It’s something I like to talk about. Transparency is needed because there is a lack of trust.

Transparency is just part of the trust equation. If you present transparency and you do it consistently, then the consumer — the patient — has trust. They have immediate trust when they walk into a provider or doctor’s office as a patient. Technology has an opportunity to help solve that.

Gardner: Laura, you’re often at the intercept point with patients. They are going to be asking you – the healthcare provider — their questions. They will look to you to be the funnel into this large ecosystem behind the scenes.

What would you like to see more of from those other players in that ecosystem to make your job easier, so that you can provide that right level of trusted care upfront to the patient?

Simplify change and choice

Semlies: Collaboration and interoperability in this space are essential. We need to see more of that.

The other thing that we need — and it’s not necessarily from those players, but from the collective whole — is a sense of modeling “if-then” situations. If this happens, what will then happen?

By leveraging such process components, we can model things really well and in a very sophisticated fashion. And that can work in many areas, with so many choices and paths that you could take. So far, we don’t do any of that in price transparency with our patients. And we need to, because the boundaries are not tight.

What you charge – from copay to coinsurance – can change as you’re moving from observation to inpatient, or inpatient back to observation. It changes the whole balance card for a patient. We need the capability to model that out and articulate the why, how, and when — and then explain what the impact is. It’s a very complicated conversation.

But we need to figure out all of those options along with the drivers of costs. It has to be made simple so that patients can engage, understand, and anticipate it. Then, ultimately, we can explain to them their responsibility.

I often hear that patients are slow to pay, or struggle to pay. Part of what makes them slow to pay is the confusion and complexity around all of this information. I think patients want to pay their share.

Earn patients’ trust

It’s just that the complexity around this makes it difficult, and it creates a friction point that shouldn’t be there. We do have a trust problem from an administrative perspective. I don’t think our patients trust us in regard to the cost of their care, and to what their share of the cost is.

I don’t think they trust their insurers and payers tremendously. So we have to earn trust. And it’s going to mean that we need to be way more accurate and upfront. It’s about the basics. Did you give me a bill that I can understand? Did I have options when I went to pay it? We don’t even do that easy stuff well today.

I used to joke that we should be paying patients to pay us because we made it so difficult. We are now in a better place. We are putting in the foundation so that we can earn trust and credibility. We are beginning the dialogue of, “What do you need as a patient?” With that information, we can go back and create the tools to engage with these patients. 

We have done more than 1,000 hours of patient focus group studies on financial health issues, along with user testing to understand what they need to feel better about their financial health. There is clinical health, there are clinical outcomes — but there is also financial health. Those are new words to the industry.

If I had a crystal ball, I’d say we’re going to be having new conversations around what a patient needs to feel secure, that they understood what they were getting into, and that they knew about their ability to pay it or had other options, too. 

Meet needs, offer options

Gerdeman: Laura made two points that I think are really important. The first is around learning, testing, and modeling — so we can look at the space differently. That means using predictive analytics upfront in specific use cases to anticipate patient needs. What do they need, and what works? 

We can use isolated, specific use-cases to test using technology — and learn. For example, we have offered up-front discounts for patients. If they pay in full, they get a discount. We learned that there are certain cases where you can collect more by offering a discount. That’s just one use-case, but predictive analytics, testing, and learning are the key. 

The second thing that is dead-on is around options. Patients want options. Patients want to know, “Okay, what are my choices?” If that’s an emergency situation, we don’t have the option to research it, but then soon after, what are the choices?

Most American consumers have less than $500 set aside for medical expenses. Do they have the option of a self-service, flexible payment plan? Can they get a loan? What are their options for making an informed choice, perhaps at home at their convenience?

Those are two examples where technology can really help play a role in the future. 

Gardner: You really can’t separate the economics from healthcare. We’re in a new era where economics and healthcare blend together, and the decision-making for both comes together.

We talked about the need for data and how it can help collaboration and process efficiency. It also allows for looking at that data and applying analytics, learning from it, then applying those lessons back. So, it’s a really exciting time.

But I want to pause for a moment. Laura, your title of “Vice President of Digital Patient Experience” is unique. What does it take to become a Vice President of Digital Patient Experience?

Journey to self-service 

Semlies: That is a great question. The Digital Patient Experience Office at Northwell is a new organization inside of the health system. It’s not an initiative- or program-focused office where it’s one and done, where you go in, deliver something, and then you’re done.

We are rallying around the notion that the patient expects to be able to interact with us digitally. To do so we need to transform our entire organization — culturally, operationally, and technically — to be able to accommodate that transformation.

Before this, I was responsible for revenue cycle transformation at Northwell Health, so I do have a financial background. What set me up for pursuing this digital transformation, however, was the recognition that self-service was going to disrupt the traditional revenue cycle. We needed a new capability around self-service that allows consumers to do what they want and need, and to manage their administrative interactions with the health system differently.

See the New Best Practice of Driving Patient Loyalty Through Estimation

I was a constant voice for the last decade in our health system, saying, “We need to do this to our infrastructure so that we can be able to rationalize and standardize our core applications that serve the patient, including the revenue cycle systems, so that we can interoperate in a different way and create a platform by which patients can self-serve.”

And we’re still in that journey, but we’re at a point where we can begin to engage very differently. I’m working to solve three fundamental questions at the heart of the primary pain-points, or friction points, that patients have.

Patients tell us these three things: “You never remember who I am. I have been coming here for the last 10 years and you still ask me for my name, my date of birth, my insurance, my clinical history. You should know that by now.”

Two, they say, “I can’t figure out how to get in to see the right doctor at the right time at the right location for me. Maybe it’s a great location for you, or a great appointment time for you. But what if it doesn’t work for me? How do I fix that?”

And, third, they say, “My bills are confusing. The whole process of trying to pay a bill or get a question answered about one is infuriating.”

Whenever you talk to anyone in our health system — whether it’s our chief patient experience officer, CEO, chief administrative officer, or COO — those are the three things that were also coming out of customer service, Press Ganey [patient satisfaction] results, and complaints. When you have direct conversations with patients, such as through family advisory councils, the complaints weren’t about the clinical stuff. 

Digital tools to ease the pain

It was all about the administrative burden that we were putting on patients, and the anonymity with which patients were walking through our halls. Those are what we needed to focus on first. And so that’s what we’re doing.

We will be bringing out a set of tools so our patients will be able to manage appointments in a systematic way. They will be able to view and manage their appointments online, with the ability to book, change, and cancel anything they need to. They will simply see those appointments, get directions to them, and communicate with the administrative offices.

The second piece of the improvement is around the, “You never remember who I am” problem, where they have been to a doctor and get the blank clipboard to fill out. Then, regardless of whether they were there yesterday or went to see a new doctor, they get the same blank clipboard.

We’re focused on getting away from the clipboard: remembering information and not asking for the same information twice, unless that information may have changed. Instead of a blank form, we present them the opportunity to revise. And they do it remotely, on their time. So we are respecting them by being truly prepared when they come to the office.

The other side of “never remembering who I am” is proper authentication of digital identity. It’s not just attaching a name with the face virtually. You have to be able to authenticate so that information can be shared with the patient at home. It means being able to have digital interactions that are not superficial. 

The third piece [of our patient experience improvement drive] is the online payment portal, for which we use HealthPay24. The vision is not only for patients to be able to pay one bill, but for any party that has a responsibility within the healthcare system — whether it’s a lab, ambulance, hospital, or physician — to be paid in a single transaction using our digital tools. We take it one step further by giving it a retail experience, with features such as saving the card on file, so if you paid a bill last week you shouldn’t have to rekey those digits into the system.

We plan to take it even further. For example, providing more options to pay — whether by a loan, a payment plan, or services such as Apple Pay and Google Pay. We believe these should be table stakes, but we’re behind and are putting in those pieces now just to catch up.

Our longer-term vision goes far deeper. We expect to go all the way back to the point of when patients are just beginning to seek care. How do I help them understand what their financial responsibility and options are at that point, before they even have a bill in our system? This is the early version of digital transformation.

Champion patient loyalty

Gerdeman: Everything Laura just talked about comes down to one word — loyalty. What they are putting in place will drive patient loyalty, just like consumer loyalty. In the retail space we have seen loyalty to certain brands because of how consumers interact with them, as an emotional experience. It comes down to a combination of human elements and technology to create the raving fans, in this case, of Northwell Health.

Gardner: We have seen the digital user experience approach be very powerful in other industries. For example, when I go to my bank digitally I can see all my transactions. I know what my balances are. I can set payment schedules. If I go to my investment organization, I can see the same thing with my retirement funds. If I go to my mortgage holder, same thing. I can see what I owe on my house, and maybe I want a second property and so I can immediately initiate a new loan. It’s all there. We know that this can be done.

Julie, what needs to happen to get that same level of digital transparency and give the power to the consumer to make good choices across the healthcare sector?

Rx: Tech for improved healthcare

Gerdeman: It requires a forward-looking view into what’s possible. And we’re seeing disruption. At the recent HIMSS 2019 conference [in February in Orlando], 45,000 attendees were thinking like champions of healthcare — about what can be done and what’s possible. To me, that’s where you start.

Like Laura said, many are playing catch-up. But we also need to be leapfrogging into the future. What emerging technologies can change the dynamic? Artificial intelligence (AI), for example, and what’s happening there. How can we better leverage predictive analytics? We’re also examining blockchain: What can a distributed ledger do, and what role can it play?

I’m really excited about what’s possible in marrying emerging technology while still solving the nuts and bolts of interoperability and integration. There is hard work in integration and interoperability to get systems talking to one another. You can’t get away from that ugly part of the job, but then there is an exciting, future-facing part of the job that I think is fundamental.

Laura also talked about culture and cultural shift. None of it can happen without an embrace of change management. That’s also hard because there are always people and personalities. But if you can embrace change management along with the technology disruption, new things can happen.

Semlies: Julie mentioned the hard, dirty work behind the scenes. That data work is really fundamental, and that has prevented healthcare from becoming more digital. People are represented by their data in the digital space. You only know people when you understand their data.

In healthcare — at least from a provider perspective — we have been pretty good about collecting information about a patient’s clinical record. We understand them clinically.

We also do a pretty decent job of understanding the patient from a reimbursement and charges perspective. We can get a bill out the door and get the bill paid. Even when a bill isn’t paid at first and it passes to a secondary responsibility, we collect that information and get those bills out. The interaction is there.

What we don’t do well is manage processes across hundreds of systems. There are hundreds of systems in any big healthcare system today. The bridges and connections between those data systems are just not there. So a patient often finds themselves interacting with each and every one of them.

For example, I am a patient as the mom of three kids. I am a patient as the daughter of two aging parents. I am wife to a husband who I am interacting with. And I am myself my own patient. The data that I need to deal with — and the systems I need to interact with — when I am booking an appointment, versus paying a bill, versus looking for lab results, versus trying to look for a growth chart on a child — I am left to self-navigate across this world. It’s very complex and I don’t understand it as a patient. 

Our job is to figure out how to manage tomorrow and the patient of tomorrow who wants to interact digitally. We have to be able to integrate all of these different data points and make that universally accessible.

Electronic medical record (EMR) portals deal more with the clinical interactions. Some have gotten good at doing some of the administrative components, but certainly not all of them. We need to create something that is far broader and has the capability to connect the data points that live in silos today — both operationally as well as technically. This has to be the mandate. 

Open the digital front door

Gardner: You don’t necessarily build trust when you are asking the patient to be the middleware — to be the sneakernet walking between the PC and the mainframe.

Let’s talk about some examples. In order to get cultural change, one of the tried-and-true methods is to show initial progress, have a success story that you can champion. That then leads to wider adoption, and so forth. What is Northwell Health’s Digital Front Door Team? That seems an example of something that works and could be a harbinger of a larger cultural shift.

Semlies: Our Digital Front Door Team is responsible for creating tools and technology to provide a single access point for our patients. They won’t have to have multiple passwords or multiple journeys in order to interact with us.

Over the course of the last year, we’ve established a digital platform that all of our digital technologies and personnel connect to. That last point is really important because when a patient interacts with you digitally, there is a core expectation today that if they have told you something digitally, as soon as they show up in person, you are going to know it, use it, and remember it. The technology needs to extend the conversation or journey of experience as opposed to starting over. That was really critical for our platform to provide.

When a patient interacts with you digitally, there is a core expectation today that if they have told you something digitally, as soon as they show up, you are going to know it and use it. The technology needs to extend the conversation. 

Such a platform should consist of a single sign-on (SSO) capability, an API management tool, and a customer relationship management (CRM) database, from which we can learn all of the information about a patient. The CRM data drives different kinds of experiences that can be personalized and curated, and that data lives in the middle of the two data topics we discussed earlier. We collect that data today, and the CRM tool brokers all of this so it can be in the hands of every employee in the health system. 

The last piece was to put meaningful tools around the friction points we talked about, such as for appointment management. We can see availability of a provider and book directly into it with no middleman. This is direct booking, just like when I book an appointment on OpenTable. No one has to call me back. They may just send a digital reminder.
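As a rough sketch of how those pieces (single sign-on, the CRM, and direct booking) combine, here is a hypothetical Python example; the tokens, names, and slot data are all invented and do not reflect Northwell’s actual systems:

```python
from dataclasses import dataclass

@dataclass
class PatientProfile:
    """What the CRM already knows, so the patient is never asked twice."""
    name: str
    insurance_on_file: bool
    preferred_location: str

# Stand-ins for the CRM database and a provider's open calendar slots.
CRM = {"sso-token-123": PatientProfile("Pat Example", True, "Great Neck")}
OPEN_SLOTS = {"Dr. Smith": ["2019-04-02 09:00", "2019-04-02 14:30"]}

def book_directly(sso_token: str, provider: str, slot: str) -> str:
    """Direct booking: validate identity via SSO, personalize via the CRM,
    and write straight into the provider's availability, with no callback."""
    profile = CRM.get(sso_token)
    if profile is None:
        raise PermissionError("Unknown identity: sign in first")
    if slot not in OPEN_SLOTS.get(provider, []):
        raise ValueError("Slot no longer available")
    OPEN_SLOTS[provider].remove(slot)
    return f"Booked {provider} at {slot} for {profile.name}"

print(book_directly("sso-token-123", "Dr. Smith", "2019-04-02 09:00"))
# -> Booked Dr. Smith at 2019-04-02 09:00 for Pat Example
```

The design point the sketch illustrates is the OpenTable-style flow Semlies describes: a signed-in, recognized patient writes directly into real availability, so no one has to call back.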

Gardner: And how has the Digital Front Door Team worked out? Do you have any metrics of success?

Good for patients, good for providers

Semlies: We took an agile approach to implementing it. Our first component was putting in the online payment capability with HealthPay24 in July 2018. Since then, we have collected approximately $25 million. In just the last six months, there have been more than 46,000 transactions. In December, we began a sign-in benefit so patients can log in and see all of their balances across Northwell.

We had 3,000 people sign in to that process in the first several weeks, and we’re seeing evidence that our collections are starting to go up.

We implemented our digital forms tool in September 2018. We collected more than 14,000 digital forms in the first few months. Patients are loving that capability. The next version will be an at-home version so you will get text messages saying, “We see you have booked an appointment. Here are your forms to prepare for your visit.” They can get them all online. 

We are also piloting biometrics, so when you first show up at your appointment you will have the opportunity to have your picture taken. It’s iris-scanning and deep facial recognition technology, so that will be the method of authentication. That will also be used more over time for self check-ins, and eventually to access the full portal.

The intent was to deploy as soon as there was value to the patient. Then, over time, all of those services will be connected as a single experience. Next to come is improved appointment management, with the capability to book appointments online, as well as to change, manage, and see all appointments via a connection to the patient portal.

All of those connection points will be rendered through the same single sign-in by the end of this quarter, both on our website and via a proprietary mobile app that will come out in the app stores.

Gardner: Those metrics and rapid adoption show that a good patient experience isn’t just good for the patient — it’s good for the provider and across the entire process. Julie, is Northwell Health unique in providing the digital front door approach?

Gerdeman: We are seeing more healthcare providers adopt this approach, with one point of access into their systems, whether you are finding a doctor or paying a bill. We have seen in our studies that seven out of 10 patients only go to a provider’s website to pay a bill.

From a financial perspective, we are working hard with leaders like Laura whose new roles support the digital patient experience. Creating that experience drives adoption, and that adoption drives improved collections. 

Ease-of-use entertains and retains clients

Semlies: This channel is extremely important to us from a patient loyalty and retention perspective. It’s our opportunity to say, “We have heard you. We have an obligation to provide you tools that are convenient, easy to use, and, quite frankly, delight you.”

But the portal is not the only channel. We recognize that we have to be in lots of different places from the adoption perspective. The portal is not the only place every patient is going. There will be opportunities for us to populate what I refer to as the book-now button. And the book-now button cannot be exclusive to the Northwell digital front door. 

View More Provider Success Stories on Driving Usage, Adoption, and Loyalty Among Patients

I need to have that book-now button in the hands of every payer agent who is on the phone talking to a patient, or in their digital channel or membership. I need to have it out in the Zocdocs of the world, and in any other open scheduling application out there.

I need to have ratings and reviews. We need to be multichannel in our funnel, but once we get you in, we have to give you tools and resources that surprise and delight you and make re-engagement with somebody else harder, because we make it so easy for you to use our health system.

And we have to be portable so you can take it with you if you need to go somewhere. The concept is that we have to be full service, and we want to give you all of the tools so you can be happy about the service you are getting — not just the clinical outcome but the administrative service, too.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HealthPay24.

You may also be interested in:

How HPC supports ‘continuous integration of new ideas’ for optimizing Formula 1 car design

The next BriefingsDirect extreme use-case for high-performance computing (HPC) examines how the strictly governed redesign of Formula 1 race cars relies on data center innovation to coax out the best in fluid dynamics analysis and refinement.

We’ll now learn how Alfa Romeo Racing (formerly Alfa Romeo Sauber F1 Team) in Hinwil, Switzerland leverages the latest in IT to bring hard-to-find but momentous design improvements — from simulation, to wind tunnel, to test track, and ultimately, to victory. The goal: To produce cars that are glued to the asphalt and best slice through the air.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to describe the challenges and solutions from the compute-intensive design of Formula 1 cars is Francesco Del Citto, Head of Computational Fluid Dynamics Methodology for Alfa Romeo Racing, and Peter Widmer, Worldwide Category Manager for Moonshot/Edgeline and Internet of Things (IoT) at Hewlett Packard Enterprise (HPE). The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Why does Alfa Romeo Racing need to prepare for another car design?

Del Citto: Effectively, it’s a continuous design process. We never stop, especially on the aerodynamic side. And what every Formula 1 team does is dictated by each race season and by the specific planning and concept of your car in terms of performance. 

For Formula 1 racing, the most important and discriminating factor in terms of performance is aerodynamics. Every Formula 1 team puts a lot of effort in designing the aerodynamic shape of their cars. That includes for brake cooling, engine cooling, and everything else. So all the airflow around and inside of the car is meticulously simulated to extract the maximum performance.

Gardner: This therefore becomes as much an engineering competition as it is a racing competition.

Engineered to race

Del Citto: Actually, it’s both. On the track, it’s clearly a racing competition between drivers and teams. But before you ever get to the track, it is an engineering competition in which the engineers both design the cars as well as the methods used to design the cars. Each Formula 1 team has its own closely guarded methodologies and processes – and they are each unique.

Gardner: When I first heard about fluid dynamics and aerodynamic optimization for cars, I was thinking primarily about reduction of friction. But this is about a lot more, such as cooling — and also making the car behave like an inverted airplane wing.

Tell us why the aerodynamic impacts are much more complicated than people might have appreciated.

Del Citto: It is very complicated. Most of the speed and lap-time reductions you gain are not on the straightaways. You gain over your competitors in how the car behaves in the corners. If you can increase the force of the air acting on the car — to push the car down onto the ground — then you have more force preventing the car from moving out of line in the corners.

Why use the force of the air? Because it is free. It doesn’t come with any extra weight. But it is difficult to gain such extra control forces. You must generate them in an efficient way, without being penalized too much by friction.

Learn How High-Density HPC Doubles Throughput While Slashing Energy Use

It’s also difficult to generate such forces without breaking the rules, because there are rules. There are limits for designing the shapes of the car. You cannot do whatever you want. Still, within these rules, you have to try to extract the maximum benefits. 

The force the car generates is called downforce, which is the opposite of the lift force from the design of an airplane. An airplane has wings designed to lift. The racing car is designed to be pushed down to the ground. The more you can push it to the ground, the more grip you have between the tires and the asphalt — and the faster you can go in the corners before the friction gives up and you just slide.

Gardner: And how fast do these cars go nowadays?

Del Citto: They are very fast on the straight, around 360-370 km/hour (224-230 mph), especially in Mexico City, where the air is thin due to the altitude. You have less resistance and they have a very long straight there, so this is where you get the maximum speeds. 

But what is really impressive is the corner speed. In the corners you can now have a side acceleration force that is four to five times the force of gravity. It’s like being in a jet fighter plane. It’s really, really high.

Widmer: They wear their safety belts not only to hold them in, in case of an accident, but also for when they brake and steer. Otherwise, they could be catapulted out of the car, because the forces are close to 5 G. The efficiency of the car is really impressive, and not only from the acceleration or high speeds. The other, invisible forces also differentiate a Formula 1 car from a street car.
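The cornering numbers quoted here follow from back-of-the-envelope physics. A sketch, assuming a simple friction model; the grip coefficient and downforce-to-weight ratio are representative guesses, not Alfa Romeo Racing's real figures:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def lateral_g(tire_mu: float, downforce_to_weight: float) -> float:
    """Peak lateral acceleration in multiples of gravity.

    At the grip limit, F_lateral = mu * (weight + downforce), so
    a_lateral / g = mu * (1 + downforce / weight).
    """
    return tire_mu * (1.0 + downforce_to_weight)

def corner_speed_kmh(radius_m: float, lat_g: float) -> float:
    """Maximum speed through a corner of a given radius: v = sqrt(a * r)."""
    return math.sqrt(lat_g * G * radius_m) * 3.6

# A road car: modest tires, no downforce -> about 0.9 g.
road_car = lateral_g(0.9, 0.0)

# An F1 car at speed: sticky tires plus downforce of roughly twice the
# car's weight (assumed values) -> about 5.1 g.
f1_car = lateral_g(1.7, 2.0)

# Through a hypothetical 100 m radius corner: roughly 107 km/h vs. 255 km/h.
```

With sticky tires (mu around 1.7) and downforce of about twice the car's weight, the model lands at roughly 5 g of lateral acceleration, consistent with the "four to five times the force of gravity" described above. Downforce, unlike added mass, raises grip without raising the inertia that must be restrained in the corner.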

Gardner: Peter, because this is an engineering competition, we know the simulations result in impactful improvements. And that then falls back on the performance of the data center and its level of innovation. Why is the high-performance computing environment such an essential part of the Formula 1 team?


Widmer: Finding thousandths of a second on the racetrack — where a lap time can be one minute or less — pushes the design of the cars to the extreme edge. Finding the best design solution requires computer-aided design (CAD) guidance, and that’s where the data center plays an important part.

Those computational fluid dynamics (CFD) simulations take place in the data center. That’s why we are so happy to work together with Alfa Romeo Racing as a technology partner.

Gardner: Francesco, do you have constraints on what you can do with the computers as well as what you can do with the cars?

Limits to compute for cars

Del Citto: Yes, there are limits on all aspects of the car’s design, and especially in aerodynamic research. That’s because aerodynamics is where you can extract more performance — but it’s also where you can spend more money.

The Formula 1 governing body, the FIA, a few years ago put in place ways of controlling the money spent on aerodynamic research. Instead of imposing a budget cap, they decided to put a limit on the resources you can use. Those resources are both the wind tunnel and computational fluid dynamics, and it’s a tradeoff between the two. The more wind tunnel time you use, the less computational power you can use, and vice versa. So each team has its sweet spot, depending on its strategy.

You have restrictions on how much computational capacity you can use to solve your simulations. You can do a lot of pre-processing and post-processing, but you cannot extract too much from that. The solver phase — the part that tells you the performance of the new car design — is what is limited.

Gardner: Peter, how does that translate into an HPE HPC equation? How do you continuously innovate to get the most from the data center, but without breaking the rules?

Widmer: We work with a competency center on the HPC to determine the right combination of CPU, throughput, and whatever it takes to get the end results, which are limited by the regulations.

We are very open on the platform requirements for not only Alfa Romeo Racing, but for all of the teams, and that’s based on the most efficient combination of CPU, memory, networking, and other infrastructure so that we can offer the CFD use-case.

It takes know-how about how to tune the CPUs, about the specifics of the CFD applications, and knowledge of the regulations formula, which then leads us to success in CFD for Formula 1.

Gardner: Let’s hear more about that recipe for success. 

Memory makes the difference

Widmer: It’s an Intel Skylake CPU with onboard graphics. The graphics obviously are not used for the CFD use-case, but we do use their embedded memory as a level-four cache. That provides extra performance that does not come from the CPU, which is regulated. Due to the high-density packaging of the HPE Moonshot solution — where we can put 45 compute nodes in a 4.3U rack chassis — this is quite compact. And it tops out at about 5,000-plus cores.

Del Citto: Yes, 5,760 cores. As Peter was saying before, the key factor here is the software. There are three main CFD software applications used by all the Formula 1 teams. 

The main limitation for this kind of software is always memory bandwidth, not computational power. It’s not about clock speed or frequency. This is why the level-four cache gives the extra performance, even compared to a higher-spec Intel server CPU. The lower-spec, low-energy CPU version gives us the extra performance we need because of the extra memory cache.
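A quick sanity check of the density arithmetic. The 45 nodes per chassis and 5,760 total cores come from the conversation; the four cores per node is an assumption, based on the four-core Xeon E3-class parts (Skylake with on-package eDRAM acting as the level-four cache) typically used in Moonshot cartridges:

```python
NODES_PER_CHASSIS = 45   # from the transcript: 45 compute nodes per chassis
TOTAL_CORES = 5760       # Del Citto's figure

# Assumption: each Moonshot cartridge carries a 4-core Xeon E3-class CPU.
CORES_PER_NODE = 4

cores_per_chassis = NODES_PER_CHASSIS * CORES_PER_NODE  # 180
chassis_count = TOTAL_CORES // cores_per_chassis        # 32

# The totals divide evenly, which is consistent with the assumed core count.
assert TOTAL_CORES % cores_per_chassis == 0
print(f"{chassis_count} chassis x {cores_per_chassis} cores = {TOTAL_CORES} total")
```

Under that assumption, the installation works out to 32 chassis of 180 cores each; the exact CPU SKU is not named in the interview, so treat the per-node count as illustrative.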

Gardner: And this isn’t some workload you can get off of a public cloud. You need to have this on-premises? 

Del Citto: That’s right. The HPC facility is completely owned and run by us for the Formula 1 team. It’s used for research and even for track analysis data. We use it for multiple purposes, but it’s fully dedicated to the team.

It is not in the cloud. We have designed a building with a lot of electricity and cooling capacity. Consider that the wind tunnel fan — only the fan — uses 3 megawatts. We need to have a lot of electricity there.

Gardner: Do you use the wind tunnel to cool the data center?

Del Citto: Sort of. We use the same water to cool the wind tunnel and the data center. But the wind tunnel has to be cooled because you need the air at a constant temperature to have consistent tests.

Gardner: And Peter, this configuration that HPE has put together isn’t just a one-off. You’re providing the basic Moonshot design for other Formula 1 teams as well?

A winning platform

Widmer: Yes, the solution and fit-for-regulations design was so compelling that we managed to get 6 out of 10 teams to use the platform. We can say that at least the first three teams are on our customer list. Maybe the other ones will come to us as well, but who knows?

We are proud that we can deliver a platform to a sport known for such heavy competition and that is very technology-oriented. It’s not comparable to any other sport because you must consistently evolve, develop, and build new stuff. The evolution never stops in Formula 1 racing.

For a vendor like HPE, it’s really a very nice environment. If they have a new idea that can give a team a small competitive advantage, we can help them do it. And that’s been the case for 10 years now.

Let’s figure out how much faster we can go, and then let’s go for it. These teams are literally open-minded to new solutions, and they are eager to learn about what’s coming down the street in technology and how could we get some benefits out of it. So that’s really the nice story around it.

These teams are literally open-minded to new solutions, and they are eager to learn about what’s coming down the street in technology and how they could get benefits out of it. That’s the nice story around it.

Gardner: Francesco, you mentioned this is a continuous journey. You are always looking for new improvements, and always redesigning.

Now that you have a sophisticated HPC environment for CFD and simulations, what about taking advantage of HPC data center for data analysis? For using artificial intelligence (AI) and machine learning (ML)?

Is that the next stage you can go to with these powerful applications? Do you further combine the data analysis and CFD to push the performance needle even further?

Del Citto: We generate tons of data — from experiments, the wind tunnel, the CFD side, and from the track. The cars are full of sensors. During a practice run, there are hundreds of pressure sensors around the car. In the wind tunnel, there are 700 sensors constantly running. So, as you can imagine, we have accumulated a lot of data.

Now, the natural step will be how we can use it. Yes, this is something everyone is considering. I don’t know where this will bring us. There is nothing else I can comment on at the moment.

Gardner: If they can put rules around the extent to which you can use a data center for AI, for example, it could be very powerful.

Del Citto: It could be very powerful, yes. You are suggesting something to the rule-makers now. Obviously, we have to work with what we have now and see what will come next. We don’t know yet, but this is something we are keeping our eyes on, yes.

Gardner: Good luck on your redesign for the 2019 season of Formula 1 racing, which begins in March 2019.

Widmer: Thanks a lot.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

Data-driven and intelligent healthcare processes improve patient outcomes while making the IT increasingly invisible

The next BriefingsDirect intelligent solutions discussion explores how healthcare organizations are using the latest digital technologies to transform patient care and experiences.

When it comes to healthcare, time is of the essence and every second counts, but healthcare is a different game today. Doctors and clinicians, once able to focus exclusively on patients, are now being pulled into administrative tasks that can eat into their capability to deliver care.

To give them back their precious time, innovative healthcare organizations are turning to a new breed of intelligent digital workspace technologies. We are now joined by two leaders who will share their thoughts on how these solutions change the game and help transform healthcare as we know it.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

Please welcome Mick Murphy, Vice President and Chief Technology Officer at WellSpan Health, and Christian Boucher, Director and Strategist-Evangelist for Healthcare Solutions at Citrix. The interview is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

An integrated healthcare system with more than 19,000 employees serving Central Pennsylvania and Northern Maryland, WellSpan Health consists of 1,500 physicians and clinicians, a regional behavioral health organization, a homecare organization, eight respected hospitals and more than 170 patient care locations.

Here are some excerpts:

Gardner: Christian, precision medicine is but one example of emerging trends that target improved patient care, in this case specifically to treat illnesses with more specialized and direct knowledge. What is it about intelligent workspace solutions that help new approaches, such as precision medicine, deliver more successful outcomes?

Boucher: We investigated precision medicine to better understand how such intricate care was being delivered. Because every individual is different — they have their own needs, whether on the medication side, the support side, or the genomic side — physicians and healthcare are beginning to identify better ways to treat patients as a customized experience. This comes not only in the clinical setting, but also when the patients get home. Knowing this helped us formulate our concept of the intelligent workspace.


So, we decided to look at how users consume resources. As an IT organization — and I supported a healthcare organization for 12 years before joining Citrix — it was always our role to deliver resources to our users and anticipate how they needed to consume them. It’s not enough to define how they utilize those resources, but to identify how and when they need to access them, and then to make it as simple and seamless as possible.

With the intelligent workspace we are looking to deliver that customized experience to not only the organizations that deploy our solutions, but also to the users who are consuming them. That means being able to understand how and where the doctors and nurses are consuming resources and being able to customize that experience in real-time using our analytics engines and machine learning (ML). This allows us to preemptively deliver computing resources, applications, and data in real-time.

For example, when it comes to walking into a clinic, I can understand through our analytics engine that we will need for this specific clinic to utilize three applications. So before that patient walks in, we can have those apps spinning up in the background. That helps minimize the time to actual access.

Every minute we can subtract from a technology interaction is another minute we can give back to our clinicians to work with the patients and spend direct healthcare time with them.
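The pre-launch idea Boucher describes can be sketched as a simple scheduler: given a clinic's appointment feed and the applications the analytics engine has learned that clinic uses, start warming sessions a few minutes before each patient arrives. This is an illustrative sketch only; the clinic names, app profiles, and appointment structure are invented stand-ins, not any specific Citrix API.

```python
from datetime import datetime, timedelta

# Hypothetical mapping learned by an analytics engine:
# which applications each clinic typically needs.
CLINIC_APP_PROFILE = {
    "cardiology": ["ehr", "imaging-viewer", "rx-manager"],
    "primary-care": ["ehr", "scheduling"],
}

WARMUP_LEAD = timedelta(minutes=5)  # start apps this long before arrival


def apps_to_prelaunch(appointments, now):
    """Return (clinic, app) pairs whose sessions should be warming now.

    `appointments` is a list of (arrival_time, clinic) tuples, an
    illustrative stand-in for a real scheduling feed.
    """
    pending = []
    for arrival, clinic in appointments:
        if now <= arrival <= now + WARMUP_LEAD:
            for app in CLINIC_APP_PROFILE.get(clinic, []):
                pending.append((clinic, app))
    return pending


now = datetime(2019, 6, 1, 9, 0)
schedule = [
    (datetime(2019, 6, 1, 9, 3), "cardiology"),     # arrives in 3 minutes
    (datetime(2019, 6, 1, 10, 0), "primary-care"),  # too far out to warm yet
]
print(apps_to_prelaunch(schedule, now))
```

In a real deployment the learned profile and the lead time would come from the analytics engine rather than being hard-coded, but the shape of the decision is the same: spend idle minutes spinning up what the next visit will need.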

Gardner: Understanding the context and the specific patient in more detail requires bringing together a lot of assets and resources on the back end. But doing so proactively can be powerful and ultimately reduces the complexity for the people on the front lines. 

Mick, WellSpan Health has been out front on seeking digital workspace technology for such better patient outcomes. What were some of the challenges you faced? Why did you need to change the way things were?

IT increases doctor-patient interaction 

Murphy: There are a couple of things that drive us. One is productivity and giving time back to clinicians so that they can focus on patients. There is a lot of value in bringing more information to the clinical space. The challenge is that the physicians and nurses can end up interacting more with computers than with the patients.


We don’t think about this in minutes, but in 9-second increments. That may sound a little crazy, but when we talk about a 15-minute visit with a primary care doctor, that’s 900 seconds. And if you think about 9 seconds, that’s 1 percent of that visit.

We are looking to give back multiple percentage points of such a visit so that the physician is not interacting with the computer, they are interacting directly with the patient. They should be able to quickly get the information they need from the medical record and then swivel back and focus directly on the patient.

Gardner: It’s ironic that you have to rely on digital technology and integration — pulling together disparate resources and assets — in order to then move past interacting with the computers. 

Murphy: Optimally the technology fades into the background. Many of us in technology may like to have the attention, but at the end of the day if the technology just works, that’s really what we are striving for. 

We want to make sure that as soon as a physician wants something — they get it. Part of that requires the ability to quickly get into the medical record, for example. With the digital workspace, an emphasis for us was improve on our old systems. We were at 38 seconds to get to the medical records, but we have been able to cut that to under 4 seconds.

This gets back to what Christian was talking about. We know when a physician walks into an exam room that they are going to need to get into the medical records. So we spin up a Citrix session in advance. We have already connected to the electronic health records (EHRs). All we are waiting for is the person to walk in the door. And as soon as they drop their ID badge onto a reader they are quickly and securely into that electronic medical record. They don’t spend any time doing searches, and whatever applications are needed to run are ready for them.
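The badge-tap flow above can be modeled as a tiny session broker: sessions are pre-warmed per exam room, and a badge tap attaches the clinician to the waiting session instead of starting a fresh login. All class, room, and session names below are illustrative assumptions, not WellSpan's or Citrix's actual components.

```python
class SessionBroker:
    """Minimal sketch of badge-tap session roaming."""

    def __init__(self):
        self.prewarmed = {}   # room -> id of a warmed EHR session
        self.attached = {}    # badge id -> session id

    def prewarm(self, room, session_id):
        """Register a session already spun up for a given exam room."""
        self.prewarmed[room] = session_id

    def badge_tap(self, badge_id, room):
        """Attach the badge holder to the room's warmed session, if any."""
        session = self.prewarmed.pop(room, None)
        if session is None:
            session = f"cold-start-{room}"  # fall back to a fresh login
        self.attached[badge_id] = session
        return session


broker = SessionBroker()
broker.prewarm("exam-3", "ehr-session-42")
print(broker.badge_tap("dr-lee", "exam-3"))   # attaches instantly
print(broker.badge_tap("dr-kim", "exam-7"))   # no warm session: cold start
```

The pre-warm step is what buys the sub-four-second entry Murphy cites; the tap itself only has to transfer an already-running session.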

Gardner: Christian, having technology in the background, anticipating needs, operating in the context of a process — this is all complex. But the better you do it, the better the outcome in terms of the speed and the access to the right information at the right time.

What goes on in the background to make these outcomes better for the interactions between the physicians, clinicians, and patients?

Boucher: For years, IT has worked with physicians and clinicians to identify ways to increase productivity and give time back to focus more on patient care. What we do is leverage newer technologies. We use artificial intelligence (AI), ML, and analytics to drill down deeper into what is happening — and not only on a generic physician workflow.

It can’t just be generic. There may be 20 doctors at a hospital, but they all work differently. They all have preferences in how they consume information, and they perform at different levels, depending on the technology they interact with. Some doctors want to work with tablets and jump from one screen to the next depending upon their specific workflow. Others are more comfortable working with full-on laptops or working on a desktop.

We have to understand that and deliver an experience that each clinician can decide is best-suited for their specific work style. This is really key. If you go from one floor to another in a hospital and watch how nurses work differently — from the emergency room to the neonatal intensive care unit (NICU) — the workflows are considerably different. 

Not only do we have to deliver those key applications, we have to be mindful of how each of those different groups interacts with the technologies. It’s not just applications. It’s not just accessing the health record. It’s hardware, software, and location.

We have to be able to not only deliver those experiences but predict in real-time how they will be consumed to expedite the processes for them to get back into the patient-focused arena. 

Work outside the walls

Murphy: That’s a great point. You mentioned tablets. I don’t know what it is about physicians, but a lot of their kids seem to swim. So a lot of our doctors spend time at swim meets. And if you are on-call and are at a swim meet, you have a lot of time when your child is not in the pool. We wanted to give them secure access [to work while at such a location]. 

It’s really important, of course, that we make sure that medical records are private and secure. We are now able to say to our physicians, “Hey, grab your tablet, take it with you to the swim meet. You will be able to stay at the swim meet if you get a call or you get paged. You will be able to pop out that tablet, access the medical records – and all of that access stays inside of our data center.”

All they are looking at is a pretty picture of what’s going on inside the data center at that point. And so that prescription refill that’s urgent, they are able to handle that there without having to leave and take time away from their kids. 

We are able to improve the quality of life for our physicians because they are under intense pressure with healthcare the way it is today.

Boucher: I agree with that. As we look at how work is being done, there is no predefined workspace anymore — especially in healthcare where you have these on-call physicians.

Just look at business operations as well. We are able to offset internal resources for billing. The work does not just get done in the hospital anymore. We are looking for ways to extend that same secure delivery of applications and data outside the four walls, especially if you have 19 hospitals. 

As you find leverage points for organizations to be able to attract resources that may not fall inside the hospital walls, it’s key for us to be more flexible in how we can allow organizations to deliver those resources outside those walls.

Gardner: Christian, you nailed it when you talked about how adoption is essential. Mick, how have digital workspace solutions helped people use these apps? What are the adoption patterns now that you can give flexibility and customize the experience?

Faster workflow equals healthier data

Murphy: Our adoption is pretty strong. I will be clear: it's required that you interact with electronic health records. There isn't really an option to opt out. But what we have seen is that by making this more effective and faster, we have seen better compliance with things like securing workstations. Going back to privacy, we want to make sure that electronic health data is protected.

And if it takes me too long to get back into a work context, well, then I may be tempted to not lock that workstation when I step away for just a moment. And then that moment can become an extended period, and that would be dangerous for us. Knowing that I am going to get back to where I was in less than four seconds — and I am not even going to have to do anything other than touch my badge to get there — means we see that folks secure their workstations with great frequency. So we feel we are safer than we were. That's a major improvement. 

Gardner: Mick, tell us more about the way you use workspaces to allow people to authenticate easily regardless of where they are.

Murphy: We have combined a couple of technologies. We use smart badges with a localized reader, with the readers scattered about for the folks who need to touch multiple workstations. 

Myself as an executive, for example, I can log into one machine by typing in my password. But for clinicians going from place to place, we have them log in once a day, and then, as long as they retain their badge, all they have to do is touch it to a reader and it drops them right back into their workspace. 

Gardner: We began our conversation talking about precision medicine, but there are some other healthcare trends afoot now, too. Transparency about the financial side of healthcare interactions is increasingly coming into play, for example.

We have new kinds of copays and coinsurance, and it’s complex. Physicians and clinicians are being asked more to be part of the financial discussion with patients. That requires a whole new level of integration and back-end work to make that information available in these useful tools. 

Taking care of business

Murphy: That is a big challenge. It’s something we are investing in. Already we are extending our website to allow patients to get on and say, “Hey, what’s this going to cost?” What the person really wants to know is, “What are my out-of-pocket costs going to be?” And that depends on that individual. 

We haven’t automated that yet end-to-end, but we have created a place where a patient can come on and say, “Hey, this is what I am going to need to have done. Can you tell me what it’s going to cost?”

We actually can turn that back around [with answers]. We have to get a human being involved, but we make that available either by phone or through our website. 

Gardner: Christian, we are seeing that the digital experience and the workspace experience in the healthcare process are now being directed back to the patient, for patient experience and digital experience benefits. Is the intelligent workspace approach that provider organizations like WellSpan are using putting them in an advantageous position to extend the digital experience to the patient — wherever they are — as well as to clinicians and physicians?

Boucher: In some scenarios we have seen some of our customers extend some of Citrix's resources outside to customers. A lot of electronic health records now include patient portals as part of their ecosystem. We see a lot of customers leveraging that side of the house for electronic health records.

We understand that regardless of the industry, the finance side and the back-office side play a major role in any organization’s offerings. It’s just as important to be able to get paid for something as it is to deliver care or deliver any resources that your organization may deliver. 

So one of the key aspects for us was understanding how the workspace approach transforms over the next year. Some of the things we are doing on our end is to look at those extended workflows.

We made an acquisition in the last six months, a software company, [Sapho], that essentially creates micro apps. What that really means is we may have processes on ancillary systems: they could be software-as-a-service (SaaS)-based, they could be web-based applications, they could be on-premises installations of old client/server technologies. This technology allows us to create micro experiences within the application.

So say a process for verifying billing takes seven steps: you have to log in to a system, navigate through menus, and then you get to the point where you can hit a button to say, "Okay, this is going to work."

What we have done is take that entire workflow — maybe it’s 10 clicks, plus a password — and create a micro app that goes through that entire process and gives you a prompt to do it all in one or two steps. 

So every application that we can integrate with — and there are 150 or so — we can take those workflows, which in some cases take five minutes to walk through, and turn them into a 30-second interaction with [the Sapho] technology. Our goal is to look beyond just general workflows and extend that out into these ancillary programs, where normal everyday activities don't need to take as long as they do, and simplify that process for our end users to really optimize their workflows during the day.
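Conceptually, a micro app of this kind can be modeled as a recorded sequence of steps replayed in one action, with only the final confirmation surfaced to the user. The step functions and field names below are invented for illustration; they are not Sapho's actual API.

```python
def run_micro_app(steps, context):
    """Execute each recorded step in order, threading a shared context through."""
    for step in steps:
        context = step(context)
    return context


# A few of the manual steps (login, menu navigation, record lookup,
# confirmation), collapsed into one replayable sequence:
billing_verification = [
    lambda ctx: {**ctx, "logged_in": True},
    lambda ctx: {**ctx, "menu": "billing"},
    lambda ctx: {**ctx, "record": ctx["account"]},
    lambda ctx: {**ctx, "verified": True},
]

result = run_micro_app(billing_verification, {"account": "A-1001"})
print(result["verified"])  # True: one tap instead of a multi-click walk-through
```

The user sees only the starting prompt and the final result; every intermediate click lives inside the recorded step list.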

Gardner: This sounds like moving in the direction of simplified processes: using software robots as a way of automating things, compressing the time required, and simplifying things — all at the same time.

Murphy: It’s fascinating. It sounds like a great direction. We are completely transparent, and that’s a future for us. It sounds like I need to get together with Christian after this interview.

Gardner: Let’s revisit the idea of security and compliance. Regulations are always there, data sharing is paramount, but protecting that data can be a guard rail or a limiter in how well you can share information.

Mick, how are you able to take that patient experience with the clinician and enrich it with all the data and resources you can regardless of whether they are at the pool, at home, on the road, and yet at the same time have compliance and feel confident about your posture when it comes to risk?

Access control brings security, compliance

Murphy: We feel pretty good about this for a couple of reasons. One is, as I mentioned, the application is still running in our data center. The next question is, "Well, who can get access to that?"

One way is strong passwords, but as we all know with phishing those can be compromised. So we have gone with multifactor authentication. We feel pretty good about remote access, and once you have access we are not letting you pull stuff down onto your local device. You are just seeing what’s on the screen, but you are not pulling files down or anything of that nature. So, we have a lot of confidence in that approach.

Boucher: Security is always a moving target, and there may be certain situations when I access technology and I have full access to do what I please. I can copy and paste out of applications, I can screenshot, and I may be able to print specific records. But there may be times within that same workflow — but a different work style — where I may be remote-accessing technologies and those security parameters change in real-time.

As an organization, I don’t feel comfortable allowing user X to be able to print patient records when they are not on a trusted network, or they are working from home, or on an unknown device.

So we at Citrix understand those changing factors, and we understand that our border now is the Internet. If we are allowing access from home, we are extending our resources out to the Internet. So it really gives us a lot more to think about.

We have built into our solutions granular control that uses ML and analytics solutions. When you access something from inside the office, you have a certain amount of privileges as the end user. But as soon as you extend out that same access outside of the organization, in real-time we can flip those permissions and stop allowing users to print or screenshot or copy and paste between applications.
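The permission flipping Boucher describes amounts to evaluating policy per access context rather than per login. A minimal sketch, assuming a simple trusted-network/managed-device model (the roles and capability names are invented, not Citrix's actual policy schema):

```python
# Capability sets for the two illustrative trust levels.
FULL = {"view", "print", "copy_paste", "screenshot"}
RESTRICTED = {"view"}  # screen-only access for untrusted contexts


def effective_permissions(user_role, on_trusted_network, managed_device):
    """Return the capability set for this specific access context.

    The same user gets a different set depending on where and how
    they connect, evaluated on every request.
    """
    if user_role != "clinician":
        return set()
    if on_trusted_network and managed_device:
        return FULL
    return RESTRICTED


print(effective_permissions("clinician", True, True))    # full capability set
print(effective_permissions("clinician", False, True))   # view only
```

A production engine would add many more signals (device posture, geolocation, behavioral analytics), but the key property is the same: the decision is recomputed as the context changes, invisibly to the end user.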

And that’s invisible to the end-user. It all happens in the back-end in real-time. Security is something that we take extremely seriously at Citrix, and we understand that our customers do as well. So, giving them those controls allows them to be a lot more nimble in how they deploy solutions.

Murphy: I agree with that. Another thing that we like to do is have technology control and help people be safe. A lot of this isn’t about the bad actor, it’s about somebody who’s just trying to do the right thing — but they don’t realize the risk that they are taking. We like to put in technology safeguards. For example, if you are working at home, you are going to have some guardrails around you that are tighter than the guardrails when you are standing in our hospital. 

Gardner: Let’s revisit one of our core premises, which is the notion of giving time back to the clinicians, to the physicians, to improve the patient experience and outcomes. Do you have other examples of intelligent and digital workspace solutions that help give time back? Are there other ways that you’re seeing the improvement in the quality of care and the attention span that can be directed at that patient and their situation?

The top of your license

Murphy: We talk a lot in healthcare about working at the top of your license. We try and push tasks to the least skill level needed in order to do something. 

When you come in for a visit, rather than having the physician look up your record, we have the medical assistant who rooms you ask why you are there. They open the record, ask you a few questions, and get all of that in place. Then they secure the workstation. So it's locked when the physician walks in, and they drop their badge and get right into the electronic medical record in four seconds.

That doctor can then immediately turn to you and say, “Hey, how are you doing today? What brings you in?” And they can just go right into the care conversation. The technology tees everything up so that the focus is on the patient.

Gardner: I appreciate that because the last thing I want to do is fill out another blank clipboard, telling them my name, my age, date of birth, and the fact that I had my appendix out in 1984. I don’t want to do that four times in a day. It’s so great to know that the information is going to follow me across the process.

Murphy: And, Dana, you are better than me, because I had a tonsillectomy in ‘82, ‘83, ‘84? It depends on which time I answered the survey correctly, right?

All systems go with spatial computing 

Boucher: As we look forward a few years, that tighter integration between the technologies and our clinicians is going to become more intertwined. We will start talking about spatial computing and these new [augmented reality] interfaces between doctors and health records systems or ambulatory systems. Spatial computing can become more of a real-time factor in how care is delivered.

And these are just some of the things we are talking about in our labs, in better understanding how workflows are created. But imagine being able to walk into a room with no more than a smart watch on my wrist that’s essentially carrying my passport and being able to utilize proximity-based authentication into those systems and interact with technology without having to login and do all the multifactor authentications.

And then take a step further by having these interfaces between the technology in the room, the electronic records, and your bed-flow systems. So as soon as I walk into a room, I no longer have to navigate within the EHR to find out which patient is in the room. By them being in the room and interfacing with bed flow, or having a smart patient ID badge, I can automatically navigate to that patient in real-time.

In reality, I am removing all of the administrative tasks from a clinician's workflow. Whether it's Internet of Things (IoT)-based devices or smart devices in rooms, they will help complete half of that workflow for you before you even step in.

Those are some of the things we look at for our intelligent workspace in our micro app design and our interfaces across different applications. Those are the kind of ways that we see our solutions being able to help clinicians and organizations deliver better care.

Gardner: And there is going to be ever-more data. It’s always increasing, whether it’s genomic information, a smart device that picks up tracking information about an individual’s health, or more from population information across different types of diseases and the protocols for addressing them.

Mick, we are facing more complexity, more data, and more information. That’s great because it can help us do better things in medicine. But it also needs to be managed because it can be overwhelming.

What steps should we be taking along the way so that information becomes useful and actionable rather than overwhelming?

AI as medical assistant

Murphy: This is a real opportunity for AI: an actual smart clinical assistant, something that helps comb through all the data. There's genomic data, drug-drug interaction data, and we need to identify what's most important to get that human judgment teed up.

These are the things that I think you should look at versus, "Oh, here are all the possible things you could look at." Instead we want, "Here are the things that you should really focus on," or that seem most relevant. So it's really about using computing to assist clinicians rather than tell them what to do — but at least help them with where to focus.

Gardner: Christian, where would that AI exist? Is that something we’re going to be putting into the doctor’s office, or is that going to be something in a cloud or data center? How does AI manifest itself to accomplish what Mick just described?

Boucher: AI leverages intense computing power, so we are talking about significant internal IT resources. While we do see some organizations trying to bring quantum computing-based solutions in-house and leverage those, what I see is probably more of a hosted solution at this point. That's because of the expense, but also because of the technology, when you start talking about distributed computing and being able to leverage multiple input sources.

If you talk about an Epic or Cerner, I’m sure that they are working on technologies like that within their own solutions — or at least that allow their information to be shared within that.

I think we’re in the infancy of that AI trend. But we will see more-and-more technology play a factor in that. We could see some organizations partnering together to build out solutions. It’s hard to say at this point, but we know there is a lot of traction right now and unfortunately, they are mostly high-tech companies trying to leverage their algorithms and their solutions to deliver that, which at some point, I would guarantee that they’ll be mass produced and ready for purchase.

Murphy: AI could be everything from learning to just applying rules. I might not classify applying rules as AI, but I would say it’s rudimentary AI. For example, we have a rule set, an algorithm for sepsis. It enables us to monitor a variety of things about a patient — vital signs, lab results, and various data points that are difficult for any one human to be looking at across the entire set of patients in our hospitals at any given time.

So we have a computer watching that. And when certain criteria are met in this algorithm, it reports to a centralized team staffed with nurses. The nurses can then look at that individual patient and say, “Does this look like a false alarm or does this look like something that we really need to pursue?” And based off of that, they send someone to the bedside.

We’ve had dramatic improvements with sepsis. So there are some really easy technical things to do — but you have to engage with them, with human beings, to get the team involved and make that happen.

Gardner: The intelligent digital workspaces aren't just helping cut time; they are going to be the front end that helps coordinate some of these advanced services coming down the line, services that can have a really significant impact on the quality of care and also the cost of care. So that's very exciting.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Citrix Systems.

Want to manage your total cloud costs better? Emphasize the ‘Ops’ in DevOps, says Futurum analyst Daniel Newman

The next BriefingsDirect Voice of the Analyst interview explores new ways that businesses can gain the most control and economic payback from various cloud computing models.

We’ll now hear from an IT industry analyst on how developers and IT operators can find newfound common ground to make hybrid cloud the best long-term economic value for their organizations.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to help explore ways a managed and orchestrated cloud lifecycle culture should be sought across enterprise IT organizations is Daniel Newman, Principal Analyst and Founding Partner at Futurum Research. The interview is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Daniel, many tools have been delivered over the years for improving software development in the cloud. Recently, containerization and management of containers has been a big part of that.

Now, we’re also seeing IT operators tasked with making the most of cloud, hybrid cloud, and multi-cloud around DevOps – and they need better tools, too.

Has there been a divide or lag between what developers have been able to do in the public cloud environment and what operators must be able to do? If so, is that gap growing or shrinking now that new types of tools for automation, orchestration, and composability of infrastructure and cloud services are arriving?

Out of the shadow, into the cloud 

Newman: Your question lends itself to the concept of shadow IT. The users of this shadow IT find a way to get what they need to get things done. They have had a period of uncanny freedom.


But this has led to a couple of things. First of all, generally nobody knows what anybody else is doing within the organization. The developers have been able to creatively find tools.

On the other hand, IT has been cast inside of a box. And they say, “Here is the toolset you get. Here are your limitations. Here is how we want you to go about things. These are the policies.”

And in the data center world, that’s how everything gets built. This is the confined set of restrictions that makes a data center a data center.

But in a developer’s world, it’s always been about minimum viable product. It’s been about how to develop using tools that do what they need them to do and getting the code out as quickly as possible. And when it’s all in the cloud, the end-user of the application doesn’t know which cloud it’s running on, they just know they’re getting access to the app.

Basically we now have two worlds colliding. You have a world of strict, confined policies — and that’s the “ops” side of DevOps. You also have the developers who have been given free rein to do what they need to do; to get what they need to get done, done.

Get Dev and Ops to collaborate 

Gardner: So, we need to keep that creativity and innovation going for the developers so they can satisfy their requirements. At the same time, we need to put in guard rails, to make it all sustainable.

Otherwise we see not a minimal viable cloud – but out-of-control expenses, out-of-control governance and security, and difficulty taking advantage of both private cloud and public cloud, or a hybrid affair, when you want to make that choice.

How do we begin to make this a case of worlds collaborating instead of worlds colliding?

Newman: It’s a great question. We have tended to point DevOps toward “dev.” It’s really been about the development, and the “ops” side is secondary. It’s like capital D, lowercase o.

The thing is, we’re now having a massive shift that requires more orchestration and coordination between these groups.

You mentioned out-of-control expenses. I spoke earlier about DevOps and developers having the free rein – to do what they need to do, put it where they need to put it, containers, clouds, tools, whatever they need, and just get it out because that’s what impacts their customers.

If you have an application where people buy things on the web and you need to get that app out, it may be a little more expensive to deploy it without the support of Ops, but you feel the pressure to get it done quickly.

Now, Ops can come in and say, “Well, you know … what about a flex consumption-based model, what about multi-cloud, what about using containers to create more portability?”

“What if we can keep it within the constraints of a budget and work together with you? And, by the way, we can help you understand which applications are running on which cloud and provide you the optimal [aggregate cloud use] plan.”

Let’s be very honest, a developer doesn’t care about all of that. … They are typically not paid or compensated in any way that leads to optimizing on cost. That’s what the Ops people do.

Such orchestration — just like almost all larger digital transformation efforts — starts when you have shared goals. The problem is, they call it a DevOps group — but Dev has one set of goals and Ops has different ones.

What you’re seeing is the need for new composable tools for cloud services, which we saw at such events as the recent Hewlett Packard Enterprise (HPE) Discover conference. They are launching these tools, giving the Ops people more control over things, and — by the way — giving developers more visibility than has existed in the past.

There is a big opportunity [for better cloud use economics] through better orchestration and collaboration, but it comes down to the age-old challenges inside of any IT organization — and that is having the Dev and the Ops people share the same goals. These new tools may give them more of a reason to start working in that way.

Gardner: The more composability the operations people have, the easier it is for them to define a path that the developers can stay inside of without encumbering the developers.

We may be at the point in the maturity of the industry where both sides can get what they want. It’s simply a matter of putting that together — the chocolate and peanut-butter, if you will. It becomes more of a complete DevOps.

But there is another part of this people often don’t talk about, and that’s the data placement component. When we examine the lifecycle of a modern application, we’re not just developing it and staging it where it stays static. It has to be built upon and improved, we are doing iterations, we are doing Agile methods.

We also have to think about the data the application is consuming and creating in the same way. That dynamic data use pattern needs to fit into a larger data management philosophy and architecture that includes multi-cloud support.

I think it’s becoming DevDataOps, not just DevOps, these days. The operations people need to be able to put in requirements about how that data is managed within the confines of that application’s deployment, yet kept secure, and in compliance with regulations and localization requirements.

DevDataOps emerges

Newman: We’ve launched the DevDataOps category right now! That’s actually a really great point, because if you think about where all of that data lives — meaning IT orchestration of the infrastructure choices, whether in the cloud or on-premises — there has to be enough of the right kind of storage.

Developers are usually worried about data from the sense of what can they do with that data to improve and enhance the applications. When you add in elements like machine learning (ML) and artificial intelligence (AI), that’s going to just up the compute and storage requirements. You have the edge and Internet of Things (IoT) to consider now too for data. Most applications are collecting more data in real-time. With all of these complexities, you have to ask, “Who really owns this data?”

Well, the IT part of DevOps, the “Ops,” typically worries about capacity and resources performance for data. But are they really worried about the data in these new models? It brings in that needed third category because the Dev person doesn’t necessarily deal with the data lifecycle. The need to best use that data is a business unit imperative, a marketing-level issue, a sales-level data requirement. It can include all the data that’s created inside of a cloud instance of SAP or Salesforce.

Just think about how many people need to be involved in orchestration to maximize that? Culturally speaking, it goes back to shared tools, shared visibility, and shared goals. It’s also now about more orchestration required across more external groups. So your DevOps group just got bigger, because the data deluge is going to be the most valuable resource any company has. It will be, if it isn’t already today, the most influential variable in what your company becomes.

You can’t just leave that to developers and operators of IT. It becomes core to business unit leadership, and they need to have an impact. The business leadership should be asking, “We have all this data. What are we doing with it? How are we managing it? Where does it live? How do we pour it between different clouds? What stays on-premises and what goes off? How do we govern it? How can we have governance over privacy and compliance?”

I would say most companies really struggle to keep up with compliance because there are so many rules about what kind of data you have, where it can live, how it should be managed, and how long it should be stored. 

I think you bring up a great point, Dana. I could probably rattle on about this for a long, long time. You’ve just added a whole new element to DevOps, right here on this podcast. I don’t know that it has to do with specifically Dev or Ops, but I think it’s Dev+Ops+Data — a new leadership element for meaningful digital transformation.

Gardner: We talked about trying to bridge the gap between development and Ops, but I think there are other gaps, too. One is between data lifecycle management – for backup and recovery and making it the lowest cost storage environment, for example. Then there is the other group of data scientists who are warehousing that data, caching it, and grabbing more data from outside, third-party sources to do more analytics for the entire company. But these data strategies are too often still divorced.

These data science people and what the developers and operators are doing aren’t necessarily in sync. So, we might have another category, which would be Dev+Data+DataScience+Ops.

Add Data Analytics to the Composition 

Newman: Now we’re up to four groups. First, you’re talking about the data from the running applications. That’s managed through pure orchestration in DevOps, and that works fine through composability tools. Those tools give IT the capability to add guard rails for the developers, so they are not doing things in the shadows, but instead do things in coordination.
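The guard-rails idea can be sketched in a few lines. This is a toy illustration only: the policy fields, limits, and tag names are invented for the example, and this is not any actual HPE OneSphere or HPE OneView API.

```python
# Toy "guard rail" check of the kind Ops might enforce before a developer's
# deployment request proceeds. All policy values here are invented.

POLICY = {
    "allowed_clouds": {"private", "aws"},
    "max_monthly_cost": 5000.0,
    "required_tags": {"owner", "cost-center"},
}

def check_deployment(request: dict, policy: dict = POLICY) -> list:
    """Return a list of policy violations; an empty list means the request passes."""
    violations = []
    if request["cloud"] not in policy["allowed_clouds"]:
        violations.append(f"cloud '{request['cloud']}' not permitted")
    if request["est_monthly_cost"] > policy["max_monthly_cost"]:
        violations.append("estimated cost exceeds budget")
    missing = policy["required_tags"] - request.get("tags", set())
    if missing:
        violations.append(f"missing tags: {sorted(missing)}")
    return violations

req = {"cloud": "azure", "est_monthly_cost": 7500.0, "tags": {"owner"}}
print(check_deployment(req))
# ["cloud 'azure' not permitted", "estimated cost exceeds budget", "missing tags: ['cost-center']"]
```

The point is not the code itself but the division of labor it represents: Ops owns the policy, developers keep their self-service speed, and violations surface before deployment instead of on a bill.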

The other data category is that bigger analytical data. It includes open data, third-party data, and historical data that’s been collected and stored inside of instances of enterprise resource planning (ERP) apps and customer relationship management (CRM) apps for 20 or 30 years. It’s a gold mine of information. Now we have to figure out an extraction process and incorporate that data into almost every enterprise-level application that developers are building. Right now Dev and Ops don’t really have a clue what is out there and available across that category because that’s being managed somewhere else, through an analytics group of the company.

Gardner: Or, developers will have to create an entirely different class of applications for analytics alone, as well as integrating the analytics services into all of the existing apps.

Newman: SAS, one of the HPE partners I’ve worked with in the past, and companies like it, such as SAP, are going to become much more closely aligned with infrastructure. Your DevOps is going to become your analytics Ops, too.

Hardware companies have built software apps to run their hardware, but they haven’t been historically building software apps to run the data that sits on the hardware. That’s been managed by the businesses running business intelligence software, such as the ones I mentioned.

There is an opportunity for a new level of coordination to take place at the vendor level, because when you see these alliances, and you see these partnerships, this isn’t new. But, seeing it done in a way that’s about getting the maximum amount of usable data from one system into every application — that’s futuristic, and it needs to be worked on today. 

Gardner: The bottom line is that there are many moving parts of IT that remain disjointed. But we are at the point now with composability and automation of getting an uber-view over services and processes to start making these new connections – technically, culturally, and organizationally.

What I have seen from HPE around the HPE Composable Cloud vision moves a big step in that direction. It might be geared toward operators, but, ultimately it’s geared toward the entire enterprise, and gives the business an ability to coordinate, manage, and gain insights into all these different facets of a digital business.


Newman: We’ve been talking about where things can go, and it’s exciting. But let’s take a step back.

Multi-cloud is a really great concept. Hyper-converged infrastructure, it’s all really nice, and there has been massive movement in this area in the last couple of years. But companies right now still struggle with the resources to run multi-cloud. They tend to have maybe one public cloud and their on-premises operations. They have their own expertise, and they have endless contracts and partnerships.

They don’t know which is the best cloud approach because they are not necessarily getting that total information. It depends on all of the relationships, the disparate resources they have across Dev and Ops, and the data can change on a week-to-week basis. One cloud may have been perfect a month ago, yet all of a sudden you change the way an application is running and consuming data, and now a different cloud is the better fit.

What HPE is doing with HPE Composable Cloud takes the cloud plus composable infrastructure and, working through HPE OneSphere and HPE OneView, brings them all into a single view. We’re in a software and user experience world.

The tools that deliver the most usable and valuable dashboard-type of cloud use data in one spot are going to win the battle. You need that view in front of you for quick deployment, with quick builds, portability, and container management. HPE is setting itself in a good position for how we do this in one place.

Give me one view, give me my one screen to look at, and I think your Dev and Ops — and everybody in between — and all your new data and data science friends will all appreciate that view. HPE is on a good track, and I look forward to seeing what they do in the future.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

Price transparency in healthcare to regain patient trust requires accuracy via better use of technology

The next BriefingsDirect healthcare finance insights discussion explores the impacts from increased cost transparency for medical services.

The recent required publishing of hospital charges for medical procedures is but one example of rapid regulatory and market changes. The emergence of more data about costs across the health provider marketplace could be a major step toward educated choices – and ultimately more efficiency and lower total expenditures.

But early-stage cost transparency also runs the risk of out-of-context information that offers little actionable insight into actual consumer costs and obligations. And unfiltered information requirements also place new burdens on physicians, caregivers, and providers – in areas that have more to do with economics than healthcare.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about the pluses and minuses of increased costs transparency in the healthcare sector, we are joined by our expert panel:

The panel is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: For better or worse, we are well into an era of new transparency about medical costs. Heather, why is transparency such a top concern right now? 


Kawamoto: It’s largely due to a cost shift. Insurance companies are shifting more of the payment responsibility onto patients. With that, there has been a significant rise in high-deductible health plans — not only in the amount of the deductible, but also in the number of patients on high-deductible plans.

And when patients get, sadly, more surprise bills, we start to hear about it in the media. We also have the onset this month of the IPPS/LTCH PPS final rule from the Centers for Medicare and Medicaid Services (CMS) [part of the U.S. Department of Health and Human Services].

The New York Times did a recent story about this, and that’s created buzz. And then people start saying, “Hey, I know I have a medical service coming up, I probably need to call in and actually find out how much my service is going to be.”

Gardner: It seems like the consumer, the patient, needs to be far more proactive in thinking about their care, not just in terms of, “Oh, how do I get better? Or how do I stay as healthy as I can?” But in asking, “How do I pay for this in the best possible way?”

That economic component wasn’t the case that long ago. You would get care and you didn’t give much thought to price or how it was billed.

Joann, as somebody who provides care, what’s changed that makes it necessary for patients to be proactive about their health economics?

Know before you owe


Barnes-Lague: It’s the consumer-driven health plans, where patients are now responsible for more. They have to make a decision: “Do I buy my groceries, or do I have an MRI?”

The shift in healthcare means we go after the patient’s portion before insurance has paid 100 percent. Patients now have a lot of skin in the game. And they have to start thinking, “Do I really need this procedure, or can it wait?”

Gardner: And we get this information-rush from other parts of our lives. We have so much more information available to us when we buy groceries. If we do it online, we can compare and contrast, we can comparison shop, we can even get analysis brought to the table. It can be a good thing.

Julie, you are trying to help people make better paying decisions. If we have to live with more cost transparency, how can technology be a constructive part of it?


Gerdeman: It’s actually a tremendous opportunity for technology to help patients and providers. We live in an experience economy, and in that economy everyone is used to having full transparency. We’re willing to pay for faster service, faster delivery.

We have highly personalized experiences. And all of that should be the same in our healthcare experiences. This is what people have come to expect. And that’s why, for us, it’s so important to provide personalized, consumer-friendly digital payment options. 

Sanborn: As someone who has been watching these high-deductible health plans unfold, data has come out saying the average American household can’t afford a $500 medical bill, that an unexpected $500 medical bill would drastically impact that household’s finances for months. So people are looking to understand upfront what they are going to owe. 

At the same time, patients are growing tired of the back-and-forth between the provider and the payer, with everyone kicking the can between them, saying, “Well, I don’t know that. Your provider should know that.” And the provider says, “Well, your health plan is the one that arbitrates the price of your care. Why don’t you go ask them?” Patients are getting really, really tired of that.

Now the patients have the bullhorn, and they are saying, “I don’t care whose responsibility it is to inform me. Someone needs to inform me, and I want it now.” And in a consumer-driven healthcare space, which is what’s evolving now, consumers are going to go where they get that retail-like experience.

That’s why we are seeing the rise in urgent care centers, walk-in clinics, and places where they don’t have to wait. They can instead book an appointment on their phone and go to the appointment 20 minutes later. Patients have the opportunity to pick where they get their care and they know it. At the same time, they know they can demand transparency because it’s time. 

Gardner: So transparency can be a force for good. It can help people make better decisions, be more efficient, and as a result drive their cost down. But transparency can put too much information in front of people, perhaps at a time when they are not really in a mindset to absorb it.

What are you doing at CVS, Alena, to help people make better decisions, but not overload them? 

Clear information increases options

Harrison: The key to good transparency tools is that they have to be 100 percent accurate. Secondly, the information has to be clear, actionable, and relevant to the patient.

If we gave patients 10 data points about the price of a drug — and sometimes there are 10 prices depending on how you look at it — it would overwhelm folks. It would confuse them, and we could lose that engagement. Providing simple, clear data that is accurate and actionable shows them the options specific to their benefit plan. That is what we can do to help consumers navigate through this very complex web in our healthcare system.

Gardner: Recondo helps people create and deliver estimates throughout this process. How does that help in providing the right information, at the right time, in the right context?

Kawamoto: It’s critical to provide [estimate information] when a patient schedules their service, because that gives them the opportunity — if there is a financial question or concern — to say, “Okay, I don’t know that I can pay for that. Is there another location where the price might be different? What are my financial options in terms of the payment plan or some sort of assistance?”

Enabling providers to proactively communicate that information to patients as they schedule a service or in advance gives patients an opportunity to shop. They know they are going to be meeting with an orthopedic surgeon because they need knee arthroscopy.

In advance of that, they should be able to get some idea of what they are going to owe, relative to their specific benefit information. It puts them in that position to engage with the orthopedic surgeon to say, “I looked at the facility and it’s actually going to be $3,000. What are my options?” Now, that provider can be a part of the cost discussion. I think that is critical. 

Barnes-Lague: As providers we have to be okay with patients making that decision, of saying, “Maybe I won’t have that service now.” That’s consumer-driven. And sometimes that hurts our volume. 

We may have had a hard time understanding that in the beginning, when we shared estimates and feared that the patients wouldn’t come. Well, would you rather trick them and then have bad debt?

It’s about being comfortable with the patient making educated decisions. Perhaps they will come back for your MRI in December when their deductibles are met, and they can better afford it.

Gardner: Part of this solution requires the physician or practitioner to be educated enough to help the patient sort out the finances, as well as the care and medical treatments. As someone who has a lot of clinicians, technicians, and physicians, are they not the primary point for more transparency to the patient? 

Barnes-Lague: That would be the ideal solution, to have the physicians who are referring these very expensive services to begin having those conversations. Often patients are kind of robotic with what their doctors tell them. 

We have to tell them, “You have a choice. You have a choice to make some phone calls. You have a choice to do your own price shopping.” We would love it if the referring physicians began having those price-transparency conversations early, right in their offices. 

Gardner: So the new dual-major: Economics and pre-med?

Julie, your background is in technology. You and I both know there are lots of occupations where people have complex decisions to make. And they have to be provided trust and accommodation to make well-informed decisions.

Whether you are a purchasing agent, chief executive, or chief marketing officer, there are tools and data to help you. There have been great strides made in solving some of these problems. Is that what we are going to see applied to these medical decisions across the spectrum of payer, provider, and patient? 

Easy-to-access, secure data builds trust

Gerdeman: This field is ripe for disruption. And technology, particularly emerging technology, can make a big difference in providing transparency. 

A lot of my colleagues here have talked about trust. To me, the reason everybody is requiring transparency is to build trust. It goes back to that trusted relationship between the provider and the patient.

The data should be available to everyone. It’s now time to present the data in a very clear, simple, and actionable way for them to make decisions. The consumer can make an informed decision, and the provider can know what the consumer is facing.

Gardner: Yet to work, that data needs to be protected. It needs to adhere to multiple regulations in multiple jurisdictions, and compliance is a moving target because the regulations change so often. 

Beth, what do we do to solve the data availability problem? Everybody knows data is how to solve it. It’s about more data. But nobody wants to own and control that data.


Sanborn: Yes, it’s the $64,000 question. How do you own all that data and protect it at the same time? We know that healthcare is one of the most attacked industries when it comes to cyber criminals, ransomware, and phishing.

I hear all the time from experts that as much as the human element drives healthcare, as far as data and its protection [the human element] is also the greatest vulnerability. Most of the attacks you hear about happen because someone clicked on a link in an email or left their laptop somewhere. These are basic human errors that can have catastrophic consequences depending on who is on the receiving end of that error. 

Technology is, of course, a huge part of the future, but you can’t let technology develop faster than the protections that have to go with it. And so any developer, any innovator who is trying to help move this space forward has to make cybersecurity a grassroots foundational part of anything that they innovate.

It’s not enough to say, “My tool can help you do this, this, and this.” You have to be able to say, “Well, my tool will help you do this, this, and this, and this is how we are going to protect you along the way.” That has to be part of, not just the conversation, but every single solution.

Gardner: Alena, at CVS, do you see that data solution as a major hurdle to overcome? Meaning the controlling, managing, and protection of the data — but also making it available to every nook and cranny that it needs to get to? 

Harrison: That’s always a key focus for us, and it’s frankly ingrained in every single thing we do. To give a sense of what we are putting out there, the price transparency tools that we have developed are all directly connected to our claims system. It’s the only way we can make sure that the patient out-of-pocket costs we provide are 100 percent accurate. They must reflect what that patient would pay as they go to their local pharmacy.

But making sure that our vendor partners have a robust and very rigorous process around security is paramount. It takes time to do that, and that’s one of the challenges we all face. 

Gardner: So we have a lot going on with new transparency regulations, and more information coming out. We know that we have to make it secure, and we are going to have to overcome that. So it’s early still.

It seems to me, though, there are examples of the tools already developed and how they can be impactful; they can work. 

Joann at Shields, do you have any examples of what benefits can happen when you bring in the right tools for transparency and for making good decisions? 

Transparency upfront benefits bottom line

Barnes-Lague: Yes, we bring in more revenue and we bring it in timely. We used to be at about 60 percent collected from the patient’s side overall. Since we implemented tools, we are at 85 percent collected, a 400 percent increase in our overall revenue.

We have saved $4.5 million in [advance procedure] denials, just based on eligibility, authorization, and things like that. We are bringing in more money and we don’t require as much labor because of the automation. We are staffed around the automation now. 

Gardner: Julie, how does it work? How do better tools and more information in advance help collect more money for a medical transaction? 

Gerdeman: It works in a couple of ways. First, from a patient-facing perspective, they have the access to pay whenever and wherever they are. Having that access and availability is critical.

Also, they need to be connected. An estimate — like Heather talked about — has to be available from the very beginning, so the patient can make a decision from it.

And then finally, it’s about options. All of these things help drive adoption if you give a patient options and clarity upfront. They have a choice of how to pay and they have the knowledge about costs. That adoption drives success.

So if you implement the tools appropriately you will see immediate impact. The patients adopt it, the staff adopts it, and then it drives up the collections that Joann is talking about. 

Gardner: Heather, we have seen in other industries that tracking decision processes and behaviors leads to understanding use patterns. From them, incentivization can come into play. Have you seen that? How can incentives and transparency improve the overall economic benefits?

Incentivization improves savings

Kawamoto: Being able to communicate to patients what their anticipated out-of-pocket costs will be is powerful. A lot of organizations have created the means where they say to the patient, “If you pay this amount in advance of your service, you will actually get a discount.” That puts the patient in a position to say, “I could save $200 if I decide to pay this today.” That’s a key component of it. They know they are going to get a better cost if they pay sooner, and then many of them are incented to do that.
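The prompt-pay math can be sketched as a toy example. The 10 percent discount rate and the dollar amounts below are hypothetical assumptions for illustration, not any provider’s actual terms.

```python
# Illustrative prompt-pay offer: discount rate and amounts are invented.

def prepay_offer(estimated_out_of_pocket: float, discount_rate: float = 0.10) -> dict:
    """Return a prompt-pay offer for a patient's estimated out-of-pocket cost."""
    savings = round(estimated_out_of_pocket * discount_rate, 2)
    return {
        "pay_today": round(estimated_out_of_pocket - savings, 2),
        "pay_later": estimated_out_of_pocket,
        "savings": savings,
    }

offer = prepay_offer(2000.00)  # e.g., a $2,000 estimate at a 10% prompt-pay discount
print(offer)  # {'pay_today': 1800.0, 'pay_later': 2000.0, 'savings': 200.0}
```

Presented this way at scheduling time, the patient sees the trade-off in one glance, which is exactly the incentive Kawamoto describes.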

Gardner: Any other thoughts about incentives, Alena? 

Harrison: Yes. An indirect incentive, but still quite relevant, is that our price transparency tools are available to all of our CVS Caremark members. We are seeing about 230,000 searches a month on our website.

When members search for the drugs they are taking, if there are lower-cost alternative options, we see members in their next refill order one of those lower cost drugs 20 percent of the time. That results in an average savings of $120 per prescription fill for those patients. As you can imagine, over the course of several months, that savings really starts to add up. 

Gardner: We have come back to the idea of the out-of-pocket costs. The higher the deductible, the lower the premiums. People are incentivized therefore to go to lower premiums. But then, heaven forbid, they have an illness, and then they have to start thinking about, “Oh my gosh, how do I best manage that out-of-pocket deductible?”

Nowadays, with technologies like machine learning (ML), artificial intelligence (AI), and big data analytics, we are seeing prescriptive or even recommendation types of technologies. How far do we need to go before we can start to bring some of those technologies about making good recommendations based on data — rather than intuition or even a lack of informed decision making — to medical finance decisions? How do we get to that point where we can be prescriptive in automated recommendations, rather than people slogging through this by themselves?

Automated advice advances

Gerdeman: At HealthPay24 we are looking at predictive analytics and what role the predictive capability can play in helping make recommendations for patients. That’s not necessarily on the clinical or pharmaceutical side, but we know when a patient makes an appointment and gets an estimate what their propensity to pay will be.

Proactively we can offer them options based on what we know ahead of time. They don’t even have to worry about it. They can just say, “Okay, here are my choices. I have only saved up $500; therefore, I am going to take advantage of a loan or a payment plan.” And I do believe that technology will help.
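The kind of rule a propensity-to-pay model might feed could look like the sketch below. The thresholds, scores, and option names are invented for illustration and are not HealthPay24’s actual logic.

```python
# Hypothetical recommendation rule driven by a propensity-to-pay score.
# All thresholds and option names are illustrative assumptions.

def payment_options(estimate: float, propensity_score: float,
                    savings_on_hand: float) -> list:
    """Suggest payment options ahead of service, most affordable path first."""
    options = []
    # High propensity and enough cash on hand: offer full payment up front.
    if savings_on_hand >= estimate and propensity_score >= 0.7:
        options.append("pay in full (prompt-pay discount may apply)")
    # Moderate propensity: spread the cost over time.
    if propensity_score >= 0.4:
        options.append("interest-free payment plan")
    # Always include a financing / assistance fallback.
    options.append("patient loan / financial assistance screening")
    return options

print(payment_options(estimate=3000.0, propensity_score=0.5, savings_on_hand=500.0))
# ['interest-free payment plan', 'patient loan / financial assistance screening']
```

This mirrors the scenario in the conversation: a patient with $500 saved against a $3,000 estimate is steered toward a plan or a loan rather than being surprised by the full bill.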

On the AI side, it’s already starting. As you talk to providers, they are using it for repetitive processes. But I think there is even more opportunity on the cognitive side of AI to play [a role] in hospitals. So there is a big opportunity. 

Gardner: We already see this in financial markets. People get more information, they get recommendations, and there is arbitrage. It’s not either/or. It’s what are the circumstances? What’s the credit we can offer? How do we make the most efficient transaction for all parties?

So, as in other transactions, we have to gain more comfort with the combination of economics and medical procedures. Is that part of the culture shift? You have to be a crass consumer and you have to be looking out for your health.

Any thoughts about the need to be both a savvy consumer as well as a patient? 

Kawamoto: It’s critical. To Julie’s point, we are now looking through our data and finding legitimate savings opportunities for patients, and we’re proactively outreaching to those patients. Of course, at the end of the day, the decision is always in the provider’s hands — and it should be, because not all of us are clinicians. I certainly am not. But to allow patients to prompt that fuller conversation helps drive the process, so the burden isn’t just on the provider. This is critical. 

Gardner: Before we close out, any recommendations? How should the industry best prepare for more transparency around procedures and payments in medical environments? Joann, what do you think people should be thinking about to better prepare themselves as providers for this new era of transparency? 

Lead with clear communication 

Barnes-Lague: Culture is very important within the organization. You need to continue to talk. It’s shifting. Let’s talk about the burden to the provider, now that the patients are responsible for more. There is no other product that you can purchase without paying upfront. But you can walk away from healthcare without paying for it. 

The more technology you implement, the more transparency you can provide, the more conversations you can have with those patients – these not only help the patients. You as providers are in business for revenue. This helps bring in the revenue that you have lost with the shift to consumer-driven health plans.

Gardner: Heather, as someone who provides tools to providers, what should they be thinking about when it comes to a new era of transparency?

View a Webinar on How Accurate Financial Data Helps Providers Make Informed Decisions

Kawamoto: While there have been tools available to providers, now we have to make those tools available to patients. Providers are, in many cases, the first line of communication to patients. But before that patient even schedules, if they are in a position to know they need a service, they can go out and self-shop. 

That’s what providers need to be thinking about. How do I get even further out into the decision-making process? How do we engage with that patient at that early point, which is going to build trust, as well as ensure that revenue is coming to your particular facility?

Gardner: Beth, what advice do you have for consumers, the patients? What should they be thinking about to take advantage of transparency?

Take care of physicians and finances

Sanborn: First, I want to advocate for the physicians. We hear all the time about change fatigue and burnout; burnout is as hot a topic as transparency. If providers are going to be put in the position of having financial conversations with patients, health system leaders need to be aware of that and make sure that providers are properly educated. What do they need to know so that they can accurately communicate with patients? And leaders need to understand how that will affect physicians’ workload, which is already onerous and at times damaging. So, along the lines of Joann’s comments about culture, there needs to be a culture around ushering physicians into that role.

From a consumer standpoint, when we look at the law that just went into effect, patients need to understand what they are looking at. The price list that the hospital is publishing is a chargemaster. It’s a naked price from a hospital. It’s not what they are going to pay, and so we need to eradicate the sticker shock that I am sure is happening at first glance.

Gardner: The patient needs to self-educate about what’s net-net and what’s gross when it comes to these prices?


Sanborn: Right. You can put these prices in plain terms. The chargemaster is what a hospital charges. But remember you have insurance. There are discounts for self-pay. There could be other incentives or subsidies that you are eligible for.

So please don’t have a heart attack, literally, when you look at this price and go, “Oh, my gosh, is that what I am responsible for?” Patients need to be educated on what they are looking at, and then understand the options available to them as far as what they are actually going to pay.

And the other thing is benefits literacy. Payers need to make sure they are reaching out to their consumers and making sure their consumers understand how the benefits work so that they can advocate for themselves. 

Gardner: Alena at CVS, as a provider of pharmaceutical services and goods, what advice do you have about making the best of transparency?

Harrison: Beth hit the nail on the head with a lot of her points. We see similar brute-force regulation happening in the prescription drug space. So pharmaceutical manufacturers now need to publish their “sticker” prices.

Little do most people know, the sticker price is something no one pays. Payers don’t pay it. Patients certainly don’t pay it. The pharmacy doesn’t pay it. And so it is so critical as this information becomes available to make sure that your customers, consumers, and members understand what they are looking at. You as an organization should be prepared to support them through the process of navigating this additional information.

Gardner: Julie, what should people be thinking about on the vendor side, the people providing these tools, now that transparency is a necessary part of the process? What should the tool providers be thinking about to help people navigate this?

Gerdeman: It comes back to the user experience — providing a simple, clear, and consumer friendly experience through the tools. That is what’s going to drive usage, adoption, and loyalty.

View Provider Success Stories on Driving Usage, Adoption, and Loyalty Among Patients

Technology is a great way for providers to drive patient loyalty, and that is where it’s going to make a difference. That’s where you are going to engage them. You are going to win hearts and minds. They are going to want to come back because they had a great clinical experience. They feel better, they are healthier now, and you want the rest of their experience financially to match that great clinical experience. 

Anything we can do in the tools themselves to be predictive, clear, beautiful, and simple will make all the difference.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HealthPay24.


A new Mastercard global payments model creates a template for an agile, secure, and compliant hybrid cloud

The next BriefingsDirect cloud adoption best practices discussion focuses on some of the strictest security and performance requirements that are newly being met for an innovative global finance services deployment.

We’ll now explore how a major financial transactions provider is exploiting cloud models to extend a distributed real-time payment capability across the globe. Due to the needs for localized data storage, privacy regulations compliance, and lightning-fast transaction speeds, this extreme cloud-use formula pushes the boundaries — and possibilities — for hybrid cloud solutions.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

Stay with us now as we hear from an executive at Mastercard and a cloud deployment strategist about a new, cutting-edge use for cloud infrastructure. Please welcome Paolo Pelizzoli, Executive Vice President and Chief Operating Officer at Realtime Payments International for Mastercard, and Robert Christiansen, Vice President and Cloud Strategist at Cloud Technology Partners (CTP), a Hewlett Packard Enterprise (HPE) company. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What is happening with cloud adoption that newly satisfies such major concerns as strict security, localized data, and top-rate performance? Robert, what’s allowing for a new leading edge when it comes to the public clouds’ use?

Christiansen: A number of new use cases have been made public. Front runners like Capital One [Financial Corp.] and some other organizations have taken core applications that would otherwise be considered sacred and moved them to cloud platforms. Those moves have become more and more evident and visible. The Capital One CIO, Robert Alexander, has been very vocal about that.


So now others have followed suit. And the US federal government regulators have been much more accepting around the audit controls. We are seeing a lot more governance and automation happening as well. A number of the business control objectives – from security to the actual technologies to the implementations — are becoming more accepted practices today for cloud deployment.

So, by default, folks like Paolo at Mastercard are considering the new solutions that could give them a competitive edge. We are just seeing a lot more acceptance of cloud models over the last 18 months.

Gardner: Paolo, is increased adoption a matter of gaining more confidence in cloud, or are there proof points you look for that open the gates for more cloud adoption?

Compliance challenges cloud 

Pelizzoli: As we see what’s happening in the world around nationalism, the on-the-soil [data sovereignty] requirements have become much more prevalent. It will continue, so we need the ability to reach those countries, deploy quickly, and allow data persistence to occur there.


The adoption side of it is a double-edged sword. I think everybody wants to get there, and everybody intuitively knows that they can get there. But there are a lot of controls around privacy, as well as SOX and SOC 1 reporting compliance, and everything else that needs to be adjusted to take the cloud into account. And if the cloud is rerouting traffic because one zone goes down and it flips to another zone, is that still within the same borders, is it still compliant, and can you prove that?

So while technologically this all can be done, from a compliance perspective there are still a lot of different boxes left to check before someone can allow payments data to flow actively across the cloud — because that’s really the panacea.

Gardner: We have often seen a lag between what technology is capable of and what regulations, standards, and best practices allow. Are we beginning to see a compression of that lag? Are regulators, in effect, catching up to what the technology is capable of?

Pelizzoli: The technology is still way out in the front. The regulators have a lot on their plates. We can start moving as long as we adhere to all the regulations, but the regulations between countries and within some countries will continue to have a lagging effect. That being said, you are beginning to see governments understand how sanctions occur and they want their own networks within their own borders.

Those are the types of things that require a full-fledged payments network that predated the public Internet to begin to gain certain new features, functions, and capabilities. We are now basically having to redo that payments-grade network.

Gardner: Robert, the technology is highly capable. We have a major player like Mastercard interested in solving their new globalization requirements using cloud. What can help close the adoption gap? Does hybrid cloud help solve the log-jam?

Christiansen: The regionalization issues are upfront, if not the number-one requirement, as Paolo has been talking about. I think about South Korea. We just had a meeting with the largest banking folks there. They are planning now for their adoption of public cloud, whether it’s Microsoft Azure, Amazon Web Services (AWS), or Google Cloud. But the laws are just now making it available.

Prior to January 1, 2019, the laws prohibited public cloud use for financial services companies, so things are changing. There is a lot of that kind of thing going on around the globe. The strategy seems to be very focused on making the compute, network, and storage localized and regionalized. And that’s going to require technology grounding in some sort of connectivity across on-premises and public, while still putting the proper security in place.

Learn More About Software-Defined and Hybrid Cloud Solutions That Reduce Complexity

So, you may see more use of things like OpenShift or Pivotal’s Cloud Foundry platform, and some overlay that allows folks to take advantage of that, so that you can push down an appliance, like a piece of equipment, into a specific territory.

I’m not certain as to the cost that you incur as a result of adding such an additional local layer. But from a rollout perspective, this is an upfront conversation. Most financial organizations that globalize want to be able to develop and deploy in one way while also having regional, localized on-premises services. And they want it to get done as if in a public cloud. That is happening in a multiple number of regions.

Gardner: Paolo, please tell us more about International Realtime Payments. Are you set up specifically to solve this type of regional-global deployment problem, or is there a larger mandate? What’s the reason for this organization?

Hybrid help from data center to the edge

Pelizzoli: Mastercard made an acquisition a number of years ago of Vocalink. Vocalink did real-time secure interbank funds transfer, and linkage to the automated clearing house (ACH) mechanism for the United Kingdom (UK), including the BACS and LINK extensions to facilitate payments across the banking system. Because it’s nationally critical infrastructure, and it’s bank-to-bank secure funds transfer with liquidity checks in place, we have extended the capabilities. We can go through and perform the same nationally critical functions for other governments in other countries.

Vocalink has now been integrated into Mastercard, and Realtime Payments will extend the overall reach to include the debit, credit, loyalty, and gift “rails” that Mastercard has been traditionally known for.

I absolutely agree that you want to develop one way and then be able to deploy to multiple locations. As hybrid cloud has arrived, with the advent of Microsoft Azure Stack and more recently AWS’s Outposts, it gives you the cloud inside of your data center with the same capabilities, the same consoles, and the same scripting and automation, et cetera.

As we see those mechanisms become richer and more robust, we will go through and be deploying that approach to any and all of our resources — even being embedded at the edge within a point of sale (POS) device.

As we examine the different requirements from government regulations, it really comes down to managing personally identifiable information.


So, if you can secure the transaction information, by abstracting out all the other stuff and doing some interesting cryptography that only those governments know about, the [transaction] flow will still go through [the cloud] but the data will still be there, at the edge, and on the device or appliance.

We already provide for detection and other value-added services for the assurance of the banks, all the way down to the consumers, to protect them. As we start going through and seeing globalization — but also the regionalization due to regulation – it will be interesting to uncover fraudulent activity. We already have unique insights into that.

No more noisy neighbors

Christiansen: Getting back to the hybrid strategy, AWS Outposts and Azure Stack have created the opportunity for such globalization at speed. Someone can plug in a network and power cable and get a public cloud-like experience, yet it’s on an on-premises device. That opens a significant number of doors.

You eliminate multi-tenancy issues, for example, which are a huge obstacle when it comes to compliance. You no longer have to address “noisy neighbor” issues, performance issues, failovers, and similar problems caused by multi-tenancy.

If you’re able to simply deploy a cloud appliance that is self-aware, you have a whole other trajectory toward use of the cloud technology. I am actively encouraged to see what Microsoft and Amazon can do to press that further. I just wanted to tag that onto what Paolo was talking about.

Pelizzoli: Right, and these self-contained deployments can use Kubernetes. In that way, everything that’s required to go through and run autonomously — even the software-defined networks (SDNs) – can be deployed via containers. It actually knows where its point of persistence needs to be, for data sovereignty compliance, regardless of where it actually ends up being deployed.

This comes back to an earlier comment about the technology being quite far ahead. It is still maturing. I don’t think it is fully mature to everybody’s liking yet. But there are some very, very encouraging steps.

As long as we go in with our eyes wide open, there are certain things that will allow us to go through and use those technologies. We still have some legacy stuff pinned to bare-metal hardware. But as things start behaving in a hybrid cloud fashion as we’re describing, and once we get all the security and guidelines set up, we can migrate off of those legacy systems at an accelerated pace.

Gardner: It seems to me that Realtime Payments International could be a bellwether use case for such global hybrid cloud adoption. What then are the checkboxes you need to sign off on in order to be able to use cloud to solve your problems?

Perpetual personal data protection

Pelizzoli: I can’t give you all the criteria, but the persistence layer needs to be highly encrypted. The transports need to be highly encrypted. Every time anything is persisted, it has to go through a regulatory set of checks, just to make sure that it’s allowed to do what it’s being asked to do. We need a lot of cleanliness in the way metrics are captured so that you can’t use a metric to get back to a person.
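The requirement that “you can’t use a metric to get back to a person” is commonly met with keyed pseudonymization. As a hedged sketch — the key handling, label scheme, and function names here are assumptions for illustration, not anything Pelizzoli described:

```python
# Hypothetical sketch: capturing metrics cleanly so a label cannot be
# walked back to a person. A keyed hash (HMAC) replaces the raw
# identifier; without the secret key, the label is neither reversible
# nor linkable across systems using different keys.
import hmac
import hashlib

METRICS_KEY = b"rotate-me-regularly"  # illustrative; a real key lives in a KMS/HSM

def metric_label(account_id: str) -> str:
    """Derive a stable but non-identifying label for metrics."""
    digest = hmac.new(METRICS_KEY, account_id.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]  # truncated for readability

def record_latency(account_id: str, millis: float, sink: dict) -> None:
    """Aggregate latency per pseudonymous label, never per raw account ID."""
    sink.setdefault(metric_label(account_id), []).append(millis)
```

The same account always maps to the same label, so trending still works, but the raw identifier never enters the metrics store.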

If nothing else, we have learned a lot from the recent [data intrusion] announcements by Facebook, Marriott, and others. The data is quite prevalent out there. And payments data, just like your hospital data, is the most personal.

As we start figuring out the nuances of regulation around an individual service, it must be externalized. We have to be able to literally inject solutions to regulatory requirements – and not by hard-coding them. We can’t be creating any payments that are ambiguous.

Learn More About Software-Defined and Hybrid Cloud Solutions That Reduce Complexity

That’s why we are starting to see a lot of effort going into how artificial intelligence (AI) can help. AI could check services and configurations to test for every possibility so that there isn’t a “hole” that somebody can go through with a certain amount of credentials.

As we go forward, those are the types of things that — when we are in a public cloud — we need to account for. When we were all internal, we had a lot of perimeter defenses. The new perimeter becomes more nebulous in a public cloud. You can create virtual private clouds, but you need to be very wary of expanding time factors or latency.

Gardner: If you can check off these security and performance requirements, and you are able to start exploiting the hybrid cloud continuum across different localities, what do you get? What are the business outcomes you’re seeking?

Common cloud consistency 

Pelizzoli: A couple of things. One is agility, in terms of being able to deploy to two adjacent countries, if one country has a major outage. That means ease of access to a payments-grade network — without having to go through and put in hardware, which will invariably fail.

Also, the ability to scale quickly. There is an expected peak season for payments, such as around the Christmas holidays. But there could be an unexpected peak season based on bad news — not a peak season, but a peak day. How do you go through and have your systems scale within one country that wasn’t normally producing a lot of transactions? All of a sudden, now it’s producing 18 times the amount of transactions.

Those types of things give us a different development paradigm. We have a lot of developers. A [common cloud approach] would give us consistency, and the ability to be clean in how we automate deployment; the testing side of it, the security checks, etc.

Before, there were a lot of different ways of doing development, depending on the language and the target. Bringing that together would allow increased velocity and reduced cost, in most cases. And what I mean by “most cases” is I can use only what I need and scale as I require. I don’t have to build for the worst possible day and then potentially never hit it. So, I could use my capacity more efficiently.

Gardner: Robert, it sounds like major financial applications, like a global real-time payment solution, are getting from the cloud what startups and cloud-native organizations have taken for granted. We’re now able to take the benefits of cloud to some of the most extreme and complex use cases. 

Cloud-driven global agility

Christiansen: That’s a really good observation, Dana. A healthcare organization could use the same technologies to leverage an industrial-strength transaction platform that allows them to deliver healthcare solutions globally. And they could deem it as a future-proof infrastructure solution. 

One of the big advantages of the public cloud has been the isolation of all those things that many central IT teams have had to do day-in and day-out: patching releases, upgrading processes, and constantly managing the refresh cycle. They call it painting the Golden Gate Bridge – once you finish painting the bridge, you have to go back and do it all over again. And a lot of that effort and money goes into that refresh process.

And so they are asking themselves, “Hey, how can we take our $3 or $4 billion IT spend, and take x amount of that and begin applying it toward innovation?”


And if someone can take a piece out of that equation, all things are eligible. Everyone is asking the same question, “How do I compete globally in a way that allows me to build the agility transformation into my organization?” Right now there is so much rigidity, but the balance against what Paolo was talking about — the industrial-grade network and transaction framework — to get this stuff done cannot be relinquished.

So people are asking a lot of the same questions. They come in and ask us at CTP, “Hey, what use-cases are actually in place today where I can start leveraging portions of the public cloud so I can start knocking off pieces?”

Paolo, how do you use your existing infrastructure, and what portion of cloud enablement can you bring to the table? Is it cloud-first, where you say, “Hey, everything is up for grabs?” Or are you more isolated into using cloud only in a certain segment?

Follow a paved path of patterns

Pelizzoli: Obviously, the endgame is to be in the cloud 100 percent. That’s utopian. How do we get there? There is analysis being done. It depends if we are talking about real-time payments, which is actually more prepared to go into the cloud than some of the core processing that handles most of North America and Europe from an individual credit card or debit card swipe. Some of those core pieces need more rewiring to take advantage of the cloud.

When we look at it, we are decomposing all of the legacy systems and seeing how well they fit in to what we call a paved path of patterns. If there is a paved path for a specific type of pattern, we put it on the list of things to transition to, as being built as a cloud-native service. And then we run it alongside its parent for a while, to test it, through stressful periods and through forced chaos. If the segment goes down, where does it flip over to? And what is the recovery time?

The one thing we cannot do is in any way increase latency. In fact, we have some very aggressive targets to reduce latency wherever we can. We also want to improve the recovery and security of the individual components, which we end up calling value-added services.

There are some basic services we have to provide, and then value-added services, which people can opt in or opt out of. We do have a plan and strategy to go through and prioritize that list.

Gardner: Paolo, as you master hybrid cloud, you must have visibility and monitoring across these different models. It’s a new kind of monitoring, a new kind of management.

What do you look to from CTP and HPE to help attain new levels of insight so you can measure what’s going on, and therefore optimize and automate?

Pelizzoli: CTP has been a very good and integral part of our first steps into the cloud. 

Now, I will give you one disclaimer. We have some companies that are Mastercard companies that are already in the cloud, and were born in the cloud. So we have experience with AWS, we have experience with Azure, and we have some experience with Google Cloud Platform.

It’s not that Mastercard isn’t in the cloud already, it is. But when you start taking the entire plant and moving it, we want to make sure that the security controls, which CTP has been helping ratify, get extended into the cloud — and where appropriate, actually removed, because there are better ones in the cloud today.

Extend the cloud management office 

Now, the next phase is to start building out a cloud management office. Our cloud management office was created early last year. It is now getting the appropriate checks and audits from finance, the application teams, the architecture team, security teams, and so on.

As that list of prioritized applications comes through, they have the appropriate paved path, checks, and balances. If there are any exceptions, it gets fiercely debated and will either get a pass or it will not. But even if it does not, it can still sit within our on-premises version of the cloud; it’s just more protected.

As we route all the traffic, that is where there is going to be a lot of checks within the different network hops that it has to take to prevent certain information from getting outside when it’s not appropriate.

Gardner: And is there something of a wish list that you might have for how to better fulfill the mandate of that cloud management office?

Pelizzoli: We have CTP, which HPE purchased along with RedPixie. They cover, between those two acquisitions, all of the public cloud providers.

Now, the cloud providers themselves are selling you the next feature-function to move themselves ahead of their competitors. CTP and RedPixie take the common denominator across all of them to make sure that you are not overextending promises from one cloud provider to another, and not assuming that every provider is moving at the same pace.

They also provide implementation capabilities, migration capabilities, and testing capabilities through the larger HPE organization. The fact is we have strong relationships with Microsoft and with Amazon, and so does HPE. If we can bring the collective muscle of Mastercard, HPE, and the cloud providers together, we can move mountains.

Gardner: We hear folks like Paolo describe their vision of what’s possible when you can use the cloud providers in an orchestrated, concerted, and value-added approach. 

Other people in the market may not understand what is going on across multi-cloud management requirements. What would you want them to know, Robert?

O brave new hybrid world

Christiansen: A hybrid world is the true reality. The complexity of the enterprise, no matter what industry you are in, has created these application centers of gravity. Latency issues between applications, whether they can be moved to cloud or not, or are affected by where the data resides, have created huge gravity issues, so enterprises are unable to take advantage of the frameworks that the public clouds provide.

So, the reality is that the public cloud is going to have to come down into the four walls of the enterprise. As a result, we are seeing an explosion of the common abstraction — there is going to be some open-source framework for all clouds to communicate, talk, and behave alike.

Over the past decade, the on-premises and OpenStack world has been decommissioning the whole legacy technology stack, moving it off to the side as a priority, as they seek to adopt cloud. The reality now is that we have regional, government, and data privacy issues, we have got all sorts of things that are pulling it all back internally again. 

Out of all this chaos is going to rise the phoenix of some sort of common framework. There has to be. There is no other way out of this. We are already seeing organizations such as Paolo’s at Mastercard develop a mandate to take the agile step forward.

They want somebody to provide the ability to gain more business value versus the technology, to manage and keep track of infrastructure, and to future-proof that platform. But at the same time, they want a technology position where they can use common frameworks, common languages, things that give interoperability across multiple platforms. That’s where you are seeing a huge amount of investment. 

I don’t know if you recently saw that HashiCorp got $100 million in additional funding, and they have a valuation of almost $2 billion. This is a company that specializes in sitting in that space. And we are going to see more of that.

Learn More About Software-Defined and Hybrid Cloud Solutions That Reduce Complexity

And as folks like Mastercard drive the requirements, the all-in on one public cloud mentality is going to quickly evaporate. These platforms absolutely have to learn how to play together and get along with on-premises, as well as between themselves.

Gardner: Paolo, any last thoughts about how we get cloud providers to be team players rather than walking around with sharp elbows?

Tech that plays well with others

Pelizzoli: I think the technology that’s allowed to run on these cloud platforms is actually going to take care of a lot of it.

I mentioned Kubernetes and Docker earlier, and there are others out there. The fact that they can isolate themselves from the cloud provider itself is where it will neutralize some of the sharp elbowing that goes on.

Now, there are going to be features that keep coming up that I think companies like ours will look at, putting workloads where the latest cutting-edge feature gives us a competitive advantage and then waiting for other cloud providers to go through and catch up. And when they do, we can then deploy out on those. But those will be very conscious decisions.

I don’t think that there is a one cloud fits all, but where appropriate we will go through and be absolutely multi-cloud. Where there is a defining difference, we will go through and select the cloud provider that best suits that area to cover that specific capability.

Gardner: It sounds like these extreme use cases, and the very important requirements that organizations like Mastercard have, will compel this marketplace to continue to flourish rather than become one-size-fits-all. It’s an interesting time: the maturation of the applications and use cases is starting to create more of a democratization of cloud in the marketplace.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


Where the rubber meets the road: How users see the IT4IT standard building competitive business advantage

The next BriefingsDirect IT operations strategy panel discussion explores how the IT4IT™ Reference Architecture for IT management creates demonstrated business benefits – in many ways, across many types of organizations.

Since its delivery in 2015 by The Open Group, IT4IT has focused on defining, sourcing, consuming, and managing services across the IT function’s value stream to its stakeholders. Among its earliest and most ardent users are IT vendors, startups, and global professional services providers.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about how this variety of highly efficient businesses and their IT organizations make the most of IT4IT – often as a complementary mix of frameworks and methodologies — we are now joined by our panel:

The panel discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. Here are some excerpts:

Gardner: Big trends are buffeting business in 2019. Companies of all kinds need to attain digital transformation faster, make their businesses more intelligent and responsive to their markets, and improve end user experiences. So, software development, applications lifecycles, and optimizing how IT departments operate are more important than ever. And they need to operate as a coordinated team, not in silos.

Lars, why is the IT4IT standard so powerful given these requirements that most businesses face?

Rossen: There are a number of reasons, but the starting point is the fact that it’s truly end-to-end. IT4IT starts from the planning stage — how to convert your strategy into actionable projects that are being measured in the right manner — all the way to development, delivery of the service, how to consume it, and at the end of the day, to run it.

There are many other frameworks. They are often very process-oriented, or capability-oriented. But IT4IT gives you a framework that underpins it all. Every IT organization needs to have such a framework in place, rationalized and well-integrated. And IT4IT can deliver that.

Gardner: And IT4IT is designed to help IT organizations elevate themselves in terms of the impact they have on the overall business.

Mark, when you encounter someone who asks of IT4IT, “What is that?” what’s your elevator pitch? How do you describe it so that a lay audience can understand it?

Bodman: I pitch it as a framework for managing IT and leave it at that. I might also say it’s an operating model because that’s something a chief information officer (CIO) or a business person might know.

If it’s an individual contributor in one of the value streams, I say it’s a broader framework than what you are doing. For example, if they are a DevOps guy, or maybe a Scaled Agile Framework (SAFe) guy, or even a test engineer, I explain that it’s a more comprehensive framework. It goes back to the nature of IT4IT being a hub of many different frameworks — and all designed as one architecture.

Gardner: Is there an analog to other business, or even cultural, occurrences that IT4IT is to an enterprise? 

Rossen: The analogy I have is that you go to The Lord of the Rings, and IT4IT is the “one ring to rule them all.” It actually combines everything you need.

Gardner: Why do companies need this now? What are the problems they’re facing that require one framework to rule them all?

Everyone, everything on the same page

Esler: A lot of our clients have implemented a lot of different kinds of software — automation software, orchestration software, and portals. They are sharing more information, more data. But they haven’t changed their operating model.

Using IT4IT is a good way to see where your gaps are, what you are doing well, what you are not doing so well, and how to improve on that. It gives you a really good foundation on knowing the business of IT.

Bennett: What we are hearing in the field is that IT departments are generally drowning at this point. You have a myriad of factors, some of which are their fault and some of which aren’t. The compliance world is getting nightmare-strict. The privacy laws that are coming in are straining what are already resource-constrained organizations. At the same time, budgets are being cut.

The other side of it is the users are demanding more from IT, as a strategic element as opposed to simply a support organization. As a result, they are drowning on a daily basis. Their operating model is — they are still running on wooden wheels. They have not changed any of their foundational elements.

If your family has a spending problem, you don’t stop spending, you go on a budget. You put in an Excel spreadsheet, get all the data into one place, pull it together, and you figure out what’s going on. Then you can execute change. That’s what we do from an IT perspective. It’s simply getting everything in the same place, on the same page, and talking the same language. Then we can start executing change to survive.

Peruse a Full Library of

IT4IT Reference Architecture


Gardner: Because IT in the past could operate in silos, there would be specialization. Now we need a team-sport approach. Mark, how does IT4IT help that?

Bodman: An analogy is the medical profession. You have specialists, and you have generalist doctors. You go to the generalist when you don’t really know where the problem is. Then you go to a specialist with a very specific skill-set and the tools to go deep. IT4IT is aimed at that generalist layer, with pointers to the specialists.

Gardner: IT4IT has been available since October 2015, which is a few years in the market. We are now seeing different types of adoption patterns—from small- to medium-size businesses (SMBs) and up to enterprises. What are some “rubber meets the road” points, where the value is compelling and understood, that then drive this deeper into the organization?

Where do you see IT4IT as an accelerant to larger business-level improvements?

Success via stability

Vijaykumar: When we look at the industry in general there are a lot of disruptive innovations, such as cloud computing taking hold. You have other trends like big data, too. These are driving a paradigm shift in the way IT is perceived. So, IT is not only a supporting function to the business anymore — it’s a business enabler and a competitive driver.

Now you need stability from IT, and IT needs to function with the same level of rigor as a bank or manufacturer. If you look at those businesses, they have reference architectures that span several decades. That stability was missing in IT, and that is where IT4IT fills a gap — we have come up with a reference architecture.

What does that mean? When you implement new tooling solutions or you come up with new enterprise applications, you don’t need to rip apart and replace everything. You could still use the same underlying architecture. You retain most of the things — even when you advance to a different solution. That is where a lot of value gets created.

Esler: One thing you have to remember, too, is that this is not just about new stuff. It’s not just about artificial intelligence (AI), Internet of Things (IoT), big data, and all of that kind of stuff — the new, shiny stuff. There is still a lot of old stuff out there that has to be managed in the same way. You have to have a framework like IT4IT that allows you to have a hybrid environment to manage it all.

Gardner: The framework to rule all frameworks.

Rossen: That also goes back to the concept of multi-modal IT. Some people say, “Okay, I have new tools for the new way of doing stuff, and I keep my old tools for the old stuff.”

But, in the real world, these things need to work together. The services depend on each other. If you have a new smart banking application, and you still have a COBOL mainframe application that it needs to communicate with, if you don’t have a single way of managing these two worlds you cannot keep up with the necessary speed, stability, and security.

Gardner: One of the things that impresses me about IT4IT is that any kind of organization can find value and use it from the get-go. As a start-up, an SMB, Jerrod, where are you seeing the value that IT4IT brings?

Solutions for any size business

Bennett: SMBs have less pain, but proportionally it’s the same, exact problem. Larger enterprises have enormous pain, the midsize guys have medium pain, but it’s the same mess.

But the SMBs have an opportunity to get a lot more value because they can implement a lot more of this a lot faster. They can even rip up the foundation and start over, a greenfield approach. Most large organizations simply do not have that capability.

It’s the same kind of change as in big data: how much data is going to be created in the next five years versus the last five? That’s universal; everyone is dealing with these problems.

Gardner: At the other end of the scale, Mark, big multinational corporations with sprawling IT departments and thousands of developers — they need to rationalize, they need to limit the number of tools, find a fit-for-purpose approach. How does IT4IT help them? 

Bodman: It helps to understand which areas to rationalize first, that’s important because you are not going to do everything at once. You are going to focus on your biggest pain points.

The other element is the legacy element. You can’t change everything at once. There are going to be bigger rocks, and then smaller rocks. Then there are areas where you will see folks innovate, especially when it comes to the DevOps, new languages, and new platforms that you deploy new capabilities on.

What IT4IT allows is for you to increasingly interchange those parts. A big value proposition of IT4IT is standardizing those components and the interfaces. Afterward, you can change out one component without disrupting the entire value chain.

Gardner: Rob, complexity is inherent in IT. They have a lot on their plate. How does the IT4IT Reference Architecture help them manage complexity?

Reference architecture connects everything

Akershoek: You are right, there is growing complexity. We have more services to manage, more changes and releases, and more IT data. That’s why it’s essential in any sized IT organization to structure and standardize how you manage IT in a broader perspective. It’s like creating a bigger picture.

Most organizations have multiple teams working on different tools and components in a whole value chain. I may have specialized people for security, monitoring, the service desk, development, for risk and compliance, and for portfolio management. They tend to optimize their own silo with their own practices. That’s what IT4IT can help you with — creating a bigger picture. Everything should be connected.

Esler: I have used IT4IT to help get rid of those very same kinds of silos. I did it via a workshop format. I took the reference architecture from IT4IT and I got a certain number of people — and I was very specific about the people I wanted — in the room. In doing this kind of thing, you have to have the right people in the room.

We had people for service management, security, infrastructure, and networking — just a whole broad range across IT. We placed them around the table, and I took them through the IT4IT Reference Architecture. As I described each of the words, each of which represented a function, they began to talk among themselves and say, “Yes, I had a piece of that. I had this piece of this other thing. You have a piece of that, and this piece of this.”

It started them thinking about the larger functions, that there are groups performing not just the individual pieces, like service management or infrastructure.



Gardner: IT4IT then is not muscling out other aspects of IT, such as Information Technology Infrastructure Library (ITIL), The Open Group Architecture Framework (TOGAF), and SAFe. Is there a harmonizing opportunity here? How does IT4IT fit into a larger context among these other powerful tools, approaches, and methodologies?

Rossen: That’s an excellent question, especially given that a lot of people into SAFe might say they don’t need IT4IT, that SAFe is solving their whole problem. But once you get to discuss it, you see that SAFe doesn’t give you any recommendation about how tools need to be connected to create the automated pipeline that SAFe relies on. So IT4IT actually complements SAFe very well. And that’s the same story again and again with the other ones.

The IT4IT framework can help bring those two things – ITIL and SAFe — together without changing the IT organizations using them. ITIL can still be relevant for the helpdesk, et cetera, and SAFe can still function — and they can collaborate better.

Gardner: Varun, another important aspect to maturity and capability for IT organizations is to become more DevOps-oriented. How does DevOps benefit from IT4IT? What’s the relationship?

Go with the data flow

Vijaykumar: When we talk about DevOps, typically organizations focus on the entire service design lifecycle and how it moves into transition. But the relationship sometimes gets lost between how a service gets conceptualized to how it is translated into a design. We need to use IT4IT to establish traceability, to make sure that all the artifacts and all the information basically flows through the pipeline and across the IT value chain.

The way we position the IT4IT framework to organizations and customers is very important. A lot of times people ask me, “Is this going to replace ITIL?” Or, “How is it different from DevOps?”

The simplest way to answer those questions is to tell them that this is not something that provides narrative guidance. It’s not a process framework, but rather an information framework. We are essentially prescribing the way data needs to flow across the entire IT value chain, and how information needs to get exchanged.

It defines how those integrations are established. And that is vital to having an effective DevOps framework because you are essentially relying on traceability to ensure that people receive the right information to accept services, and then support those services once they are designed.

Gardner: Let’s think about successful adoption, of where IT4IT is compelling to the overall business. Jerrod, among your customers where does IT4IT help them?

Holistic strategy benefits business

Bennett: I will give an example. I hate the word, but “synergy” is all over this. Breaking down silos and having all this stuff in one place — or at least in one process, one information framework — helps the larger processes get better.

The classic example is Agile development. Development runs in a silo, they sit in a black box generally, in another building somewhere. Their entire methodology of getting more efficient is simply to work faster.

So, they implement sprints, or Agile, or scrum, or you name it. And what you recognize is they didn’t have a resource problem, they had a throughput problem. The throughput problem can be slightly solved using some of these methodologies, by squeezing a little bit more out of their glides.

But what you find, really, is they are developing the wrong thing. They don’t have a strategic element to their businesses. They simply develop whatever the heck they decide is important. Only now they develop it really efficiently. But the output on the other side is still not very beneficial to the business.

If you input a little bit of strategy in front of that and get the business to decide what it is that they want you to develop – then all of a sudden your throughput goes through the roof. And that’s because you have broken down barriers and brought together the [major business elements], and it didn’t take a lot. A little bit of demand management with an approval process can make development 50 percent more efficient — if you can simply get them working on what’s important.

It’s not enough to continue to stab at these small problems while no one has yet said, “Okay, timeout. There is a lot more to this information that we need.” You can take inspiration from the manufacturing crisis in the 1980s. Making an automobile engine conveyor line faster isn’t going to help if you are building the wrong engines or you can’t get the parts in. You have to view it holistically. Once you view it holistically, you can go back and make the assembly lines work faster. Do that and sky is the limit.

Gardner: So IT4IT helps foster “simultaneous IT operations,” a nice and modern follow-on to simultaneous engineering innovations of the past.

Mark, you use IT4IT internally at ServiceNow. How does IT4IT help ServiceNow be a better IT services company?

IT to create and consume products

Bodman: A lot of the activities at ServiceNow are for creating the IT Service Management (ITSM) products that we sell on the market, but we also consume them. As a product manager, a lot of my job is interfacing with other product managers, dealing with integration points, and having data discussions.

As we make the product better, we automatically make our IT organization better because we are consuming it. Our customer is our IT shop, and we deploy our products to manage our products. It’s a very nice, natural, and recursive relationship. As the company gets better at product management, we can get more products out there. And that’s the goal for many IT shops. You are not creating IT for IT’s sake, you are creating IT to provide products to your customers.

Gardner: Rob, at Fruition Partners, a DXC company, you have many clients that use IT4IT. Do you have a use case that demonstrates how powerful it can be?

Akershoek: Yes, I have a good example of an insurance organization where they have been forced to reduce significantly the cost to develop and maintain IT services.

Initially, they said, “Oh, we are going to automate and monitor DevOps.” When I showed them IT4IT they said, “Well, we are already doing that.” And I said, “Why don’t you have the results yet?” And they said, “Well, we are working on it, come back in three months.”

But after that period of time, they still were not succeeding with speed. We said, “Use IT4IT, take it to specific application teams, and then move to cloud, in this case, Azure Cloud. Show that you can do it end-to-end from strategy into an operation, end-to-end in three months’ time and demonstrate that it works.”

And that’s what has been done, it saved time and created transparency. With that outcome they realized, “Oh, we would have never been able to achieve that if we had continued the way we did it in the past.” 

Gardner: John, at HPE Pointnext, you are involved with digital transformation, the highest order of strategic endeavors and among the most important for companies nowadays. When you are trying to transform an organization – to become more digital, data-driven, intelligent, and responsive — how does IT4IT help?

Esler: When companies do big, strategic things to try and become a digital enterprise, they implement a lot of tools to help. That includes automation and orchestration tools to make things go faster and get more services out.

But they forget about the operating model underneath it all and they don’t see the value. A big drug company I worked with was expecting a 30 percent cost reduction after implementing such tools, and they didn’t get it. And they were scratching their heads, asking, “Why?”

We went in and used IT4IT as a foundation to help them understand where they needed change. In addition to using some tools that HPE has, that helped them to understand — across different domains, depending on the level of service they want to provide to their customers — what they needed to change. They were able to learn what that kind of organization looks like when it’s all said and done.

Gardner: Lars, Micro Focus has 4,000 to 5,000 developers and needs to put software out in a timely fashion. How has IT4IT helped you internally to become a better development organization?

Streamlining increases productivity

Rossen: We used what is by now a standard technique in IT4IT, to do rationalization. Over a year, we managed to convert it all into a single tool chain that 80 percent of the developers are on.

With that we are now much more agile in delivering products to market. We can do much more sharing. So instead of taking a year, we can do the same easily every three months. But we also have hot fixes and a change focus. We probably have 20 releases a day. And on top of that, we can do a lot more sharing on components. We can align much more to a common strategy around how all our products are being developed and delivered to our customers. It’s been a massive change.

Gardner: Before we close out, I’d like to think about the future. We have established that IT4IT has backward compatibility, that if you are a legacy-oriented IT department, the reference architecture for IT management can be very powerful for alignment to newer services development and use.

But there are so many new things coming on, such as AIOps, AI, machine learning (ML), and data-driven and analytics-driven business applications. We are also finding increased hybrid cloud and multi-cloud complexity across deployment models. And better managing total costs to best operate across such a hybrid IT environment is also very important.

So, let’s take a pause and say, “Okay, how does IT4IT operate as a powerful influence two to three years from now?” Is IT4IT something that provides future-proofing benefits?

The future belongs to IT4IT 

Bennett: Nothing is future-proof, but I would argue that we really needed IT4IT 20 years ago — and we didn’t have it. And we are now in a pretty big mess.

There is nothing magical here. It’s been well thought-out and well-written, but there is nothing new in there. IT4IT is how it ought to have been for a while and it took a group of people to get together and sit down and architect it out, end-to-end.

Theoretically it could have been done in the 1980s and it would still be relevant because they were doing the same thing. There isn’t anything new in IT, there are lots of new-fangled toys. But that’s all just minutia. The foundation hasn’t changed. I would argue that in 2040 IT4IT will still be relevant.



Gardner: Varun, do you feel that organizations that adopt IT4IT are in a better position to grow, adapt, and implement newer technologies and approaches? 

Vijaykumar: Yes, definitely, because IT4IT – although it caters to the traditional IT operating models — also introduces a lot of new concepts that were not in existence earlier. You should look at some of the concepts like service brokering, catalog aggregation, and bringing in the role of a service integrator. All of these are things that may have been in existence, but there was no real structure around them.

IT4IT provides a consolidated framework for us to embrace all of these capabilities and to drive improvements in the industry. Coupled with advances in computing — where everything gets delivered on the fly – and where end users and consumers expect a lot more out of IT, I think IT4IT helps in that direction as well.

Gardner: Lars, looking to the future, how do you think IT4IT will be appreciated by a highly data-driven organization?

Rossen: Well, IT4IT was a data architecture to begin with. So, in that sense it was the first time that IT itself got a data architecture that was generic. Hopefully that gives it a long future.

I also like to think about it as being like roads we are building. We now have the roads to do whatever we want. Eventually you stop caring about it, it’s just there. I hope that 20 years from now nobody will be discussing this, they will just be doing it.

The data model advantage

Gardner: Another important aspect to running a well-greased IT organization — despite the complexity and growing responsibility — is to be better organized and to better understand yourself. That means having better data models about IT. Do you think that IT4IT-oriented shops have an advantage when it comes to better data models about IT?

Bodman: Yes, absolutely. One of the things we just produced within the [IT4IT reference architecture data model] is a reporting capability for key performance indicators (KPI) guidance. We are now able to show what kinds of KPIs you can get from the data model — and be very prescriptive about it.

In the past there had been different camps and different ways of measuring and doing things. Of course, it’s hard to benchmark yourself comprehensively that way, so it’s really important to have consistency there in a way that allows you to really improve.


The second part — and this is something new in IT4IT that is fundamental — is the value stream has a “request to fulfill (R2F)” capability. It’s now possible to have a top-line, self-service way to engage with IT in a way that’s in a catalog and that is easy to consume and focused on a specific experience. That’s an element that has been missing. It may have been out there in pockets, but now it’s baked in. It’s just fabric, taught in schools, and you just basically implement it.

Rossen: The new R2F capability allows an IT organization to transform, from being a cost center that does what people ask, to becoming a service provider and eventually a service broker, which is where you really want to be.

Esler: I started in this industry in the mainframe days. The concept of shared services was prevalent, so time-sharing, right? It’s the same thing. It hasn’t really changed. It’s evolved and going through different changes, but the advent of the PC in the 1980s didn’t change the model that much.

Now with hyperconvergence, it’s moving back to that mainframe-like thing where you define a machine by software. You can define a data center by software.



Gardner: For those listening and reading and who are intrigued by IT4IT and would like to learn more, where can they go and find out more about where the rubber meets the IT road?

Akershoek: The best way is going to The Open Group website. There’s a lot of information on the reference architecture itself, case studies, and video materials. 

To get started, you can typically begin small. Look at the materials, try to understand how you currently operate your IT organization, and map it to the reference architecture.

That provides an immediate sense of what you may be missing, where you are duplicating effort, or where too much is going on without governance. You can begin to create a picture of your IT organization. That’s the first step toward co-creating, with your own organization, a bigger picture and deciding where you want to go next.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: The Open Group.


IT kit sustainability: A business advantage and balm for the planet

The next BriefingsDirect sustainable resources improvement interview examines how more companies are plunging into the circular economy to make the most of their existing IT and business assets.

We’ll now hear how more enterprises are optimizing their IT kit and finding innovative means to reduce waste — as well as reduce energy consumption and therefore their carbon footprint. Stay with us as we learn how a circular economy mindset both improves sustainability as a benefit to individual companies as well as the overall environment.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to help us explore the latest approaches to sustainable IT is William McDonough, Chief Executive of McDonough Innovation and Founder of William McDonough and Partners, and Gabrielle Ginér, Head of Environmental Sustainability for BT Group, based in London. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: William, what are the top trends driving the need for reducing waste, redundancy, and inefficiency in the IT department and data center?


McDonough: Materials and energy are both fundamental, and I think people who work in IT systems, which are often highly optimized, have difficulty with the concept of waste. What this is about is eliminating the entire concept of waste. So, one thing’s waste is another thing’s food — and when we don’t waste time, we have food for thought.

A lot of people realize that it’s great to do the right thing, and that would be to not destroy the planet in the process of what you do every day. But it’s also great to do it the right way. When we see the idea of redesigning things to be safe and healthy, and then we find ways to circulate them ad infinitum, we are designing for next use — instead of end of life. So it’s an exciting thing.

Gardner: If my example as an individual is any indication, I have this closet full of stuff that’s been building up for probably 15 years. I have phones and PCs and cables and modems in there that are outdated but that I just haven’t gotten around to dealing with. If that’s the indication on an individual home level, I can hardly imagine the scale of this at the enterprise and business level globally. How big is it?

Devices designed for reuse

McDonough: It’s as big as you think it is, everywhere. What we are looking at is that design is the first signal of human intention. If we design these things to be disassembled and reusable, we therefore design for next use. That’s the fundamental shift, that we are now designing differently. We don’t say we design for one-time use: Take, make, waste. We instead design it for what’s next. 

And it’s really important, especially in IT, because these things, in a certain way, they are ephemeral. We call them durables, but they are actually only meant to last a certain amount of time before we move onto the next big thing.

Learn How to Begin

Your IT Circular Economy Journey

For any phone designed in the last 25 years, the odds of you using the same phone for 25 years are pretty low. The notion that we can design these things to become useful again quickly is really part of the new system. We now see the recycling of phone boards that actually go all the way back to base materials in very cost-effective ways. You can mine gold at $210 a ton out there, or you can mine phone boards at about $27,000 a ton. So that’s pretty exciting. 
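As a rough back-of-the-envelope check on those figures (using only the per-ton values quoted above; no other data is assumed), that works out to roughly a 130-fold difference in value density:

```python
# Per-ton values as quoted in the interview above (USD).
gold_ore_value_per_ton = 210        # mined gold ore
phone_board_value_per_ton = 27_000  # recycled phone boards

# How many times more valuable is a ton of phone boards?
ratio = phone_board_value_per_ton / gold_ore_value_per_ton
print(f"Phone boards yield about {ratio:.0f}x more value per ton than gold ore.")
```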

Gardner: There are clearly economic rationales for doing the right thing. Gabrielle, tell us why this is important to BT as a telecommunications leader.


Ginér: We have seen change in how we deal with and talk to consumers about this. We actually encourage them now to return their phones. We are paying for them. Customers can just walk into a store and get money back. That’s a really powerful incentive for people to return their phones.

Gardner: This concept of design for reuse and recovery is part of the cradle-to-cradle design concept that you have helped establish, William. Tell us how your book, Cradle to Cradle, led to the idea of a circular economy.

Reuse renews the planet

McDonough: When we first posited Cradle to Cradle, we said you can look at the Earth and realize there are two fundamental systems at play. One is the biological system of which we are a part, the natural systems. And in those systems waste equals food. It wants to be safe and healthy, including the things you wear, the water, the food, all those things, those are biological nutrients.

Then we have technology. Once we started banging on rocks and making metals and plastics and things like that, that’s really technical nutrition. It’s another metabolism. So we don’t want to get the two confused. 

When we talk about lifecycle, we like to say that living things have a lifecycle. But your telephone is not a living thing — and yet we talk about it having a lifecycle, and then an end of life. Well, wait a minute, it’s not alive. It talks to you, but it’s not alive. So really it’s a product or service. 

In Cradle to Cradle we say there are things that our biology needs to be safe, healthy, and to go back to the soil safely. And then there is technology. Technology needs to come back into technology and to be used over and over again. It's for our use.

And so, this brings up the concept we introduced, which is product-as-a-service. What you actually want from the phone is not 4,600 different kinds of chemicals. You want a telephone you can talk into for a certain period of time. And it’s a service you want, really. And we see this being products-as-services, and that becomes the circular economy.

Once you see that, you design it for that use. Instead of saying, "Design for end-of-life; I am going to throw it in a landfill," you say, "I design it for next use." That means it's designed for disassembly. We know we are going to use it again. It becomes part of a circular economy, which will grow the economy because we are doing it again and again.

Gardner: This approach seems to be win-win-win. There are lots of incentives, lots of rationales for not only doing well, but for doing good as companies. For example, Hewlett Packard Enterprise (HPE) recently announced a big initiative about this.

Another part of this in the IT field that people don't appreciate is the amount of energy that goes into massive data centers. The hyperscale cloud companies are investing billions of dollars a year in these new data centers. It financially behooves them to consume less energy, but the amount of energy that data centers need is growing at a fantastic rate, and it therefore represents a growing percentage of the overall carbon footprint.

William, do carbon and energy also need to be considered in this whole circular economy equation?

Intelligent energy management

McDonough: Clearly with the issues concerning climate and energy management, yes. If our energy is coming from fossil fuels, we have fugitive carbon in the atmosphere. That’s something that’s now toxic. We know that. A toxin is material in the wrong place, wrong dose, wrong duration, so this has to be dealt with.

Some major IT companies are leading in this, including Apple, Google, Facebook, and BT. This is quite phenomenal, really. They are reducing their energy consumption by being efficient. They are also adding renewables to their mix, to the point that renewables are going to be a major part of their power use — and it's renewably sourced and carbon-free. That's really interesting.

Learn How to Begin

Your IT Circular Economy Journey

When we realize the dynamic of the energy required to move data — and that the people who do this have the possibility of doing it with renewably powered means – this is a harbinger for something really critical. We can do this with renewable energy while still using electricity. It’s not like asking some heating plant to shift gears quickly or some transportation system to change its power systems; those things are good too, but this industry is based on being intelligent and understanding the statistical significance of what you do.

Gardner: Gabrielle, how is BT looking at the carbon and energy equation and helping to be more sustainable, not only in its own operations, but across your supply chain, all the companies that you work with as partners and vendors?

Ginér: Back to William’s point, two things stand out. One, we are focused on being more energy efficient. Even though we are seeing data traffic grow by around 40 percent per year, we now have nine consecutive years of reducing energy consumption in our networks.

To the second point around renewable energy, we have an ambition to be using 100 percent renewable electricity by 2020. Last year we were at 81 percent, and I am pleased to say that we did a couple of new deals recently, and we are now up at 96 percent. So, we are getting there in terms of the renewables.

What’s been remarkable is how we have seen companies come together in coalitions that have really driven the demand and supply of renewable energy, which has been absolutely fantastic.

As for how we work with our suppliers like HPE, for example, as a customer we have a really important role to play in sending demand signals to our suppliers of what we are looking for. And obviously we are looking for our suppliers to be more sustainable. The initiatives that HPE announced recently in Madrid are absolutely fantastic and are what we are looking for.

Gardner: It’s great to hear about companies like BT that are taking a bellwether approach to this leadership position. HPE is being aggressive in terms of how it encourages companies to recycle and use more data center kit that’s been reconditioned so that you get more and more life out of the same resources.

But if you are not aggressive, if you are not on the leadership trajectory in terms of sustainability, what’s the likely outcome in a few years?

Smart, sustainable IT 

McDonough: This is a key question. When a supplier company like HPE says, “We are going to care about this,” what I like about that is it’s a signal that they are providing services. A lot of the companies — when they are trying to survive in business or trying to move through different agendas to manage modern commerce — they may not have time to figure out how to get renewably powered.

But for the ones that do know how to manage those things, it becomes just part of a service. That's a really elegant thing. So a company like HPE can say, "Okay, how many problems of yours can we solve? Oh, we will solve that one for you, too. Here, you do what you do, we will all do what we do — and we will all do this together." So, I think the notion that it becomes part of the service is a very elegant thing.

As we see AI coming in, we have to remember there is this thing called human intelligence that goes with it, and there is natural intelligence that goes with being in the world.

Gardner: A lot of companies have sustainability organizations, like BT. But how closely are they aligned with the IT organization? Do IT organizations need to create their own sustainability leaders? How should companies point more of the arrow in the IT department's direction?

McDonough: IT is really critical now because it's at the core of operations. It touches all the information that's moving through the system. That's the place where we can inform the activities and our intentions. But the point today is that, as we see artificial intelligence (AI) coming in, we have to remember there is this thing called human intelligence that goes with it, and there is a natural intelligence that goes with being in the world.

We should begin with our values of what is the right thing to do. We talked about what’s right and wrong, or what’s good and bad. Aristotle talked about what is less and more; truth in number. So, when we combine these two, you really have to begin with your values first. Do the right thing, and then go to the value, and do it all the right way.

And that means, let’s not get confused. Because if you are being less bad and you think it’s good, you have to stop and think because you are being bad by definition, just less so. So, we get confused.

Circular Economy Report

Guides Users to 

Increased Sustainability

What we really want to be is more good. Let’s do less bad for sure, but let’s also go out and do more good. And the statistical reference points for data are going to come through the IT to help us determine that. So, the IT department is actually the traffic control for good corporate behavior. 

Gardner: Gabrielle, some thoughts about why sustainability is an important driver for BT in general, and maybe some insights into how the IT function in particular can benefit?

Ginér: I don’t think we need a separate sustainability function for IT. It comes back to what William mentioned about values. For BT, sustainability is part of the company’s ethos. We want to see that throughout our organization. I sit in a central team, but we work closely with IT. It’s part of sharing a common vision and a common goal.

Positive actions, profitable results

Gardner: For those organizations planning on a hybrid IT future, where they are making decisions about how much public cloud, private cloud, and traditional IT — perhaps they should be factoring more about sustainability in terms of a lifecycle of products and the all-important carbon and energy equation.

How do we put numbers on this in ways that IT people can then justify against those all-important total cost of ownership and return on investment calculations across hybrid IT choices?

McDonough: Since the only constant in modern business life is high-speed change, you have to have change built into your day-to-day operations. And so, what is the change? The change will have an impact. The question is will it have a positive impact or a negative impact? If we look at the business, we want a positive impact economically; for the environment, we would like to have a positive impact there, too.

Since the only constant in modern business life is high-speed change … for the environment we would like to have a positive impact there, too.

When you look at all of that together as one top-line behavior, you realize it's about revenue generation, not just about profit derivation. So, you are not just trying to squeeze out every penny to get profit, which is what's left over; that's the manager's job. You are trying to figure out what's the right thing to do and bring in the revenue; that's the executive's job. 

The executives see this and realize it’s about revenue generation actually. And so, we can balance our CAPEX and our OPEX and we can do optimization across it. That means a lot of equipment that’s sitting out there that might be suboptimal is still serviceable. It’s a valuable asset. Let it run but be ready to refurbish it when the time comes. In the meantime, you are going to shift to the faster, better systems that are optimized across the entire platform. Because then you start saving energy, you start saving money, and that’s all there is to it.

Gardner: It seems like we are at the right time in the economy, and in the evolution of IT, for the economics to be working in favor of sustainability initiatives. It's no coincidence that HPE is talking more about the economics of IT alongside sustainability issues. The two are very closely linked.

Do you have studies at BT that help you make the economic case for sustainability, and not just that it’s the good or proper thing to do?

Ginér: Oh, yes, most definitely. Just last year through our Energy Efficiency Program, we saved 29 million pounds, and since we began looking at this in 2009-2010, we have saved more than 250 million pounds. So, there is definitely an economic case for being energy efficient.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:


Industrial-strength wearables combine with collaboration cloud to bring anywhere expertise to intelligent-edge work

The next BriefingsDirect industrial-edge innovation use-case examines how RealWear, Inc. and Hewlett Packard Enterprise (HPE) MyRoom combine to provide workers in harsh conditions ease in accessing and interacting with the best intelligence.

Stay with us to learn how a hands-free, voice-activated, and multimedia wearable computer solves the last few feet issue for delivering a business’ best data and visual assets to some of its most critical onsite workers.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

Here to describe the new high-water mark for wearable augmented collaboration technologies are Jan Josephson, Sales Director for EMEA at RealWear, and John “JT” Thurgood, Director of Sales for UK, Ireland, and Benelux at RealWear. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: A variety of technologies have come together to create the RealWear solution. Tell us why nowadays a hands-free, wearable computer needs to support multimedia and collaboration solutions to get the job done.

Thurgood: Over time, our industrial workers have moved through a digitization journey as they find the best ways to maintain and manage equipment in the field. They need a range of tools and data to do that. So, it could be an engineer wearing personal protective equipment in the field. He may be up on scaffolding. He typically needs a big bundle of paperwork, such as visual schematics, and all kinds of authorization documents. This is typically what an engineer takes into the field. What we are trying to do is make his life easier.


You can imagine it. An engineer gets to an industrial site, gets permission to be near the equipment, and has his schematics and drawings he takes into that often-harsh environment. His hands are full. He’s trying to balance and juggle everything while trying to work his way through that authorization process prior to actually getting on and doing the job – of being an engineer or a technician.

We take that need for physical documentation away from him and put it on an Android device, which is totally voice-controlled and hands-free. A gyroscope built into the device allows specific and appropriate access to all of those documents. He can even freeze the view at particular points in a document. He can refer to it visually by glancing down, because the screen is just below his eye-line.

The information is available but not interfering from a safety perspective, and it’s not stopping him from doing his job. He has that screen access while working with his hands. The speakers in the unit also help guide him via verbal instructions through whatever the process may be, and he doesn’t even have to be looking at documentation.

Learn More About Software-Defined

And Hybrid Cloud Solutions

That Reduce Complexity

He can follow work orders and processes. And, if he hits a brick wall — he gets to a problem where, even after following the work processes and going through the documentation, things still don't look right — what does he do? Well, he needs to phone a buddy, right? The way he does that is through the visual remote guidance (VRG) MyRoom solution from HPE.

He gets the appropriate expert on the line, and that expert can be thousands of miles away. The expert can see what’s going on through the 16-megapixel camera on the RealWear device. And he can talk him through the problem, even in harsh conditions because there are four noise-canceling microphones on the device. So, the expert can give detailed, real-time guidance as to how to solve the problem.

You know, Dana, typically that would take weeks of waiting for an expert to be available. The cost involved in getting the guy on-site to go and resolve the issue is expensive. Now we are enabling that end-technician to get any assistance he needs, once he is at the right place, at the right time.

Gardner: What was the impetus to create the RealWear HMT-1? Was there a specific use case or demand that spurred the design?

Military inspiration, enterprise adoption

Thurgood: Our chief technology officer (CTO), Dr. Chris Parkinson, was working in another organization that was focused on manufacturing military-grade screens. He saw an application opportunity for that in the enterprise environment.

And it now has wide applicability — whether it's in the oil and gas industry, automotive, or construction. I've even had journalists wanting to use this device, like having a mobile cameraman.

He foresaw a wide range of use-cases, and so worked with a team — with our chief executive officer (CEO), Andy Lowery — to pull together a device. That design is IP66-rated, it's hardened, and it can be used in all weather, from -20°C to 50°C, to do all sorts of different jobs.

There was nothing in the marketplace that provides these capabilities. We now have more than 10,000 RealWear devices in the field in all sorts of vertical industries.

The impetus was that there was nothing in the marketplace that provides these capabilities. People today are using iPads and tablets to do their jobs, but their hands are full. You can’t do the rest of the tasks that you may need to do using your hands.

We now have more than 10,000 RealWear devices in the field in all sorts of industrial areas. I have named a few verticals, but we’re discovering new verticals day-by-day.

Gardner: Jan, what were some of the requirements that led you to collaborate with HPE MyRoom and VRG? Why was that such a good fit?

Josephson: There are a couple of things HPE does extremely well in this field. In these remote, expert applications in particular, HPE designed their applications really well from a user experience (UX) perspective.


At the end of the day, we have users out there and many of them are not necessarily engineers. So the UX side of an application is very important. You can’t have a lot of things clogging up your screen and making things too complicated. The interface has to be super simple.

The other thing that is really important for our customers is the way HPE does compression with their networked applications. This is essential because many times — if you are out on an oil rig or in the middle of nowhere — you don’t have the luxury of Wi-Fi or a 4G network. You are in the field.

The HPE solution, due to the compression, enables very high-quality video even at very-low bandwidth. This is very important for a lot of our customers. HPE is also taking their platform and enabling it to operate on-premises. That is becoming important because of security requirements. Some of the large users want a complete solution inside of their firewall.

So it’s a very impressive piece of software, and we’re very happy that we are in this partnership with HPE MyRoom.

Gardner: In effect, it’s a cloud application now — but it can become a hybrid application, too.

Connected from the core to the edge

Thurgood: What’s really unique, too, is that HPE has now built-in object recognition within the toolset. So imagine you’re wearing the RealWear HMT-1, you’re looking at a pump, a gas filter, or some industrial object. The technology is now able to identify that object and provide you with the exact work orders and documentation related to it.

We’re now able to expand out from the historic use-case of expert remote visual guidance support into doing so much more. HPE has really pushed the boundaries out on the solution.

Gardner: It’s a striking example of the newfound power of connecting a core cloud capability with an edge device, and with full interactivity. Ultimately, this model brings the power of artificial intelligence (AI) running on a data center to that edge, and so combines it with the best of human intelligence and dexterity. It’s the best of all worlds.

JT, how is this device going to spur new kinds of edge intelligence?

Thurgood: It’s another great question because 5G is now coming to bear as well as Wi-Fi. So, all of a sudden, almost no matter where you are, you can have devices that are always connected via broadband. The connectivity will become ubiquitous.

Now, what does that do? It means never having an offline device. All of the data, all of your Internet of Things (IoT) analytics and augmented and assisted reality will all be made available to that remote user.

So, we are looking at the superhuman versions of engineers and technicians. Historically you had a guy with paperwork. Now, if he’s always connected, he always has all the right documentation and is able to act and resolve tasks with all of the power and the assistance he needs. And it’s always available right now.

So, yes, we are going to see more intellectual value being moved down to the remote, edge user.

At RealWear, we see ourselves as a knowledge-transfer company. We want the user of this device to be the conduit through which you can feed all cloud-analyzed data. As time goes by, some of the applications will reside in the cloud as well as on the local device. For higher-order analytics there is a hell of a lot of churning of data required to provide the best end results. So, that’s our prediction.

Gardner: When you can extend the best intelligence to any expert around the world, it's a very powerful concept.

For those listening to or reading this podcast, please describe the HMT-1 device. It’s fairly small and resides within a helmet.

Using your headwear

Thurgood: We have a horseshoe-shaped device with a screen out in front. Typically, it’s worn within a hat. Let’s imagine, you have a standard cap on your head. It attaches to the cap with two clips on the sides. You then have a screen that protrudes from the front of the device that is held just below your eye-line. The camera is mounted on the side. It becomes a head-worn tablet computer.

It can be worn in hard hats, bump caps, normal baseball caps, or just with straps (and no hat). It performs regardless of the environment you are in — wind, rain, or gales, such as when working on an offshore oil and gas rig. Or, if you are an automotive technician working in a noisy garage, it simply complements the protective equipment you need to use in the field.

Gardner: When you can bring this level of intelligence and instant access to experts to the edge, wherever it is, you're talking about new economics. These types of industrial use cases often involve processes where downtime means huge amounts of money lost. Quickly intercepting a problem and solving it fast can make a huge difference.

Do you have examples that provide a sense of the qualitative and quantitative benefits when this is put to good use?

Thurgood: There are a number of examples. Take automotive to start with. If you have a problem with your vehicle today, you typically take it to a dealership. That dealer will try to resolve the issue as quickly as it can. Let’s say the dealership can’t. There is a fault on the car that needs some expert assistance. Today, the dealership phones the head office and says, “Hey, I need an expert to come down and join us. When can you join us?” And there is typically a long delay.

So, what does that mean? That means my vehicle is off the road. It means I have to have a replacement vehicle. And that expert has to come out from head office to spend time traveling to be on-site to resolve the issue.

What can happen now using the RealWear device in conjunction with the HPE VRG MyRoom is that the technician contacts the expert engineer remotely and gets immediate feedback and assistance on resolving the fault. As you can imagine, the customer experience is vastly improved based on resolving the issue in minutes – and not hours, days, or even weeks.

Josephson: It’s a good example because everyone can relate to a car. Also, nowadays the car manufacturers are pushing a lot more technology into the cars. They are almost computers on wheels. When a car has a problem, chances are very slim you will have the skill-set needed in that local garage.

The whole automotive industry has a big challenge because they have all of these people in the field who need to learn a lot. Doing it the traditional way — of getting them all into a classroom for six weeks — just doesn’t cut it. So, it’s now all about incident-based, real-time learning.

Another benefit is that we can record everything in MyRoom. So if I have a session that solves a particular problem, I can take that recording and I have a value of one-to-many rather than one-to-one. I can begin building up my intellectual property, my FAQs, my better customer service. A whole range of values are being put in front here.

Gardner: You’re creating an archive, not just a spot solution. That archive can then be easily accessible at the right time and any place.

Josephson: Right.

Gardner: For those listeners wondering whether RealWear and VRG are applicable to their vertical industry, or their particular problem set, what are a couple of key questions that they might ask themselves?

Shared know-how saves time and money

Thurgood: Do your technicians and engineers need to use their hands? Do they need to be hands-free? If so, you need a device like this. It’s voice-controlled, it’s mounted on your head.

Do they wear personal protective equipment (PPE)? Do they have to wear gloves? If so, it's really difficult to use a stylus or poke the screen of a tablet. With RealWear, we provide a totally hands-free, eyes-forward, very safe deployment of knowledge-transfer technology in the field.

If you need your hands free in the field, or if you’re working outdoors, up on towers and so on, it’s a good use of the device.

Josephson: Also, if your business includes field engineers who travel, how many traveling days are wasted on going back because someone forgot something, or didn't have the right skill-set on the first trip?

If instead you can always have someone available via the device to validate what you think is wrong, and potentially fix it, that's a huge savings: fewer return or duplicate trips.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:


How the data science profession is growing in value and impact across the business world

The next BriefingsDirect business trends panel discussion explores how the role of the data scientist in the enterprise is expanding in both importance and influence.

Data scientists are now among the most highly sought-after professionals, and they are being called on to work more closely than ever with enterprise strategists to predict emerging trends, optimize outcomes, and create entirely new kinds of business value.  

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To learn more about modern data scientists, how they operate, and why a new level of business analysis professional certification has been created by The Open Group, we are joined by Martin Fleming, Vice President, Chief Analytics Officer, and Chief Economist at IBM; Maureen Norton, IBM Global Data Scientist Professional Lead, Distinguished Market Intelligence Professional, and author of Analytics Across the Enterprise; and George Stark, Distinguished Engineer for IT Operations Analytics at IBM. The panel is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: We are now characterizing the data scientist as a profession. Why have we elevated the role to this level, Martin? 


Fleming: The benefits we have from the technology that’s now available allow us to bring together the more traditional skills in the space of mathematics and statistics with computer science and data engineering. The technology wasn’t as useful just 18 months ago. It’s all about the very rapid pace of change in technology.

Gardner: Data scientists used to be behind-the-scenes people; sneakers, beards, white lab coats, if you will. What’s changed to now make them more prominent?


Norton: Today’s data scientists are consulting with the major leaders in each corporation and enterprise. They are consultants to them. So they are not in the back room, mulling around in the data anymore. They’re taking the insights they’re able to glean and support with facts and using them to provide recommendations and to provide insights into the business.

Gardner: Most companies now recognize that being data-driven is an imperative. They can’t succeed in today’s world without being data-driven. But many have a hard time getting there. It’s easier said than done. How can the data scientist as a professional close that gap?


Stark: The biggest obstacle to integrating data sources is disparate data systems. The financial system is always separate from the operational system, which is separate from the human resources (HR) system. You need to combine those and make sure they're all in the same units and the same timeframe, combined in a way that can answer two questions. You have to answer, "So what?" And you have to answer, "What if?" That's really the challenge of data science.
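As a toy sketch of the alignment work Stark describes — the systems, figures, and field names here are invented for illustration, not taken from the discussion — getting two disparate sources into the same units and timeframe before joining them might look like this:

```python
from collections import defaultdict
from datetime import date

# Hypothetical extracts from two disparate systems: finance reports
# monthly cost in thousands of dollars; operations logs daily compute hours.
finance_kusd = {"2019-01": 120, "2019-02": 135}
ops_daily = [
    (date(2019, 1, 15), 400),
    (date(2019, 1, 30), 380),
    (date(2019, 2, 10), 450),
]

# Step 1: normalize units -- convert thousands of USD to plain USD.
finance_usd = {month: k * 1_000 for month, k in finance_kusd.items()}

# Step 2: normalize timeframes -- roll daily hours up to calendar months.
ops_monthly = defaultdict(float)
for day, hours in ops_daily:
    ops_monthly[day.strftime("%Y-%m")] += hours

# Step 3: join on the shared key to answer a "so what?" question:
# what does an hour of compute actually cost each month?
usd_per_hour = {
    month: finance_usd[month] / ops_monthly[month]
    for month in finance_usd if month in ops_monthly
}
```

Only after both normalization steps does the join produce a table that can support "so what?" and "what if?" questions; joining the raw extracts directly would silently mix incompatible units and time grains.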

Gardner: An awful lot still has to go on behind the scenes before you get to the point where the “a-ha” moments and the strategic inputs take place.

Martin, how will the nature of work change now that the data scientist as a profession is arriving – and probably just at the right time?

Fleming: The insights that data scientists provide allow organizations to understand where the opportunities are to improve productivity, and how they can help make workers more effective and productive and create more value. This enhances the role of individual employees. And it's that value creation, the integration of the data that George talked about, and the use of analytic tools that are driving fundamental changes across many organizations.

Captain of the data team

Gardner: Is there any standardization as to how the data scientist is being organized within companies? Do they typically report to a certain C-suite executive or another? Has that settled out yet? Or are we still in a period of churn as to where the data scientist, as a professional, fits in?

Norton: We’re still seeing a fair amount of churn. Different organizing approaches have been tried. For example, the centralized center of excellence that supports other business units across a company has a lot of believers and followers.

The economies of scale in that approach help. It's difficult to find one person with all of the skills you might need. I'm describing the role of consultant to the presidents of companies. Sometimes you can't find all of that in one individual — but you can build teams that have complementary skills. We like to say that data science is a team sport.

Gardner: George, are we focusing the new data scientist certification on the group or the individual? Have we progressed from the individual to the group yet?

Stark: I don’t believe we are there yet. We’re still certifying at the individual level. But as Maureen said, and as Martin alluded to, the group approach has a large effect on how you get certified and what kinds of solutions you come up with. 

Gardner: Does the certification also address the managerial side, certifying data scientists to organize that group or office in a methodical, proven way?

Learn How to Become

Certified as a

Data Scientist

Fleming: The certification we are announcing focuses not only on the technical skills of a data scientist, but also on project management and project leadership. So as data scientists progress through their careers, the more senior folks are certainly in a position to take on significant leadership and management roles.

And we are seeing over time, as George referenced, a structure beginning to appear. First in the technology industry, and over time, we’ll see it in other industries. But the technology firms whose names we are all familiar with are the ones who have really taken the lead in putting the structure together.

Gardner: How has the “day in the life” of the typical data scientist changed in the last 10 years?

Stark: It’s scary to say, but I have been a data scientist for 30 years. I began writing my own Fortran 77 code to integrate datasets to do eigenvalues and eigenvectors and build models that would discriminate among key objects and allow us to predict what something was.

The difference today is that I can do that in an afternoon. We have the tools, the datasets, and all the capabilities, with visualization tools such as SPSS, IBM Watson, and Tableau. Things that used to take me months now take a day and a half. It’s incredible, the change.
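Purely as an illustrative sketch of the kind of workflow George describes — an eigendecomposition feeding a simple discriminant model, on synthetic data, not his actual models — the work that once meant hand-written Fortran 77 now takes a few lines of Python with NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two synthetic classes of 5-dimensional observations (hypothetical data)
X = np.vstack([rng.normal(0.0, 1.0, (100, 5)),
               rng.normal(1.0, 1.0, (100, 5))])
y = np.array([0] * 100 + [1] * 100)

# Eigenvalues and eigenvectors of the covariance matrix: the PCA-style
# step that once required custom numerical code
eigvals, eigvecs = np.linalg.eigh(np.cov(X, rowvar=False))

# Project onto the top two principal components
X_proj = X @ eigvecs[:, -2:]

# A minimal discriminant model: classify by nearest class centroid
centroids = np.array([X_proj[y == c].mean(axis=0) for c in (0, 1)])
dists = np.linalg.norm(X_proj[:, None, :] - centroids[None, :, :], axis=2)
pred = dists.argmin(axis=1)
print(f"Training accuracy: {(pred == y).mean():.2f}")
```

Commercial tools like the ones named above wrap these same building blocks behind visual interfaces, which is a large part of the months-to-afternoon speedup.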

Gardner: Do you as a modern data scientist find yourself interpreting what the data science can do for the business people? Or are you interpreting what the business people need, and bringing that back to the data scientists? Or perhaps both?

Collaboration is key

Stark: It’s absolutely both. I was recently with a client, and we told them, “Here are some things we can do today.” And they said, “Well, what I really need is something that does this.” And I said, “Oh, well, we can do that. Here’s how we would do it.” And we showed them the roadmap. So it’s both. I will take that information back to my team and say, “Hey, we now need to build this.”

Gardner: Is there still a language, culture, or organizational divide? It seems to me that you’re talking apples and oranges when it comes to business requirements and what the data and technology can produce. How can we create a Rosetta Stone effect here?

Norton: In the certification, we are focused on ensuring that data scientists understand the business problems. Everything begins from that. Knowing how to ask the right questions, to scope the problem, and to be able to then translate is essential.

You have to look at the available data, and infer some, to come up with insights and a solution. It’s increasingly important that you begin with the problem. You don’t begin with your solution and say, “I have this many things I can work with.” It’s more like, “How are we going to solve this and draw on the innovation and creativity of the team?”

Gardner: I have been around long enough to remember when the notion of a chief information officer (CIO) was new and fresh. There are some similarities to what I remember from those conversations in what I’m hearing now. Should we think about the data scientist as a “chief” something, at the same level as a chief technology officer (CTO) or a CIO?

Chief Data Officer defined 

Fleming: There are certainly a number of organizations that have roles such as mine, where we’ve combined economics and analytics. Amazon has done it on a larger scale, given the nature of their business, with supply chains, pricing, and recommendation engines. But other firms in the technology industry have as well.

We have found that there are still three separate needs, if you will. There is an infrastructure need that CIO teams are focused on. There are significant data governance and management needs that typically chief data officers (CDOs) are focused on. And there are substantial analytics capabilities that typically chief analytics officers (CAOs) are focused on.

It’s certainly possible in many organizations to combine those roles. But in an organization the size of IBM, and other large entities, it’s very difficult because of the complexity and requirements across those three different functional areas to have that all embodied in a single individual.

Gardner: In that spectrum you just laid out – analytics, data, and systems — where does The Open Group process for a certified data scientist fit in?

Fleming: It’s really on the analytics side. A lot of what CDOs do is data engineering, creating data platforms. At IBM, we use the term Watson Data Platform because it’s built on a certain technology that’s in the public cloud. But that’s an entirely separate challenge from being able to create the analytics tools and deliver the business insights and business value that Maureen and George referred to.

Gardner: I should think this is also going to be of pertinent interest to government agencies, to nonprofits, to quasi-public-private organizations, alliances, and so forth.

Given that this has societal-level impacts, what should we think about in improving the data scientists’ career path? Do we have the means of delivering the individuals needed from our current educational tracks? How do education and certification relate to each other?

Academic avenues to certification

Fleming: A number of universities have over the past three or four years launched programs for a master’s degree in data science. We are now seeing the first graduates of those programs, and we are recruiting and hiring.

I think this will be the first year that we bring in folks who have completed a master’s in data science program. As we all know, universities change very slowly. It’s the early days, but demand will continue to grow. We have barely scratched the surface in terms of the kinds of positions and roles across different industries. 

That growth in demand will cause many university programs to grow and expand to feed that career track. It takes 15 years to create a profession, so we are in the early days of this.

Norton: With the new certification, we are doing outreach to universities because several of them have master’s in data analytics programs. They do significant capstone-type projects, with real clients and real data, to solve real problems.

We want to provide a path for them into certification so that students can earn, for example, their first project profile, or experience profile, while they are still in school.

Gardner: George, on the organic side — inside of companies where people find a variety of tracks to data scientist — where do the prospects come from? How does organic development of a data scientist professional happen inside of companies?

Stark: At IBM, in our group, Global Services in particular, we’ve developed a training program with a set of badges. People get rewarded for achievement at various levels of education. But you still need to have projects you’ve done with the techniques you’ve learned through education to get to certification.

Having education is not enough. You have to apply it to get certified.

Gardner: This is a great career path, and there is tremendous demand in the market. It also strikes me as a very fulfilling and rewarding career path. What sorts of impacts can these individuals have?


Fleming: Businesses have traditionally been managed through a profit-and-loss statement, an income statement, for the most part. There are, of course, other data sources — but they’re largely independent of each other. These include sales opportunity information in a CRM system, supply chain information in ERP systems, and financial information portrayed in an income statement. These get the most rigorous attention, shall we say.

We’re now in a position to create much richer views of the activity businesses are engaged in. We can integrate across more datasets now, including human resources data. In addition, the nature of machine learning (ML) and artificial intelligence (AI) are predictive. We are in a position to be able to not only bring the data together, we can provide a richer view of what’s transpiring at any point in time, and also generate a better view of where businesses are moving to.

It may be about defining a sought-after destination, or there may be a need to close gaps. But understanding where the business is headed in the next 3, 6, 9, and 12 months is a significant value-creation opportunity.

Gardner: Are we then thinking about a data scientist as someone who can help define what the new, best business initiatives should be? Rather than finding those through intuition, or gut instinct, or the highest paid person’s opinion, can we use the systems to tell us where our next product should come from?

Pioneers of insight

Norton: That’s certainly the direction we are headed. We will have systems that augment that kind of decision-making. I view data scientists as pioneers. They’re able to go into big data, dark data, and a lot of different places and push the boundaries to come out with insights that can inform in ways that were not possible before.

It’s a very rewarding career path because there is so much value and promise that a data scientist can bring. They will solve problems that hadn’t been addressed before.

It’s a very exciting career path. We’re excited to be launching the certification program to help data scientists gain a clear path and to make sure they can demonstrate the right skills.

Gardner: George, is this one of the better ways to change the world in the next 30 years?

Stark: I think so. If we can get more people to do data science and understand its value, I’d be really happy. It’s been fun for 30 years for me. I have had a great time.

Gardner: What comes next on the technology side that will empower the data scientists of tomorrow? We hear about things like quantum computing, distributed ledger, and other new capabilities on the horizon.

Future forecast: clouds

Fleming: In the immediate future, new benefits are largely coming because we have both public cloud and private cloud in a hybrid structure, which brings the data, the compute, and the APIs together in one place. And that allows for the kinds of tools and capabilities that are necessary to significantly improve the performance and productivity of organizations.

Blockchain is making enormous progress, and very quickly. It’s essentially a data management and storage improvement, but that opens up the opportunity for further ML and AI applications to be built on top of it. That’s moving very quickly.

Quantum computing is further down the road. But it will change the nature of computing. It’s going to take some time to get there, but it nonetheless is very important and is part of what we are looking at over the horizon.

Gardner: Maureen, what do you see on the technology side as most interesting in terms of where things could lead to the next few years for data science? 

Norton: The continued evolution of AI is pushing boundaries. One of the really interesting areas is the emphasis on transparency and ethics, to make sure that the systems are not introducing or perpetuating a bias. There is some really exciting work going on in that area that will be fun to watch going forward. 

Gardner: The data scientist needs to consider not just what can be done, but what should be done. Is that governance angle brought into the certification process now, or will that come later?

Stark: It’s brought into the certification now, when we ask how the models were validated and how they were implemented in the environment. That’s one of the things data scientists need to answer as part of the certification. We also believe that in the future we are going to need some sort of code of ethics, along with methods for bias detection and analysis: the measurement of things that don’t exist today but will have to.

Gardner: Do you have any examples of data scientists doing work that’s new, novel, and exciting?

Rock star potential

Fleming: We have a team led by a very intelligent and aggressive young woman who has put together a significant product recommendation tool for IBM. Folks familiar with IBM know it has a large number of products and offerings. In any given client situation the seller wants to be able to recommend to the client the offering that’s most useful to the client’s situation. 

And our recommendation engines can now make those recommendations to the sellers.  It really hasn’t existed in the past and is now creating enormous value — not only for the clients but for IBM as well. 

Gardner: Maureen any examples jump to mind that illustrate the potential for the data scientist? 

Norton: We wrote a book, Analytics Across the Enterprise, to explain examples across nine different business units. There have been some great examples in terms of finance, sales, marketing, and supply chain.


Gardner: Any use-case scenario come to mind where the certification may have been useful?

Norton: Certification would have been useful to an individual in the past because it helps map out how to become the best practitioner you can be. We have three different levels of certification going up to the thought leader. It’s designed to help that professional grow within it.

Stark: A young man who works for me in Brazil built a model for one of our manufacturing clients that identifies problematic infrastructure components and recommends actions to take on those components. And when the client implemented the model, they saw a 60 percent reduction in certain incidents and a 40,000-hour-a-month increase in availability for their supply chain. And we didn’t have a certification for him then — but we will have now. 

Gardner: So really big improvement. It shows that being a data scientist means you’re impactful and it puts you in the limelight.

Stark: And it was pretty spectacular because the CIO for that company stood up in front of his whole company — and in front of a group of analysts — and called him out as the data scientist that solved this problem for their company. So, yeah, he was a rock star for a couple days. 

Gardner: For those folks who might be more intrigued with a career path toward certification as a data scientist, where might they go for more information? What are the next steps when it comes to the process through The Open Group, with IBM, and the industry at large? 

Where to begin

Norton: The Open Group officially launched this in January, so anyone can go to The Open Group website and check under certifications. They will be able to read the information about how to apply. Some companies are accredited, and others can get accredited for running a version of the certification themselves. 

IBM recently went through the certification process. We have built an internal process that matches with The Open Group. People can apply either directly to The Open Group or, if they happen to be within IBM or one of the other companies who will certify, they can apply that way and get the equivalent of it being from The Open Group. 

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: The Open Group.


Why enterprises should approach procurement of hybrid IT in entirely new ways

The next BriefingsDirect hybrid IT management strategies interview explores new ways that businesses should procure and consume IT-as-a-service. We’ll now hear from an IT industry analyst on why changes in cloud deployment models are forcing a rethinking of IT economics — and maybe even the very nature of acquiring and cost-optimizing digital business services.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to help us explore the everything-as-a-service business model is Rhett Dillingham, Vice President and Senior Analyst at Moor Insights and Strategy. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What is driving change in the procurement of hybrid- and multi-cloud services?


Dillingham: What began as organic adoption — from the developers and business units seeking agility and speed — is now coming back around to the IT-focused topics of governance, orchestration across platforms, and modernization of private infrastructure.

There is also interest in hybrid cloud, as well as multi-cloud management and governance. Those amount to complexities that the public clouds are not set up for and are not able to address because they are focused on their own platforms.

Learn How to Better Manage Multi-Cloud Sprawl

Gardner: So the way you acquire IT these days isn’t apples or oranges, public or private, it’s more like … fruit salad. There are so many different ways to acquire IT services that it’s hard to measure and to optimize. 

Dillingham: And there are trade-offs. Some organizations are focused on and adopt a single public cloud vendor. But others see that as a long-term risk in management, resourcing, and maintaining flexibility as a business. So they’re adopting multiple cloud vendors, which is becoming the more popular strategic orientation.

Gardner: For those organizations that don’t want mismanaged “fruit salad” — that are trying to homogenize their acquisition of IT services even as they use hybrid cloud approaches — does this require a reevaluation of how IT in total is financed? 

Champion the cloud

Dillingham: Absolutely, and that’s something you can address, regardless of whether you’re adopting a single cloud or multiple clouds. The more you use multiple resources, the more you are going to consider tools that address multiple infrastructures — and not base your capabilities on a single vendor’s toolset. You are going to go with a cloud management vendor that produces tools that comprehensively address security, compliance, cost management, and monitoring, et cetera.

Gardner: Does the function of IT acquisitions now move outside of IT? Should companies be thinking about a chief procurement officer (CPO) or chief financial officer (CFO) becoming a part of the IT purchasing equation?

Dillingham: By virtue of the way cloud has been adopted — more by the business units — they got ahead of IT in many cases. This has been pushed back toward gaining the fuller financial view. That move doesn’t make the IT decision-maker into a CFO so much as turn them into a champion of IT. And IT goes back to being the governance arm, where traditionally they have been managing cost, security, and compliance.

It’s natural for the business units and developers to now look to IT for the right tools and capabilities, not necessarily to shed accountability but because that is the traditional role of IT, to enable those capabilities. IT is therefore set up for procurement.

IT is best set up to look at the big picture across vendors and across infrastructures rather than the individual team-by-team or business unit-by-business unit decisions that have been made so far. They need to aggregate the cloud strategy at the highest organizational level.

Gardner: A central tenet of good procurement is to look for volume discounts and to buy in bulk. Perhaps having that holistic and strategic approach to acquiring cloud services lends itself to a better bargaining position? 

Learn How to Make Hybrid IT
Dillingham: That’s absolutely the pitch of a cloud-by-cloud vendor approach, and there are trade-offs. You can certainly aggregate more spend on a single cloud vendor and potentially achieve more discounts in use by that aggregation.

The rebuttal is that on a long-term basis, your negotiating leverage in that relationship is constrained versus if you have adopted multiple cloud infrastructures and can dialogue across vendors on pricing and discounting.

Now, that may turn into more of an 80/20-, 90/10-split than a 50/50-split, but at least by having some cross-infrastructure capability — by setting yourself up with orchestration, monitoring, and governance tools that run across multiple clouds — you are at least in a strategic position from a competitive sourcing perspective.

The trade-off is the cost-aggregation and training necessary to understand how to use those different infrastructures — because they do have different interfaces, APIs, and the automation is different.

Gardner: I think that’s why we’ve seen vendors like Hewlett Packard Enterprise (HPE) put an increased emphasis on multi-cloud economics, and not just the capability to compose cloud services. The issues we’re bringing up force IT to rethink the financial implications, too. Are the vendors on to something here when it comes to providing insight and experience in managing a multi-cloud market?

Follow the multi-cloud tour guide

Dillingham: Absolutely, and certainly from the perspective that when we talk multi-cloud, we are not just talking multiple public clouds. There is a reality of large existing investments in private infrastructure that continue for various purposes. That on-premises technology also needs cost optimization, security, compliance, auditability, and customization of infrastructure for certain workloads.

That means the ultimate toolset to be considered needs to work across both public and private infrastructures. A vendor that’s looking beyond just public cloud, like HPE, and delivers a multi-cloud and hybrid cloud management orientation is set up to be a potential tour guide and strategic consultative adviser. 

And that consultative input is very valuable when you see how much pattern-matching there is across customers – and not just within the same industry but across industries. The best insights will come from knowing what it looks like to triage application portfolios, which migrations you want across cloud infrastructures, and the proper setup of comprehensive governance, control processes, and education structures.

Gardner: Right. I’m sure there are systems integrators, in addition to some vendors, that are going to help make the transition from traditional IT procurement to everything-as-a service. Their lessons learned will be very valuable.

That’s more intelligent than trying to do this on your own or go down a dark alley and make mistakes, because as we know, the cloud providers are probably not going to stand up and wave a flag if you’re spending too much money with them.

How to Solve Cost and Utilization Challenges of Hybrid Cloud

Dillingham: Yes, and the patterns of progression in cloud orientation are clear for those consultative partners, based on dozens of implementations and executions. From that experience they are far more thoroughly aware of the patterns and how to avoid falling into the traps and pitfalls along the way, more so than a single organization could expect, internally, to be savvy about.

Gardner: It’s a fast-moving target. The cloud providers are bringing out new services all the time. There are literally thousands of different cloud service SKUs for infrastructure-as-a-service, for storage-as-a-service, and for other APIs and third-party services. It becomes very complex, very dynamic.

Do you have any advice for how companies should be better managing cloud adoption? It seems to me there should be collaboration at a higher level, or a different type of management, when it comes to optimizing for multi-cloud and hybrid-cloud economics.

Cloud collaboration strategy 

Dillingham: That really comes back to the requirements of the specific IT organization. The more business units there are in the organization, the more IT is critical in driving collaboration at the highest organizational level and in being responsible for the overall cloud strategy.

Remove Complexity From Multi-Cloud and Hybrid IT

The cloud strategy across the topics of platform selection, governance, process, and people skills — that’s the type of collaboration needed. And it flows into the consultancies’ recommendations on how to avoid the traps and pitfalls, manage expectations and goals, and deliver clear outcomes on execution of projects. It also means making sure that security and compliance are considered and involved from a functional perspective – and all the way down the list of what makes it a long-term success.

The decision of what advice to bring in is really about the topic and the selection on the menu. Have you considered the uber strategy and approach? How well have you triaged your application portfolio? How can you best match capabilities to apps across infrastructures and platforms?

Do you have migration planning? How about migration execution? Those can be similar or separate items. You also have development methodologies, and the software platform choices to best support all of that along with security and compliance expertise. These are all aspects certain consultancies will have expertise on more than others, and not many are going to be strong across all of them. 

Gardner: It certainly sounds like a lot of planning and perhaps reevaluating the ways of the past. 

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


Manufacturer gains advantage by expanding IoT footprint from many machines to many insights

The next BriefingsDirect manufacturing modernization and optimization case study centers on how a Canadian maker of containers leverages the Internet of Things (IoT) to create a positive cycle of insights and applied learning.

We will now hear how CuBE Packaging Solutions, Inc. in Aurora, Ontario has deployed edge intelligence to make 21 formerly isolated machines act as a single, coordinated system as it churns out millions of reusable package units per month.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Stay with us as we explore how harnessing edge data with more centralized real-time analytics integration cooks up the winning formula for an ongoing journey of efficiency, quality control, and end-to-end factory improvement.

Here to describe the challenges and solutions for injecting IoT into a plastic container’s production advancement success journey is Len Chopin, President at CuBE Packaging Solutions. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Len, what are the top trends and requirements driving the need for you to bring more insights into your production process?


Chopin: The very competitive nature of injection molding requires us to stay ahead of the competition, and utilizing the intelligent edge helps us do that. By tapping into and optimizing the equipment, we gain on downtime efficiencies, improved throughput, and all the things that help drive more to the bottom line.

Gardner: And this is a win-win because you’re not only improving quality but you’re able to improve the volume of output. So it’s sort of the double-benefit of better and bigger.

Chopin: Correct. Driving volume is key. When we are running, we are making money, and we are profitable. By optimizing that production, we are that much more profitable. And by using analytics and protocols for preventive and predictive maintenance, the IoT solutions drive an increase in uptime on the equipment.

Gardner: Why are sensors in and of themselves not enough to attain intelligence at the edge?

Chopin: The sensors are reactive. They give you information. It’s good information. But leaving it up to the people to interpret [the sensor data] takes time. Utilizing analytics, by pooling the data, and looking for trends, means IoT is pushing to us what we need to know and when.

Otherwise we tend to look at a lot of information that’s not useful. Utilizing the intelligent edge means it’s pushing to us the information we need, when we need it, so we can react appropriately with the right resources.
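As a hedged illustration of that push model — hypothetical readings and thresholds, not CuBE’s actual system — a rolling-window monitor can stay silent while readings trend normally and surface an alert only when the trend breaks:

```python
import statistics
from collections import deque

def make_trend_monitor(window: int = 20, sigma: float = 3.0):
    """Return a function that ingests one sensor reading at a time and
    flags it only when it drifts outside +/- sigma standard deviations
    of the recent rolling window."""
    history = deque(maxlen=window)

    def ingest(reading: float) -> bool:
        alert = False
        if len(history) == window:
            mean = statistics.fmean(history)
            stdev = statistics.stdev(history)
            if stdev > 0 and abs(reading - mean) > sigma * stdev:
                alert = True  # push to operators: the trend has broken
        history.append(reading)
        return alert

    return ingest

monitor = make_trend_monitor(window=10, sigma=3.0)
readings = [50.0, 50.2, 49.8, 50.1, 49.9, 50.0, 50.3, 49.7, 50.1, 50.0,
            50.2, 65.0]  # last value is a simulated spike
alerts = [monitor(r) for r in readings]
print([i for i, a in enumerate(alerts) if a])  # only the spike is flagged
```

Operators see only the flagged reading rather than the full stream, which is the “pushing to us what we need to know and when” behavior Chopin describes.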

Gardner: To understand the benefits of doing this well, let’s talk about the state at CuBE Packaging before you had sensors, when you weren’t processing data or creating a cycle of improvement.

Learn How to Automate and Drive Insights From Your IIoT Data and Apps

Chopin: That was firefighting mode. You really have no idea of what’s running, how it’s running, is it trending down, is it fast enough, and is it about to go down. It equates to flying blind, with blinders on. It’s really hard in a manufacturing environment to run a business that way. A lot of people do it, and it’s affordable — but not very economical. It really doesn’t drive more value to the bottom line.

Gardner: What have been the biggest challenges in moving beyond that previous “firefighting” state to implementing a full IoT capability?

Chopin: The dynamic within our area in Canada is resources. There is lot of technology out there. We rise to the top by learning all about what we can do at the edge, how we can best apply it, and how we can implement that into a meaningful roadmap with the right resources and technical skills of our IT staff.

It’s a new venture for us, so it’s definitely been a journey. It is challenging. Getting that roadmap and then sticking to the roadmap is challenging, but as we go through the journey we learn the more relevant things. It’s been a dynamic roadmap, which it has to be as the technology evolves and we delve into the world of IoT, which is quite fascinating for us.

Gardner: What would you say has been the hardest nut to crack? Is it the people, the process, or the technology? Which has been the most challenging?

Trust the IoT process 

Chopin: I think the process, the execution. But we found that once you deploy IoT, and you begin collecting data and embarking on analytics, then the creative juices become engaged with a lot of the people who previously were disinterested in the whole process.

But then they help steer the ship, and some will change the direction slightly or identify a need that we previously didn’t know about – a more valuable path than the one we were on. So people are definitely part of the solution, not part of the problem. For us, it’s about executing to their new expectations and applying the information and technology to find solutions to their specific problems.

We have had really good buy-in with the people, and it’s just become about layering on the technical resources to help them execute their vision.

Gardner: You have referred to becoming, “the Google of manufacturing.” What do you mean by that, and how has Hewlett Packard Enterprise (HPE) supported you in gaining that capability and intelligence?

People are definitely part of the solution, not part of the problem. For us, it’s about executing to their new expectations and applying information and technology to find solutions to specific problems.

Chopin: “The Google of manufacturing” was first coined by our owner, JR. It’s his vision so it’s my job to bring it to fruition. The concept is that there’s a lot of cool stuff out there, and we see that IoT is really fascinating.

My job is to take that technology and turn it into an investment with a return on investment (ROI) from execution. How is that all going to help the business? The “Google of manufacturing” is about growth for us — by using any technology that we see fit and having the leadership to be open to those new ideas and concepts. Even without having a clear vision of the roadmap, it means focusing on the end results. It’s been a unique situation. So far it’s been very good for us.

Gardner: How has HPE helped in your ability to exploit technologies both at the edge and at the data center core?

Chopin: We just embarked on a large equipment expansion [with HPE], which is doubling our throughput. Our IT backbone, our core, was just like our previous equipment — very old, antiquated, and not cutting edge at all. It was a burden as opposed to an asset.

Part of moving to IoT was putting in a solid platform, which HPE has provided. We worked with our integrator and a project team that mapped out our core for the future. It’s not just built for today’s needs; it’s built for expansion, for year two and year three. Even if we’re not fully utilizing it today, it has been built for the future.

HPE has more things coming down the pipeline that are built on and integrated with this core, so that there are no real limitations to the system. No longer will we have to scrap an old system and put a new one in. It’s now scalable, which we think of as the platform for becoming the “Google of manufacturing,” and which is going to be critical for us.

Gardner: Future-proofing infrastructure is always one of my top requirements. All right, please tell us about CuBE Packaging, your organization’s size, what you’re doing, and what your end products are.

The CuBE takeaway

Chopin: We have a 170,000-square-foot facility with about 120 employees producing injection-molded plastic containers for the food service, home-meal replacement, and takeout markets, distributed in Canada as well as the US, which is obviously a huge and diverse market.

We also have a focus on sustainability. Our products are reusable and recyclable. It is a premier product that comes with a premier price. The containers are also customizable and brandable, which has been a key to CuBE’s success. We partner with restaurants, with sophisticated customers who see the value in specific branding and in having a robust packaging solution.

Gardner: Len, you mentioned that you’re in a competitive industry and that margin is therefore under pressure. For you to improve your bottom line, how do you account for more productivity? How are you turning what we have described in terms of an IoT and data capability into that economic improvement to your business outcome?

Chopin: I refer to this as having a plant within a plant. There is always a lot more you can squeeze out of an operation by knowing what it’s up to, not day-by-day, but minute-by-minute. Our process runs quite quickly, so slippage in machine cycle times can occur rapidly. We must catch the small slippages, predict failures, and know when something is out of technical specification from the injection molding standpoint, or we could be producing a poor-quality product.

Getting a handle on what the machines are doing, minute by minute, gives us the ability to better utilize the assets and the people, to optimize uptime, and to improve our quality, getting more of the best product to the market. So it really does drive value right to the bottom line.

Learn How to Automate and Drive Insights From Your IIoT Data and Apps

Gardner: A big buzzword in the industry now is artificial intelligence (AI). We are seeing lots of companies dabble in that. But you need to put in place certain things before you can take advantage of those capabilities that not only react but have the intelligence to prescribe new processes for doing things even more efficiently.

Are you working in conjunction with your integrator and HPE to allow you to exploit AI when it becomes mature enough for organizations like your own?

AI adds uptime 

Chopin: We are already embarking on using sensors for things that were seemingly unrelated. For example, we are picking up data points off of peripheral equipment that feeds into the molding process. This gives us a better handle on the inputs to the process and to the actual equipment, rather than focusing only on throughput and how many parts we get in a given day.

For us, the AI is about equipment uptime and preventing any of it from going down. By utilizing the inputs to the machines, it can notify us in advance, when we need to be notified.

Rather than monitoring equipment performance manually with a clipboard and a pen, we can check on the run conditions or temperatures of equipment up on the roof that’s critical to the operation. The AI will hopefully alert us to things we don’t know about or don’t see because they could be at the far end of the operations. Yet there is a codependency on a lot of that upstream equipment that feeds the downstream equipment.

So gaining transparency into that end-to-end process, with enough intelligence built in to say, “Hey, you have a problem — not yet, but you’re going to have a problem,” allows us to react before the problem occurs and causes a production upset.
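The kind of “you’re going to have a problem” alert Chopin describes can be approximated with a simple trend projection over recent sensor readings. The sketch below is purely illustrative; the sensor, limit, and horizon values are invented, and a production system would use a proper predictive model:

```python
# Hypothetical early-warning check: project a linear trend over recent
# readings and alert if the projection crosses a limit before it is breached.

def trending_toward_limit(readings, limit, horizon=5):
    """Fit a least-squares slope and project it `horizon` steps ahead."""
    n = len(readings)
    if n < 2:
        return False
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(readings) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, readings)) / \
            sum((x - mean_x) ** 2 for x in xs)
    projected = readings[-1] + slope * horizon
    return projected >= limit

# Rooftop chiller temperature creeping upward: still under the 45.0 limit,
# but projected to exceed it, so the alert fires before the production upset.
temps = [41.0, 41.6, 42.1, 42.7, 43.2]
if trending_toward_limit(temps, limit=45.0):
    print("ALERT: projected to exceed limit soon")
```

The alert fires while every individual reading is still in spec, which is the point: reacting before the problem occurs.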


Gardner: You can attain a total data picture across your entire product lifecycle, and your entire production facility. Having that allows you to really leverage AI.

Sounds like that means a lot of data over a long period of time. Is there anything about what’s happening at that data center core, around storage, that makes it more attractive to do this sooner rather than later?

Chopin: As I mentioned previously, there are a lot of data points coming off the machines. The bulk of it is useless, other than from an historical standpoint. So by utilizing that data — not pushing forward what we don’t need, but just taking the relevant points — we piggyback on the programmable logic controllers to just gather the data that we need. Then we further streamline that data to give us what we’re looking for within that process.

It’s like pushing out only the needle from the haystack, as opposed to pushing the whole haystack forward. That’s the analogy we use.
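The “needle from the haystack” filtering Chopin describes can be illustrated with a minimal deadband filter, a common pattern in edge data collection. This sketch is purely illustrative; the function name, readings, and deadband width are invented, not drawn from CuBE’s actual PLC setup:

```python
# Illustrative deadband filter: only forward readings that move meaningfully
# away from the last value sent upstream, discarding the redundant "haystack."

def filter_readings(readings, deadband=0.5):
    """Return only the readings worth forwarding to the data center."""
    forwarded = []
    last_sent = None
    for value in readings:
        # Forward the first reading, then only significant changes.
        if last_sent is None or abs(value - last_sent) >= deadband:
            forwarded.append(value)
            last_sent = value
    return forwarded

raw = [100.0, 100.1, 100.2, 101.0, 101.1, 99.8, 99.9]
print(filter_readings(raw))  # → [100.0, 101.0, 99.8]
```

Only three of the seven raw points travel upstream; the rest stay at the edge, which is the needle-not-the-haystack idea in miniature.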

Gardner: So being more intelligent about how you gather intelligence?

Chopin: Absolutely! Yes.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


Why enterprises struggle with adopting public cloud as a culture

The next BriefingsDirect digital strategies interview examines why many businesses struggle with cloud computing adoption, and how they could improve by building a culture oriented toward thoughtful cloud consumption and total productivity.

Due to inertia, a lack of skills, and even outright hostility, some enterprises are stumbling in their march to cloud use because of behavior and perception, not actual technology hurdles.

We will now hear from an observer of cloud adoption patterns on why a cultural solution to adoption may be more important than any other aspect of digital business transformation.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to help us explore why cloud inertia can derail business advancement is Edwin Yuen, Senior Analyst for Cloud Services and Orchestration, Data Protection, and DevOps at Enterprise Strategy Group (ESG). [Note: Since this podcast was recorded on Nov. 15, 2018, Yuen has become principal product marketing manager at Amazon Web Services.] The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Edwin, why are enterprises finding themselves unready for public cloud adoption culturally?

Yuen: Culturally the issue with public cloud adoption is whether IT is prepared to bring in public cloud. I bring up the IT issue because public cloud usage is actually really high within business organizations.


At ESG, we have found that cloud use is pretty significant — well over 80 percent of organizations are using some sort of public cloud service. Usage is very high for both infrastructure-as-a-service (IaaS) and platform-as-a-service (PaaS).

But the key here is, “What is the role of IT?” We see a lot of business end users and others essentially doing shadow IT: going around IT if they feel their needs are not met. That actually increases the friction between IT and the business.

It also leads to people going into the public cloud before they are ready, before there’s been a proper evaluation – from a technical, cost, or even a convenience point of view. That can potentially derail things.

But there is an opportunity for IT to reset the boundaries and meet the needs of the end users by getting into the public cloud thoughtfully.

Gardner: We may be at the point of having too much of a good thing. Even if people are doing great things with cloud computing inside of an organization, unless it’s strategically oriented to process, fulfillment, and a business outcome, then the benefits can be lost.

Plan before you go to public cloud

Yuen: When line of business (LOB) or other groups are not working with core IT in going to the public cloud, they get some advantages from it — but they are not getting the full advantage. It’s like upgrading from a 7- or 8-year-old smartphone, but only to the fifth- or sixth-best phone on the market. It’s a significant upgrade, but it’s not optimal. They’re not getting the full benefits.

The question is, “Are you getting the most out of it, and is that thoughtfulness there?” You want to maximize the capabilities and advantages you get from public cloud and minimize the inconvenience and cost. Planning is absolutely critical for that — and it involves core IT.

So how do you bring about a cultural shift that says, “Yes, we are going into the public cloud. We are not trying to stop you. We are not being overly negative. But what we are trying to do is optimize it for everybody across the board, so that we as a company can get the most out of it because there are so many benefits — not just incremental benefits that you get from immediately jumping in.”

Learn About Comparing App Deployment


Gardner: IT needs to take a different role, of putting in guardrails in terms of cloud services consumption, compliance, and security. It seems to me that procurement is also a mature function in most enterprises. They may also want to step in.

When you have people buying goods individually on a credit card, you don’t necessarily take advantage of volume purchasing, or you don’t recognize that you can buy things in bulk and distribute them and get better deals or terms. Yet procurement groups are very good at that.

Is there an opportunity to better conduct cloud consumption like with procuring any other business service, with all the checks, balances, and best practices?

Cut cloud costs, buy in bulk

Yuen: Absolutely, and that’s an excellent point. People often leave procurement, auditing, acquisitions, or whatever such department there is out of the cloud discussion. Yet it becomes critically important organizationally, especially from the end-user point of view.

From the organizational point of view, you can lose economies of scale. A lot of the cloud providers will offer those economies of scale via an enterprise agreement, which allows purchasing power to be leveraged.

Yet if individuals go out and leave procurement behind, it’s like shopping at a retailer for groceries without ever checking for sales or coupons. Buying in volume is just a smarter way to centralize the entire organization so you can leverage it. It becomes a better cost for the line of business, obviously. Cloud is really a consumption-based model, so planning needs to be there.

We’ve talked to a lot of organizations. As they jump into cloud, they expect cost savings, but sometimes they get an increase in cost because once you have that consumption model available, people just go ahead and consume.


And what that generates is variability in the cost of consumption; variability in the cost of cloud. A lot of companies very quickly realize that they don’t have variable budgets — they have fixed budgets. So they need to think about how they use cloud and what the consumption cost will be for an entire year. You can’t just go about your work with some flexibility and then find that you are out of budget when you get to the third or fourth quarter of the fiscal year.

You can’t budget on open-ended consumption. It requires a balance across the organization, where you have flexibility enough to be active — and go into the cloud. But you also need to understand what the costs are throughout the entire lifecycle, especially if you have fixed budgets.
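Yuen’s point about fixed budgets and open-ended consumption comes down to simple burn-rate arithmetic that many teams skip. A minimal sketch, with figures invented purely for illustration:

```python
# Extrapolate spend-to-date into an annual projection and compare it
# against a fixed budget, so overruns surface well before Q4.

def projected_annual_spend(spend_to_date, months_elapsed):
    """Extrapolate the average monthly burn rate over 12 months."""
    return spend_to_date / months_elapsed * 12

annual_budget = 600_000
spend_to_date = 380_000  # after 6 months of consumption

projection = projected_annual_spend(spend_to_date, months_elapsed=6)
if projection > annual_budget:
    print(f"Projected overrun: ${projection - annual_budget:,.0f}")
```

Here six months of consumption projects to $760,000 against a $600,000 budget: exactly the out-of-budget-by-the-fourth-quarter scenario described above, visible with half the year still to go.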

Gardner: If you get to the fourth quarter and you run out of funds, you can’t exactly turn off the mission-critical applications either. You have to find a way to pay for that, and that can wreak havoc, particularly in a public company.

In the public sector, in particular, they are very much geared to a CAPEX budget. In cities, states, and the federal government, they have to bond large purchases, and do that in advance. So, there is dissonance culturally in terms of the economics around cloud and major buying patterns.

Yuen: We absolutely see that. There was an assumption by many that you would simply want to go to an OPEX model and leave the CAPEX model behind. Realistically, what you’re doing is leaving the CAPEX model behind from a consumption point of view — but you’re not leaving it behind from a budgeting and a planning point of view.

The economic reality is that it just doesn’t work that way. People need to be more flexible, and that’s exactly what the providers have been adapting to. The providers also have to allow you to consume in a way that lets you lock down costs. But that only occurs when the organization works together on its total requirements, as opposed to everyone just going out and using the cloud.

The key for organizational change is to drive a culture where you have flexibility and agility but work within the culture to know what you want to do ahead of time. Then the organization can do the proper planning to be fiscally responsible, and fiscally execute on the operational plan.

Gardner: Going to a cloud model really does force behavioral changes at the holistic business level. IT needs to think more like procurement. Procurement needs to get more technical and savvier about how to acquire and use cloud computing services. This gets complex. There are literally thousands, if not tens of thousands, of SKUs, different types of services, you could acquire from any of the major public cloud providers. 

Then, of course, the LOB people need to be thinking differently about how they use and consume services. They need to think about whether they should coordinate with developers for customization or not. It’s rather complex.

So let’s identify where the cultural divide is. Is it between IT of the old caliber and the new version of IT? Is it a divide between the line of business people and IT? Between development and operations? All the above? How serious is this cultural divide?

Holistic communication plans 

Yuen: It really is all of the above, and in varying areas. What we are seeing is that the traditional roles within an organization have been monolithic. End users were consumers, central IT was a provider, and finances were handled by acquisitions and administration. Now everybody needs to work together with a much more holistic plan. There needs to be a new level of communication between those groups, and more give-and-take.

It’s similar to running a restaurant. In the past, we had the diner, who was the end user, saying: “I want this food.” The chef said, “I am going to cook this food.” Management said, “This food costs this much.” They never really talked to each other.

They would do some back-and-forth dialog, but there wasn’t a holistic understanding of the actual need. And, to be perfectly honest, not everybody was totally satisfied. The diners were not totally satisfied with the meal because it wasn’t made the way they wanted, and they weren’t going to pay for something they didn’t actually want. Finance fixed the menu prices, but would have liked to charge a little bit more. The chef really wanted to cook a little differently, or to have the ability to shift things around.

Read Why IT Operations and Developers Should Get Along

The key for improved cloud adoption is opening the lines of communication, bridging the divides, and gaining new levels of understanding. As in the restaurant analogy, the chef says, “Well, I can add these ingredients, but it will change the flavor and it might increase the cost.” And then the finance people say, “Well, if we make better food, then more people will eat it.” Or, “If we lower prices, we will get more economies of scale.” Or, “If we raise prices, we will reduce the volume of diners.” It’s all about that balance — and it’s an open discussion among those three parts of the organization.

This is the digital transformation we are seeing across the board. It’s about IT being more flexible, listening to the needs of the end users, and being willing to be agile in providing services. In exchange, the end users come to IT first, IT understands where the cloud use is going, and IT can be responsive. IT knows better what the users want. It becomes not just that they want solutions faster, but by how much. They can negotiate based on actual requirements.

And then they all work with operations and other teams and say, “Hey, can we get those resources? Should we put them on-premises or off-premises? Should we purchase it? Should we use CAPEX, or should we use OPEX?” It becomes about changing the organization’s communication across the board and having the ability to look at it from more than just one point of view. And, honestly, most organizations really need help in that. 

It’s not just scheduling a meeting and sitting at a table. Most organizations are looking for solutions and software. They need to bridge the gap, to provide a translation layer where the management software can come together and say, “Hey, here are the costs related to the capacity that we need.” So everyone sits together and says, “Okay, if we need more capacity, the cost turns into this and the capacity turns into that.” You can do the analysis. You can determine if it’s better in the cloud or better on-premises. But it’s about more than just bringing people together and communicating. It has to provide them the information they need in order to have a shared discussion and gain common ground to work together.

Gardner: Edwin, tell us about yourself and ESG.

Pathway to the cloud 

Yuen: Enterprise Strategy Group is a research and analyst firm. We do a lot of work with both vendors and customers, covering a wide range of topics. We do custom research and syndicated research, which backs a lot of the findings we bring to discussions about where the market is going.

I cover cloud orchestration and services, data protection, and DevOps, which is really the whole spectrum of how people manage resources and how to get the most out of the cloud — the great optimization of all of that. 

As background, I have worked at Hewlett Packard Enterprise (HPE), Microsoft, and at several startups. I have seen this whole process come together for the growth of the cloud, and I have seen different changes — when we had virtualization, when we had great desktops, and seeing how IT and end-users have had to change.

This is a really exciting time, as public cloud becomes more than just an idea. It’s like when we first had the Internet. We are not just talking about the cloud; we are talking about what we are doing in the cloud and how the cloud helps us. That’s a sign of the maturity of the market, but also a sign of what we need to change in order to get the best out of it.

This is a really exciting time as we get public cloud going. It’s more than an idea. We are talking about how the cloud helps us. That’s a sign of maturity and what we need to do to take the best out of it.

Gardner: Your title is even an indicator that you have to rethink things — not just in slices or categories or buckets — but in the holistic sense. Your long, but very apropos, job title really shows what we have been talking about that companies need to be thinking differently. 

That gets me to the issue of skills. The typical IT person — and I don’t want to generalize or stereotype too much — seems content to take a requirements set, beaver along in their cubicle, maybe not be too extroverted in skills or temperament, and really get the job done. The work is detail-oriented and highly focused.

But in order to accomplish what companies need to do now — to cross-pollinate, break down boundaries, think outside of the box — that requires different skills, not just technical but business; not just business but extroverted or organizationally aggressive in terms of opening up channels with other groups inside the company, even helping people get out of their comfort zone.

So what do you think is the next step when it comes to finding the needed talent and skills to create this new digitally transformed business environment?

Curious IT benefits business

Yuen: In order to find that skill set, you need to expand your boundaries in two ways.

One is the ability to take your natural interest in learning and expand it. A lot of us, especially in the IT industry, have been pushed to specialize in certain things, get certifications, and go as deep as possible. We have closed our eyes to learning about other technologies and other areas.

Most technical people, in general, are actually fairly inquisitive. We have the latest iPhone or Android. We are generally interested. We want to know the market because we want to make the best decisions for ourselves.

We need to apply that generally within our business lives and in our jobs in terms of going beyond IT. We need to understand the different technologies out there. We don’t have to be masters of them, we just need to understand them. If we need to do specialization, we go ahead. But we need to take our natural curiosity — especially in our private lives — and expand that into our work lives and get interested in other areas.

The second area is accepting that you don’t have to be the expert in everything. That’s another skill a lot of us in business should have. We don’t want to speak up or learn if we fear we can’t be the best, or that we might get corrected if we are wrong.

But we really need to go ahead and learn those new areas that we are not expert in. We may never be experts, but we want to get that secondary perspective. We want to understand where finance is coming from in terms of budgetary realities. We need to learn about how they do the budget, what the budget is, and what influences the costs.

If we want to understand the end users’ needs, we need to learn more about what their requirements are, how an application affects them, and how it affects their daily lives. So that when we go to the table and they say, “I need this,” you have that base understanding and know their role.

Having greater knowledge, and expanding it, allows you to learn a lot more as you branch out. You will discover areas that you might become interested in, or that your company needs. That’s where you go ahead, double down, and take your existing learning capabilities really, really deep.

A good example: if I have a traditional IT infrastructure, maybe I have learned virtual machines, but now I am faced with cloud virtual machines, containers and Kubernetes, and serverless. You may not be sure which direction to go, and with analysis paralysis, you may not do anything.

What you should do is learn about each of those, how it relates, and what your skills are. If one of those technologies booms suddenly, or it becomes an important point, then you can very quickly make a pivot and learn it — as opposed to just isolating yourself.

So the ability to learn and expand your skills creates opportunities for everybody.

Gardner: Well, we are not operating in a complete vacuum. The older infrastructure vendors are looking to compete and remain viable in the new cloud era. They are trying to bring out solutions that automate. And so are the cloud vendors.

What are you seeing from cloud providers and the vendors as they try to ameliorate these issues? Will new tools, capabilities, and automation help gain that holistic, strategic focus on the people and the process?

Cloud coordinators needed 

Yuen: The providers and vendors are delivering the tools and interfaces to do what we call automation and orchestration. Sometimes those two terms get mixed together, but generally I see them as separate. Automation is taking an existing task, or a series of tasks or a process, and making it into a single, one-button-click type of thing. The best way I would describe it is almost like an Excel macro: you have steps 1, 2, 3, and 4, and you run all four as a script with a single button.

But orchestration is taking those processes and coordinating them. What if I need to have decision points in coordination? What if I need to decide when to run this and when not to run that? The cloud providers are delivering the APIs, entry points, and the data feedback so you have the process information. You can only automate based on the information coming in. We are not blindly saying we are going to do one, two and three or A, B and C; we are going to react based on the situation.
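The macro-versus-decision-point distinction Yuen draws can be shown in a few lines. The task names and the CPU threshold below are hypothetical placeholders, not any vendor’s actual API:

```python
# Automation: a fixed sequence of steps, like a macro.
# Orchestration: the same steps plus decision points driven by feedback.

def provision():  return "provisioned"
def configure():  return "configured"
def deploy():     return "deployed"
def scale_out():  return "scaled out"

def automate():
    """Run steps 1, 2, 3 blindly, in order (the 'Excel macro' model)."""
    return [provision(), configure(), deploy()]

def orchestrate(cpu_load):
    """Run the same steps, but react to telemetry along the way."""
    steps = automate()
    if cpu_load > 0.8:  # decision point: only scale when feedback says so
        steps.append(scale_out())
    return steps

print(orchestrate(cpu_load=0.9))
```

The decision point, fed by data coming back from the environment, is what separates orchestration from plain scripted automation.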

So we really must rely on the cloud providers to deliver that level of information and the APIs to execute on what we want to do. 

And, meanwhile, the vendors are creating the ability to bring all of those tools together as an information center, or what we traditionally have called a monitoring tool. But it’s really cloud management where we see across all of the different clouds. We can see all of the workloads and requirements. Then we can build out the automation and orchestration around that.


Some people are concerned that if we build a lot of automation and orchestration, they will automate themselves out of a job. But realistically, what we have seen with cloud and with orchestration is that IT is getting more complex, not less. Different environments, different terminologies, different ways to automate, and the complexity of integrating more than just the steps IT owns have created a whole new area for IT professionals to get into. Instead of deciding what button to press and doing the task, they will automate the tasks. Then we are left to focus on determining the proper orchestration, coordinating among all the other areas.

So as the management has gone up a level, the skills and the capabilities for the operators are also going to go up.

Gardner: It seems to me that this is a unique time in the long history of IT. We can now apply those management principles and tools not just to multicloud or public cloud, but across private cloud, legacy, bare-metal, virtualization, managed service providers, and SaaS applications. Do you share my optimism that if you can, in effect, manage cloud heterogeneity, then you can encompass all of IT heterogeneity and get comprehensive, data-driven insights and management for your entire IT apparatus, regardless of where it resides, how it operates, and how it’s even paid for?

Seeing through the clouds

Yuen: Absolutely! That’s where we are going to end up. It’s an inversion of the mindset we currently have in IT, which is that we maintain a specific type of infrastructure, we optimize and modify it, and the end result, we hope, is a positive impact on the application.

What we are doing now is we are inverting that thinking. We are managing applications and the applications help deliver the proper experience. That’s what we are monitoring the most, and it doesn’t matter what the underlying infrastructure or the systems are. It’s not that we don’t care, we just don’t care necessarily what the systems are.

How to Put the Right People in the Right Roles for Digital Transformation

Once we care about the application, then we look at the underlying infrastructure, and then we optimize that. And that infrastructure could be in the public cloud across multiple providers, in a private cloud, or in traditional backend and large mainframe systems.

It’s not that we don’t care about those backend systems. In fact, we care just as much as we did before – it’s just that we don’t have to have that tight alignment. Our orientation isn’t system-based anymore; it’s application-based. Underneath, there potentially could be anything — and the vendors with systems management software are extending that out.

So it doesn’t matter if it’s a VMware system, or a bare metal system, or a public cloud. We are just managing the end-result relative to how those systems operate. We are going to let the tools go ahead and make sure they execute.

Our ability to understand and monitor is going to be critical. It’s going to allow us to extend out and manage across all the different environments effectively. But most importantly, it all affects the application at the top. So you become a purveyor, providing better service to the end users and to finance.

Gardner: While you’re getting a better view application-by-application, you’re also getting a better opportunity to do data analysis across all of these different deployments. You can find ways of corralling that data and its metadata and move the workloads into the storage environment that best suits the task at hand, under the best economics of the moment.

Not only is there an application workload benefit, but you can argue that there is an opportunity to finally get a comprehensive view of all of the IT data and then manage that data into the right view — whether it’s a system of record benefit, application support benefit or advanced analytics, and even out to the edge.

Do you share my view that the applications revolution you are describing also is impacting how data is going to be managed, corralled, and utilized?

Data-driven decision making

Yuen: It is, and that data viewpoint is applicable in many ways. It’s one of the reasons why data protection and analysis of that data becomes incredibly important. From the positive side, we are going to get a wealth of data that we need in order to do the optimizations.

If I want to know the best location for my apps, I need all the information to understand that. Now that we are getting that data in, it can be passed to machine learning (ML) or artificial intelligence (AI) systems that can make decisions for us going forward. Once we train the models, they can be self-learning, self-healing, and self-operating. That’s going to relieve a lot of work from us.

Data also impacts the end users. People are taking in data, and they understand that they can use it for secondary uses. It can be used for development; it can be used for sales. I can make copies of that data so I don’t have to touch the production data all the time. There is so much insight I can provide to the end users. In fact, the explosion of data is a leading cause of increased IT complexity.

We want to maximize the information that we get out of all that data, to maximize the information the end-users are getting out of it, and also leverage our tools to minimize the negative impact it has for management.

Gardner: What should enterprises be doing differently in order to recognize the opportunity, but not fall to the wayside in terms of these culture and adoption issues?

Come together, right now, over cloud

Yuen: The number one thing is to start talking and developing a measured, sustainable approach to going into the cloud. Come together and have that discussion, and don’t be afraid to have that discussion. Whether you’re ready for cloud or you’ve already gone in and need to rein it back in. No matter what you need to do, you always have that centralized approach because that approach is not going to be a one-time thing. You don’t make a cloud plan and then not revisit it for 20 years — you live it. It’s an ongoing, living, breathing thing — and you’re always going to be making adjustments.

But bring the team together, develop a plan, build an approach to cloud that you’re going to be living with. Consider how you want to make decisions and bring that together with how you want to interact with each other. That plan is going to help build the communication plan and build the organization to help make that cultural shift.

Companies honestly need to do an assessment of what they have. It’s surprising that a lot of companies just don’t know how much cloud they are using. They don’t know where it’s going. And even if it’s not in the cloud yet, they don’t know what they need.

A lot of the work is understanding what you have. Once you build out the plan of what you want to do, you essentially get your house in order, understand what you have, then you know where you want to go, where you are, and then you can begin that journey.

The biggest problem we have right now is companies that try to do both at the same time. They move forward without planning it out. They may potentially move forward without understanding what they already have, and that leads to inefficiencies, cultural conflicts, and the organizational skills-gap issues that we talked about.

Learn About Smooth Transitions to Multicloud and Gain Insights to Easier Digital Transformations

So again, lay out a plan and understand what you have, those are the first two steps. Then look for solutions to help you understand and capture the information about the resources you already have and how you are using them. By pulling those things together, you can really go forward and get the best use out of cloud.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


Who, if anyone, is in charge of multi-cloud business optimization?

The next BriefingsDirect composable cloud strategies interview explores how changes in business organization and culture demand a new approach to leadership over such functions as hybrid and multi-cloud procurement and optimization.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

We’ll now hear from an IT industry analyst about the forces reshaping the consumption of hybrid cloud services and why the procurement model must be accompanied by an updated organizational approach — perhaps even a new office or category of officer in the business.

Here to help explore who — or what — should be in charge of spurring effective change in how companies acquire, use, and refine their new breeds of IT is John Abbott, Vice President of Infrastructure and Co-Founder of The 451 Group. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What has changed about the way that IT is being consumed in companies? Is there some gulf between how IT was acquired and the way it is being acquired now?

Abbott: I think there is, and it’s because of the rate of technology change. The whole cloud model has upended traditional IT and is being adopted in ways that we probably didn’t foresee just 10 years ago. So, CAPEX to OPEX, operational agility, complexity, and costs have all been big factors.

But now, it’s not just cloud, it’s multi-cloud as well. People are beginning to say, “We can’t rely on one cloud if we are responsible citizens and want to keep our IT up and running.” There may be other reasons for going to multi-cloud as well, such as cost and suitability for particular applications. So that’s added further complexity to the cloud model. 


Also, on-premises deployments continue to remain a critical function. You can’t just get rid of your existing infrastructure investments that you have made over many, many years. So, all of that has upended everything. The cloud model is basically simple, but it’s getting more complex to implement as we speak.

Gardner: Not surprisingly, costs have run away from organizations that haven’t been able to stay on top of a complex mixture of IT infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS), and software-as-a-service (SaaS). So, this is becoming an economic imperative. It seems to me that if you don’t control this, your runaway costs will start to control you.

Abbott: Yes. You need to look at the cloud models of consumption, because that really is the way of the future. Cloud models can significantly reduce cost, but only if you control them. Instance sizes, time slices, time increments, and things like that all have a huge effect on the total cost of cloud services.

Also, if you have multiple people in an organization ordering particular services on their credit cards, that gets out of control as well. So you have to gain control over your spending on cloud. And with services complexity — I think Amazon Web Services (AWS) alone has hundreds of price points — things are really hard to keep track of.
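The control problem Abbott describes is, at bottom, an accounting one. As a hypothetical sketch, the script below aggregates per-team cloud spend from raw usage records and flags budget overruns; the instance sizes, hourly rates, and budgets are illustrative assumptions, not real AWS price points:

```python
# Hypothetical sketch: aggregate cloud spend per team from raw usage records
# and flag overruns. Sizes, rates, and budgets are illustrative only.

HOURLY_RATES = {  # assumed price points, not real cloud prices
    "small": 0.05,
    "medium": 0.20,
    "large": 0.80,
}

def monthly_spend(usage_records):
    """Sum cost per team from (team, instance_size, hours) records."""
    totals = {}
    for team, size, hours in usage_records:
        totals[team] = totals.get(team, 0.0) + HOURLY_RATES[size] * hours
    return totals

def flag_overruns(totals, budgets):
    """Return teams whose spend exceeds their monthly budget."""
    return {t: cost for t, cost in totals.items()
            if cost > budgets.get(t, float("inf"))}

usage = [("analytics", "large", 500), ("web", "small", 2000), ("web", "medium", 300)]
totals = monthly_spend(usage)  # analytics ~400, web ~160
over = flag_overruns(totals, {"analytics": 300, "web": 200})
print(over)  # only 'analytics' exceeds its budget
```

A production version would ingest the provider’s billing export instead of hand-entered records, but the chargeback logic is the same.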

Gain New Insights Into Managing the Next Wave of IT Disruption

Gardner: When we are thinking about who — or what — has the chops to know enough about the technology, understand the economic implications, be in a position to forecast cost, budget appropriately, and work with the powers that be who are in charge of enterprise financial functions — that’s not your typical IT director or administrator.

IT Admin role evolves in cloud 

Abbott: No. The new generation of generalist IT administrators — the people who grew up with virtualization — don’t necessarily look at the specifics of a storage platform, or compute platform, or a networking service. They look at it on a much higher level, and those virtualization admins are the ones I see as probably being the key to all of this.

But they need tools that can help them gain command of this. They need, effectively, a single pane of glass — or at least a single control point — for these multiple services, both on-premises and in the cloud. 

Also, as the data centers become more distributed, going toward the edge, that adds even further complexity. The admins will need new tools to do all of that, even if they don’t need to know the specifics of every platform.

Gardner: I have been interested and intrigued by what Hewlett Packard Enterprise (HPE) has been doing with such products as HPE OneSphere, which, to your point, provides more tools, visibility, automation, and composability around infrastructure, cloud, and multi-cloud.

But then, I wonder, who actually will best exploit these tools? Who is the target consumer, either as an individual or a group, in a large enterprise? Or is this person or group yet to be determined?

Abbott: I think they are evolving. There are skill shortages, obviously, for managing specialist equipment, and organizations can’t replace some of those older admin types. So, they are building up a new level of expertise that is more generalist. It’s those newer people coming up, who are used to the mobile world, who are used to consumer products a bit more, that we will see taking over.

We are going toward everything-as-a-service and cloud consumption models. People have greater expectations on what they can get out of a system as well. 

Also, you want the right resources to be applied to your application: the best, most cost-effective resources. It might be in the cloud; it might be a particular cloud service from AWS, Microsoft Azure, or Google Cloud Platform; or it might be a specific in-house platform that you have. No one is likely to have all of that specific knowledge in the future, so it needs to be automated.

We are looking at the developers and the systems architects to pull that together with the help of new automation tools, management consoles, and control planes, such as HPE OneSphere and HPE OneView. That will pull it together so that the admin people don’t need to worry so much. A lot of it will be automated.

Gardner: Are we getting to a point where we will look for an outsourced approach to overall cloud operations, the new IT procurement function? Would a systems integrator, or even a vendor in a neutral position, be able to assert themselves on best making these decisions? What do you think comes next when it comes to companies that can’t quite pull this off by themselves?

People and AI partnership prowess

Abbott: The role of partners is very important. A lot of the vertically oriented systems integrators and value-added resellers, as we used to call them, with specific application expertise are probably the people in the best position.

We saw recently at HPE Discover the announced acquisition of BlueData, which allows you to configure a particular pool within your infrastructure for things like big data and analytics applications. And that’s sort of application-led. 

The experts in data analysis and in artificial intelligence (AI), the data scientists coming up, are the people that will drive this. And they need partners with expertise in vertical sectors to help them pull it together.

Gardner: In the past when there has been a skills vacuum, not only have we seen a systems integration or a professional services role step up, we have also seen technology try to rise to the occasion and solve complexity. 

Where do you think the concept of AIOps, or using AI and machine learning (ML) to help better identify IT inefficiencies, will fit in? Will it help make predictions or recommendations as to how you run your IT?

Abbott: There is a huge potential there. I don’t think we have actually seen that really play out yet. But IT tools are in a great position to gather a huge amount of data from sensors and from usage data, logs, and everything like that and pull that together, see what the patterns are, and recommend and optimize for that in the future.

I have seen some startups doing system tuning, for example. Experts who optimize the performance of a server usually have a particular area of expertise, and they can’t really go beyond that because it’s huge in itself. There are around 100 “knobs” on a server that you can tweak to up the speed. I think you can only do that in an automated fashion now. And we have seen some startups use AI modeling, for instance, to pull those things together. That will certainly be very important in the future.
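The automated tuning Abbott describes can be sketched as a search over configuration knobs. This hypothetical example uses simple random search against a synthetic performance function; knob names, values, and the scoring are illustrative assumptions, and a real tuner would benchmark the actual server under load:

```python
import random

# Hypothetical sketch of automated server tuning: random search over a few
# configuration "knobs." The scoring function is synthetic stand-in code.

KNOBS = {
    "io_queue_depth": [8, 16, 32, 64],
    "cpu_governor": ["powersave", "balanced", "performance"],
    "net_buffer_kb": [64, 128, 256, 512],
}

def measure_throughput(config):
    # Stand-in for a real benchmark run; higher is better.
    score = config["io_queue_depth"] * 1.0
    score += {"powersave": 0, "balanced": 50, "performance": 100}[config["cpu_governor"]]
    score += config["net_buffer_kb"] * 0.1
    return score

def random_search(trials=200, seed=42):
    """Try random knob combinations; keep the best-scoring configuration."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(trials):
        cfg = {k: rng.choice(v) for k, v in KNOBS.items()}
        s = measure_throughput(cfg)
        if s > best_score:
            best_cfg, best_score = cfg, s
    return best_cfg, best_score

cfg, score = random_search()
print(cfg, score)
```

With 100-odd knobs the combinatorics explode, which is why the startups Abbott mentions reach for ML-guided search (Bayesian optimization and the like) rather than exhaustive sweeps.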

Gardner: It seems to me a case of the cobbler’s children having no shoes. The IT department doesn’t seem to be on the forefront of using big data to solve their problems.

Abbott: I know. It’s really surprising because they are the people best able to do that. But we are seeing some AI coming together. Again, at the recent HPE Discover conference, HPE InfoSight made news as a tool that’s starting to do that analysis more. It came from the Nimble acquisition and began as a storage-specific product. Now it’s broadening out, and it seems they are going to be using it quite a lot in the future.

Gardner: Perhaps we have been looking for a new officer or office of leadership to solve multi-cloud IT complexity, but maybe it’s going to be a case of the machines running the machines.

Faith in future automation 

Abbott: A lot of automation will be happening in the future, but that takes trust. We have seen AI waves [of interest] over the years, of course, but the new wave of AI still has a trust issue. It takes a bit of faith for users to hand over control.

But as we have talked about, with multi-cloud, the edge, and things like microservices and containers — where you split up applications into smaller parts — all of that adds to the complexity and requires a higher level of automation that we haven’t really quite got to yet but are going toward.

Gardner: What recommendations can we conjure for enterprises today to start them on the right path? I’m thinking about the economics of IT consumption, perhaps getting more of a level playing field or a common denominator in terms of how one acquires an operating basis using different finance models. We have heard about the use of these plans by HPE, HPE GreenLake Flex Capacity, for example.

What steps would you recommend that organizations take to at least get them on the path toward finding a better way to procure, run, and optimize their IT?

Abbott: I actually recently wrote a research paper for HPE on the eight essentials of edge-to-cloud and hybrid IT management. The first thing we recommended was a proactive cloud strategy. Think out your cloud strategy, of where to put your workloads and how to distribute them around to different clouds, if that’s what you think is necessary.

Then modernize your existing technology. Try and use automation tools on that traditional stuff and simplify it with hyperconverged and/or composable infrastructure so that you have more flexibility about your resources.

Make the internal stuff more like a cloud. Take out some of that complexity. It has to be quick to implement. You can’t spend six months doing this, or something like that.

Some of these tools we are seeing, like HPE OneView and HPE OneSphere, for example, are a better bet than some of the traditional huge management frameworks that we used to struggle with.

Make sure it’s future-proof. You have to be able to use operating system and virtualization advances [like containers] that we are used to now, as well as public cloud and open APIs. This helps accelerate things that are coming into the systems infrastructure space.

Then strive for everything-as-a-service, so use cloud consumption models. You want analytics, as we said earlier, to help understand what’s going on and where you can best distribute workloads — from the cloud to the edge or on-premises, because it’s a hybrid world and that’s what we really need.

And then make sure you can control your spending and utilization of those services, because otherwise they will get out of control and you won’t save any money at all. Lastly, be ready to extend your control beyond the data center to the edge as things get more distributed. A lot of the computing will increasingly happen close to the edge.

Gardner: Micro data centers at the edge?

Computing close to the edge

Abbott: Yes. That has to be something you start working on now. If you have software-defined infrastructure, that’s going to be easier to distribute than if you are still wedded to particular systems, as the old, traditional model was.

Gardner: We have talked about what companies should do. What about what they shouldn’t do? Do you just turn off the spigot and say no more cloud services until you get control?

It seems to me that that would stifle innovation, and developers would be particularly angry or put off by that. Is there a way of finding a balance between creative innovation that uses cloud services, but within the confines of an economic and governance model that provides oversight, cost controls, and security and risk controls?

Abbott: The best way is to use some of these new tools as bridging tools. So, with hybrid management tools, you can keep your existing mission-critical applications running and make sure that they aren’t disrupted. Then, gradually you can move over the bits that make sense onto the newer models of cloud and distributed edge.

You don’t do it in one big bang. You don’t lift-and-shift from one to another, or react, as some people have, to reverse back from cloud if it has not worked out. It’s about keeping both worlds going in a controlled way. You must make sure you measure what you are doing, and you know what the consequences are, so it doesn’t get out of control.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


A discussion with IT analyst Martin Hingley on the culmination of 30 years of IT management maturity

The next BriefingsDirect hybrid IT strategies interview explores how new maturity in the management and composition of multiple facets of IT — from cloud to bare-metal, from serverless to legacy systems — amount to a culmination of 30 years of IT evolution.

We’ll hear now from an IT industry analyst about why – for perhaps the first time — we’re able to gain an uber-view over all of IT operations. And we’ll explore how increased automation over complexity such as hybrid and multicloud deployments sets the stage for artificial intelligence (AI) in IT operations, or AIOps.

It may mean finally mastering IT heterogeneity and giving businesses the means to truly manage how they govern and sustain all of their digital business assets.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

Here to help us define the new state of total IT management is Martin Hingley, President and Market Analyst at ITCandor Limited, based in Oxford, UK. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Looking back at IT operations, it seems that we have added a lot of disparate and hard-to-manage systems – separately and in combination — over the past 30 years. Now, with infrastructure delivered as services and via hybrid deployment models, we might need to actually conquer the IT heterogeneity complexity beast – or at least master it, if not completely slay it.

Do you agree that we’re entering a new era in the evolution of IT operations and approaching the need to solve management comprehensively, over all of IT?

Hingley: I have been an IT industry analyst for 35 years, and it’s always been the same. Each generation of systems comes in and takes over from the last, which has always left operators with the problem of trying to manage the new with the old.


A big shift was the client/server model in the late 1980s and early 1990s, with the influx of PC servers and the wonderful joy of having all these new systems. The problem was that you couldn’t manage them under the same regime. And we have seen a continuous development of that problem over time.

It’s also a different problem depending on the size of organization. Small- to medium-sized (SMB) companies can at least get by with bundled systems that work fine and use Microsoft operating systems. But the larger organizations generate a huge mixture of resources.

Cloud hasn’t helped. Cloud is very different from your internal IT stuff — the way you program it, the way you develop applications. It has a wonderful cost proposition; at least initially. It has a scalability proposition. But now, of course, these companies have to deal with all of this [heterogeneity].

Now, it would be wonderful if we get to a place where we can look at all of these resources. A starting point is to think about things as a service catalog at the center of your corporate apps. And people are beginning to accept that as a theory, even if it doesn’t yet sit in everybody’s brain.

So, you start to be able to compose all of this stuff. I like what Hewlett Packard Enterprise (HPE) is doing [with composable infrastructure]. … We are now getting to the point where you can do it, if you are clever. Some people will, but it’s a difficult, complex subject.

Gardner: The idea of everything-as-a-service gives you the opportunity to bring in new tools. Because organizations are trying to transform themselves digitally — and the cloud has forced them to think about operations and development in tandem — they must identify the most efficient mix of cloud and on-premises deployments.

They also have to adjust to a lack of skills by automating and trying to boil out the complexity. So, as you say, it’s difficult.

But if 25 percent of companies master this, doesn’t that put them in a position of being dominant? Don’t they gain an advantage over the people who don’t?

Hingley: Yes, but my warning from history is this. With mainframes, we thought we had it all sorted out. We didn’t. We soon had client/server, and then mini-computers with those UNIX systems, all with their own virtualizations and all that wonderful stuff. You could isolate the data in one partition from application data from a different application. We had all of that, and then along comes the x86 server.

How to Remove Complexity From Multi-cloud and Hybrid IT

It’s an architectural issue rather than a technology issue. Now we have cloud, which is very different from the on-premises stuff. My warning is let’s not try and lock things down with technology. Let’s think about it as architecture. If we can do that, maybe we can accommodate neuromorphic and photonic and quantum computing within this regime in the future. Remember, the people who really thought they had it worked out in previous generations found out that they really hadn’t. Things moved on.

Gardner: And these technology and architectural transitions have occurred more frequently and accelerated in impact, right?

Beyond the cloud, IT is life

Hingley: I have been thinking about this quite a lot. It’s a weird thing to say, but I don’t think “cloud” is a good name anymore. I mean, if you are a software company, you’d be an idiot if you didn’t make your products available as a service.

Every company in the world uses the cloud at some level. Basically, there is no longer a choice about whether we use the cloud. All those companies that thought they didn’t, when people actually looked, found they were using the cloud a lot in different departments across the organization. So it’s a challenge, yet things constantly change.

If you look 20 years in the future, every single physical device we use will have some level of compute built into it. I don’t think people like you and I are going to be paid lots of money for talking about IT as if it were a separate issue. 

It is the world economy, it just is; so, it becomes about how well you manage everything together.

As this evolves, there will be genuinely new things … to manage this. It is possible to manage your resources in a coherent way, and to sit over the top of the heterogeneous resources and to manage them.

Gardner: A tandem trend to composability is that more and more data becomes available: at the edge, in smart homes, in smart cities, and also in smarter data centers. So, we’re talking about data from every device in the data center through the network to the end devices, and back again. We can even determine better and better how the users consume the services.

We have a plethora of IT ops data that we’re only starting to mine for improving how IT manages itself. And as we gain a better trail of all of that data, we can apply machine learning (ML) capabilities, to see the trends, optimize, and become more intelligent about automation. Perhaps we let the machines run the machines. At least that’s the vision.

Do you think that this data capability has pushed us to a new point of manageability? 

Data’s exploding, now what? 

Hingley: A jetliner flying across the Atlantic creates 5TB of data; each one. And how many fly across the Atlantic every day? Basically you need techniques to pick out the valuable bits of data, and you can’t do it with people. You have to use AI and ML.

The other side is, of course, that data can be dangerous. We see with the European Union (EU) passing the General Data Protection Regulation (GDPR), saying it’s a citizens’ right within the EU to have privacy protected and data associated with them protected. So, we have all sorts of interesting things going on.

The data is exploding. People aren’t filtering it properly. And then we have potential things like autonomous cars, which are going to create massive amounts of data. Think about the security implications, somebody hacking into your system while you are doing 70 miles an hour on a motorway.

I always use the parable of the seeds. Remember that some seeds fall on fallow ground, some fall in the middle of the field. For me, data is like that. You need to work out which bits of it you need to use, you need to filter it in order to get some reasonable stuff out of it, and then you need to make sure that whatever you are doing is legal. I mean, it’s got to be fun.
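Hingley’s seed-sorting parable amounts to signal filtering. As a hypothetical sketch, the generator below keeps only readings that deviate sharply from a rolling median baseline, discarding the routine bulk; the window size, threshold, and sample data are illustrative assumptions:

```python
from collections import deque
from statistics import median

# Hypothetical sketch of the "seeds" filtering idea: keep only readings that
# deviate sharply from a rolling baseline, discarding the routine bulk.

def filter_signal(readings, window=5, threshold=3.0):
    """Yield (index, value) pairs that are far from the recent median."""
    recent = deque(maxlen=window)
    for i, value in enumerate(readings):
        if len(recent) == window:
            baseline = median(recent)  # median resists being skewed by spikes
            if abs(value - baseline) > threshold:
                yield i, value
        recent.append(value)

data = [10, 10, 11, 10, 10, 10, 50, 10, 11, 10]
print(list(filter_signal(data)))  # [(6, 50)] -- only the spike survives
```

Real pipelines replace the threshold rule with trained anomaly-detection models, but the shape is the same: reduce terabytes per flight to the few readings worth keeping.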

Gardner: If businesses are tasked with this massive and growing data management problem, it seems to me they ought to get their IT house in order. That means across a vast heterogeneity of systems, deployments, and data types. That should happen in order to master the data equation for your lines of business applications and services.

How important is it then for AIOps — applying AI principles to the operations of your data centers – to emerge sooner rather than later?

You can handle the truth 

Hingley: You have to do it. If you look at GDPR or Sarbanes-Oxley before that, the challenge is that you need a single version of the truth. Lots of IT organizations don’t have a single version of the truth.

If they were subpoenaed to supply every email that has the word “Monte Carlo” in it, they couldn’t do it. There are probably 25 copies of all the emails. There’s no way of organizing it. So data governance is hugely important; it’s not a nice-to-have, it’s essential to have. And new regulations are coming, not just in the EU; GDPR is being adopted in lots of countries.

It’s essential to get your own house in order. And there’s so much data in your organization that you are going to have to use AI and ML to be able to manage it. And it has to go into IT Ops. I don’t think it’s a choice, I don’t think many people are there yet. I think it’s nonetheless a must do.

Gardner: We’ve heard recently from HPE about the concept of a Composable Cloud, and that includes elevating software-defined networking (SDN) to a manageability benefit. This helps create a common approach to the deployment of cloud, multi-cloud, and hybrid-cloud.

Is this the right direction to go? Should companies be thinking about a common denominator to help sort through the complexity and build a single, comprehensive approach to management of this vast heterogeneity?

Hingley: I like what HPE is doing, in particular the mixing of the different resources. You also have the HPE GreenLake model underneath, so you can pay for only what you use. By the way, I have been an analyst for 35 years; if the industry had actually shifted every time it started talking about the need to move from CAPEX to OPEX, we would be at 200 percent OPEX by now.

In the bad times, we move toward OPEX. In the good times, we secretly creep back toward CAPEX because it has financial advantages. You have to be able to mix all of these together, as HPE is doing.

Moreover, in terms of the architecture, the network fabric approach, the software-defined approach, the API connections, these are essential to move forward. You have to get beyond point products. I hope that HPE — and maybe couple of other vendors — will propose something that’s very useful and that helps people sort this new world out.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


How global HCM provider ADP mines an ocean of employee data for improved talent management

The next BriefingsDirect big data analytics and artificial intelligence (AI) strategies discussion explores how human capital management (HCM) services provider ADP unlocks new business insights from vast data resources.

With more than 40 million employee records to both protect and mine, ADP is in a unique position to leverage its business data network for unprecedented intelligence on employee trends, risks, and productivity. ADP is entering a bold new era in talent management by deploying advanced infrastructure to support data assimilation and refinement of a vast, secure data lake as foundations for machine learning (ML).

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

Unpack how advances in infrastructure, data access, and AI combine to produce a step-change in human capital analytics with panelists Marc Rind, Vice President of Product Development and Chief Data Scientist at ADP Analytics and Big Data, and Dr. Eng Lim Goh, Vice President and Chief Technology Officer for High Performance Computing and Artificial Intelligence at Hewlett Packard Enterprise (HPE). The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Marc, what’s unique about this point in time that allows organizations such as ADP to begin to do entirely new and powerful things with its vast data?

Rind: What’s changed today is the capability to take data — and not just data that you originally collect for a certain purpose, I am talking about the “data exhaust” — and to start using that data for purposes that are not the original intention you had when you started collecting it.


We pay one in six full-time employees in the US, so you can imagine the data that we have around the country, and around the world of work. But it’s not just data about how they get paid — it’s how they are structured, what kind of teams are they in, advances, bonuses, the types of hours that they work, and everything across the talent landscape. It’s data that we have been able to collect, curate, normalize, and then aggregate and anonymize to start leveraging to build some truly fascinating insights that our clients are able to leverage.

Gardner: It’s been astonishing to me that companies like yours are now saying they want all of the data they can get their hands on — not just structured data, but all kinds of content, and bringing in third-party data. It’s really “the more, the merrier” when it comes to the capability to gather entirely new insights.

The vision of data insight 

Rind: Yes, absolutely. Also there have been advances in methodologies to handle this data — like you said, unstructured data, non-normalized data, taking data from across hundreds of thousands of our clients, all having their own way that they define, categorize, and classify their workforces.

Learn How IT Best Supports 

The Greatest Data Challenges

Now we are able to make sense of all of that by using various approaches to normalize the data, so that we can start building insights across the board. That's extremely exciting for us to be able to leverage.

Gardner: Dr. Goh, it’s only been recently that we have been able to handle such vast amounts of data in a simplified way and at a manageable cost. What are partners like HPE bringing to the table to support these data platforms and approaches that enable organizations like ADP to make analytics actionable?

Goh: As Marc mentioned, these are massive amounts of data, not just the data you intend to keep, but also the data exhaust. He also mentioned the need to curate it. So the idea for us in terms of data strategy with our partners and customers is, one, to retain data as much as you can.


Secondly, we ensure that you have the tools to curate it, because there is no point in having massive amounts of data over decades if, when you need them to train a machine, you don't know where all of the data is. You need to curate it from the beginning, and if you have not, start curating your data now.

The third area is to federate. So retain, curate, and federate. Why is the third part, to federate, important? As many huge enterprises evolve and grow, a lot of the data starts to get siloed. Marc mentioned a data lake. This is one way to federate, whereby you can cut across the silos so that you can train the machine more intelligently. 

We at HPE build the tools to provide for the retention, curation, and federation of all of that data.

Gardner: Is this something you are seeing in many different industries? Where are people leveraging ML, AI, and this new powerful infrastructure? 

Goh: It all begins with what I call the shift. The use of these technologies emerged when industries shifted away from making predictions and decisions using rules and scientific, law-based models.

Then came a recent reemergence of ML, where instead of being based on laws and rules, you evolve your model more from historical data. So data becomes important here, because the intelligence of your model is dependent on the quantity and quality of the data you have. And by using this approach you are seeing many new use cases emerge, of using the ML approach on historical data.

One example would be farming. Instead of spraying the entire crop field, they just squirt specifically at the weeds and avoid the crops.

Gardner: This powerful ML example is specific to a vertical industry, but talent management insights can be used by almost any business. Marc, what has been the challenge to generate talent management insights based on historical data? 

Rind: It’s fascinating because Dr. Goh’s example pertains to talent management, too. Everyone that we work with in the HCM space is looking to gain an advantage when it comes to finding, keeping, and retaining their best talent.

We look at a vast amount of employment data. From that, we can identify people who ended up leaving an organization voluntarily versus those who stayed and grew, why they were able to grow, based on new opportunities, promotions, different methods of work, and by being on different teams. Similar to the agriculture example, we have been able to use the historical data to find patterns, and then identify those who are the “crops” and determine what to do to keep them happier for longer retention.
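
The pattern-finding Marc describes can be illustrated with a deliberately tiny sketch: learn voluntary-turnover rates from historical records, then use those rates to score who is at risk. The feature and field names are invented for illustration; ADP's real models are far richer than a single-feature rate table.

```python
# Toy retention model: estimate the voluntary-leave rate for each value
# of one feature, using only historical outcomes.
from collections import defaultdict

def learn_rates(history, feature):
    """history: dicts with the feature and 'left' (True if left voluntarily)."""
    counts = defaultdict(lambda: [0, 0])  # value -> [num_left, num_total]
    for rec in history:
        counts[rec[feature]][0] += rec["left"]
        counts[rec[feature]][1] += 1
    return {value: left / total for value, (left, total) in counts.items()}

history = (
    [{"years_since_promotion": "3+", "left": True}] * 6
    + [{"years_since_promotion": "3+", "left": False}] * 4
    + [{"years_since_promotion": "<3", "left": True}] * 1
    + [{"years_since_promotion": "<3", "left": False}] * 9
)
rates = learn_rates(history, "years_since_promotion")
print(rates)  # long-unpromoted employees leave far more often in this toy data
```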

This is a big shift in the talent management space. We are leveraging vast data — but not presenting too much data to an HCM professional. We spend a lot of time handling it on their behalf so the HCM professional and even managers can have the insights pushed to them, rather than be bombarded with too much data.

At the end of the day, we are using AI to say, “Hey, here are the people you should go speak with. Or this manager has a lot of high-risk employees. Or this is a critical job role that you might see higher than expected turnover with.” We can point the managers in that direction and allow them to figure out what to do about it. And that’s a big shift in simplifying analysis, and at the same time keeping the data secure.

Data that directs, doesn’t distract 

Goh: What Marc described is very similar to what our customers are doing by converting their call center voice recordings into text. They then anonymize it but gain the ability to figure out the sentiment of their customers.

The sentiment analysis of the text — after converting from a voice recording – helps them better understand churn. In the telco industry, for example, they are very concerned about churn, which means a customer leaving you for another vendor.

Yes, it’s very similar. First you go through a massive amount of historical data, and then use smart tools to convert the data to make it useable, and then a different set of tools analyzes it all — to gain such insights as the sentiment of your customers.
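
A stripped-down version of that pipeline might look like this: redact identifying tokens from the transcribed text, then score sentiment. Real deployments use speech-to-text plus trained sentiment models; the tiny word lexicon and name-redaction here are toy assumptions.

```python
# Toy sentiment pipeline: anonymize the transcript, then score it with a
# small positive/negative word lexicon.
import re

POSITIVE = {"great", "happy", "thanks", "love"}
NEGATIVE = {"cancel", "angry", "terrible", "leaving"}

def redact(text, names):
    """Replace known customer names with a placeholder."""
    for name in names:
        text = re.sub(re.escape(name), "[CUSTOMER]", text, flags=re.IGNORECASE)
    return text

def sentiment(text):
    words = re.findall(r"[a-z']+", text.lower())
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

call = redact("Hi, this is Jane Doe. I am angry and I want to cancel.", ["Jane Doe"])
print(call)             # name replaced by [CUSTOMER]
print(sentiment(call))  # negative
```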

Gardner: When I began recording use case discussions around big data, AI, and ML, I would talk to organizations like refineries or chemical plants. They were delighted if they could gain a half-percent or a full percent of improvement. That alone meant billions of dollars to them. 

But you all are talking about the high-impact improvement for employees and talent. It seems to me that this isn’t just shaving off a rounding number of improvement. Marc, this type of analysis can make or break a company’s future. 

So let's look at the stakes here. When we talk about improving talent management, this isn't trivial. This could mean major improvement for any company.

Rind: Every company. Any leader of an organization will tell you that their most important resource is the people that work for the company. And that value is not an easy thing to measure.

We are not talking about how much more we can save on our materials, or how to be smarter in electricity savings. You are talking about people. At the end of the day, they are not a resource as much as they are human beings. You want to figure out what makes them tick, gain insight into where people need to be growing, and where you should spend the human time with them. 

Where the AI comes in is to provide that direction and offer suggestions and recommendations on how to keep those people there, happy and productive.

Another part of keeping people productive is in automating the processes necessary for managers. We still have a lot of users punching clocks, managing time, approving pay cards, and processing payroll. There are a lot of manual things that go on, and there is still a lot of paperwork.

We are using AI to simplify and make recommendations to handle a lot of those pieces, so the HR professional can be focused on the human part — to help grow careers rather than be stuck processing paperwork and running reports.

Cost-effective AI, ML has arrived 

Gardner: We're now seeing AI and ML have a major impact on one of the most important resources and assets a company can have, human capital. At the same time, we're seeing the cost and complexity of the IT infrastructure that supports AI go down thanks to things like hyperconverged infrastructure (HCI), the lower cost of storage, the capability to create whole data centers that can be mirrored, backed up, and protected — as well as ongoing improvements in composable infrastructure.

Are we at the point where the benefits of ML and AI are going up while the cost and composability of the underlying infrastructure are going down?

Goh: Absolutely. That's the reason we have a reemergence of AI through machine learning of historical data. These methods were already available decades ago, but the infrastructure was too costly to amass enough data, and you couldn't get enough compute power to go through that data, for the machine to be intelligent. It wasn't until now that the required infrastructure came down in cost, and therefore you see this reemergence of ML.

If one were to ask why there has been a surge in AI in the last few years, it would be the lower cost of compute capability. We have reached the point where it is cost-effective enough to amass the data. Also, because of the Internet, the data has become more easily accessible in the last few years.

Gardner: Marc, please tell us about ADP. People might be familiar with your brand through payroll processing, but there’s a lot more to it.

Find, manage, and keep talent 

Rind: At ADP, or Automatic Data Processing, data is our middle name. We’ve been working at a global scale for 70 years, now with $12 billion in revenue and supporting over 600,000 businesses — ranging from multinational corporations to three-person small businesses. We process $2 trillion in payroll and taxes, running about 40 million employee records per month. The amount of data we have been collecting is across the board, not just payroll.

Talent management is a huge thing now in the world of work — to find and keep the best resources. Moving forward, there is a need to understand innovative engagement of that workforce, to understand the new world of pay and micro-pay, and new models where people are paid almost immediately. 

The contingent workforce means a work market where people are moving away from traditional jobs. So there are lots of different areas within the world of payroll processing and talent management. It has really gotten exciting.

All of this, optimizing your workforce, also brings a better understanding of where to save the organization lost dollars. Because of the amounts of data, we can inform a client not just on, "Okay, this is what your cost of turnover is, based on who is leaving, how long it takes them to get productive again, and the cost of recruiting."

We can also show how your HCM compares against others in your field. It’s one thing to share some information. It’s another to give an insight on how others have figured this out or are handling this better. You gain the potential to save more by learning about other methods out there that you should explore to improve talent retention.

Once you begin generating cost savings for an organization — be it in identifying people who are leaving, getting them on-boarded better, or reducing cost from overtime – it shows the power of the insights and of having that kind of data. And that’s not just about your own organization, but it’s in how you compare to your peers.

So that’s very exciting for us.

All-access data analytics

Goh: Yes, we are very keen to get such reports on intelligence with regard to our talent. It's become very difficult to hire and retain data scientists focused on ML and AI. These reports can be helpful in hiring and in understanding whether they are satisfied in their jobs.

Rind: That’s where we see the future of work, and the future of pay, going. We have the organization, the clients, and the managers — but at the end, it’s also data insights for the employees. We are in a new world of transparency around data. People understand more, they are more accepting of information as long as they are not bombarded with it.

As an employee, your partner in your career growth and your happiness at work is your employer. That’s the best partnership, where the employer understands how to put you into the right place to be more productive and knows what makes you tick. There should be understanding of the employees’ strengths, to make sure they use those strengths every day, and anticipate what makes them happier and more productive employees.

Those conversations start to happen because of the data transparency. It’s really very exciting. We think this data is going to help guide the employees, managers, and human resources (HR) professionals across the organizations.

Gardner: ADP is now in a position where your value-added analysis services are at the level where boards of directors and C-suite executives will be getting the insights. Did that require a rethinking of ADP’s role and philosophy?

Rind: Through our journey we discovered that providing insights to the HR professional is one thing. But we realized that to fully unleash and unlock the value in the data, we needed to get it into the hands of the managers and executives in the C-suite.

And the best way to do that was to build ADP’s mobile app. It’s been in the top three of the most downloaded applications from the business section on the iTunes Store. People initially got this application to check their paystub and manage their deductions, et cetera. But now, that application is starting to push up to the managers, to the executives, insights about their organization and what’s going on. 

A key part was to understand the management persona. They are busy running their organizations, and they don’t have the time to pore through the data like a data scientist might to find the insights.

So we built our engine to find and highlight the most important critical data points based on their statistical significance. Do you have an outlier? Are you in the bottom 10 percent as an organization in such areas as new hire attrition? Finding those insights and pushing them to the manager and executive gets them these headlines.
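
The outlier-and-push idea can be sketched as a peer-percentile check: surface a headline only when an organization's metric falls in the worst decile of its peers. The metric name, thresholds, and message are illustrative assumptions, not ADP's engine.

```python
# Toy insight engine: compare one organization's metric against peers and
# generate a headline only for statistically notable outliers.

def percentile_rank(value, peers):
    """Fraction of peers with a value below this one."""
    return sum(p < value for p in peers) / len(peers)

def headline(org_attrition, peer_attrition):
    rank = percentile_rank(org_attrition, peer_attrition)
    if rank >= 0.9:  # worse than 90 percent of peers: worth pushing
        return f"New-hire attrition is worse than {rank:.0%} of peers"
    return None      # nothing notable: don't bombard the manager

peers = [0.05 + 0.01 * i for i in range(100)]  # peer attrition rates
print(headline(0.95, peers))  # flagged as an outlier
print(headline(0.10, peers))  # None: unremarkable, so stay quiet
```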

Next, as they interact with the application, we gain intelligence about what's important to that manager or executive. We can then push out the insights related to what's most important to them. And that's where we see these value-added services going. An executive is going to care about some things differently than a supervisor or a line manager might.

We can generate the insights based on their own data when they need them through the application, versus them having to go in and get them. I think that push model is a big win for us, and we are seeing a lot of excitement from our clients as they start using the app.

Gardner: Dr. Goh, are you seeing other companies extend their business models and rethinking who and what they are due to these new analytics opportunities?

Data makes all the difference

Goh: Yes, yes, absolutely. The industry has shifted from one where your differentiated assets were your methods and filed patents, to one where your differentiated asset is the data. Data becomes your defensible asset, because from that data you can build intelligent systems to make better decisions and better predictions. So you see that trend. 

In order for this trend to continue, the infrastructure must be there to continually reduce cost, so you can handle the growing amounts of data and not have the cost become unmanageable. This is why HPE has gone with the edge-to-cloud hybrid approach, where the customer can implement this amassing of data in a curated and federated way. They can handle it in the most cost-effective way, depending on their operating or capital budgets.

Gardner: Marc, you have elevated your brand and value through trends analysis around pay equity or turnover trends, and gaining more executive insights around talent management. But that wouldn’t have been possible unless you were able to gain the right technology.

What do you have under the hood? And what choices have you made to support this at the best cost?

Rind: We build everything in our own development shop. We collect all the data on our Cloudera [big data lake] platform. We use various frameworks to build the insights and then push those applications out through our ADP Data Cloud.

We have everything open via a RESTful API, so those insights can permeate throughout the entire ADP ecosystem — everyone from a practitioner getting insights as they on-board a new employee and on out to the recruiting process. So having that open API is a critical part of all of this. 

Gardner: Dr. Goh, one of the things I have seen in the market is that the investments companies like ADP make in the infrastructure to support big data analytics and AI set in motion a virtuous adoption cycle. The investment in processing the data leads to an improvement in analytics, which then brings more interest in consuming those analytics, which leads to the need for more data and more analytics.

It seems to me like it’s a gift that keeps giving and it grows in value over time.

Steps in the data journey 

Goh: We group our customers on this AI journey into three different groups: early, started, and advanced. About 70 percent of our customers are in the early phase; about 20 percent are in the started phase, where they have already started on a project; and about 10 percent are in the advanced phase.

The advanced-phase customers are like the automotive customers who are already on autonomous vehicles but would like us to come in and help them with infrastructure to deal with the massive amounts of data. 

But the majority of our customers are in the early phase. When we engage with them, the immediate discussion is about how to get started. For example, “Let’s pick a low-hanging fruit that has an outcome that’s measurable; that would be interesting.”

We work with the customer to decide on an outcome to aim for, for the ML project. Then we talk about gaining access to the data. Do they have sufficient data? If so, does it take a long time to clean it out and normalize it, so you can consume it?

After that phase, we start a proof of concept (POC) for that low-hanging fruit outcome — and hopefully it turns out well. From there the early customer can approach their management for solid funding to get them started on an operational project.

That’s typically how we do it. It always starts with the outcome, and what we are aiming for this machine to be trained at. Once they have gone through the learning phase, what is it they are trying to achieve, and would that achievement be meaningful for the company? A low-hanging fruit POC doesn’t have to be that complex. 

Gardner: Marc, any words of wisdom looking back with 20/20 hindsight? When it comes to the investments around big data lakes, AI, and analytics, what would you tell those just getting started?

Rind: Much to Dr. Goh’s point, picking a manageable project is a very important idea. Go for something that is tangible, and that you have the data for. It’s always important to get a win instead of boiling the ocean, to prove value upfront.

A lot of large organizations, instead of building data lakes, end up with a bunch of data puddles, because different groups build their own.

We have committed to consolidating all of the data into a single data lake. The reason is that you can quickly connect data that you would never have thought to connect before. Understanding what the sales and service process is, and how that might impact or inform the product, or vice versa, is only possible if you start putting all of your data together. Once you get it together, just work on connecting it up. That's key to opening up the value across your organization. 

Connecting the data dots 

Goh: It helps you connect more dots. 

Gardner: The common denominator here is that there is going to be more and more data. We’re starting to see the Internet of Things (IoT) and the Industrial Internet of Things (IIoT) bring in even more data.

Even in talent management, there are more ways of gathering ever more data about what people are doing, how they are working, and what their efficacy is in the field, especially across organizational boundaries like contingent workforces, where you can measure what people are doing and then pay them accordingly.

Marc, do you see ever more data coming online to then need to be measured about how people work?

Rind: Absolutely! There is no way around it. There are still a lot of disconnected points of data, for sure. The connection points are going to just continue to be made possible, so you get a 360-degree view of the world at work. From that you can understand better how they are working, how to make them more productive and engaged, and bringing flexibility to allow them to work the way they want. But only by connecting up data across the board and pulling it all together would that be possible.

Gardner: We haven't even scratched the surface of incentivization trends. How more data allows you to incentivize people on a micro basis, in near-real time, is such an interesting new chapter. We will have to wait for another day, another podcast, to get into all of that.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

Inside story: How HP Inc. moved from a rigid legacy to data center transformation

The next BriefingsDirect data center architecture modernization journey interview explores how HP Inc. (HPI) has rapidly separated and modernized a set of data centers as part of its splitting off from what has become Hewlett Packard Enterprise (HPE).

We will now learn how HP Inc. has taken four shared data centers and transitioned to two agile ones, with higher performance, lower costs, and an obsolescence-resistant and strategic infrastructure design.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

Here to help us define the data center of the future are Sharon Bottome, Vice President and Head of Infrastructure Services at HPI, and Piyush Agarwal, Senior Director of Infrastructure Services, also at HPI. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. 

Here are some excerpts:

Gardner: We know the story of HP Inc. splitting off into a separate company from HPE in 2015. Yet, it remains unusual. Most IT modernization efforts combine — or at the least replicate — data centers. You had to split off and modernize your massive infrastructures at the same time, and you are still in the process of doing that.

Sharon, what have been the guiding principles as you created new IT choices from a combined corporate legacy? 


Bottome: When the split happened, leadership had to make a lot of decisions around speed and agility just to get the split done. The underlying IT infrastructure wasn't necessarily the key decision factor in how the split went.

We therefore ended up on shared infrastructure in four data centers, which then ended up being shared again as HPE split off assets to Micro Focus and DXC Technology in 2017. We ended up in a situation of having four data centers with shared infrastructure across four newly separated companies.

As you can imagine, we have a different imperative now that we are a new and separate company. HPI is very aggressive and wants to be very fast and agile. So we really need to continue and finish what was an initial separation of all of the infrastructure.

Gardner: Is it fair to say, Piyush, that this has been an unprecedented affair at such scale and complexity?

Agarwal: Yes, that is true. If you look at what some other organizations and companies have done, there have been $5 billion and $10 billion companies that have undertaken such data center transformations. But the old Hewlett-Packard, as a joint company, was a $100 billion company, so separating the data centers for a $100 billion company is a huge effort.

So, yes, companies have done this in the past, but the amount of time they had, versus the amount of time in which we are seeking to do the separation, makes this almost unthinkable. We are still on that journey.

Gardner: What is new in 2018 IT that allows you to more aggressively go at something like this? What has helped you to do this that was not available just a few years ago? 

Bottome: First, the driver for us is we really want to be independent. We want to truly transform our services. That means it’s much more about the experiences — and not just the technology.

We have standardized predominantly on HPE gear. We architected the new data centers using the newest technologies, whether it's HPE 3PAR, HPE Synergy, or some of the other hardware. That allows us to migrate about 800 applications and 22,000 operating system instances. It's just a huge undertaking.

Learn How the Future

of Hybrid IT

Can Be Made Simple 

But by using a lot of the new technology and hardware, we have to then transform our own processes and all the services to go along with that.

Gardner: Piyush, what have you learned in terms of the underlying architecture? One of my favorite sayings is, “Architecture is destiny.” If you make the right architecture decisions, many other things then fall into place.

What have you done on an architectural level that’s allowed this to go more smoothly?

Simpler separation solutions

Agarwal: It’s more about a philosophy than just an architecture, in my view. It goes to the previous question you asked. Why is it simpler now? Just after the separation, there was a philosophy around going to public cloud. Everybody thought that we would save a lot of money by just going to the public cloud. 

But in the last two or three years, we realized that the total cost of ownership (TCO) in a public cloud, especially if the applications are not architected for public cloud, means we are not going to save much. So based on that epiphany, we said, "Hey, is it the right time to look at our enterprise data center and architect it in such a way that it provides cloud-like functionality and still offers flexibility in terms of how much we pay?"

Having HPE Synergy as the underlying composable infrastructure really helps with all of that. Obviously, the newer software-defined data center (SDDC) architectures are also playing a major role. So now, where the application is hosted is less of a concern, because — thanks to the software-defined architecture and best-fit model — we may be able to move the workloads around over time.

Gardner: Where you are on this journey? How does that extend around the world?

Multicloud, multinational

Bottome: We are going from four data centers in Texas, two in Austin and two in Houston, down to two, one each in Houston and Plano. We are deploying those two with full resiliency, redundancy, and disaster recovery.

Gardner: And how does that play into your global reach? How are you using hybrid IT to bring these applications to your global workforce?

Bottome: Anyone who says they are not in a multicloud environment is certainly fooling themselves. We basically are already in a multicloud environment. We have many, many platforms in other people's clouds in addition to our core data centers. We also have, obviously, our customer relationship management (CRM) as a cloud service, and we are moving our enterprise resource planning (ERP) into another cloud.

So it's a multicloud environment, and managing that, and changing operations to support it, is one of the things we are doing with this transformation. How do we support all of these cloud environments? We have partners along with us. We are using managed service providers (MSPs). We are very much outsourced, too. So it's a journey with them, learning how to have everything supported across all of these multiple clouds.

Ticketing transformed 

Gardner: You mentioned management as being so important. Piyush, when it comes to some of the newer management capabilities we are hearing about – such as HPE OneSphere — what have you learned along the journey so far? Do both HPE OneView and HPE OneSphere play a role as a continuum?

Agarwal: It's difficult to get into the technology of OneView versus OneSphere. But the predictive analytics that every provider uses to support us are remarkably different from what they were even five years ago.

When we were going through this request for proposal (RFP) process for MSPs for our new data center transformation and services, every provider was showing us the software and intelligence on how tickets can be closed — even before the tickets are generated.

So that's a huge leap from what we saw four or five years ago. Back then, the play was about being in a low-cost location, because employee costs were 80 percent of the total. But new automation and intelligence in the ticketing systems is the way to move forward. That's what will drive the service efficiencies and cost reductions.
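
The "close tickets before they are generated" idea can be illustrated with a deliberately simple sketch: project a trending telemetry metric forward and open a ticket before the threshold is actually breached. The linear projection, metric, and thresholds are illustrative assumptions; real MSP tooling uses trained models over many signals.

```python
# Toy predictive ticketing: flag a metric that is on track to cross its
# limit within the next few intervals, before any alert actually fires.

def project_breach(samples, limit, horizon):
    """Linear projection: will the metric cross `limit` within `horizon` steps?"""
    if len(samples) < 2:
        return False
    slope = (samples[-1] - samples[0]) / (len(samples) - 1)
    return slope > 0 and samples[-1] + slope * horizon >= limit

disk_usage = [70, 74, 78, 82]  # percent used, trending upward
if project_breach(disk_usage, limit=95, horizon=5):
    print("Ticket opened: disk projected to fill within 5 intervals")
```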

Gardner: Sharon, as you continue on your transformation journey, are you able to do more for less?

Bottome: This is actually a great success story for us. In the data center transformation and the services transformation RFP that Piyush was mentioning, we are getting $50 million in savings every year over five years. That's allowed us, obviously, to reinvest that money in other areas. So, yes, it's been a great success story. 

We are transforming a lot of the services — not just in the data center. It’s also about how our user base will experience interacting with IT as we move to more of these management platforms with this transformation. 

Gardner: How will this all help your IT operations people to be more efficient?

IT our way, with our employees 

Agarwal: When we talk about IT services, there is always a pendulum. If you go back 15 or 20 years, there used to be articles about how Solectron moved all of their IT to IBM. In 2001, there were so many of those kinds of deals.

But within one to two years people realized how difficult it was. The success of the businesses depended not just on IT outsourcing, but in keeping the critical talent to manage the business expectations and manage the service providers. 

Where we are now with HPI, over the period of the last three years, we have learned how to work in a managed services environment. What that means is how to get the best out of a supplier but still maintain the critical knowledge of the environment within our own IT.

Our own employees can therefore run the IT tomorrow on some other service provider, if we so choose. It maintains the healthy mix of relationships between the suppliers and our employees. So, we haven’t gone too far right or too far left in terms of how the IT should be run from a service provider perspective. 

With this transformation, that thought process was reinforced. We realized when we began this transformation process that we didn’t yet have critical mass to run our IT services internally. Over the period of the last one-and-a-half years, we have gained that critical mass back. 

From the HPI IT operations team's perspective, it restores confidence. Instead of a victim mentality of, "Oh, it's a supplier, and the suppliers are going to do it," our own IT employees have the confidence to deliver on that accountability themselves. They are the ones driving our supplier to do the transformation, and to do the operations afterward.

Gardner: We have also seen an increase in automation, orchestration, and some very powerful tools, many of them data-driven. How have automation techniques helped you in this process of regaining and keeping control? 

Automation advantages 

Agarwal: DevOps provides, on the one hand, the overall infrastructure, orchestration, and agility to provision. Being part of the previous Hewlett Packard Company, we always had the latest and greatest of those tools; we were a testing ground for them. We have always relied on automated, rapid ways of provisioning.

If I look at that from a transformation perspective, we will continue to use those orchestration and provisioning tools. Our internal cloud relies heavily on cloud service automation (CSA). For other technologies, we rely on server automation across all of the Linux and Unix platforms. We always have that mix of quick provisioning.

At the same time, we will continue to encourage our developers to encompass these infrastructure technologies in their DevOps models. We are not there yet, where the application tier integrates with the infrastructure tier to provide a true DevOps model, but I think we are going to see it in the next one to two years.

Gardner: Is there a rationalization process for your data? What’s the underlying data transformation story that’s a subset of the general data center modernization story?

Agarwal: Our CIO was considered one of the most transformative of 2015; there is a Forbes article on it. As part of the 2015 separation, we undertook a couple of transformation journeys. The data center transformation was one; the other was the application transformation. Sharon mentioned that for our CRM application we moved to Microsoft Dynamics. We are also consolidating our ERP.

Application rationalization (AR) remains an ongoing exercise for us. In a true sense, we had 1,200 to 1,300 applications. We are trying to bring that down to 800. Then, there is a further reduction plan over the next two to three years. Certainly the application and data center transformations are going in parallel.

But from a data perspective — looking at data in general or of having data totally segregated from the applications layer — I don’t think we are doing that yet.

Where we are in the overall journey of application transformation, given the number of applications we have, segregating the data from the applications is, in my view, a much higher-level efficiency exercise. Once we have completed the data center transformation, consolidated the applications, and reduced them by as many as possible, then we will look at segregating the data layer from the applications layer.

Gardner: When you do this all properly, what other paybacks do you get? What have been some of the unexpected benefits?

Getting engaged 

Bottome: We received great financial benefits, as I mentioned. But some of the other gains are in the end-user experience. Whether it's time-to-fix or the experience of our employees interacting with IT support, we're seeing efficiencies there from automation. And we are going to bring a lot more efficiency to our own teams.

And one of the measurements we have internally is an employee satisfaction measure. I found this to be very interesting. For the infrastructure organization, the internal IT personnel, the engagement score went up 40 points from before we started this transformation. Not only are they getting reskilled and retooled, so that we have enough of that expertise in-house, but their engagement scores went up right along with it. It has helped keep our employees very motivated and engaged.

Gardner: People like to work with modern technology more than the old stuff, is that not true?

Agarwal: Yes, for sure. I want to work with the iPhone X not iPhone 7.

Gardner: What have you learned that you could impart to others? Now, not many others are going to be doing this reverse separation, modernization, consolidation, application, rationalization process at the same time — while keeping the companies operating.

But what would you tell other people who are going about application and data center modernization?

Prioritize your partners

Bottome: Pick your partner carefully. Picking the right partner is very, very important, not only the technology partner but any of the other partners along the journey with you, be it application migration or your services partners. Our services partner is DXC. And the majority of the data center is built on HPE gear, along with Arista and Brocade.

Also, make sure that you truly understand all of the other transformations that get impacted by the transformation you’re on. In all honesty, I’ve had some bumps along the way because there was so much transformation going on at once. Make sure those dependencies are fully understood.

Gardner: Piyush, what have you learned that you would impart to others?

Agarwal: It goes back to one of the earlier questions. Understand the business drivers in addition to picking your partners. Know your own level of strength at that point in time.

If we had done this a year and a half ago, the confidence level and our own capability to do it would have been different. So, picking your partner and having confidence in your own abilities are both very important.

Bottome: Thank you, Dana. It was exciting to talk about something that has been a lot of work but also a lot of satisfaction and an exciting journey.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


Dark side of cloud—How people and organizations are unable to adapt to improve the business

The next BriefingsDirect cloud deployment strategies interview explores how public cloud adoption is not reaching its potential due to outdated behaviors and persistent dissonance between what businesses can do and will do with cloud strengths.

Many of our ongoing hybrid IT and cloud computing discussions focus on infrastructure trends that support the evolving hybrid IT continuum. Today’s focus shifts to behavior — how individuals and groups, both large and small, benefit from cloud adoption. 

It turns out that a dark side to cloud points to a lackluster business outcome trend. A large part of the disappointment has to do with outdated behaviors and persistent dissonance between what line of business (LOB) practitioners can do and will do with their newfound cloud strengths. 

We’ll now hear from an observer of worldwide cloud adoption patterns on why making cloud models a meaningful business benefit rests more with adjusting the wetware than any other variable.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to help explore why cloud failures and cost overruns are dogging many enterprises is Robert Christiansen, Vice President, Global Delivery, Cloud Professional Services and Innovation at Cloud Technology Partners (CTP), a Hewlett Packard Enterprise (HPE) company. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What is happening now with the adoption of cloud that makes the issue of how people react such a pressing concern? What’s bringing this to a head now?


Christiansen: Enterprises are on a cloud journey. They have begun their investment, they recognize that agility is a mandate for them, and they want to get those teams rolling. They have already done that to some degree. They may be moving a few applications, or they may be doing wholesale shutdowns of data centers. They are in lots of different phases of adoption.

What we are seeing is a lack of progress with regard to the speed and momentum of the adoption of applications into public clouds. It’s going a little slower than they’d like.

Gardner: We have been through many evolutions, generations, and even step-changes in technology. Most of them have been in a progressive direction. Why are we catching our heels now?

Christiansen: Cloud is a completely different modality, Dana. One of the things that we have learned here is that adoption of infrastructure that can be built from the ground-up using software is a whole other way of thinking that has never really been the core bread-and-butter of an infrastructure or a central IT team. So, the thinking and the process — the ability to change things on the fly from an infrastructure point of view — is just a brand new way of doing things. 

And we have had various fits and starts around technology adoption throughout history, but nothing at this level. The tool kits available today have completely changed and redefined how we go about doing this stuff.

Gardner: We are not just changing a deployment pattern, we are reinventing the concept of an application. Instead of monolithic applications and systems of record that people get trained on and line up around, we are decomposing processes into services that require working across organizational boundaries. The users can also access data and insights in ways they never had before. So that really is something quite different. Even the concept of an application is up for grabs.

Christiansen: Well, think about this. Historically, an application team or a business unit, let’s say in a bank, said, “Hey, I see an opportunity to reinvent how we do funding for auto loans.”

We worked with a company that did this. And historically, they would have had to jump through a bunch of hoops: justify the investment in new infrastructure, set up the various components necessary, maybe land new hardware in the organization, and go through the procurement process for all of that. Typically, in the financial world, that takes months.

Today, that same team using a very small investment can stand up a highly available redundant data center in less than a day on a public cloud. In less than a day, using a software-defined framework. And now they can go iterate and test and have very low risk to see if the marketplace is willing to accept the kind of solution they want to offer.

And that just blows apart the procedural-based thinking that we have had up to this point; it just blows it apart. And that thinking, that way of looking at stuff is foreign to most central IT people. Because of that emotion, going to the cloud has come in fits and starts. Some people are doing it really well, but a majority of them are struggling because of the people issue.

Gardner: It seems ironic, Robert, because typically when you run into too much of a good thing, you slap on governance and put in central command and control, and you throttle it back. But that approach subverts the benefits, too.

How do you find a happy medium? Or is there such a thing as a happy medium when it comes to moderating and governing cloud adoption?

Control issues

Christiansen: That’s where the real rub is, Dana. Let’s give it an analogy. At Cloud Technology Partners (CTP), we do cloud adoption workshops where we bring in all the various teams and try to knock down the silos. They get into these conversations to address exactly what you just said. “How do we put governance in place without getting in the way of innovation?”

It’s a huge, huge problem, because the central IT team’s whole job is to protect the brand of the company and keep the client data safe. They provide the infrastructure necessary for the teams to go out and do what they need to do.

When you have a structure like that, but supplied by public clouds such as Amazon Web Services (AWS), Google, and Microsoft Azure, you still have the ability to put a lot of those controls in the software. Before, it was done either manually or at least semi-manually.

The challenge is that the central IT teams are not necessarily set up with the skills to make that happen. They are not by nature software development people. They are hardware people. They are rack and stack people. They are people who understand how to stitch this stuff together — and they may use some automation. But as a whole it’s never been their core competency. So therein lies the rub: How do you convert these teams over to think in that new way?

At the same time, you have the pressing issue of, “Am I going to automate myself right out of a job?” That’s the other part, right? That’s the big, 800-pound gorilla sitting in the corner that no one wants to talk about. How do you deal with that?

Gardner: Are we talking about private cloud, public cloud, hybrid cloud, hybrid IT — all the above when it comes to these trends?

Public perceptions 

Christiansen: It’s mostly public cloud that you see the perceived threats. The public cloud is perceived as a threat to the current way of doing IT today, if you are an internal IT person. 

Let’s say that you are a classic compute and management person. You actually split across both storage and compute, and you are able to manage and handle a lot of those infrastructure servers and storage solutions for your organization. You may be part of a team of 50 in a data center or for a couple of data centers. Many of those classic roles literally go away with a public cloud implementation. You just don’t need them. So these folks need to pivot or change into new roles or reinvent themselves.

Let's say you're the director of that group and you happen to be five years away from retirement. This actually happened to me, by the way. There is no way these folks want to give up the reins right before their retirement. They don't want to reinvent their roles just before they go into their last years. 

They literally said to me, “I am not changing my career this far into it for the sake of a public cloud reinvention.” They are hunkering down, building up the walls, and slowing the process. This seems to be an undercurrent in a number of areas where people just don’t want to change. They don’t want any differences.

Gardner: Just to play the devil's advocate, when you hear things around serverless, when we see more operations automation, when we see AIOps using artificial intelligence (AI) and machine learning (ML) — it does get sort of scary. 

You’re handing over big decisions within an IT environment on whether to use public or private, some combination, or multicloud in some combination. These capabilities are coming into fruition.

Maybe we do need to step back and ask, “Just because you can do something, should you?” Isn’t that more than just protecting my career? Isn’t there a need for careful consideration before we leap into some of these major new trends?

Transform fear into function 

Christiansen: Of course, yeah. It’s a hybrid world. There are applications where it may not make sense to be in the public cloud. There are legacy applications. There are what I call centers of gravity that are database-centric; the business runs on them. Moving them and doing a big lift over to a public cloud platform may not make financial sense. There is no real benefit to it to make that happen. We are going to be living between an on-premises and a public cloud environment for quite some time. 

The challenge is that people want to create a holistic view of all of that. How do I govern it in one view and under one strategy? And that requires a lot of what you are talking about, being more cautious going forward.

And that’s a big part of what we have done at CTP. We help people establish that governance framework, of how to put automation in place to pull these two worlds together, and to make it more seamless. How do you network between the two environments? How do you create low-latency communications between your sources of data and your sources of truth? Making that happen is what we have been doing for the last five or six years.

The challenge we have, Dana, is that once we have established that — we call that methodology the Minimum Viable Cloud (MVC) — and after you put all of that structure, rigor, and security in place, we still run into the problems of motion and momentum, even though the needed governance frameworks are well-established.

Gardner: Before we dig into why the cloud adoption inertia still exists, let’s hear more about CTP. You were acquired by HPE not that long ago. Tell us about your role and how that fits into HPE.

CTP: A cloud pioneer

Christiansen: CTP was established in 2010. Originally, we were doing mostly private cloud, OpenStack stuff, and we did that for about two to three years, up to 2013.

I am one of the first 20 employees. It’s a Boston-based company, and I came over with the intent to bring more public cloud into the practice. We were seeing a lot of uptick at the time. I had just come out of another company called Cloud Nation that I owned. I sold that company; it was an Amazon-based, Citrix-for-rent company. So imagine, if you would, you swipe a credit card and you get NetScaler, XenApp and XenDesktop running on top of AWS way back in 2012 and 2013. 

I sold that company, and I joined CTP. We grew the practice of public cloud on Google, Azure, and AWS over those years and we became the leading cloud-enabled professional services organization in the world.

We were purchased by HPE in October 2017, and my role since that time is to educate, evangelize, and press deeply into the methodologies for adopting public cloud in a holistic way so it works well with what people have on-premises. That includes the technologies, economics, strategies, organizational change, people, security, and establishing a DevOps practice in the organization. These are all within our world.

We do consultancy and professional services advisory types of things, but on the same coin, we flip it over, and we have a very large group of engineers and architects who are excellent on keyboards. These are the people who actually write software code to help make a lot of this stuff automated to move people to the public clouds. That’s what we are doing to this day.

Gardner: We recognize that cloud adoption is a step-change, not an iteration in the evolution of computing. This is not going from client/server to web apps and then to N-Tier architectures. We are bringing services and processes into a company in a whole new way and refactoring that company. If you don’t, the competition or a new upstart unicorn company is going to eat your lunch. We certainly have seen plenty of examples of that. 

So what prevents organizations from both seeing and realizing the cloud potential? Is this a matter of skills? Is it because everyone is on the cusp of retirement and politically holding back? What can we identify as the obstacles to overcome to break that inertia?

A whole new ball game

Christiansen: From my perspective, we are right in the thick of it. CTP has been involved with many Fortune 500 companies through this process.

The technology is ubiquitous, meaning that everybody in the marketplace now can own pretty much the same technology. Dana, this is a really interesting thought. If a team of 10 Stanford graduates can start up a company to disrupt the rental car industry, which somebody has done, by the way, and they have access to technologies that were only once reserved for those with hundreds of millions of dollars in IT budgets, you have all sorts of other issues to deal with, right?

So what’s your competitive advantage? It’s not access to the technologies. The true competitive advantage now for any company is the people and how they consume and use the technology to solve a problem. Before [the IT advantage] was reserved for those who had access to the technology. That’s gone away. We now have a level playing field. Anybody with a credit card can spin up a big data solution today – anybody. And that’s amazing, that’s truly amazing.

For an organization that had always fallen back on their big iron or infrastructure — those processes they had as their competitive advantage — that now has become a detriment. That’s now the thing that’s slowing them down. It’s the anchor holding them back, and the processes around it. That rigidity of people and process locks them into doing the same thing over and over again. It is a serious obstacle. 

Untangle spaghetti systems 

Another major issue came very much as a surprise, Dana. We observed it over the last couple of years of doing application inventory assessments for people considering shutting down data centers. They were looking at their applications, the ones holding the assets of data centers, as not competitive. And they asked, “Hey, can we shut down a data center and move a lot of it to the public cloud?”

We at CTP were hired to do what are called application assessments, economic evaluations. We determine if there is a cost justification for doing a lift-and-shift [to the public cloud]. And the number-one obstacle was inventory. The configuration management databases (CMDBs), which hold the inventory of where all the servers are and what's running on them for these organizations, were wholly out of date. Many of the CMDBs just didn't give us an accurate view of it all. 

When it came time to understand what applications were actually running inside the four walls of the data centers — nobody really knew. As a matter of fact, nobody really knew what applications were talking to what applications, or how much data was being moved back and forth. They were so complex; we would be talking about hundreds, if not thousands, of applications intertwined with themselves, sharing data back and forth. And nobody inside organizations understood which applications were connected to which, how many there were, which ones were important, and how they worked.

Years of managing that world has created such a spaghetti mess behind those walls that it’s been exceptionally difficult for organizations to get their hands around what can be moved and what can’t. There is great integration within the systems.

The third part of this trifecta of obstacles to moving to the cloud is, as we mentioned, people not wanting to change their behaviors. They are locked in to the day-to-day motion of maintaining those systems and are not really motivated to go beyond that.

Gardner: I can see why they would find lots of reasons to push off to another day, rather than get into solving that spaghetti maze of existing data centers. That’s hard work, it’s very difficult to synthesize that all into new apps and services.

Christiansen: It was hard enough just virtualizing these systems, never mind trying to pull it all apart.

Gardner: Virtualizing didn’t solve the larger problem, it just paved the cow paths, gained some efficiency, reduced poor server utilization — but you still have that spaghetti, you still have those processes that can’t be lifted out. And if you can’t do that, then you are stuck.

Christiansen: Exactly right.

Gardner: Companies for many years have faced other issues of entrenchment and incumbency, which can have many downsides. Many of them have said, “Okay, we are going to create a Skunk Works, a new division within the company, and create a seed organization to reinvent ourselves.” And maybe they begin subsuming other elements of the older company along the way.

Is that what the cloud and public cloud utilization within IT is doing? Why wouldn’t that proof of concept (POC) and Skunk Works approach eventually overcome the digital transformation inertia?

Clandestine cloud strategists

Christiansen: That's a great question, and I immediately thought of a client we helped. They have a separate team that rewrote, or rebuilt, an application using serverless on Amazon. It's now a fairly significant revenue generator for them, and they did it almost two-and-a-half years ago.

It uses a few cloud servers, but mostly they rely on the messaging backbones and non-server-based platform-as-a-service (PaaS) layers of AWS to solve their problem. They are a consumer credit company and have a lot of customer-facing applications that they generate revenue from on this new platform.

The team behind the solution educated themselves. They were forward-thinkers and saw the changes in public cloud. They received permission from the business unit to break away from the central IT team’s standard processes, and they completely redefined the whole thing.

The team really knocked it out of the park. So, high success. They were able to hold it up and tried to extend that success back into the broader IT group. The IT group, on the other hand, felt that they wanted more of a multicloud strategy. They weren’t going to have all their eggs in Amazon. They wanted to give the business units options, of either going to Amazon, Azure, or Google. They wanted to still have a uniform plane of compute for on-premises deployments. So they brought in Red Hat’s OpenShift, and they overlaid that, and built out a [hybrid cloud] platform.

Now, with the Red Hat platform, I personally had no direct experience, but I had heard good things about it. I had heard from people who adopted it and saw benefits. In this particular environment, though, Dana, the business units themselves rejected it.

The core Amazon team said, “We are not doing that because we’re skilled in Amazon. We understand it, we’re using AWS CloudFormation. We are going to write code to the applications, we are going to use Lambda whenever we can.” They said, “No, we are not doing that [hybrid and multicloud platform approach].”

Other groups then said, "Hey, we're an Azure shop, and we're not going to be tied up around Amazon because we don't like the Amazon brand." And all that political stuff arose. They just used Azure, decided to go shooting off on their own, and did not use the OpenShift platform because, at the time, the tool stacks were not quite what they needed to solve their problems.

The company ended up getting a fractured view. We recommended that they go on an education path, to bring the people up to speed on what OpenShift could do for them. Unfortunately, they opted not to do that — and they are still wrestling with this problem.

CTP and I personally believe that this was an issue of education, not technology, and not opportunity. They needed to lean in, sponsor, and train their business units. They needed to teach the app builders and the app owners on why this was good, the advantages of doing it, but they never invested the time. They built it and hoped that the users would come. And now they are dealing with the challenges of the blowback from that.

Gardner: What you’re describing, Robert, sounds an awful lot like basic human nature, particularly with people in different or large groups. So, politics, right? The conundrum is that when you have a small group of people, you can often get them on board. But there is a certain cut-off point where the groups are too large, and you lose control, you lose synergy, and there is no common philosophy. It’s Balkanization; it’s Europe in 1916.

Christiansen: Yeah, that is exactly it.

Gardner: Very difficult hurdles. These are problems that humankind has been dealing with for tens of thousands of years, if not longer. So, tribalism, politics. How does a fleet organization learn from what software development has come up with to combat some of these political issues? I'm thinking of Agile methodologies, scrums, having short bursts, lots of communication, and horizontal rather than command-and-control structures. Those sorts of things.

Find common ground first

Christiansen: Well, you nailed it. How you get this done is the question. How do you get some kind of agility throughout the organization to make this happen? And there are successes out there, whole organizations, 4,000 or 5,000 or 6,000 people, have been able to move. And we’ve been involved with them. The best practices that we see today, Dana, are around allowing the businesses themselves to select the platforms to go deep on, to get good at.

Let's say you have a business unit generating $300 million a year with some service. They have money, they are paying the IT bill. But they want more control; they want more of the "dev" from the DevOps process.

They are going to provide much of that on their own, but they still need core common services from the central IT team. This is the most important part. They need core services such as identity and access management, key management, logging and monitoring, and networking. There is a set of core functions that the central team must provide.

And we help those central teams to find and govern those services. Then, the business units [have cloud model choice and freedom as long as they] consume those core services — the access and identity process, the key management services, they encrypt what they are supposed to, and they use the networking functions. They set up separation of the services appropriately, based on standards. And they use automation to keep them safe. Automation prevents them from doing silly things, like leaving unencrypted AWS S3 buckets open to the public Internet, things like that.
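The kind of automated guardrail described above can be reduced to a simple policy check. The sketch below is a hypothetical illustration, not CTP's actual tooling: it evaluates invented bucket-configuration records in plain Python rather than calling a live cloud API, which a real implementation would do (for example, via boto3 on AWS).

```python
# Hypothetical guardrail check over storage-bucket configuration records.
# The record fields ("public_access", "encrypted") are illustrative; a real
# tool would pull live configuration from the cloud provider's API.

def find_risky_buckets(buckets):
    """Return (name, problems) pairs for buckets violating baseline policy."""
    risky = []
    for b in buckets:
        problems = []
        if b.get("public_access", False):
            problems.append("open to the public Internet")
        if not b.get("encrypted", False):
            problems.append("not encrypted at rest")
        if problems:
            risky.append((b["name"], problems))
    return risky

inventory = [
    {"name": "customer-data", "public_access": False, "encrypted": True},
    {"name": "marketing-assets", "public_access": True, "encrypted": False},
]

for name, problems in find_risky_buckets(inventory):
    print(f"{name}: {'; '.join(problems)}")
```

Run on a schedule or triggered by configuration-change events, a check like this flags the "silly things" automatically instead of relying on manual review.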

You now have software that does all of that automation. You can turn those tools on and then it’s like a playground, a protected playground. You say, “Hey, you can come out into this playground and do whatever you want, whether it’s on Azure or Google, or on Amazon or on-premises.”

"Here are the services, and if you adopt them in this way, then you, as the team, can go deep. You can use application programming interface (API) calls, you can use CloudFormation or Python or whatever happens to be the scripting language you want to build your infrastructure with."

Then you have the ability to let those teams do what they want. If you notice, what it doesn’t do is overlay a common PaaS layer, which isolates the hyperscale public cloud provider from your work. That’s a whole other food fight, religious battle, Dana, around lock-in and that kind of conversation.
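To make the scripted, API-driven approach concrete: the sketch below assembles a minimal CloudFormation-style template for an S3 bucket in Python. It is a hypothetical illustration under stated assumptions — the logical resource name and function are invented, and the script only produces the template document; it does not deploy anything.

```python
import json

# Hypothetical infrastructure-as-code sketch: build a minimal
# CloudFormation-style template as plain Python data, then emit JSON.
# A team would hand the resulting document to a deployment API or CLI.

def make_template(bucket_name, encrypted=True):
    """Assemble a template declaring one S3 bucket, optionally encrypted."""
    props = {"BucketName": bucket_name}
    if encrypted:
        props["BucketEncryption"] = {
            "ServerSideEncryptionConfiguration": [
                {"ServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
            ]
        }
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "AppBucket": {"Type": "AWS::S3::Bucket", "Properties": props}
        },
    }

print(json.dumps(make_template("team-app-data"), indent=2))
```

Because the template is just data, the central team's standards (encryption on by default, naming rules) can be baked into the generating function while business-unit teams stay free to script on top of it.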

Gardner: Imposing your will on everyone else doesn’t seem to go over very well.

So what you’re describing, Robert, is a right-sizing for agility, and fostering a separate-but-equal approach. As long as you can abstract to the services level, and as long as you conform to a certain level of compliance for security and governance — let’s see who can do it better. And let the best approach to cloud computing win, as long as your processes end up in the right governance mix.

Development power surges

Christiansen: People have preferences, right? Come on! There’s been a Linux and .NET battle since I have been in business. We all have preferences, right? So, how you go about coding your applications is really about what you like and what you don’t like. Developers are quirky people. I was a C programmer for 14 years, I get it.

The last thing you want to do is completely blow up your routines by taking development back and starting over with a whole bunch of new languages and tools. Then they’re trying to figure out how to release code, test code, and build up a continuous integration/continuous delivery pipeline that is familiar and fast.

These are really powerful personal stories that have to be addressed. You have to understand that. You have to understand that the development community now has the power — they have the power, not the central IT teams. That shift has occurred. That power shift is monumental across the ecosystem. You have to pay attention to that.

If the people don’t feel like they have a choice, they will go around you, which is where the problems are happening.

Gardner: I think the power has always been there with the developers inside of their organizations. But now it’s blown out of the development organization and has seeped up right into the line of business units.

Christiansen: Oh, that’s a good point.

Gardner: Your business strategy needs to consider all the software development issues, and not just leave them under the covers. We’re probably saying the same thing. I just see the power of development choice expanding, but I think it’s always been there.

But that leads to the question, Robert, of what kind of leadership person can be mindful of a development culture in an organization, and also understand the line of business concerns. They must appreciate the C-suite strategies. If you are a public company, that means keeping Wall Street happy and meeting customer expectations, which are always going up nowadays.

It seems to me we are asking an awful lot of a person or small team that sits at the middle of all of this. It seems to me that there’s an organizational and a talent management deficit, or at least something that’s unprecedented.

Tech-business cross-pollination

Christiansen: It is. It really is. And this brings us to a key piece of our conversation. And that is talent enablement. It is now well beyond how we’ve classically looked at it.

Some really good friends of mine run learning and development organizations and they have consulting companies that do talent and organizational change, et cetera. And they are literally baffled right now at the dramatic shift in what it takes to get teams to work together.

In the more flexible-thinking communities of up-and-coming business, a lot of the folks that start businesses today are technology people. They may end up in the coffee industry or in the restaurant industry, but these folks know technology. They are not unaware of what they need to do to use technology.

So, business knowledge and technology knowledge are mixing together. They are good when they get swirled together. You can’t live with one and not have the other.

For example, a developer needs to understand the implications of economics when they write something for cloud deployment. If they build an application that does not economically work inside the constructs of the new world, that’s a bad business decision, but it’s in the hands of the developer.

It’s an interesting thing. We’ve had that need for developer-empowerment before, but then you had a whole other IT group put restrictions on them, right? They’d say, “Hey, there’s only so much hardware you get. That’s it. Make it work.” That’s not the case anymore, right?

We have created a whole new training track category called Talent Enablement that CTP and HPE have put together around the actual consumers of cloud. 

At the same time, you now have an operations person involved with figuring out how to architect for the cloud, and they may think that the developers do not understand what has to come together.

As a result, we have created a whole new training track category called Talent Enablement that CTP and HPE have put together around the actual consumers of cloud.

We have found that much of an organization’s delay in rolling this out is because the people who are consuming the cloud are not ready or knowledgeable enough on how to maximize their investment in cloud. This is not for the people building up those core services that I talked about, but for the consumers of the services, the business units.

We are rolling that out later this year, a full Talent Enablement track around those new roles.

Gardner: This targets the people in that line of business, decision-making, planning, and execution role. It brings them up to speed on what cloud really means, how to consume it. They can then be in a position of bringing teams together in ways that hadn’t been possible before. Is that what you are getting at?

Teamwork wins 

Christiansen: That’s exactly right. Let me give you an example. We did this for a telecommunications company about a year ago. They recognized that they were not going to be able to roll out their common core services.

The central team had built out about 12 common core services, and they knew almost immediately that the rest of the organization, the 11 other lines of business, were not ready to consume them.

They had been asking for it, but they weren’t ready to actually drive this new Ferrari that they had asked for. There were more than 5,000 people who needed to be up-skilled on how to consume the services that a team of about 100 people had put together.

Now, these are not classic technical services like AWS architecture, security frameworks, or access control lists (ACLs) and network ACLs (NACLs) for networking traffic, or how you connect back and backhaul, that kind of stuff. None of that.

I’m talking about how to make sure you don’t get a cloud bill that’s out of whack. How do I make sure that my team is actually developing in the right way, in a safe way? How do I make sure my team understands the services we want them to consume so that we can support it?

It was probably 10 or 12 basic use domains. The teams simply didn’t understand how to consume the services. So we helped this organization build a training program to bring up the skills of these 4,000 to 5,000 people.

Now think about that. That has to happen in every global Fortune 2000 company, where you may only have a central team of 100, and maybe 50 cloud people. But they may need to turn over the services to 1,000 people.

We have a massive, massive, training, up-skilling, and enablement process that has to happen over the next several years.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.



How SMBs impacted by natural disasters gain new credit thanks to a finance matchmaker app

The next BriefingsDirect digital business innovation panel discussion explores how a finance matchmaker application assists small businesses impacted by natural disasters in the United States. 

By leveraging the data and trust inherent in established business networks, Apparent Financing by SAP creates digital handshakes between lenders and businesses in urgent need of working capital financing.

The solution’s participants — all in the SAP Ariba Network — are putting the innovative model to good use by initially assisting businesses impacted directly or via supply chain disruptions from natural disasters such as forest fires and hurricanes.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To learn how data-driven supplier ecosystems enable new kinds of matchmaker finance relationships that work rapidly and at low risk, we are joined by our panel: Vishal Shah, Co-Founder and General Manager of Apparent Financing by SAP; Alan Cohen, Senior Vice President and General Manager of Payments and Financing at SAP Ariba; and Winslow Garnier, President of Garnier Group Technology Solutions, LLC in San Diego, California. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Vishal, what’s unique about this point in time that allows organizations like Apparent Financing to play matchmaker between lenders and businesses?


Shah: The historical problem that limited small businesses from accessing financial services with ease was a lack of trust and transparency. It’s also popularly known as the information asymmetry problem.

At this point in time there are three emerging trends and forces that are transforming the small business finance industry.

The first one is the digitalization of small businesses, driven by digital bookkeeping systems that are becoming more affordable and accessible — even to the smallest of businesses globally.

The second force is financial industry innovation. The financial crisis of 2008 actually unlocked new opportunities and spurred the development of an industry called FinTech. This industry’s strong focus on delivering a frictionless customer experience is the key enabler.

And the third force is technological innovation. This includes cloud computing, mobility, and application programming interfaces (APIs). They combine to make it economically feasible to gain access to financial information about small businesses that is stored in today’s digital bookkeeping systems and e-commerce platforms. It’s the confluence of these three forces that solves the information asymmetry problem, leading to both a reduction of risk and a lower cost to serve small businesses.

Gardner: Alan Cohen, why is this new business climate for small- to medium-sized businesses (SMBs) a perfect fit for something like the SAP Ariba Network? Tell us how your business model and business network are helping Apparent Financing with its task.


Cohen: Think about it in two ways. First, think differently about combining the physical and the financial supply chains. Historically, the Ariba Network has been focused on connecting buyers with their suppliers. Now we are taking the next step in this evolution to better connect the physical with the financial supply chain to provide choice and value to suppliers about access to capital. 

The second piece of it is in leveraging the data. There’s a ton of excitement in this world for artificial intelligence (AI) and machine learning (ML), and I am a big proponent of all of that. These are going to be awesome technologies that will help society and businesses as they evolve. It’s super important to keep in mind that the strength of the Ariba Network is not just its size — $2.1 trillion in annual spend, 3.4 million buyers and suppliers — it’s in the data. The intelligence drawn from this transactional data will enable lenders to make risk-adjusted lending decisions. 

And that real data value goes beyond just traditional lending. It also helps lenders assess risk differently. This will help transform how lending is done to small and medium-sized businesses as time evolves. 

Gardner: Some of these trends have been in the works for 20 or 30 years but are now coming together in a way that can help real people benefit in real situations. Winslow, please tell us about Garnier Group Technology Solutions and how you have been able to benefit from this new confluence of financing, data, and business platforms.

Rapid recovery resources

Garnier: Garnier Group Technology Solutions provides intrusion detection, installation services, security cameras, and Wi-Fi installation primarily for corporations and municipalities. We are a supplier and an installer with consistent requirements for working capital to keep our business functioning correctly. 


A major challenge showed up for us in late 2017 when the Southern California fires took place. We had already ordered product for several installation sites. Because of the fires, those sites actually burned down. The time needed to recover from already having spent the capital, plus the fact that the business was no longer coming our way, created a real need for us. 

We previously looked at working capital lines and other resources. The challenge, though, is that it is fairly complex. Our company is really good at what we do, but we are not good at finding financing and taking the time to interview multiple banks, multiple lenders. The process to just find the right type of lender to work with us — that in itself could take four to six months. 

In this case, we did not have the time or the manpower to do the due diligence necessary to make that all happen for us. Also, on a day-to-day basis, in dealing with large corporations, we can hope to get paid in 30 days, but in reality that doesn’t happen. But we still need to pay our suppliers to maintain our credit terms and get delivery when required by making sure they get paid on the terms that we have agreed to. 

We were fortunate to then be introduced to Vishal [Shah at Apparent Financing]. From that point on, he turned into a one-stop shop for us. He took what we had and worked with it, under the SAP guidance. That helped us to have confidence that we were working with a credible source, and that they would deliver on what we agreed to. 

Gardner: We see that SMBs can be easily disrupted, they are vulnerable, and they have lag times between when they can get paid and when they have to pay their own suppliers. And they make up a huge part of the overall economy.

Vishal, this seems like a big market opportunity and addressable market. Yet traditional finance organizations mostly ignore this segment. Why is that? Why has bringing finance options to companies like Garnier Group been problematic in the past? 

Bank shies, Network tries

Shah: Going back to early 2008 when the global financial crisis started, there was a lot of supply in the market and small businesses did not have to struggle as much to get access to capital.

Since then, banks have been faced with increasing regulatory burdens, as well as the fact that the cost to serve SMBs became much larger. Therefore the mainstream banks have shied away from lending to and serving this market. That has been one of the big factors.

The second is that banks have not truly embraced the power of technology. They haven’t focused on delivering customer-centric propositions. Most of the banks today are very product-centric organizations, and very siloed in their approach to serving customers.

The fundamental problems were, one, the structure of the banks and the way they were incentivized to serve this market. And secondly, the turn of events that happened post the financial crisis, which effectively resulted in the traditional lenders just backing out from this market, significantly reducing the supply side of the equation.

Banks have not truly embraced the power of technology. They haven’t focused on delivering customer-centric propositions. Most banks today are very product-centric and siloed.

Gardner: Alan, it’s a great opportunity to show how this model can work by coming to the rescue of SMB organizations impacted by natural disasters. But it seems to me that this is a bellwether for a future wave of business services because of the transparency, data-driven intelligence, security, and mission-critical nature of SAP and SAP Ariba’s networks.

Do you see this as I do, as an opening inning in a longer game? Should we be thinking newly about how business networks and data-driven intelligence fosters entirely new markets and new business models?

SMB access to financing evolves

Cohen: Absolutely. I see this as the early stages of an evolution. There are a few reasons. One is ease. Winslow talked about it. It can be very hard for small businesses to access different banks or lenders to get financing. They need an easier way to do it. We have seen transformation in consumer banking, but that transformation has not followed through into business banking. So I think one opportunity is in bringing ease to the process transformation.

Another piece is trust. What I mean by that is the data from SAP and SAP Ariba is high-quality data that lenders can trust. And being able to trust that information is a big part of this process.

Finally, like with any network, being able to connect businesses with lenders has to evolve — just as Ariba has connected buyers with suppliers to transact. This is a natural evolution of the SAP Ariba Network. 

I am very excited. And while we are still early in a longer journey, this process will fundamentally change how business banking is done.

Gardner: Winslow, you had an hour of need. Certainly by circumstances that were beyond your control. You heard from Vishal. What happened next? How were they able to match you up with financing, and what was the outcome?

Garnier: The really unique thing here is that we were able to submit a single application to receive offers from more than one lender. We decided and agreed that it made sense to select Fundation as the lender of choice. All the lenders were competitive, but Fundation had a couple of features that were specific to our business and worked better for us.

I have to tell you, at first I was skeptical that we would get this done soon enough. At the same time, we had confidence — having worked through the SAP Ariba Network previously. Once we submitted the application, we stopped looking for other resources because we felt that this would work for us. Fortunately, it did end up that way.

Within 30 days we were talking with lenders. We received a term sheet to understand what would be available for us. That gave us time internally to make decisions on what would work best. We closed on the transaction and it’s been a good working relationship between us and Fundation ever since.

Gardner: Is this going to be more than a one-shot deal, a new business operating model for you all? Are you going to be able to take a revolving line of credit and thereby have a more secure approach to business? This may even allow you to increase the risk you are willing to take to find new clients. So is this a one-shot, band aid — or is this something that’s changed your business model?

Not just reparations, relationships 

Garnier: Oh, absolutely. Having a revolving line of credit has become a staple for us because it’s a way to maximize our cash flow within our business. We can now add additional clients and take on new jobs that we may still have taken on before, but would have had to push out later in time.

We are able to deliver our services faster at this point in time. And so it is the absolute right solution for what we needed and what we will continue to use over time.

Having a revolving line of credit has become a staple for us because it’s a way to maximize our cash flow within our business. We can add additional clients and take on new jobs.

Gardner: Vishal, it’s clear that organizations like Garnier Group are benefiting from this new model. It’s clear that SAP and SAP Ariba have the platform, the data, and the integrity and trust to deliver on it.

But another big component here is to make sure that the financing organizations are comfortable, eager, and are gaining the right information to make their lending decisions. Tell us about that side of the equation. How do organizations like Fundation and others view this, and how do you keep them eager to find new credit opportunities?

Shah: If you think of Fundation, they are not a typical bank. They are willing to look at e-commerce platforms and technology service providers as new distribution channels through which they can access new markets and a new customer base.

Beyond that, they are using these channels as a way to market their own products and solutions. They have much bigger reasons to look at these ecosystems that we have developed over the years. 

In my view, traditional banks and lending institutions look at businesses like Garnier Group using what I call the rearview mirror. What I mean by that is lenders mostly base their lending decisions or credit decisions by obtaining information from credit bureaus, which they believe is an indicator of past performance. And that good indicator of their past performance is also taken as an indicator of good future performance, which, yes, does work in some cases — but not in all. 

By working with us, lenders like Fundation can not only look at traditional data sources like credit bureaus, they are able to also assess the financial health and the risk of lending to a business through alternative data sources like the one Alan mentioned, which is the SAP Ariba supply chain data. This provides them an increased degree of confidence before they make prudent lending decisions. 

The data in itself doesn’t create the value. When processed in an appropriate manner, and when we learn from the insights the data provides, our lending partner gains both a precise view of the historical business performance and a realistic view of the future cash flow positions of a small business. That is an incredibly powerful proposition for our lending partners to comfortably and confidently lend to businesses such as Garnier Group.

Gardner: This appears to be a win, win, win. So far, everybody seems to be benefiting. Yet this could not have happened until the innovation of the model was recognized, and then executed on.

So how did this come about, Alan? How did such payments and financing innovation get started? SAP.iO Venture Studio got involved with Apparent Financing. How did SAP, SAP Ariba, and Apparent Financing come together to allow this sort of innovation to take place — and not just remain in theory?

Data serves to simplify commerce 

Cohen: Like anything, it begins with the marketplace and looking at a problem. At the end of the day, financing is very inefficient and expensive for both suppliers and lenders. 

From a supplier perspective, we saw this as an overly complex process. And it’s not always the most competitive because people don’t have the time. From a lender perspective, originating loans and mitigating risk are very important. Yet this process hasn’t gone through a transformation.

We looked at it all and said, “Gosh, how can we better leverage the Ariba Network and the data involved in it to help solve this problem?”

SAP.iO is the venture arm of SAP that incubates new businesses. About a year-and-a-half ago, we began bringing this to market to challenge how things had been done and to open up new opportunities. It’s a very innovative approach to challenge the status quo, to get businesses and lenders to think and look at this differently and seize opportunities.

And if you think about what the SAP Ariba Network is, we run commerce. And we want the lenders to fund commerce. We are simply helping to bring these two together, leveraging some incredible data insights along with the security and trust of the SAP and SAP Ariba brands.

Gardner: Of course, it’s important to have the underlying infrastructure in place to provide such data availability, trust, integrity, and support of the mission-critical nature. But in more and more cases nowadays, the user experience and simplicity elements are terribly important.

Winslow, when it came to how you interacted with the process, did you find it simple? Did you find it direct? How important was that for you as an SMB to be able to take advantage of this?

Garnier: We found it very straightforward. It didn’t require us going outside of the data we have internally. We didn’t have to bring in our outside accounting firm or a legal firm to begin the process. We were able to interface by e-mail and simple phone calls. It was so simple. I’m still surprised that, based on our previous experiences, we were able to get this to happen as quickly as it did.

Gardner: Vishal, how do you account for the ability to make this simple and direct for both sides of the equation? Is there something about the investments SAP has made over the years in technology and the importance of the user experience?

How do you attribute getting from what could be a very complex process to something that’s boiled down to its essential simplicity?

Transparent transactions build trust 

Shah: A lot of people misunderstand the user experience and correlate it with developing a very nice front end, creating an online experience, and making it seamless and easy to use. I think that is only a part of the truth, and part of the story.

What goes on behind that nice-looking user interface is really eliminating what I call the friction points in a customer’s journey. And a lot of those friction points are actually introduced because of manual processes behind those nice-looking screens.

What goes on behind that nice-looking user interface is really eliminating what I call the friction points in a customer’s journey. A lot of those friction points are actually introduced because of manual processes behind the nice-looking screens.

Secondly, there are a lot of exceptions — business exceptions — when you’re trying to facilitate a complex transaction like a financial credit transaction.

You must overcome these challenges. You must ensure that customers and borrowers have a seamless customer experience. We provide a transparent process, accessible to them so they know at every single point in time where they are with their credit process: Are they approved, are they declined, are they waiting on certain decisions, or are they negotiating the deal with the partner?

That is one element: we bring an increased level of transparency and openness to the process. Traditionally these services have been opaque. Historically, businesses submit applications to banks and literally wait for weeks to get a decision. They don’t know what’s going on inside the four walls of the bank for those many weeks.

The second thing we did is to help our partners understand the exceptions that they traditionally encounter in their credit decision process. As a result, they can reduce those manual exceptions or completely eliminate them with the help of technology.

Again, the insights we generated from the data that we already had about the businesses helped us overcome those challenges and overcome the friction points in the entire interaction on both sides.

Gardner: Alan Cohen, where do you go next with this particular program around financing? Is this a bellwether for other types of business services that depend on the platform, the data integrity, and the simplicity of the process?

Win-win lending scenarios 

Cohen: Simplicity is, I think, first and foremost. Vishal and Winslow talked about it. Just as you can get a consumer loan online, it should be just as simple for a business to get access to capital online. Make that a pleasurable process, not a complex process that takes a long time. Simplicity cannot be underrated to help drive this change.

When it comes to the data, we’ve only scratched the surface of what can be done. We talked about risk-adjusted lending decisions based on transactional information. What we’ll see more of is price elasticity, around both risk and demand, coming into play as banks better manage their portfolios — not with theoretical information but through practical information. They’ll have better insights to manage their portfolios.

Let’s not lose sight of what we’re trying to accomplish: Broaden the capital availability to the community of businesses. There are so many different types of lending scenarios that could happen. You’ll see more of those scenarios become available to businesses over time in a much more efficient, cost-effective, and economic manner.

It’s not just a shifting of cost. It will be an elimination of cost — where both parties win in this process.

Gardner: Winslow, for other SMBs that face credit issues or didn’t pursue revolving credit because of the complexity, what advice can you offer? What recommendations might you have for organizations to rethink their financing now that there are processes like what Apparent Financing provides?

Garnier: If I take a step back, we made the classic mistake of not putting a bank line of credit in place prior to this event happening to us. The challenge was the time needed for the vetting process. We would rather pursue new clients than spend our time having to work with the different lenders.

Financing really is something that I think most small businesses should pursue, but I highly recommend they pursue it under something like what Apparent Financing has arranged. That’s because of the simplicity, the one-stop portal to find what you are looking for, the efficiency of the process, and the quality of the lenders.

All the folks that we ended up speaking to were very capable, and they wanted to do business with us, which was really outstanding. It was very different from the pushback and the, “We’ll let you know within the next 30 to 60 days or so.” That is very challenging.

We have not only added new clients since we put in the revolving credit, but our DUNS score has improved, and our credit-rating has continued to improve. It’s low risk for an SMB to look at a platform like Apparent Financing to see if this could be useful to them. I highly recommend it. It’s been nothing but a positive experience for us.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: SAP Ariba.



The Open Group panel explores ways to help smart cities initiatives overcome public sector obstacles

Smart city graphic. Credit: Wikimedia Commons

The next BriefingsDirect thought leadership panel discussion focuses on how The Open Group is spearheading ways to make smart cities initiatives more effective.

Many of the latest technologies — such as Internet of Things (IoT) platforms, big data analytics, and cloud computing — are making data-driven and efficiency-focused digital transformation more powerful. But exploiting these advances to improve municipal services for cities and urban government agencies faces unique obstacles. Challenges range from a lack of common data-sharing frameworks, to immature governance over multi-agency projects, to the need to find investment funding amid tight public sector budgets.

The good news is that architectural framework methods, extended enterprise knowledge sharing, and common specifying and purchasing approaches have solved many similar issues in other domains.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

BriefingsDirect recently sat down with a panel to explore how The Open Group is ambitiously seeking to improve the impact of smart cities initiatives by implementing what works organizationally among the most complex projects.

The panel consists of Dr. Chris Harding, Chief Executive Officer at Lacibus; Dr. Pallab Saha, Chief Architect at The Open Group; Don Brancato, Chief Strategy Architect at Boeing; Don Sunderland, Deputy Commissioner, Data Management and Integration, New York City Department of IT and Telecommunications; and Dr. Anders Lisdorf, Enterprise Architect for Data Services for the City of New York. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Chris, why are urban and regional government projects different from other complex digital transformation initiatives?

Chris Harding


Harding: Municipal projects have both differences and similarities compared with corporate enterprise projects. The most fundamental difference is in the motivation. If you are in a commercial enterprise, your bottom line motivation is money, to make a profit and a return on investment for the shareholders. If you are in a municipality, your chief driving force should be the good of the citizens — and money is just a means to achieving that end.

This is bound to affect the ways one approaches problems and solves problems. A lot of the underlying issues are the same as corporate enterprises face.

Bottom-up blueprint approach

Brancato: Within big companies we expect that the chief executive officer (CEO) leads from the top of a hierarchy that looks like a triangle. This CEO can do a cause-and-effect analysis by looking at instrumentation, global markets, drivers, and so on to affect strategy. And what an organization will do is then top-down.

In a city, often it’s the voters, the masses of people, who empower the leaders. And the triangle goes upside down. The flat part of the triangle is now on the top. This is where the voters are. And so it’s not simply making the city a mirror of our big corporations. We have to deliver value differently.

There are three levels to that. One is instrumentation, so installing sensors and delivering data. Second is data crunching, the ability to turn the data into meaningful information. And lastly, urban informatics that tie back to the voters, who then keep the leaders in power. We have to observe these in order to understand the smart city.

Pallab Saha


Saha: Two things make smart city projects more complex. First, typically large countries have multilevel governments. One at the federal level, another at a provincial or state level, and then city-level government, too.

This creates complexity because cities have to align to the state they belong to, and also to the national level. Digital transformation initiatives and architecture-led initiatives need to help.

Secondly, in many countries around the world, cities are typically headed by mayors who hold merely ceremonial positions. They have very little authority over how the city runs, because the city may belong to a state, and the state might have a chief minister or a premier, for example. And at the national level, you could have a president or a prime minister. This overall governance hierarchy needs to be factored in when smart city projects are undertaken.

These two factors bring in complexity and differentiation in how smart city projects are planned and implemented.

Sunderland: I agree with everything that’s been said so far. In the particular case of New York City — and with a lot of cities in the US — cities are fairly autonomous. They aren’t bound to the states. They have an opportunity to go in the direction they set.

The problem is, of course, the idea of long-term planning in a political context. Corporations can choose to create multiyear plans and depend on the scale of the products they procure. But within cities, there is a forced changeover of management every few years. Sometimes it’s difficult to implement a meaningful long-term approach. So, they have to be more reactive.

Create demand to drive demand


Credit: Wikimedia Commons

Driving greater continuity can nonetheless come by creating ongoing demand around the services that smart cities produce. Under [former New York City mayor] Michael Bloomberg, for example, when he launched 311, he had a basic philosophy, which was: you should implement change that can't be undone.

If you do something like offer people the ability to reduce 10,000 [city access] phone numbers to three digits, that’s going to be hard to reverse. And the same thing is true if you offer a simple URL, where citizens can go to begin the process of facilitating whatever city services they need.

In like-fashion, you have to come up with a killer app with which you habituate the residents. They then drive demand for further services on the basis of it. But trying to plan delivery of services in the abstract — without somehow having demand developed by the user base — is pretty difficult.

By definition, cities and governments have a captive audience. They don’t have to pander to learn their demands. But whereas the private sector goes out of business if they don’t respond to the demands of their client base, that’s not the case in the public sector.

The public sector has to focus on providing products and tools that generate demand, and keep it growing in order to create the political impetus to deliver yet more demand.

Gardner: Anders, it sounds like there is a chicken and an egg here. You want a killer app that draws attention and makes more people call for services. But you have to put in the infrastructure and data frameworks to create that killer app. How does one overcome that chicken-and-egg relationship between required technical resources and highly visible applications?

Lisdorf: The biggest challenge, especially when working in governments, is you don’t have one place to go. You have several different agencies with different agendas and separate preferences for how they like their data and how they like to share it.

This is a challenge for any Enterprise Architecture (EA) effort because you can't work from the top down; you can't simply specify your architecture roadmap. You have to pick the projects that are convenient to do and that fit into your larger picture, and so on.

It’s very different working in an enterprise and putting all these data structures in place than in a city government, especially in New York City.

Gardner: Dr. Harding, how can we move past that chicken and egg tension? What needs to change for increasing the capability for technology to be used to its potential early in smart cities initiatives?

Framework for a common foundation 

Harding: As Anders brought up, there are lots of different parts of city government responsible for implementing IT systems. They are acting independently and autonomously — and I suspect that this is actually a problem that cities share with corporate enterprises.

Very large corporate enterprises may have central functions, but often that is small in comparison with the large divisions that it has to coordinate with. Those divisions often act with autonomy. In both cases, the challenge is that you have a set of independent governance domains — and they need to share data. What’s needed is some kind of framework to allow data sharing to happen.

This framework has to be at two levels. It has to be at a policy level — and that is going to vary from city to city or from enterprise to enterprise. It also has to be at a technical level. There should be a supporting technical framework that helps the enterprises, or the cities, achieve data sharing between their independent governance domains.
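Harding's two-layer framework can be sketched concretely: a policy layer that records which governance domains may share which classes of data, and a technical layer that enforces that policy whenever data actually moves. Everything below (class names, domains, data classes) is a hypothetical illustration, not any Open Group specification.

```python
from dataclasses import dataclass, field

@dataclass
class SharingPolicy:
    """Policy layer: which domains may exchange which classes of data."""
    allowed: set = field(default_factory=set)  # (src_domain, dst_domain, data_class)

    def permit(self, src, dst, data_class):
        self.allowed.add((src, dst, data_class))

    def is_allowed(self, src, dst, data_class):
        return (src, dst, data_class) in self.allowed

def share(policy, src, dst, data_class, payload):
    """Technical layer: enforce the policy before any data moves."""
    if not policy.is_allowed(src, dst, data_class):
        raise PermissionError(f"{src} may not send {data_class} to {dst}")
    return {"from": src, "to": dst, "class": data_class, "data": payload}

# Two independent governance domains agree to share one class of data.
policy = SharingPolicy()
policy.permit("transport", "environment", "traffic-counts")
msg = share(policy, "transport", "environment", "traffic-counts", [120, 98])
```

The point of the split is that the policy layer can vary city by city while the enforcing technical layer stays the same.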

Gardner: Dr. Saha, do you agree that a common data framework approach is a necessary step to improve things?

Saha: Yes, definitely. Having common data standards across different agencies and having a framework to support that interoperability between agencies is a first step. But as Dr. Lisdorf mentioned, it’s not easy to get agencies to collaborate with one another or share data. This is not a technical problem. Obviously, as Chris was saying, we need policy-level integration both vertically and horizontally across different agencies.

Some cities set up urban labs as a proof of concept. You can then make an assessment of how demand and supply are aligned.

One way I have seen that work in cities is they set up urban labs. If the city architect thinks they are important for citizens, those services are launched as a proof of concept (POC) in these urban labs. You can then make an assessment on whether the demand and supply are aligned.

Obviously, it is a chicken-and-egg problem. We need to go beyond frameworks and policies to get to where citizens can try out certain services. When I use the word “services” I am looking at integrated services across different agencies or service providers.

The fundamental principle here for the citizens of the city is that there is no wrong door: he or she can approach any department or any agency of the city and get a service. The citizen, in my view, is approaching the city as a singular authority — not a specific agency or department of the city.

Gardner: Don Brancato, if citizens in their private lives can, at an e-commerce cloud, order almost anything and have it show up in two days, there might be higher expectations for better city services.

Is that a way for us to get to improvement in smart cities, that people start calling for city and municipal services to be on par with what they can do in the private sector?

Public- and private-sector parity

Don Brancato


Brancato: You are exactly right, Dana. That’s what’s driven the do it yourself (DIY) movement. If you use a cell phone at home, for example, you expect that you should be able to integrate that same cell phone in a secure way at work. And so that transitivity is expected. If I can go to Amazon and get a service, why can’t I go to my office or to the city and get a service?

This forms some of the tactical reasons for better using frameworks, to be able to deliver such value. A citizen is going to express their displeasure with their vote, or by moving to some other place and no longer working or living there.

Traceability is also important. If I use some service, it’s traceable to some city strategy and to some data that goes with it. So the traceability model, in its abstract form, is the idea that if I collect data, it should trace back to some service. And it allows me to build a body of metrics that shows continuously how services are getting better. Because data, after all, is the enablement of the city, and it proves that by demonstrating metrics that show that value.

So, in your e-commerce catalog idea, absolutely, citizens should be able to exercise the catalog. There should be data that shows its value, repeatability, and the reuse of that service for all the participants in the city.

Gardner: Don Sunderland, if citizens perceive a gap between what they can do in the private sector and public — and if we know a common data framework is important — why don’t we just legislate a common data framework? Why don’t we just put in place common approaches to IT?

Sunderland: There have been some fairly successful legislative actions vis-à-vis making data available and more common. The Open Data Law, which New York City passed back in 2012, is an excellent example. However, the ability to pass a law does not guarantee the ability to solve the problems to actually execute it.

Donald Sunderland


In the case of the service levels you get on Amazon, that implies a uniformity not only of standards but oftentimes of [hyperscale] platform. And that just doesn’t exist [in the public sector]. In New York City, you have 100 different entities; 50 to 60 of them are agencies providing services. They have built vast legacy IT systems that don’t interoperate. It would take a massive investment to make them interoperate. You still have to have a strategy going forward.

The idea of adopting standards and frameworks is one approach. The idea is you will then grow from there. The idea of creating a law that tries to implement uniformity — like an Amazon or Facebook can — would be doomed to failure, because nobody could actually afford to implement it.

Since you can’t do top-down solutions — even if you pass a law — the other way is via bottom-up opportunities. Build standards and governance opportunistically around specific centers of interest that arise. You can identify city agencies that begin to understand that they need each other’s data to get their jobs done effectively in this new age. They can then build interconnectivity, governance, and standards from the bottom-up — as opposed to the top-down.

Gardner: Dr. Harding, when other organizations are siloed, when we can’t force everyone into a common framework or platform, loosely coupled interoperability has come to the rescue. Usually that’s a standardized methodological approach to interoperability. So where are we in terms of gaining increased interoperability in any fashion? And is that part of what The Open Group hopes to accomplish?

Not something to legislate

Harding: It’s certainly part of what The Open Group hopes to accomplish. But Don was absolutely right. It’s not something that you can legislate. Top-down standards have not been very successful, whereas encouraging organic growth and building on opportunities have been successful.

The prime example is the Internet that we all love. It grew organically at a time when governments around the world were trying to legislate for a different technical solution; the Open Systems Interconnection (OSI) model for those that remember it. And that is a fairly common experience. They attempted to say, “Well, we know what the standard has to be. We will legislate, and everyone will do it this way.”

That often falls on its face. But to pick up on something that is demonstrably working and say, “Okay, well, let’s all do it like that,” can become a huge success, as indeed the Internet obviously has. And I hope that we can build on that in the sphere of data management.

It’s interesting that Tim Berners-Lee, who is the inventor of the World Wide Web, is now turning his attention to Solid, a personal online datastore, which may represent a solution or standardization in the data area that we need if we are going to have frameworks to help governments and cities organize.

A prime example is the Internet. It grew organically when governments were trying to legislate a solution. That often falls on its face. Better to pick up on something that is working in practice.

Gardner: Dr. Lisdorf, do you agree that the organic approach is the way to go, a thousand roof gardens, and then let the best fruit win the day?

Lisdorf: I think that is the only way to go because, as I said earlier, any top-down sort of way of controlling data initiatives in the city are bound to fail.

Gardner: Let’s look at the cost issues that impact smart cities initiatives. In the private sector, you can rely on both an operating expenditure (OPEX) budget and capital expenditures (CAPEX). But what is it about the funding process for governments and smart cities initiatives that poses an added challenge?

How to pay for IT?

Brancato: To echo what Dr. Harding suggested, cost and legacy will drive a funnel to our digital world and force us — and the vendors — into a world of interoperability and a common data approach.

Cost and legacy are what compete with transformation within the cities that we work with. What improves that is more interoperability and adoption of data standards. But Don Sunderland has some interesting thoughts on this.

Sunderland: One of the great educations you receive when you work in the public sector, after having worked in the private sector, is that the terms CAPEX and OPEX have quite different meanings in the public sector.

Governments, especially local governments, raise money through the sale of bonds. And within the local government context, CAPEX implies anything that can be funded through the sale of bonds. Usually there is specific legislation around what you are allowed to do with that bond. This is one of those places where we interact strongly with the state, which stipulates specific requirements around what that kind of money can be used for. Traditionally it was for things like building bridges, schools, and fixing highways. Technology infrastructure had been reflected in that, too.

What’s happened is that the CAPEX model has become less usable as we’ve moved to the cloud approach because capital expenditures disappear when you buy services, instead of licenses, on the data center servers that you procure and own.

This creates tension between the new cloud architectures, where most modern data architectures are moving to, and the traditional data center, server-centric licenses, which are more easily funded as capital expenditures.

The rules around CAPEX in the public sector have to evolve to embrace data as an easily identifiable asset [regardless of where it resides]. You can’t say it has no value when there are whole business models being built around the valuation of the data that’s being collected.

There is great hope for us being able to evolve. But for the time being, there is tension between creating the newer beneficial architectures and figuring out how to pay for them. And that comes down to paying for [cloud-based operating models] with bonds, which is politically volatile. What you pay for through operating expenses comes out of the taxes to the people, and that tax is extremely hard to come by and contentious.

So traditionally it’s been a lot easier to build new IT infrastructure and create new projects using capital assets rather than via ongoing expenses directly through taxes.

Gardner: If you can outsource the infrastructure and find a way to pay for it, why won’t municipalities just simply go with the cloud entirely?

Cities in the cloud, but services grounded

Saha: Across the world, many governments — not just local governments but even state and central governments — are moving to the cloud. But one thing we have to keep in mind is that at the city level, it is not necessary that all the services be provided by an agency of the city.

It could be a public/private partnership model where the city agency collaborates with a private party who provides part of the service or process. And therefore, the private party is funded, or allowed to raise money, in terms of only what part of service it provides.

Many cities are addressing the problem of funding by taking the ecosystem approach because many cities have realized it is not essential that all services be provided by a government entity. This is one way that cities are trying to address the constraint of limited funding.

Gardner: Dr. Lisdorf, in a city like New York, is a public cloud model a silver bullet, or is the devil in the details? Or is there a hybrid or private cloud model that should be considered?

Lisdorf: I don’t think it’s a silver bullet. It’s certainly convenient, but since this is new technology there are lot of things we need to clear up. This is a transition, and there are a lot of issues surrounding that.

One is the funding. The city still runs in a certain way, where you buy the IT infrastructure yourself. If it is to change, they must reprioritize the budgets to allow new types of funding for different initiatives. But you also have issues like the culture because it’s different working in a cloud environment. The way of thinking has to change. There is a cultural inertia in how you design and implement IT solutions that does not work in the cloud.

There is still a perception that the cloud is dangerous or unsafe. Another view is that the cloud is a lot safer in terms of having resilient solutions and keeping the data safe.

This is all a big thing to turn around. It’s not a simple silver bullet. For the foreseeable future, we will look at hybrid architectures, for sure. We will offload some use cases to the cloud, and we will gradually build on those successes to move more into the cloud.

Gardner: We’ve talked about the public sector digital transformation challenges, but let’s now look at what The Open Group brings to the table.

Dr. Saha, what can The Open Group do? Is it similar to past initiatives around TOGAF as an architectural framework? Or, looking at DoDAF in the defense sector, where they had similar problems, are there solutions there to learn from?

Smart city success strategies

Saha: At The Open Group, as part of the architecture forum, we recently set up a Government Enterprise Architecture Work Group. This working group may develop a reference architecture for smart cities. That would be essential to establish a standardization journey around smart cities.

One of the reasons smart city projects don’t succeed is because they are typically taken on as an IT initiative, which they are not. We all know that digital technology is an important element of smart cities, but it is also about bringing in policy-level intervention. It means having a framework, bringing cultural change, and enabling a change management across the whole ecosystem.

At The Open Group work group level, we would like to develop a reference architecture. At a more practical level, we would like to support that reference architecture with implementation use cases. We all agree that we are not going to look at a top-down approach; no city will have the resources or even the political will to do a top-down approach.

Given that we are looking at a bottom-up, or a middle-out, approach we need to identify use cases that are more relevant and successful for smart cities within the Government Enterprise Architecture Work Group. But this thinking will also evolve as the work group develops a reference architecture under a framework.

Gardner: Dr. Harding, how will work extend from other activities of The Open Group to smart cities initiatives?

Collective, crystal-clear standards 

Harding: For many years, I was a staff member, but I left The Open Group staff at the end of last year. In terms of how The Open Group can contribute, it’s an excellent body for developing and understanding complex situations. It has participants from many vendors, as well as IT users, and from the academic side, too.

Such a mix of participants, backgrounds, and experience creates a great place to develop an understanding of what is needed and what is possible. As that understanding develops, it becomes possible to define standards. Personally, I see standardization as kind of a crystallization process in which something solid and structured appears from a liquid with no structure. I think that the key role The Open Group plays in this process is as a catalyst, and I think we can do that in this area, too.

Gardner: Don Brancato, same question; where do you see The Open Group initiatives benefitting a positive evolution for smart cities?

Brancato: Tactically, we have a data exchange model, the Open Data Element Framework, that continues to grow within a number of IoT and industrial IoT patterns. That all ties together with an open platform, and into Enterprise Architecture in general, and specifically with models like DoDAF, MODAF, and TOGAF.

Data catalogs provide proof of the activities of human systems, machines, and sensors to the fulfillment of their capabilities and are traceable up to the strategy.

We have a really nice collection of patterns that recognize that the data is the mechanism that ties it all together. I would have a look at the open platform and the work they are doing to tie in the service catalog, which is a collection of activities that human systems or machines need in order to fulfill their roles and capabilities.

The notion of data catalogs, which are the children of these service catalogs, provides the proof of the activities of human systems, machines, and sensors to the fulfillment of their capabilities and then are traceable up to the strategy.
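Brancato's traceability chain can be sketched as a simple linked structure: every data catalog entry links to the service it supports, and every service links to a city strategy, so any piece of data can be walked back up to the value it serves. The class and instance names below are purely illustrative.

```python
# Hypothetical traceability chain: data catalog entry -> service -> strategy.
class Strategy:
    def __init__(self, name):
        self.name = name

class Service:
    def __init__(self, name, strategy):
        self.name, self.strategy = name, strategy

class DataCatalogEntry:
    def __init__(self, name, service):
        self.name, self.service = name, service

def trace(entry):
    """Walk from a data catalog entry up to the strategy it supports."""
    return [entry.name, entry.service.name, entry.service.strategy.name]

safety = Strategy("safe-city")
incidents = Service("incident-reporting", safety)
records = DataCatalogEntry("incident-records", incidents)
trace(records)  # ['incident-records', 'incident-reporting', 'safe-city']
```

In a real architecture repository the links would be richer (roles, sensors, metrics), but the principle is the same: no orphaned data, every entry answers "which strategy does this serve?"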

I think we have a nice collection of standards and a global collection of folks who are delivering on that idea today.

Gardner: What would you like to see as a consumer, on the receiving end, if you will, of organizations like The Open Group when it comes to improving your ability to deliver smart city initiatives?

Use-case consumer value

Sunderland: I like the idea of reference architectures attached to use cases because — for better or worse — when folks engage around these issues — even in large entities like New York City — they are going to be engaging for specific needs.

Reference architectures are really great because they give you an intuitive view of how things fit. But the real meat is the use case, which is applied against the reference architecture. I like the idea of developing workgroups around a handful of reference architectures that address specific use cases. That then allows a catalog of use cases for those who facilitate solutions against those reference architectures. They can look for cases similar to ones that they are attempting to resolve. It’s a good, consumer-friendly way to provide value for the work you are doing.

Gardner: I’m sure there will be a lot more information available along those lines at

When you improve frameworks, interoperability, and standardization of data frameworks, what success factors emerge that help propel the efforts forward? Let’s identify attractive drivers of future smart city initiatives. Let’s start with Dr. Lisdorf. What do you see as a potential use case, application, or service that could be a catalyst to drive even more smart cities activities?

Lisdorf: Right now, smart cities initiatives are out of control. They are usually done on an ad-hoc basis. One important way to get standardization enforced — or at least considered for new implementations – is to integrate the effort as a necessary step in the established procurement and security governance processes.

Whenever new smart cities initiatives are implemented, you would run them through governance tied to the funding and the security clearance of a solution. That’s the only way we can gain some sort of control.

This approach would also push standardization toward vendors because today they don’t care about standards; they all have their own. If we included in our procurement and our security requirements that they need to comply with certain standards, they would have to build according to those standards. That would increase the overall interoperability of smart cities technologies. I think that is the only way we can begin to gain control.
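Lisdorf's procurement-gate idea reduces to a simple set check: the city names the standards a proposal must comply with, and a vendor's claimed standards are compared against that list before funding or security clearance proceeds. The standard names below are purely illustrative, not any city's actual requirements.

```python
# Standards a proposal must claim compliance with -- names are illustrative.
REQUIRED_STANDARDS = {"open-data-api", "common-data-model", "security-baseline"}

def procurement_gate(vendor_claims):
    """Return the required standards a vendor proposal is missing.
    An empty set means the proposal clears the standards gate."""
    return REQUIRED_STANDARDS - set(vendor_claims)

gaps = procurement_gate(["open-data-api", "security-baseline"])
# -> {'common-data-model'}
```

Embedding a check like this in the procurement workflow is what turns standards from a recommendation into something vendors must build against.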

Gardner: Dr. Harding, what do you see driving further improvement in smart cities undertakings?

Prioritize policy and people 

Harding: The focus should be on the policy around data sharing. As I mentioned, I see two layers of a framework: A policy layer and a technical layer. The understanding of the policy layer has to come first because the technical layer supports it.

The development of policy around data sharing — or specifically on personal data sharing because this is a hot topic. Everyone is concerned with what happens to their personal data. It’s something that cities are particularly concerned with because they hold a lot of data about their citizens.

Gardner: Dr. Saha, same question to you.

Saha: I look at it in two ways. One is for cities to adopt smart city approaches. Identify very-high-demand use cases that pertain to the environment, mobility, the economy, or health — or whatever the priority is for that city.

Identifying such high-demand use cases is important because the impact is directly seen by the people, which is very important because the benefits of having a smarter city are something that need to be visible to the people using those services, number one.

The other part, that we have not spoken about, is we are assuming that the city already exists, and we are retrofitting it to become a smart city. There are places where countries are building entirely new cities. And these brand-new cities are perfect examples of where these technologies can be tried out. They don’t yet have the complexities of existing cities.

It becomes a very good lab, if you will, a real-life lab. It’s not a controlled lab, it’s a real-life lab where the services can be rolled out as the new city is built and developed. These are the two things I think will improve the adoption of smart city technology across the globe.

Gardner: Don Brancato, any ideas on catalysts to gain standardization and improved smart city approaches?

City smarts and safety first 

Brancato: I like Dr. Harding’s idea on focusing on personal data. That’s a good way to take a group of people and build a tactical pattern, and then grow and reuse that.

In terms of the broader city, I’ve seen a number of cities successfully introduce programs that use the notion of a safe city as a subset of other smart city initiatives. This plays out well with the public. There’s a lot of reuse involved. It enables the city to reuse a lot of their capabilities and demonstrate they can deliver value to average citizens.

In order to keep cities involved and energetic, we should not lose track of the fact that people move to cities because of all of the cultural things they can be involved with. That comes from education, safety, and the commoditization of price and value benefits. Being able to deliver safety is critical. And I suggest the idea of traceability of personal data patterns has a connection to a safe city.

Traceability in the Enterprise Architecture world should be a standard artifact for assuring that the programs we have trace to citizen value and to business value. Such traceability and a model link those initiatives and strategies through to the service — all the way down to the data, so that eventually data can be tied back to the roles.

For example, if I am an individual, data can be assigned to me. If I am in some role within the city, data can be assigned to me. The beauty of that is we automate the role of the human. It is even compounded to the notion that the capabilities are done in the city by humans, systems, machines, and sensors that are getting increasingly smarter. So all of the data can be traceable to these sensors.

Gardner: Don Sunderland, what have you seen that works, and what should we doing more of?

Mobile-app appeal

Sunderland: I am still fixated on the idea of creating direct demand. We can’t generate it. It’s there on many levels, but a kind of guerrilla tactic would be to tap into that demand to create location-aware applications, mobile apps, that are freely available to citizens.

The apps can use existing data rather than trying to go out and solve all the data sharing problems for a municipality. Instead, create a value-added app that feeds people location-aware information about where they are — whether it comes from within the city or without. They can then become habituated to the idea that they can avail themselves of information and services directly, from their pocket, when they need to. You then begin adding layers of additional information as it becomes available. But creating the demand is what’s key.
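Sunderland's "use existing data" tactic can be sketched as a location-aware client that only builds a query against a city's existing open-data API, rather than waiting for new data-sharing infrastructure. The host and dataset ID below are placeholders; the `$where`/`$limit` parameters and `within_circle` function follow the SoQL convention used by many Socrata-based municipal open-data portals, so substitute your city's actual values.

```python
from urllib.parse import urlencode

def nearby_requests_url(base, dataset, lat, lon, radius_m=500, limit=20):
    """Build a query URL for service requests near a location.

    `base` and `dataset` are illustrative placeholders; the query syntax
    assumes a SoQL-style endpoint (common on municipal open-data portals).
    """
    where = f"within_circle(location, {lat}, {lon}, {radius_m})"
    query = urlencode({"$where": where, "$limit": limit})
    return f"{base}/resource/{dataset}.json?{query}"

# A citizen's app asks: what's been reported within 500 m of where I stand?
url = nearby_requests_url("https://data.example.gov", "abcd-1234", 40.7128, -74.0060)
```

No new back-end is required: the app simply habituates residents to pulling city information from their pocket, which is exactly the demand-creation effect described above.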

When 311 was created in New York, it became apparent that it was a brand. The idea of getting all those services by just dialing those three digits was not going to go away. Everybody wanted to add their services to 311. This kind of guerrilla approach to a location-aware app made available to the citizens is a way to drive more demand for even more people.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: The Open Group.


The new procurement advantage: How business networks generate multi-party ecosystem solutions

The next BriefingsDirect intelligent enterprise discussion explores new opportunities for innovation and value creation inside of business networks and among their powerful ecosystem of third party services providers.

We now explore how business and technology platforms have evolved into data insights networks, and why third-party businesses and modern analytics solutions are joining forces to create entirely new breeds of digital commerce and supply chain knowledge benefits.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To explain how business ecosystems are becoming incubators for value-added services for both business buyers and sellers, we welcome Sean Thompson, Senior Vice President and Global Head of Business Development and Ecosystem at SAP Ariba. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Why is now the right time to highlight collaboration inside of business ecosystems?

Thompson: It’s a fascinating time to be alive when you look at the largest companies on this planet, the five most valuable companies: Apple, Amazon, Google, Microsoft, and Facebook — they all share something in common, and that is that they have built and hosted very rich ecosystems.

Ecosystems enrich the economy

These platforms represent wonderful economics for the companies themselves. But the members of the ecosystems also enjoy a very profitable place to do business. This includes the end-users profiting from the network effect that Facebook provides in terms of keeping in touch with friends, etc., as well as the advertisers who get value from the specific targeting of Facebook users based on end-user interests and values.

So, it’s an interesting time to look at where these companies have taken us in the overall economy. It’s also an indication for other parts of the technology world that ecosystems in the cloud era are becoming more important. In the cloud era, applications like SAP Ariba run on multitenant platforms hosted by the provider. No longer are these applications delivered on-premises.

Now, it’s a cloud application enjoyed by more than 3.5 million organizations around the world. It’s hosted by SAP Ariba in the cloud. As a result, you have a wonderful ecosystem that evolved around a particular audience to which you can provide new value. For us, at SAP Ariba, the opportunity is to have an open mindset, much like the companies that I mentioned.

It is a very interesting time because business ecosystems now matter more than ever in the technology world, and it’s mainly due to cloud computing.

Gardner: These platforms create escalating value. Everybody involved is a winner, and the more they play, the more winnings there are for all. The participation grows the pie, builds a virtuous adoption cycle.

Is that how you view business ecosystems, as an ongoing value-added creation mechanism? How do you define a business ecosystem, and how is that different from five years ago?

Thompson: I say this to folks that I work with everyday — not only inside of SAP Ariba, but also to members of our partner community, our ecosystem – “We are privileged in that not every company can talk about an ecosystem, mainly because you have to have relevance in order for such an ecosystem to develop.”

I wrote an article recently wherein I was reminded of growing up in Montana. I’m a big fly fisherman. I grew up with a fly rod in my hand. It didn’t dawn on me until later in my professional life that I used to talk about ecosystems as a kid. We used to talk about the various bug hatches that would happen and how that would make the trout go crazy.

I was taught by my dad about the certain ecosystems that supported different bugs and the different life that the trout feed on. In order to have an ecosystem — whether it was fly-fishing as a kid in the natural environment or business ecosystems built today in the cloud — it starts with relevance. Do you have relevance, much like Microsoft had relevance back in the personal computer (PC) era?

Power of relevance

Apple created the PC era, but Microsoft decided to license the PC operating system (OS) to many and thus became relevant to all the third-party app developers. The Mac was closed. The strategy that Apple had in the beginning was to control this closed environment. That led to a wonderful user experience. But it didn’t lead to a place where third-party developers could build applications and get them sold.

Windows and a Windows-compatible PC environment created a profitable place that had relevance. More PC manufacturers used Windows as a standard, third-party app developers could build and sell the applications through a much broader distribution network, and that then was Microsoft’s relevance in the early days of the PC.

Other ecosystems have to have relevance, too. There have to be the right conditions for third parties to be attracted, and ultimately — in the business world — it’s all about, if you will, profit. Can I enjoy a profitable existence by joining the ecosystem?

You have to have the right conditions for third parties to be attracted. In the business world, it’s all about profit. Can I enjoy a profitable existence by joining the ecosystem?

At SAP Ariba, I always say, we are privileged because we do have relevance. Salesforce also had relevance in its early days when it distributed its customer relationship management (CRM) app widely and efficiently. They pioneered the notion of only needing a username, a password, and a credit card to distribute and consume a CRM app. Once that Sales Force Automation app was widely distributed, all of a sudden you had an ecosystem that began to pay attention because of the relevancy that Salesforce had. It was able to turn the relevancy of the app into an ecosystem that was based on a platform, and they introduced the AppExchange for third parties to extend the value of the applications and the platform.

It’s very similar to what we have here at SAP Ariba. The relevance in the ecosystem is supported by market relevance from the network. So it’s a fascinating time.

Gardner: What exactly is the relevance with the SAP Ariba platform? You’re in an auspicious place — between buyers and sellers at the massive scale that the cloud allows. And increasingly the currency now is data, analytics, and insights.

Global ERP efficiency

Thompson: It’s very simple. I first got to know Ariba professionally back in the 1990s. I was at Deloitte, where I was one of those classic re-engineering consultants in the mid-90s. Then during the Y2K era, companies were getting rid of the old mainframes because they thought the code would fail when the calendar turned over to the year 2000. That was a wonderful perfect storm in the industry and led to the first major wave of consuming enterprise resource planning (ERP) technology and software.

Ariba was born out of that same era, with an eye toward procurement and helping the procurement organization within companies better manage spend.

ERP was about making spend more efficient, too, and making the organization more efficient overall. It was not just about reducing waste inherent within the silos of an organization. It was also about the waste in how companies spent money, managed suppliers, and managed spend against contracts that they had with those suppliers.

And so, Ariba — not unlike Salesforce and other business applications that became relevant — was the first to focus on the buyer, in particular the buyer within the procurement organization. The focus was on using a software application to help companies make better decisions around who they are sourcing from, their supply chain, and driving end-users to buy based on contracts that can be negotiated. It became an end-to-end way of thinking about your source-to-settle process. That was very much an application-led approach that SAP Ariba has had for the better part of 20 years.

When SAP bought Ariba in 2012, it included Ariba naturally within the portfolio of the largest ERP provider, SAP. But instead of thinking of it as a separate application, now Ariba is within SAP, enabling what we call the intelligent enterprise. The focus remains on making the enterprise more intelligent.

Pioneers in the cloud

SAP Ariba was also one of the first to pioneer moving from an on-premises world into the cloud. And by doing so, Ariba created a business network. It was very early in pioneering the concept of a network where — by delighting the buyer and the procurement organization – that organization also brought in their suppliers with them.

Ariba early on had the concept of, “Let’s create a network where it’s not just one-to-one between a buyer and a supplier. Rather let’s think about it as a network — as a marketplace — where suppliers can make connections with many buyers.”

And so, very early on, SAP Ariba created a business network. That network today is made up of 3.5 million buyers and sellers doing $2.2 trillion annually in commerce through the Ariba Network.

Now, as you pointed out, the currency is all about data. Because we are in the cloud, a network, and multitenant, our data model is structured in a way that is far better than in an on-premises world. We now live within a cloud environment with a consistent data structure. Everybody is operating within the same environment, with the same code base. So now the data we have within SAP Ariba — within that digital commerce data set — becomes incredibly valuable to third parties. They can think about how they can enhance that value.

Because we are in a cloud, a network, and multitenant, our data model is structured in a way that’s far better than in an on-premises world. We now live in a cloud environment with a consistent data structure.

As an example, we are working with banks today that are very interested in using data to inform new underwriting models. A supplier will soon be able to log in to the SAP Ariba Network and see that there are banks offering them loans based on data available in the network. They learn of new loans at better rates because of the data value that the SAP Ariba Network provides. The notion of an ecosystem is now extending to very interesting places like banking, with financial service providers being part of a business network and ecosystem.

We are going beyond the traditional old applications — what we used to call independent software vendors (ISVs). We’re now bringing in service providers and data services providers. It’s very interesting to see the variety of different business models joining today’s ecosystems.

Gardner: Another catalyst to the power and value of the network and the platform is that many of these third parties are digital organizations. They’re sharing their value and adding value as pure services so that the integration pain points have been slashed. It’s much easier for a collaborative solution to come together.

Can you provide any other examples, Sean, of how third parties enter into a platform-network ecosystem and add value through digital transformation and innovation?

Relationships rule 

Thompson: Yes. When you look back at my career, 25 years ago, I met SAP for the first time when I was with Deloitte. And Deloitte is still a very strong partner of SAP, a very strong player within the technology industry as a systems integrator (SI) and consulting organization.

We have enjoyed relationships with Deloitte, Accenture, IBM, Capgemini, and many other organizations. Today they play a role — as they did in the past — of delivering value to the end customer by providing expertise, human capital, and intellectual property that is inherent in their many methodologies — change management methodologies, business process change methodologies. And there’s still a valuable role for these professional services organizations, consultants, and SIs today.

But their role has evolved, and it’s a fascinating evolution. It’s no longer customizing on-premises software. Back in the day, when I was at Deloitte, we made a lot of money by helping companies adopt an application like an SAP or an Oracle ERP and customizing it. But you ended up customizing for one and building a single-family home, if you will, that was isolated. You ended up forking the code, so you had a very difficult time upgrading; you customized the code so much that you fell behind.

Now, in the cloud, the SI is no longer customizing on-premises software; it’s configuring cloud environments. That configuring of cloud environments not only ensures the customer is never left behind — a wonderful value for the industry in general — but it also allows the SI to play a new role.

That role is now a hybrid of both consulting and of helping companies to understand how to adopt and change their multicloud processes to become more efficient. The SIs are also becoming [cloud service providers] themselves because — what they used to do in customizing on-premises software — they’re now building extensions to clouds and among clouds.

They can create extensions of a solution like SAP Ariba for certain industries, like oil and gas, for example. You will see SAP continue to evolve its relationships with these service providers so that those services companies begin to look more like hybrid business models — where they enjoy some intellectual property and extensions to cloud environments, as well as monetizing their methodologies as they have in the past.

This is a fascinating evolution that’s profitable for those companies because they go from a transactional business model — where they have to sell one client at a time and one implementation at a time — to monetizing based on a subscription model, much like we in the ISV world have done.

There are many other examples of new and interesting ways within the SAP Ariba ecosystem and network of buyers and suppliers where third-party ecosystem participants gather additional data about suppliers — and sometimes about buyers. For example, in helping both suppliers and buyers manage their risk better in terms of financial risk, for supply chain disruption, and if you want to ensure there isn’t slave labor in your supply chain, or if there is sufficient diversity in your supply chain.

The supplier risk category for us is very important. It requires an ecosystem of provider data that enriches the supplier profile. And that can then become an enhancement to the overall value of the business network.

We are now able to reach out and offer ways in which third parties can contribute their intellectual property — be it a methodology, data, analytics, or financial services. And that’s why it’s a really exciting time to be in the environment we are today.

Gardner: This network effect certainly relates to solution sets like financial services and risk management. You mentioned also that it pertains to such vertical industries like oil and gas, pharmaceutical, life sciences, and finance. Does it also extend to geographies and a localization-solution benefit? Does it also pertain to going downstream for small- to medium-sized businesses (SMBs) that might not have been able to afford or accommodate this high-level collaboration?

Reach around the world

Thompson: Absolutely, and it’s a great question. I remember the first wave of ERP and it marked a major consumption of technology to improve business. And that led to a tremendous amount of productivity gains that we’ve enjoyed through the growth of the world economy. Business productivity through technology investment has led to a tremendous amount of growth in the economy.

Now, you ask, “Does this extend?” And that’s what’s so fascinating about cloud and when you combine cloud with the concept of ecosystem — because everybody enjoys a benefit from that.

As an example, you mentioned localization. Within SAP Ariba, we are all about intelligent business commerce, and how can we make business commerce more efficient all around the world. That’s what we are about.

In some countries, business commerce involves the good old-fashioned invoicing, orders, and taxation tasks. At Ariba, we don’t want to solve all of that so-called last mile of the tax data and process needed for invoices in, say, Mexico.

And that’s what’s so fascinating about cloud and when you combine cloud with the concept of ecosystem — because everybody enjoys a benefit.

We want to work with members of the ecosystem that do that. An example is Thomson Reuters, whose business is in part about managing a database of local tax data that is relevant to what’s needed in these different geographies.

Having one relationship with a large provider of that data, and being able to distribute it to the end users — companies in places like Mexico and Korea that need a solution — means they are going to be compliant with the local authorities and regulations thanks to up-to-date tax data.

That’s an example of an extremely efficient way for us to distribute to the globe based on cloud and an ecosystem from within which Thomson Reuters provides that localized and accurate tax data.

Support for all sizes

You also asked about SMBs. Prior to being at SAP Ariba, I was part of an SMB support organization with the portfolio of Business ByDesign and Business One, which are smaller ERP applications designed for SMBs. And one of them, Business ByDesign, is a cloud-based offering.

In the past, the things that large companies were able to do were often too expensive for SMBs. That’s because they required on-premises data centers, with servers, software consultants, and all of the things that large enterprises could afford to drive innovation in the pre-cloud world. This was all just too expensive for SMBs.

Now the distribution model is represented by cloud and the multitenant nature of these solutions that allow for configuration — as opposed to costly and brittle customization. They now have an easy upgrade path and all the wonderful benefits of the cloud model. And when you combine that with a business solutions ecosystem then you can fully support SMBs.

For example, within SAP Ariba, we have an SMB consulting organization focused on helping midsize companies adopt solutions in an agile way, so that it’s not a big bang. It’s not an expensive consulting service, instead it’s prescriptive in terms of how you should begin small and grow in terms of adopting cloud solutions.

Such an SMB mindset has enabled us to take the same SAP Ariba advantage of no code, to just preconfigure it, and start small. As we like to say at SAP Ariba, it’s a T-shirt size implementation: small, medium, and large.

That’s an example of how the SMB business segment really benefits from this era of cloud and ecosystem that drives efficiency for all of us.

Gardner: Given that the value of any business network and ecosystem increases with the number of participants – including buyers, sellers, and third-party service providers — what should they be thinking to get in the best position to take advantage of these new trends, Sean? What should you be thinking in order to begin leveraging and exploiting this overall ecosystem approach and its benefits?

Thompson: I’m about to get on an airplane to go to South Korea. In some of these geographies where we do business, the majority of businesses are SMBs.

And I am still shocked that some of these companies have not prioritized technology adoption. I’m still surprised that there are a lot of industries, and a lot of companies in different segments, that are still very much analog. They are doing business the way they’ve been doing business for many years, and they have been resistant to change because their cottage industry has allowed them to maintain, if you will, Excel spreadsheet approaches to business and process.

I spent a decade of my life at Microsoft, and when we looked at the different ways Excel was used, we were fascinated by the fact that Excel in many ways was used as a business system. Oftentimes, that was very precarious because you can’t manage a business on Excel. But I still see that within companies today.

The number one thing that every business owner needs to understand is that we are in an exponential time of transformation. What was linear in terms of how we expect transformation is now in an exponential phase. Disruption of industries is happening in real time and rapidly. If you’re not prioritizing and investing in technology — and not thinking of your business as a technology business — then you will get left behind.

Never underestimate the impact that technology can have to drive topline growth. But technology also preserves the option value for your company in the future because disruption is happening. It’s exponential and cloud is driving that.

Get professional advice

You also have to appreciate the value of getting good advice. There are good companies that are looking to help. We have many of those within our ecosystem, such as providers of assistance like the large SIs as well as midsize companies focused on helping SMBs.

As I mentioned before, I grew up fly fishing. To anybody who comes to me and says, “Hey, I’d love to learn how to fly fish,” I say, “Start with hiring a professional guide. Spend a day on a river with a professional guide because they will show you how to do things.” I honestly think that same advice applies to learning how to consume cloud software services: find a professional guide who can help you understand how to do it.

And that professional guide fee is not going to be as much as it was in the past. So I would say get professional help to start.

Gardner: I’d like to close out with a look to the future. It seems that for third-party organizations that want to find a home in an ecosystem that there’s never been a better time for them to innovate, and find new business models, new ways of collaborating.

You mentioned risk management and financial improvements and efficiency. What are some of the other areas for new business models within ecosystems? Where are we going to see some new and innovative business models cropping up, especially within the SAP Ariba network ecosystem?

Thompson: You mentioned it earlier in the conversation. The future is about data. The future is about insights that we gather from the data.

We’re still early in a very interesting future. We’re still understanding how to gather insights from data. At SAP Ariba we have a treasure trove of data from $2.1 trillion in commerce among 3.5 million members in the Ariba Network.

I started a company in the natural language processing world. I spent five years of my life understanding how to drive a new type of user experience by using voice. It’s about natural language and understanding how to drive domain-specific knowledge of what people want through a natural user interface.

I’ve played on the edge of where we are in terms of artificial intelligence (AI) within that natural language processing. But we’re still fiddling in many respects. We still fiddle in the business software arena, talking about chatbots, talking about natural user interfaces.

We’re still early in a very interesting future. We’re still very early in understanding how to gather insights from data. At SAP Ariba we have a treasure trove of data from $2.1 trillion in commerce among 3.5 million members in the Ariba Network.

The future is data driven

There are so many data insights available on contracts and supplier profiles alone. So the future is about being able to harvest insights from that data. It’s now very exciting to be able to leverage the right infrastructure like the S/4 HANA data platform.

But we have a lot of work to do still to clean data and ensure the structure, privacy, and security of the data. The future certainly is bright. It will be magical in how we will be able to be proactive in making recommendations based on understanding all the data.

Buyers will be proactively alerted that something is going on in the supply chain. We will be able to predict and be prescriptive in the way the business operates. So it is a fascinating future that we have ahead of us. It’s very exciting to be a part of it.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: SAP Ariba.

You may also be interested in: