Cerner’s lifesaving sepsis control solution shows the potential of bringing more AI-enabled IoT to the healthcare edge

The next BriefingsDirect intelligent edge adoption benefits discussion focuses on how hospitals are gaining proactive alerts on patients at risk of developing serious sepsis infections.

An all-too-common affliction for patients around the world, sepsis can be controlled when confronted early using a combination of edge computing and artificial intelligence (AI). Edge sensors, Wi-Fi data networks, and AI solutions help identify at-risk situations so that caregivers at hospitals are rapidly alerted to susceptible patients, head off sepsis episodes, and reduce serious illness and death.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Stay with us now as we hear about this cutting-edge use case that puts AI to good use by outsmarting a deadly infectious scourge with guests Missy Ostendorf, Global Sales and Business Development Practice Manager at Cerner Corp.; Deirdre Stewart, Senior Director and Nursing Executive at Cerner Europe; and Rich Bird, Worldwide Industry Marketing Manager for Healthcare and Life Sciences at Hewlett Packard Enterprise (HPE). The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Missy, what are the major trends driving the need to leverage more technology and process improvements in healthcare? What's driving the need for better technology now?

Ostendorf: That's an easy question to answer. Across all industries, resources always drive the need for technology to make things more efficient and cost-effective — and healthcare is no different.

If we tend to lead more slowly with technology in healthcare, it's because we don't have mission-critical risk — we have life-critical risk. And the sepsis algorithm is a great example of that. If a patient turns septic, they can die within four hours. So, as you can imagine, that ticking clock is a really big deal in healthcare.

Gardner: And what has changed, Rich, in the nature of the technology that makes it so applicable now to things like this algorithm to intercept sepsis quickly?

Bird: The pace of the change in technology is quite shocking to hospitals. That’s why they can really benefit when two globally recognized organizations such as HPE and Cerner can help them address problems.

When we look at the demand spike across the healthcare system, we see that people are living longer with complex long-term conditions. When they come into a hospital, there are points in time when they need the most help.

What [HPE and Cerner] are doing together is understanding how to use this connected technology at the bedside. We can integrate the Internet of Things (IoT) devices that patients have on them at the bedside, medical devices that traditionally were not connected automatically but only through humans. The caregivers are now able to use the connected technology to take readings from all of the devices and analyze them at the speed of computers.

So we're certainly relying on the professionalism, expertise, and care of the team on the ground, but we're also helping them with this new level of intelligence. It offers them and the patients more confidence that their care is being watched over both by the people on the ground and by the technology that reads all of their vital-sign indicators flowing into the Cerner applications.

Win against sepsis worldwide 

Gardner: Deirdre, what is new and different about the technology and processes that makes it easier to consume intelligence at the healthcare edge? How are nurses and other caregivers reacting to these new opportunities, such as the algorithm for sepsis?

Stewart: I have seen this growing around the world, having spent a number of years in the Middle East and watched the sepsis algorithm gain traction in countries such as Qatar, the UAE, and Saudi Arabia. Now we're seeing it deployed across Europe, in Ireland and the UK.

Nurses and clinicians first have to get over the initial feeling of, “Hang on a second, why is the computer telling me my business? I should know better.” Once they understand how it all happens, they benefit enormously.

But it’s not just the clinicians who benefit, Dana, it’s the patients. We have documented evidence now. We want to stop patients ever getting to the point of having sepsis. This algorithm and other similar algorithms alert the front-line staff earlier, and that allows us to prevent patients developing sepsis in the first place.

Some of the most impressive figures show the reduction in incidents of sepsis and the increase in identification of the early sepsis stages, the systemic inflammatory response stage. When that data is fed back to the doctors and nurses, they understand the importance of such real-time documentation.

I remember in the early days of electronic medical records, nurses might be inclined not to do such real-time documentation. But when they understand how the algorithms work within the system to identify anything that is out of place or out of kilter, it really increases adoption, and definitely their liking of the system and what it can provide.

Gardner: Let's dig into what this system does before we look at some of the implications. Missy, what does Cerner's CareAware platform approach do?

Ostendorf: The St. John Sepsis Surveillance Agent looks for early warning signs so that we can save lives. There are three pieces: monitoring, alerting, and then the prescribed intervention.

It goes to what Deirdre was saying about documentation being done in real time, instead of the previous practice, where a nurse in the intensive care unit (ICU) might have had a piece of paper in her pocket on which she would write down, for instance, the patient's vital signs.

And maybe four hours later she would sit at a computer and put in four hours of vitals from every 15 minutes for that patient. Well, as you can imagine, a lot can happen in four hours in the ICU. By having all of the information flow into the electronic medical record we can now have the sepsis agent algorithm continually monitoring that data.

It surveys the patient's temperature, heart rate, and glucose level — and if those change and fall outside of safe parameters, it automatically sends alerts to the care team so they can take immediate action. And with that immediate action, they can now change how they are treating that patient. They can give them intravenous antibiotics and fluids, and there is an 80 percent to 90 percent improvement in lives saved when you take that early intervention.
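
That threshold-and-alert mechanism can be pictured with a minimal, hypothetical sketch. The vital-sign fields, safe ranges, and alerting function below are assumptions made up purely for illustration; they are not the actual St. John Sepsis Surveillance Agent algorithm or its clinical thresholds.

```python
# Illustrative only: a simplified, hypothetical threshold check inspired by the
# kind of continuous vital-sign surveillance described above. Not clinical guidance.
from dataclasses import dataclass

@dataclass
class Vitals:
    temperature_c: float   # body temperature in degrees Celsius
    heart_rate_bpm: int    # heart rate in beats per minute
    glucose_mg_dl: float   # blood glucose in mg/dL

# Assumed example ranges, for demonstration purposes only.
SAFE_RANGES = {
    "temperature_c": (36.0, 38.3),
    "heart_rate_bpm": (50, 100),
    "glucose_mg_dl": (70, 140),
}

def out_of_range(vitals: Vitals) -> list:
    """Return the names of vitals that fall outside the assumed safe ranges."""
    flagged = []
    for name, (low, high) in SAFE_RANGES.items():
        value = getattr(vitals, name)
        if not low <= value <= high:
            flagged.append(name)
    return flagged

def check_patient(patient_id: str, vitals: Vitals) -> None:
    """Alert the care team if any monitored vital is outside its safe range."""
    flagged = out_of_range(vitals)
    if flagged:
        # A real system would page the care team; here we simply print.
        print(f"ALERT for patient {patient_id}: abnormal {', '.join(flagged)}")

if __name__ == "__main__":
    check_patient("demo-001", Vitals(temperature_c=39.1, heart_rate_bpm=118, glucose_mg_dl=165))
```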

So, we're changing the game by leveraging the data that was already there. We are just taking advantage of it and putting it into the hands of the clinicians so that action can be taken early. That's the most important part: we have been able to make the data actionable.

Gardner: Rich, this sounds straightforward, but there is a lot going on to make this happen, to make the edge, where the patient is, able to capture and deliver data, protect it, and keep it secure and in compliance. What has had to come together in order to support what Missy just described in terms of the Cerner solution?

Healthcare tech progresses to next level 

Bird: Focusing on the outcomes is very important. It delivers confidence to the clinical team, which is always front of mind. But it provides that in a way that is secure, real-time, and available, no matter where the care team is. That's very, very important. And the fact that all of the devices are connected opens great potential opportunities in terms of the next evolution of healthcare technology.

Until now we have been digitizing the workflows that have always existed: taking paper and turning it into digital information. For me, this represents the next evolution of that. How do we get more value from that data? Having Wi-Fi connectivity across the whole of a site is not easy. It's something we pride ourselves on making simple for our clients, but a key thing you mentioned was security around that.

When you have everything speaking to everything else, that also introduces the potential of a bad actor. How do we protect against that? How do we ensure that all of the data is collected, transported, and recorded in a safe way? If a bad actor were to become a part of the external or internal network, how do we identify them and shut them out?

Working together with our partners, that’s something that we take great pride in doing. We spoke about mobility, and outside of healthcare, in other industries, mobility usually means people have wide access to things.

But within hospitals, of course, that mobility is about how clinicians can collect and access the data wherever they are. It's not just one workstation in a corner that the care team uses every now and again. The technology now for the care team gives them the confidence to know the data they are taking action on is collected correctly, protected correctly, and provided to them in a timely manner.

Gardner: Missy, another part of the foundational technology here is that algorithm. How are machine learning (ML) and AI coming to bear? What is it that allowed you to create that algorithm, and why is that a step further than simple reports or alerts?

Ostendorf: This is the most exciting part of what we're doing today at Cerner and in healthcare. While the St. John Sepsis Algorithm is saving lives in a large-scale way — and it's getting most of the attention — there are many things we have been able to do around the world.

Deirdre brought up Ireland, and even way back in 2009 one of our clients there, St. James's Hospital in Dublin, was in the news because they made the decision to take the data and build decision-making questions into the front-end application that the clinicians use to order a CT scan. Compared with ordinary X-rays, CT scans deliver a much higher dose of radiation. So we don't want a patient to go through a CT scan unnecessarily. The more they have, the higher their risk goes.

By implementing three questions, the computer looks at the trends in why clinicians thought they needed the scan, based on previous patients' experiences. Did that CT scan make a difference in how those patients were diagnosed? And now with ML, it can tell the clinician on the front end, "This really isn't necessary for what you are looking to treat in this patient."

Clinicians can always override that; they can always call the X-ray department and say, "Look, here's why I think this one is different." But in Ireland they were able to lower the number of CT scans that they had always automatically ordered. So with ML they are changing behaviors and making their community healthier. That's one example.

Another example of where we are using the data and ML is with the Cerner Opioid Toolkit in the United States (US). We announced that in 2018 to help our healthcare system partners combat the opioid crisis that we’re seeing across America.

Deirdre, you could probably speak to the study as a clinician.

Algorithm-assisted opioid-addiction help

Stewart: Yes, indeed. It's interesting work being done in the US on what they call Opioid-Induced Respiratory Depression (OIRD). It looks like approximately 1 in 200 hospitalized surgical patients can end up with an opioid-induced ventilatory impairment. This results in a large cost to healthcare. In the US alone, it was estimated to cost $2 billion in 2011. And The Joint Commission has made some recommendations on how the assessment of patients should be personalized.

It's not just one single standardized form with a score generated from the questions that are answered. Instead it looks at the patient's age, demographics, previous conditions, and any history of opioid intake in the previous 24 hours. And according to the patient's risk, it then recommends limiting the amount of opioids they are given. They also looked at the patients who ended up in respiratory distress, and they found that a drug agent to reverse that distress was being administered too many times and at too high a cost in relation to patient safety.
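
As a rough illustration of that kind of personalized risk assessment, the sketch below scores a few patient attributes and suggests a dose ceiling. Every factor, weight, and cutoff here is an invented assumption for demonstration; it does not reflect the Cerner Opioid Toolkit, The Joint Commission's recommendations, or any clinical guideline.

```python
# Illustrative only: a hypothetical, simplified patient-specific risk score for
# opioid-induced respiratory depression (OIRD). Not clinical guidance.
from dataclasses import dataclass

@dataclass
class PatientProfile:
    age: int
    has_sleep_apnea: bool          # example prior condition
    has_copd: bool                 # example prior condition
    opioid_mme_last_24h: float     # morphine milligram equivalents in prior 24 hours

def oird_risk_score(p: PatientProfile) -> int:
    """Return a crude, made-up risk score built from a few patient attributes."""
    score = 0
    if p.age >= 65:
        score += 2
    if p.has_sleep_apnea:
        score += 3
    if p.has_copd:
        score += 2
    if p.opioid_mme_last_24h > 50:
        score += 3
    return score

def recommended_daily_limit(p: PatientProfile) -> float:
    """Suggest a hypothetical daily opioid ceiling that shrinks as risk grows."""
    score = oird_risk_score(p)
    if score >= 6:
        return 20.0   # higher risk: tighter ceiling, flag for clinician review
    if score >= 3:
        return 40.0   # moderate risk
    return 60.0       # lower risk

if __name__ == "__main__":
    patient = PatientProfile(age=72, has_sleep_apnea=True, has_copd=False,
                             opioid_mme_last_24h=30)
    print(oird_risk_score(patient), recommended_daily_limit(patient))
```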

Now with the algorithm, they have managed to reduce the number of patients who end up in respiratory distress and to limit the amount of narcotics according to the specific patient. It's no longer a generalized rule. It looks at specific patients, alerts, and intervenes. I like the willingness of our clients worldwide to share this information across the world. I have been on calls recently where they voiced interest in using this in Europe or the Middle East. So it's not just one hospital doing this and improving their outcomes — it's now something that could be looked at and done worldwide. That's the same whenever our clients devise a particular outcome to improve. We have seen many examples of that around the world.

Ostendorf: It's not just collecting data; it's being able to act on the data. We see how that's creating not only great experiences for our partners but also healthier communities.

Gardner: This is a great example of getting the best of what people can do, with their cognitive abilities and capacity to contextualize, and the best of what machines can do, with automation and orchestration of vast data and analytics. Rich, how do you view this balancing act between the best of what people and machines can each do? How do these medical use cases demonstrate that potential?

Machines plus, not instead of, people 

Bird: When I think about AI, I grew up with the science fiction depiction where AI is a threat. If it's not taking your life, it's probably going to take your job.

But we want to be clear. We’re not replacing doctors or care teams with this technology. We’re helping them make more informed and better decisions. As Missy said, they are still in control. We are providing data to them in a way that helps them improve the outcomes for their patients and reduce the cost of the care that they deliver.

It's all about using technology to reduce the time and money that care costs, to improve patient outcomes – and also to enhance the clinicians' professionalism.

Missy also talked about adding a few questions into the workflow. I used to work with a chief technology officer (CTO) of a hospital who often talked about medicine as eminence-based, meaning based on the individuals who deliver it. There are numerous different healthcare systems based on the individuals delivering them. With this digital technology, we can nudge that a little bit. In essence, it says, "Don't just do what you've always done. Let's examine what you have done and see if we can do that a little bit better."

The general topic we’re talking about here is digitization. In this context we’re talking about digitizing the analog human body’s vital signs. Any successful digitization of any industry is driven by the users. So, we see that in the entertainment industry, driven by people choosing Netflix over DVDs from the store, for example.

When we talk about delivering healthcare technology in this context, we know that personal healthcare data cannot be shared. It is the most personal data in the world; we cannot share that. But when we can show the value of data when shared in a safe way — highly regulated but shared in a safe way — the clinical teams can then see the value generated from using the data. It changes the conversation from how much the technology costs to how much we can save by using it.

For me, the really exciting thing about this is technology that helps people provide better care and helps patients be protected while they’re in hospital, and in some cases avoid having to come into the hospital in the first place.

Gardner: Getting back to the sepsis issue as a critical proof-point of life-enhancing and life-saving benefits, Missy, tell us about the scale here. How is this paying huge dividends in terms of saved lives?

Life-saving game changer 

Ostendorf: It really is. The World Health Organization (WHO) statistics from 2018 show that 30 million people worldwide experience a sepsis event each year. By their classification, six million of those events can lead to death. In 2018 in the UK, there were 150,000 annual cases, with 44,000 of those ending in death.

You can see why this sepsis algorithm is a game-changer, not just for a specific client, but for everyone around the world. It gives clinicians the information they need in a timely manner so that they can take immediate action — and they can save lives.

Rich talked about the resources that we save and the cost that's driven out; all those things are extremely important. When you are the patient or the patient's family, that translates into a person who actually gets to go home from the hospital. You can't put a dollar amount or an efficiency on that.

It's truly saving lives, and that's just amazing to think about. We're doing that simply by taking the data that was already being collected, running it through the St. John sepsis algorithm, and alerting the clinicians so that they can take quick action.

Stewart: It was a profound moment for me when Hamad Medical Corp. in Qatar, where I had run the sepsis algorithm across their hospitals for about 11 months, analyzed the data and reckoned that they had potentially saved 64 lives.

And at the time I was reading this, I was standing in a clinic there. I looked out at the clinic, a busy clinic, and I reckoned there were 60 to 70 people sitting there. And it just hit me like a bolt of lightning to think that what the sepsis algorithm had done for them could have meant the equivalent of every single person in that room being saved. Or, on the flip side, we could have lost every single person in that room.

Mothers, fathers, husbands, wives, sons, daughters, brothers, sisters — and it just hit me so forcefully and I thought, “Oh, my gosh, we have to keep doing this.” We have to do more and find out all those different additional areas where we can help to make a difference and save lives.

Gardner: We have such a compelling rationale for employing these technologies and processes and getting people and AI to work together. In setting that precedent, we're also creating the opportunity to gather more data on a historical basis. As we know, the more data, the more opportunity for analysis. The more analysis, the more opportunity for people to use it and leverage it. We get into a virtuous, positive adoption cycle.

Rich, once we’ve established the ability to gather the data, we get a historical base of that data. Where do we go next? What are some of the opportunities to further save lives, improve patient outcomes, enhance patient experience, and reduce costs? What is the potential roadmap for the future?

Personalization improves patient care, policy 

Bird: The exciting thing is, if we can take every piece of medical information about an individual and provide it in a way that the clinical team can see it, from one end of the patient's life right up to the present day, we can provide medicine that's more personalized. That means treating people specifically for the conditions that they have.

Missy was talking about evaluating more precisely whether to send a patient for a certain type of scan. There’s also another side of that. Do we give a patient a certain type of medication?

When we're in a situation where we have the patient's whole data profile in front of us, clinical teams can make better decisions. Are they on a certain medication already? Are they allergic to a medication that you might prescribe to them? What about their DNA, their physiology, and the combination of conditions that they have? Then we start to see that better clinical decisions can be made. We can treat people uniquely for their specific conditions.

At Hewlett Packard Labs, I was recently talking with an individual about how big data will revolutionize healthcare. You have certain types of patients with certain conditions in a cohort, but how can we make better decisions for that cohort of patients with those co-occurring conditions at a specific time in their lives? And then, how do we do that at the level of the individual?

It all sounds very complicated, but my hope is, as we get closer, as the power of computing improves, these insights are going to reveal themselves to the clinical team more so than ever.

There's also the population health side. Rather than just thinking about patients as individuals, or cohorts of patients, how could policymakers and governments around the world make decisions based on the impacts of preventative care, such as incentivizing populations to do more health maintenance? How can we give visibility into that data to make better decisions for populations over the longer term?

We want to bring all of this data together in a safe way that protects the security and the anonymity of the patients. It could give those making clinical decisions about the people in front of them, as well as policymakers looking across the whole population, the means to make more informed decisions. We see massive potential around prevention. It could have an impact on how much healthcare costs before the patient actually needs treatment.

It’s all very exciting. I don’t think it’s too far away. All of these data points we are collecting are in their own silos right now. There is still work to do in terms of interoperability, but soon everybody’s data could interact with everybody else’s data. Cerner, for example, is making some great strides around the population health element.

Gardner: Missy, where do you see accelerating benefits happening when we combine edge computing, healthcare requirements, and AI?

At the leading edge of disease prevention

Ostendorf: I honestly believe there are no limits. As we continue to take in the data in places like northern England, where the healthcare system is on a peninsula, they're treating the entire population.

Rich spoke to population health management. Well, they're now able to look across the data and see how something that affects the population, like diabetes, specifically affects that community. Clinicians can work with their patients and treat them, and then work with the actual communities to reduce the amount of type 2 diabetes. It reduces the cost of healthcare and reduces the morbidity rate.

That’s the next place where AI is going to make a massive impact. It will no longer be just saving a life with the sepsis algorithm running against those patients who are in the hospital. It will change entire communities and how they approach health as a community, as well as how they fund healthcare initiatives. We’ll be able to see more proactive management of health community by community.

Gardner: Deirdre, what advice do you give to other practitioners to get them to understand the potential and what it takes to act on that now? What should people in the front lines of caregiving be thinking about on how to best utilize and exploit what can be done now with edge computing and AI services?

Stewart: Everybody should have the most basic analytical questions in their heads at all times. How can I make what I am doing better? How can I make what I am doing easier? How can I leverage the wealth of information that is available from people who have walked in my shoes and looked after patients in the same way as I’m looking after them, whether that’s in the hospital or at home in the community? How do I access that in an easier fashion, and how do I make sure that I can help to make improvements in it?

Access to information at your fingertips means not having to remember everything. It's having it there, and having suggestions made to me. I'm always going back and reviewing what those results and analytics are to help improve things the next time around.

From bedside to boardroom, everybody should be asking themselves those questions. Have I got access to the information I need? And how can I make things better? What more do I need?

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

Three generations of Citrix CEOs on enabling a better way to work

For the past 30 years, Citrix has made a successful habit of challenging the status quo. That includes:

  • Delivering applications as streaming services to multiple users

  • Making the entire PC desktop into a secure service

  • Enhancing networks that optimize applications delivery

  • Pioneering infrastructure-as-a-service (IaaS), now known as public cloud, and

  • Supplying a way to take enterprise applications and data to the mobile edge.

Now, Citrix is at it again, by creating digital workspaces and redefining the very nature of applications and business intelligence. How has one company been able to not only reinvent itself again and again, but make major and correct bets on the future direction of global information technology?

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To find out, Dana Gardner, Principal Analyst at Interarbor Solutions, recently sat down to simultaneously interview three of Citrix's chief executives from the past 30 years: Roger Roberts, Citrix CEO and Chairman from 1990 to 2002; Mark Templeton, CEO of Citrix from 2001 to 2015; and David Henshall, who became the company's CEO in July of 2017.

Here are some excerpts:

Dana Gardner: So much has changed across the worker productivity environment over the past 30 years. The technology certainly has changed. What hasn’t changed as fast is the human factor, the people.

How do we keep moving the needle forward with technology and also try to attain productivity growth when we have this lump of clay that’s often hard to manage, hard to change?

Mark Templeton: The human factor "lump of clay" is changing as rapidly as technology because of the changing demographics of the workforce. Today's baby boomers are being followed by Gen X, millennials (Gen Y), and then Gen Z, who will be making important decisions 20 years from now.

So the human factor clay is changing rapidly and providing great opportunity for innovation and invention of new technology in the workplace.

Gardner: The trick is to be able to create technology that the human factor will adopt. It's difficult to solve a chicken-and-egg relationship when you don't know which one is going to drive the other.

What about the past 30 years at Citrix gives you an edge in finding the right formula?

David Henshall: Citrix has always had an amazing ability to stay focused on connecting people and information — and doing it in a way that is secure, managed, and available so that we can abstract away a lot of the complexity that's inherent in technology.

Because, at the end of the day, all we are really focused on is driving those outcomes and allowing people to be as productive, successful, and engaged as humanly possible by giving them the tools to — as we frame it up — work in a way that’s most effective for them. That’s really about creating the future of work and allowing people to be unleashed so that they can do their best working.

Gardner: Roger, when you started, so much of the IT world was focused on platforms and applications and how one drives the other. You seem to have elevated yourself above that and focused on services, on delivery of productivity – because, after all, they are supposed to be productivity applications. How were you able to see above and beyond the 1980s platform-application relationship?

Roger Roberts: We grew up when the personal computer (PC) and local area networks (LANs), like Novell NetWare, came on the scene. Everybody wanted to use their own PC, driven primarily by things such as the Lotus applications.

So [applications like] spreadsheets, WordPerfect, and dBase were the tremendous bulk of the market demand at that time. However, with the background that I shared with [Citrix Co-Founder] Ed Iacobucci, we had been in the real world working from mainframes through minicomputers and then to the PCs, and so we knew there were applications out there where the existing model – well, it really sucked.

The trick then was to take advantage of the increasing processing power we knew the PC was going to deliver and put it in a robust environment that would have stability so we could target specific customers with specific applications. Those customers were always intrigued with our story.

Our story was not formed to meet the mass market. Things like running ads or trying to search for leads would have been a waste of time and money. It made no sense in those days because the vast majority of the world had no idea of what we were talking about.

Gardner: What turned out to be the killer application for Citrix’s rise? What were the use cases you knew would pay off even before the PC went mainstream?

The personnel touch 

Roberts: The easiest one to relate to is personnel systems. Brown and Root Construction out of Houston, Texas was a worldwide operation. Most of their offices were on construction sites and in temporary buildings. They had a great deal of difficulty managing their personnel files, including salaries, when someone was promoted, reviewed, or there was a new hire.

The only way you could do it in the client-server LAN world was to replicate the database. And let me tell you, nobody wants to replicate their human resources (HR) database across 9,000 or 10,000 sites.

So we came in and said, “We can solve that problem for you, and you can keep all of your data secure at your corporate headquarters. It will always be synchronized because there is only one copy. And we can give you the same capabilities that the LAN-based PC user experiences even over fairly slow telecommunication circuits.”

That really resonated with the people who had those HR problems. I won't say it was an easy sell. When you are a small company, you are vulnerable. They asked, "How can we trust you to put in a major application using your technology when you don't have a lot of business?" It was never the technology or the ability to get the job done that they questioned. It was more about whether we had the staying power. That turned out to be the biggest obstacle.

Gardner: David, does it sound a little bit familiar? Today, 30 years later, we’re still dealing with distance, the capability of the network, deciding where the data should reside, how to manage privacy, and secure regulatory compliance. When you listen to Citrix’s use cases and requirements from 30 years ago, does it ring a bell?

Organize, guide, and predict work 

Henshall: It absolutely resonates because a lot of what we’re doing is still dealing with the inherent complexity of enterprise IT. Some of our largest customers today are dealing with thousands and thousands of underlying applications. Those can be everything from mainframe applications that Roger talked about through the different eras of client-server — the PC, web, and mobile. A lot of those applications are still in use today because they are adding value to the business, and they are really hard to pull out of the infrastructure.

We can now help them abstract away a lot of that complexity put in over the last 30 years. We start by helping organize IT, allowing them to manage all that complexity of the application tier, and present that out in a way that is easier to consume, easier to manage, and easier to secure.

Several years ago, we began bringing together all of these application types in a way that I would describe as helping to move from organizing IT to organizing work. That means bringing not only the apps but access to all the content and files — whether those reside in on-premises data repositories or in any cloud — including Citrix Cloud. We make that all accessible across universal endpoint management. Then you layer underneath that all kinds of platform capabilities such as security, access control, management, and analytics.

Where we're taking the company in the future is one step beyond organizing work, to helping to guide and predict work. That will drive more engagement and productivity by leveraging machine learning (ML), artificial intelligence (AI), and a lot of other capabilities to present work to people in real time and to suggest and advise on what they need to be most productive.

That’s all just a natural evolution from back when the same fundamental concept was to connect people with the information they need to be productive in real time.

Gardner: One of the ways to improve on these tough problems, Mark, is being in the right place in an ecosystem. Citrix has continuously positioned itself between the data, the systems of record, and the end-user devices. You made a big bet on virtualization as a means to do that.

How do we understand the relationship between the technology and productivity? Is being in the right place and at the right time the secret sauce?

Customers first, innovation always

Templeton: Generically, right place and time is key in just about every aspect of life, but especially the timing of invention and innovation, how it’s introduced, and how to get it adopted.

Citrix adopted a philosophy from an ecosystem perspective pretty early on. We thought of it as a Switzerland-type of mindset, where we're willing to work with everyone in the ecosystem — devices, networks, applications, etc. – to interoperate, even as they evolved. So we were basically device-, network-, and application-independent around the kind of value proposition that David and Roger talked about.

That type of a cooperative mindset is always in style because it is customer-centered. It's based upon value-drivers for customers, and my experience is that when there are religious wars in the industry, the biggest losers are customers. They pay for the fight, the incompatibilities, and obsolescence.

We made a great reputation for ourselves then by being able to provide a demilitarized zone (DMZ), or platform for détente, so that customers could manage and control their own destiny. The company has that culture and mindset and it’s always been that way. When a customer is better off, we are better off. But it starts with making the customer better off.

Gardner: Roger, we have often seen companies that had a great leap in innovation but then plateaued and got stuck in the innovator’s dilemma, as it’s been called. That hasn’t been the case with Citrix. You have been able to reinvent yourselves pretty frequently. How do you do that as a culture? How do you get people to stay innovative even when you have a very successful set of products? How do you not rest on your laurels?

Templeton: I think for the most part, people don’t change until they have to, and to actively disrupt yourself is a very unnatural act. Being aware of an innovator’s dilemma is the first step in being able to act on it. And we did have an innovator’s dilemma here on multiple occasions.

Because we saw the cliff, we were able to make a turn – mostly ahead of necessity. We made a decision, we made a bet, and we made the innovator's dilemma actually work for us. We used it as a catalyst for driving change. When you have a lot of smart engineers, if you help them see that innovator's dilemma, they will fix it, they will innovate.

Gardner: The pace of business sure has ramped up in the last 30 years. You can go through that cycle in 9 or 10 months, never mind 9 or 10 years. David, is that something that keeps you up at night? How do you continue to be one step ahead of where the industry is going?

Embrace, empower change 

Henshall: The sine waves of business cycles are getting much more compressed and with much higher volatility. Today we simply have devices that are absolutely transient. The ways to consume technology and information are coming and going at a pace that is extraordinary. The same thing is true for applications and infrastructure, which not that many years ago involved a major project to install and manage.

Today, it’s a collection of mesh services in so many different areas. By their very nature they become transient. Instead of trying to fight these forces, we look for ways to embrace them and make them part of what we do.

When we talk about the Citrix Workspace platform, it is absolutely device- and infrastructure-independent because we recognize our customers have different choices. It's very much like the Switzerland approach that Mark talked about. The fact that those choices change over time — and being able to support that change — is critical for our own staying power and stickiness. It also gives customers the level of comfort that we are going to be with them wherever they are in their journey.

But the sheer laws of physics used to constrain these disruptions: not that many years ago, it was about how fast physical goods could move across state or national boundaries. Today's market moves on a tweet, a notification, or a new service — something that was just not possible a few years ago.

Roberts: At the time I retired from Citrix, we were at roughly $500 million [in annual revenue] and growing tremendously. I mean, we grew by a factor of 10 in four years, and that still amazes me.

Our piece of the market at that time was 100 percent Microsoft Windows-centric. At the same time, you could look at that and tell we could be a multibillion-dollar company just in that space. But then came the Internet, with web apps, web app servers, new technology, HTML, and Java. You knew the world we were in had a very lucrative and long run ahead of it, but if we didn't do something, inevitably it was going to die. I think it would have been a slow death, but it would have been death.

Gardner: Let's come back to the relationship with Microsoft that you brought up. It's not out of the question to say that you were helping them avoid the innovator's dilemma. In instances that I can recall, you were able to push Microsoft off of its safety block. You were an accelerant to Microsoft's next future. Is that fair, Mark?

Templeton: Well, I don’t think we were an accelerant to Microsoft per se. We were helping Microsoft extend the reach of Windows into places and use cases that they weren’t providing a solution for. But the Windows brand has always been powerful, and it helped us certainly with our [1995] initial public offering (IPO). In fact, the tagline on our IPO was that “Citrix extends the reach of Microsoft Windows,” in many ways, in terms of devices, different types of connectivity, over the Internet, over dial-up and on wireless networks.

Our value to Microsoft was always around being a value-added kind of partner, even though we had a little bit of a rough spot with them. I think most people didn’t really understand it, but I think Microsoft did, and we worked out a great deal that’s been fantastic for both companies for many, many years.

Gardner: David, as we look to the future and think about the role of AI and ML, having the right data is such an important part of that. How has Citrix found itself in the catbird seat when it comes to having access to broad data? How did your predecessors help out with that?

Data drives, digests, and directs the future 

Henshall: Well, if I think about data and analytics right now, over the last couple of years we’ve spent an extraordinary amount of time building out what I call an analytics platform that sits underneath the Citrix Workspace platform.

We have enough places that we can instrument to capture information from everything, from looking backward across the network, into the application, the user, the location, the files, and all of those various attributes. We collect a rich dataset of many, many different things.

Taking it to a platform approach allows us to step back and begin introducing modules, if you will, that use this information not just in a reporting way, but in a way to actually drive enforcement across the platform. Those great data collection points are also places where we can enforce a policy.

Gardner: The metadata has become more important in many cases than the foundational database data. The metadata about what’s going on with the network, the relationship between the user and their devices, what’s going on between all the systems, and how the IT infrastructure beneath them is performing.

Did you have a clue, Mark, that the metadata about what’s going on across an IT environment would become so powerful one day?

Templeton: Absolutely. While I was at Citrix, we didn't have the technical platform yet to handle big data the way you can handle it now. I am really thrilled to hear that under David's leadership the company is diving into that, because it's behavioral data around how people are consuming systems — which systems, how they're working, how available they are, and whether they're performing. And there are many things that data can express around security, which is a great opportunity for Citrix.

Back in my time, in one of the imagination presentations, we would show IT customers how they eventually would have the equivalent of quarterly brokerage reports. You could see all the classes of investments — how much is invested in this type of app, that type of app, the data, where it's residing, its performance and availability over time. Then you could make important decisions – even simple ones like when to turn this application off. At that time, there was very little data to help IT make such hard decisions.

So that idea was always there, but I'm really thrilled to see the company doing it now.

Gardner: So David, now that you have all of that metadata, and the big data systems to analyze it in real-time, what does that get for you?

Serving what you need, before you need it 

Henshall: The applications are pretty broad, actually. If you think about our data platform right now, we’re able to do lots of closed-loop analytics across security, operations, and performance — and really drive all three of those different factors to improve overall performance. You can customize that in an infinite number of ways so customers can manage it in the way that’s right for their business.

But what’s really interesting now is, as you start developing a pattern of behaviors in the way people are going about work, we can predict and guide work in ways that were unavailable not that long ago. We can serve up the information before you need it based on the graph of other things that you’re doing at work.

A great example is mobile applications for airlines today. The smart ones are tied into the other things that you are doing. So an hour before your flight, it already gives you a notification that it's time to leave for the airport. When you get to your car, you have your map of the fastest route to the airport already plotted out. As you check in, using biometrics or some other form of authentication, it simplifies these workflows in a very intuitive way.

We have amazing amounts of information that will take that example and allow us to drive it throughout a business context.

Gardner: Roger, in 30 years, we have gone from delivering a handful of applications to people in a way that’s acceptable — given the constraints of the environment and the infrastructure — to a point where the infrastructure data doesn’t have any constraints. We are able to refine work and tell people how they should be more productive.

Is that something you could have imagined back then?

Roberts: Quite frankly, as good as I am, no. It’s beyond my comprehension.

I have an example. I was recently in Texas, and we had an airplane that broke down. We had to get back, and using only my smartphone, I was able to charter a flight, sign a contract with DocuSign, pay for it with an automated clearing house (ACH) transfer, and track that flight on FlightAware to the nearest 15 seconds. I could determine how much time it would take us to get home, and then arrange an Uber ride. Imagine that? It still amazes me; it truly amazes me.

Gardner: You guys would know this better than I do, but it seems that you can run a multinational corporation on a device that fits in your palm. Is that an exaggeration?

Device in hand still needs hands-on help 

Templeton: In many ways, it still is an exaggeration. You can accomplish a lot with the smart device in your hand, and to the degree that leadership is largely around communications and the device in your hand gives you information and the ability to communicate, you can do a lot. But it’s not a substitute entirely for other types of tasks and work that it takes to run a big business, including the human relationships.

Gardner: David, maybe when the Citrix vision for 2030 comes out, you will be able to — through cloud, AI, and that device — do almost anything?

Henshall: It will be more about having the right information on demand when you need it, and that’s a trend that we’ve seen for quite some time.

If you look at the broader trends in technology, I mean, we are entering the yottabyte era now, which is a one with 24 zeros after it. The amount of information is absolutely staggering. But turning that into something that is actually useful is nearly impossible.

That’s where AI and ML — and a lot of these other advancements — will allow you to parse through that all and give people the freedom of information that probably just never existed before. So the days of proprietary knowledge, of proprietary data, are quickly coming to an end. The businesses that are going to be successful are those that can put the right information at people’s fingertips at the right time to interact with different business opportunities.

That’s what the technology allows you to do. Advancements in network and compute are making that a very near-term reality. I think we are just on that continuum.

Goodbye digital, hello contextual era 

Templeton: You don’t realize an era is over until you’re in a new one. For example, I think the digital era is now done. It ended when people woke up every day and started to recognize that they have too many devices, too many apps that do similar things, too many social things to manage, and blah, blah, blah. How do you keep track of all that stuff in a way where you know what to look at and when?

The technologies underlying AI and ML are defining a new era that I call the "contextual era." A contextual era works exactly how David just described it. It senses and predicts. It makes the right information available in true context. Just like Roger was saying, it brings all the things he needs together for him, situationally. And, obviously, it could even be easier than the experience that he described.

We are in the contextual era now because the amount of data, the number of apps, and the plethora of devices that we all have access to is beyond human comprehension.

Gardner: David, how do you characterize this next era? Imagine us having a conversation in 30 years with Citrix, talking about how it was able to keep up with the times.

Henshall: Mark put it absolutely the way I would, in terms of being able to be contextual in such a way that it brings purpose through the chaos, or the volume of data, or the information that exists out there. What we are really trying to do in many dimensions is think about our technology platform as a way that creates space. Space for people to be successful, space for them to really do their best work. And you do that by removing a lot of the clutter.

You remove a lot of the extraneous things that bog people down. When we talk about it with our customers, the statistics behind the scenes are amazing. We are interrupted every two minutes in this world right now: a tweet, a text, an email, a notification. And science shows that humans are not very good at multitasking. Our brains just haven't evolved that way.

Gardner: It goes back to that lump of clay we talked about at the beginning. Some things don’t change.

Henshall: When you are interrupted, it takes you 20 minutes on average to get back to the task at hand. That’s one of the fundamental reasons why the statistics around engagement around the world are horrible.

For the average company, 85 percent of their employee base is disengaged — 85 percent! Gallup even put a number on that — they say it’s a $7 trillion annual problem. It’s enormous. We believe that part of that is a technology problem. We have created technologies that are no longer enhancing people’s ability to be productive and to be engaged.

If we can simplify those interactions, allow workers to engage in a way that’s more intuitive, more focused on the task at hand versus the possibility of interruption, it just helps the entire ecosystem move forward. That’s the way I think about it.

CEO staying-power strategies 

Gardner: On the subject of keeping time on your side, it’s not very often I get together with 30 years’ worth of CEOs to talk about things. For those in our audience who are leaders of companies, small or large, what advice can you give them on how to keep their companies thriving for 30 years?

Roberts: Whenever you are running a company — you are running the company. It puts a lot of pressure on you to think about the future, when technology is going to change, and how you get ahead of the power curve before it’s too late.

There is a hell of an operational component. How do you keep the wheel turning and the current moving? How do you keep it functioning, how do you grow staff, and how do you put in systems and infrastructure?

The challenge of managing as the company grows is enormously more complicated. There is the complexity of the technology, the people, the market, and what’s going on in the ecosystem. But never lose sight of the execution component, because it can kill you quicker than losing sight of the strategy.

One thing I tried to do was instill a process in the company where seemingly hard questions were easy, because it was part of the fabric of how people measured and kept up with their jobs, what they were doing, and what they were forecasting. Things as simple as, "Jennifer, how many support calls are we going to get in the second quarter next year, or the fourth quarter of the following year?" It's about how you think about what you need to be able to answer questions like those.

“How much are we going to sell?” Remember, we were selling packaged product, through a two-step distribution channel. There was no backlog. Backlog was a foreign concept, so every 30 days we had to get up and do it all over again.

It takes a lot of thought, depending on how big you want to be. If you are a CEO, the most important thing to figure out is how big you want to be. If you want to be a small, lifestyle company, then hats off; I admire you. There is nothing wrong with that.

If you want to be a big company, you need to be putting in process, systems, infrastructure, strategy, and marketing now — even though you might not think you need it. The other side of that is, if you go overboard in that direction, process will kill you. When everybody is so ingrained in the process that nobody is questioning, nobody is thinking, and everyone is just going through the motions, that is as deadly as not having one.

So process is necessary, but process is not sufficient. Process will help you, and it will also kill you.

Gardner: Mark, same question, advice to keep a company 30 years’ young?

Templeton: Going after Roger is the toughest thing in the world. I’ll share where I focused at Citrix. Number one is making sure you have an opinion about the future, that you believe strongly enough to bet your career and business on it. And number two, to make sure that you are doing the things that make your business model, your products, and your services more relevant over time. That allows you to execute some of the great advice that Roger just gave, so the wind’s at your back, so you are using the normal forces of change and evolution in the world to work for you, because it’s already too hard and you need all the help you can get.

A simple example is the whole idea of consumerization of IT. Pretty early on, we had an opinion about that, so, at Citrix, we created a bring-your-own-device (BYOD) policy and an experimental program. I think we were among the first and we certainly evangelized it. We developed a lot of technology to help support it, to make it work and make it better. That BYOD idea became more and more relevant over time as the workforce got younger and younger and began bringing their own devices to the office, and Citrix had a solution.

So that’s an example. We had that opinion and we made a bet on it. And it put some wind at our back.

Gardner: David, you are going to be able to get tools that these guys couldn’t get. You are going to have AI and ML on your side. You are going to be able to get rid of some of those distractions. You are going to take advantage of the intelligence embedded in the network — but you are still going to also have to get the best of what the human form factor, that lump of clay, that wetware, can do.

So what’s the CEO of the future going to do in terms of getting the right balance between what companies like Citrix are providing them as tools — but not losing track of what’s the best thing that a human brain can do?

IT’s not to do and die, but to reason why 

Henshall: It's an interesting question. In a lot of ways, technology and the pace of evolution right now are breaking down the historical hierarchy that has existed in a lot of organizations. It has created the concept of a liquid enterprise, similar to what we've talked about, where those within it can respond and react in different ways.

But what that doesn’t ever replace is what Roger and Mark were talking about — the need to have a future-back methodology, one that I subscribe to a lot, where we help people understand where we’re going, but more importantly, why.

And then you operationalize that in a way that gives people context, so everybody has clarity in terms of roles and responsibilities, operational outcomes, milestones, metrics, and how we are going to measure them along the way. Then that becomes a continuous process.

There is no such thing as, “Set it and forget it.” Without a perspective and a point of view, everything else doesn’t have enough purpose. And so you have to marry those going forward. Make sure you’re empowering your teams with culture and clarity — and then turn them loose and let them go.

Gardner: Productivity in itself isn’t necessarily a high enough motivator.

Henshall: No, productivity by itself is just a metric, and it’s going to be measured in 100 different ways. Productivity should be based on understanding clarity in terms of what the outcomes need to be and empowering that, so people can do their best work in a very individual and unique way.

The days of measuring tasks are mostly in the past. Measuring outcomes, which can be somewhat loosely defined, is really where we are going. And so, how do we enable that? That’s how I think about it.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Citrix.


How the ArchiMate modeling standard helps Enterprise Architects deliver successful digital transformation

The next BriefingsDirect business trends discussion explores how the latest update to the ArchiMate® standard helps Enterprise Architects (EAs) make complex organizations more agile and productive.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Joining us is Marc Lankhorst, Managing Consultant and Chief Technology Evangelist at BiZZdesign in The Netherlands. He also leads the development team within the ArchiMate Forum at The Open Group. The interview is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: There are many big changes happening within IT, business, and the confluence of both. We are talking about Agile processes, lean development, DevOps, and the ways that organizations are addressing rapidly changing business environments and requirements.

Companies today want to transform digitally to improve their business outcomes. How does Enterprise Architecture (EA) as a practice and specifically the ArchiMate standard support being more agile and lean?

Lankhorst: The key role of enterprise architecture in that context is to control and reduce complexity, because complexity is the enemy of change. If everything is connected to everything else, it’s too difficult to make any changes, because of all of the moving parts.

And one of the key tools is to have models of your architecture to create insights into how things are connected so you know what happens if you change something. You can design where you want to go by making something that is easier to change from your current state.

It’s a misunderstanding that if you have Agile development processes like Scrum or SAFe then eventually your company will also become an agile organization. It’s not enough. It’s important, but if you have an agile process and you are still pouring concrete, the end result will still be inflexibility.

Stay flexible, move with the times

So the key role of architecture is to ensure that you have flexibility in the short-term and in the long-term. Models are a great help in that. And that’s of course where the ArchiMate standard comes in. It lets you create models in standardized ways, where everybody understands them in the same way. It lets you analyze your architecture across many aspects, including identifying complexity bottlenecks, cost issues, and risks from outdated technology — or any other kind of analysis you want to make.
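As a rough illustration of that kind of analysis, the sketch below treats an architecture model as a simple graph and flags the most heavily connected elements as likely complexity bottlenecks. The elements and relationships are hypothetical, and a real ArchiMate model would carry typed elements and relationships rather than bare name pairs.

```python
# Hypothetical sketch: find complexity hotspots by counting how many
# relationships touch each element in a (made-up) architecture model.
from collections import Counter

relationships = [
    ("Web portal", "CRM application"),
    ("Web portal", "Billing service"),
    ("CRM application", "Customer database"),
    ("Billing service", "Customer database"),
    ("Reporting tool", "Customer database"),
]

degree = Counter()
for source, target in relationships:
    degree[source] += 1
    degree[target] += 1

# The most connected elements are the riskiest to change.
for element, links in degree.most_common(3):
    print(f"{element}: {links} relationships")
```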

Enterprise architecture is the key discipline in this new world of digital transformation and business agility. Although the discipline has to change to move with the times, it’s still very important to make sure that your organization is adaptive, can change with the times, and doesn’t get stuck in an overly complex, legacy world.

Find Out More About The Open Group ArchiMate Forum

Gardner: Of course, Enterprise Architecture is always learning and improving, and so the ArchiMate standard is advancing, too. So please summarize for me the improvements in the new release of ArchiMate, version 3.1.

Lankhorst: The most obvious new addition to the standard is the concept of a value stream. That’s inspired by business architecture, and those of you who follow things like TOGAF®, a standard of The Open Group, or the BIZBOK will know that value streams are a key concept there, next to things like capabilities. ArchiMate didn’t yet have a value stream concept. Now it does, and it plays the same role that the value stream does in the TOGAF framework.

It lets you express how a company produces its value and what the stages in the value production are. So that helps describe how an organization realizes its business outcomes. That’s the most visible addition.

Next to that, there are some other, more minor changes, such as the ability to use a directed association relationship instead of only an undirected one. That can come in very handy in all kinds of modeling situations. And there are some technical improvements: various definitions have been clarified, and the specification of the metamodel has been improved.

One technical improvement specifically of interest to ArchiMate specialists is the way in which we deal with so-called derived relationships. A derived relationship is basically the conclusion you can draw from a whole chain of things connected together. You might want to see what the end-to-end connection between things on that chain actually is, and there are rules for that. We have changed, improved, and formalized these rules. That allows, at a technical level, some extra capabilities in the language.

And that’s really for the specialists. I would say the first two things, the value stream concept and this directed association — those are the most visible for most end users.
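The specification itself defines the precise derivation rules, so the following Python sketch is only meant to convey the general intuition that a relationship derived across a chain can be no stronger than its weakest link. The relationship names and the strength ordering here are illustrative placeholders, not quotations from the standard.

```python
# Illustrative only: derive an end-to-end relationship from a chain by
# taking the weakest relationship type along it. The ordering is made up.

STRENGTH = {"association": 1, "serving": 2, "realization": 3, "assignment": 4}

def derive(chain):
    """chain is a list of (source, relationship, target) triples."""
    weakest = min((rel for _, rel, _ in chain), key=STRENGTH.get)
    return chain[0][0], weakest, chain[-1][2]

chain = [("Order process", "realization", "Order service"),
         ("Order service", "serving", "Customer portal")]
print(derive(chain))   # ('Order process', 'serving', 'Customer portal')
```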

Overall value of the value stream 

Gardner: It’s important to understand how value streams now are being applied holistically. We have seen them, of course, in the frameworks — and now with ArchiMate. Value streams provide a common denominator for organizations to interpret and then act. That often cuts across different business units. Help us understand why value streams as a common denominator are so powerful.

Lankhorst: Value stream helps express the value that an organization produces for its stakeholders, the outcomes it produces, and the different stages needed to produce that value. It provides a concept that’s less detailed than looking at your individual business processes.

If you look at the process level, you might be standing too closely in front of the picture. You don’t see the overall perspective of how a company creates value for its customers. You only see the individual tasks that you perform, but how that actually adds value for your stakeholders — that’s really the key.

The capability concept, and the mapping between capabilities and value streams, is also very important. That allows you to see what capabilities are needed for the stages in the value production. In that way, you have a great starting point for the rest of the development of your architecture. It tells you what you need to be able to do in order to add value in these different stages.

You can use that at a relatively high level, from an economic perspective, where you look at classical value chains from, say, a supplier via internal production to marketing and sales and on to the consumer. You can also use it at a fine-grained level. But the focus is always on the value you create — rather than the tasks you perform.
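To make the stage-to-capability mapping concrete, here is a minimal sketch using plain Python data structures. The value stream, its stages, and the capabilities are invented for illustration; in practice this mapping would live in the ArchiMate model itself rather than in code.

```python
# Hypothetical value stream with each stage mapped to the capabilities it needs.
value_stream = "Acquire retail product"

stage_capabilities = {
    "Select supplier":   ["Supplier management", "Contract negotiation"],
    "Order product":     ["Procurement", "Inventory planning"],
    "Receive and stock": ["Warehouse operations", "Quality inspection"],
    "Sell to customer":  ["Sales", "Customer relationship management"],
}

for stage, capabilities in stage_capabilities.items():
    print(f"{value_stream} / {stage}: needs {', '.join(capabilities)}")
```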

Gardner: For those who might not be familiar with ArchiMate, can you provide us with a brief history? It was first used in The Netherlands in 2004 and it’s been part of The Open Group since 2008. How far back is your connection with ArchiMate?

Lankhorst: Yes, it started as a research and development project in The Netherlands. At that time, I worked at an applied research institute in IT. We did joint collaborative projects with industry and academia. In the case of ArchiMate, there was a project in which we had, for example, a large bank and a pension fund and the Dutch tax administration. A number of these large organizations needed a common way of describing architectures.

Download the New ArchiMate Specification 3.1

That began in 2002. I was the project manager of that project until 2004. Already during the projects the participating companies said, “We need this. We need a description technique for architecture. We also want you to make this a standard.” And we promised to make it into a standard. We needed a separate organization for that.

So we were in touch with The Open Group in 2004 to 2005. It took a while, but eventually The Open Group adopted the standard, and the official version under the aegis of The Open Group came out in 2008, version 1. We had a number of iterations: in 2012, version 2.0, and in 2016, version 3.0. Now, we are at version 3.1.

Gardner: The vision for ArchiMate is to be a de facto modeling notation standard for Enterprise Architecture that helps improve communication between different stakeholders across an organization, a company, or even a country or a public agency. How do the new ArchiMate improvements help advance this vision, in your opinion?

Lankhorst: The value streams concept gives a broader perspective of how value is produced — even across an ecosystem of organizations. That’s broader than just a single company or a single government agency. This broad perspective is important. Of course it works internally for organizations, as it always has, but increasingly we see this broader perspective.

Just to name two examples of that: The North Atlantic Treaty Organization (NATO), in its most recent NATO Architecture Framework, version 4, which came out early last year, now specifies ArchiMate as one of the two allowed metamodels for modeling architecture for NATO.

For these different countries and how they work together, this is one of the allowed standards. The British Ministry of Defence, for example, wants to use ArchiMate models and the ArchiMate Exchange format to communicate with industry. When they issue a request for proposal (RFP), they use ArchiMate models to describe its context, and then require industry to provide ArchiMate models to describe their solution.

Another example is in the European System of Central Banks. They have joint systems for doing transactions between central banks. They have completely modeled those out in ArchiMate. So, all of these different central banks have the same understanding of the architecture, across, between, and within organizations. Even within organizations you can have the same problems of understanding what’s actually happening, how the bits fit together, and make sure everybody is on the same page.

A manifesto to control complexity 

Gardner: It’s very impressive, the extent to which ArchiMate is now being used and applied. One of the things that’s also been impressive is that the goal of ArchiMate to corral complexity hasn’t fallen into the trap of becoming too complex itself. One of its goals was to remain as small as possible, not to cover every single scenario.

How do you manage not to become too complex? How has that worked for ArchiMate?

Lankhorst: One of the key principles behind the language is that we want to keep it as small and simple as possible. We drew up our own ArchiMate manifesto (some might know the Agile manifesto), and the ArchiMate manifesto is somewhat similar in spirit.

One of the key principles is that we want to cover 80 percent of cases for 80 percent of the common users, rather than try to cover 100 percent of cases for 100 percent of the users. That would mean supporting exotic use cases that require very specific features in the language that hardly anybody uses. It would clutter the picture for all the other users and make the language much more complicated.

Find Out More About The Open Group ArchiMate Forum

So, we have been vigilant to avoid that feature-creep, where we keep adding and adding all sorts of things to the language. We want to keep it as simple as possible. Of course, if you are in a complex world, you can’t always keep it completely straightforward. You have to be able to address that complexity. But keeping the language as easy to use and as easy to understand as possible has and will remain the goal.

Gardner: The Open Group has been adamant about having executable standards as a key principle, not too abstract but highly applicable. How is the ArchiMate standard supporting this principle of being executable and applicable?

Lankhorst: In two major ways. First, because it is implemented by most major architecture tools in the market. If you look at the Gartner Magic Quadrant and the EA tools in there, pretty much all of them have an implementation of the ArchiMate language. It is just the standard for EA.

In that sense, it has become the one standard that rules them all in the architecture field. At a more detailed level, on the executable side, the ArchiMate Exchange format has played an important role. It makes it possible to exchange models between different tools and for different applications. I mentioned the example of the UK Ministry of Defence, which wants to exchange models with industry, specify its requirements, and get back specifications and solutions as ArchiMate models. It’s really important to make these kinds of models and this kind of information available in ways that different tools can use, manipulate, and analyze.
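For readers who want a feel for what tool-to-tool exchange looks like, here is a minimal sketch that reads elements out of an exchange-style XML document with Python’s standard library. The XML snippet is hand-written to approximate the exchange format, so treat the tag and attribute names as assumptions to be checked against the published schema rather than as a faithful sample file.

```python
# Sketch: pull element identifiers, types, and names out of an
# ArchiMate-Exchange-style XML document. The sample XML is approximate.
import xml.etree.ElementTree as ET

SAMPLE = """
<model xmlns="http://www.opengroup.org/xsd/archimate/3.0/"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <elements>
    <element identifier="id-1" xsi:type="Capability">
      <name xml:lang="en">Procurement</name>
    </element>
    <element identifier="id-2" xsi:type="ValueStream">
      <name xml:lang="en">Acquire retail product</name>
    </element>
  </elements>
</model>
"""

XSI_TYPE = "{http://www.w3.org/2001/XMLSchema-instance}type"

root = ET.fromstring(SAMPLE)
for node in root.iter():
    if node.tag.endswith("}element"):                      # ignore namespaces
        name = next(c.text for c in node if c.tag.endswith("}name"))
        print(node.get("identifier"), node.get(XSI_TYPE), name)
```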

Gardner: That’s ArchiMate 3.1. When did that become available?

Lankhorst: The first week of November 2019.

Gardner: What are the next steps? What does the future hold? Where do you take ArchiMate next?

Lankhorst: We haven’t made any concrete plans yet for possible improvements. But some things you can think about are simplifying the language further so that it is even easier to use, perhaps having a simplified notation for certain use cases so you don’t need the precision of the current notation, or maybe having an alternative notation that is easier on the eye.

There are some other things that we might want to look at. For example, ArchiMate currently assumes that you already have a fair idea about what kind of solution you are developing. Maybe the language could move further upstream, to the brainstorming phase of architecture, supporting the initial stages of design. That might be something we want to look into.

There are various potential directions, but our aim is to keep things simple and help architects express what they want to do — not make the language overly complicated and more difficult to learn.

Download the New ArchiMate Specification 3.1

So simplicity, communication, and maybe expanding a bit toward early-stage design. Those are the ideas that I currently have. Of course, there is a community, the ArchiMate Forum within The Open Group. All of the members have a say. There are other outside influences as well, with various ideas of where we could take this.

Gardner: It’s also important to note that the certification program around ArchiMate is very active. How can people learn more about certification in ArchiMate?

Certification basics 

Lankhorst: You can find more details on The Open Group website; it’s all laid out there. Basically, there are two levels of certification, and you can take the exams for those. You can take courses with various course providers, BiZZdesign being one of them, and then prepare for the exam.

Increasingly, I see in practice that certification is a requirement when architects are hired, so that the company that hires, say, consultants knows that they at least know the basics. So, I would certainly recommend taking an exam if you are into Enterprise Architecture.

Gardner: And of course there are also the events around the world. These topics come up and are dealt with extensively at The Open Group events, so people should look for those on the website as well.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: The Open Group.


How smart IT infrastructure has evolved into the era of data centers-as-a-service

There has never been a better time to build efficient, protected, powerful, and modular data centers — yet many enterprises and public sector agencies cling to aging, vulnerable, and chaotic legacy IT infrastructure.

The next BriefingsDirect interview examines how automation, self-healing, data-driven insights, and increasingly intelligent data center components are delivering what amounts to data centers-as-a-service.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to explain how a modern data center strategy includes connected components and data analysis that extends from the data center to the computing edge is Steve Lalla, Executive Vice President of Global Services at Vertiv. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Steve, when we look at the evolution of data center infrastructure, monitoring, and management software and services, they have come a long way. What’s driving the need for change now? What’s making new technology more pressing and needed than ever?

Lalla: There are a number of trends taking place. The first is the products we are building and the capabilities of those products. They are getting smarter. They are getting more enabled. Moore’s Law continues. What we are able to do with our individual products is improving as we progress as an industry.

The other piece that’s very interesting is it’s not only how the individual products are improving, but how we connect those products together. The connective tissue of the ecosystem and how those products increasingly operate as a subsystem is helping us deliver differentiated capabilities and differentiated performance.

So, data center infrastructure products are becoming smarter and they are becoming more interconnected.

The second piece that’s incredibly important is broader network connectivity — whether it’s wide area connectivity or local area connectivity. Over time, all of these products need to be more connected, both inside and outside of the ecosystem. That connectivity is going to enable new services and new capabilities that don’t exist today. Connectivity is a second important element.

Interconnectivity across ecosystems

Third, data is exploding. As these products get smarter, work more holistically together, and are more connected, they provide manufacturers and customers more access to data. That data allows us to move from a break/fix type of environment into a predictive environment. It’s going to allow us to offer more just-in-time and proactive service versus reactive and time-based services.
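As a rough sketch of what that shift from break/fix to predictive can look like in code, the example below watches a stream of device readings and raises a proactive alert when the latest value drifts well away from its recent baseline. The metric, the readings, and the thresholds are all hypothetical, and real predictive-service models are considerably more sophisticated than a simple standard-deviation check.

```python
# Hypothetical sketch: flag a drifting reading so service can be scheduled
# before a failure, rather than after an alarm.
from statistics import mean, stdev

def drift_alert(readings, window=12, sigmas=3.0):
    """True when the latest reading sits more than `sigmas` standard
    deviations above the baseline of the preceding `window` readings."""
    baseline = readings[-window - 1:-1]
    return readings[-1] > mean(baseline) + sigmas * stdev(baseline)

ups_battery_temp_c = [24.9, 25.1, 25.0, 24.8, 25.2, 25.1, 24.9, 25.0,
                      25.2, 25.1, 25.0, 25.3, 29.4]   # last reading spikes

if drift_alert(ups_battery_temp_c):
    print("Open a proactive service ticket before the battery fails.")
```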

And when we look at the ecosystems themselves, we know that over time these centralized data centers — whether they be enterprise data centers, colocation data centers, or cloud data centers — are going to be more edge-based and module-based data centers.

And as that occurs, all the things we talked about — smarter products, more connectivity, data and data enablement — are going to be more important as those modular data centers become increasingly populated in a distributed way. To manage them, to service them, is going to be increasingly difficult and more important.

And one final cultural piece is happening. A lot of the folks who interact with these products and services will face what I call knowledge thinning. The highly trained professionals — especially on the power side of our ecosystem — that talent is reaching retirement age and there is a high demand for their skills. As data center growth continues to be robust, that knowledge thinning needs to be offset with what I talked about earlier.

So there are a lot of really interesting trends under way right now that impact the industry and are things that we at Vertiv are looking to respond to.

Gardner: Steve, these things when they come together form, in my thinking, a whole greater than the sum of the parts. When you put this together — the intelligence, efficiency, more automation, the culture of skills — how does that lead to the notion of data center-as-a-service?

Lalla: As with all things, Dana, one size does not fit all. I’m always cautious about generalizing because our customer base is so diverse. But there is no question that in areas where customers would like us to be operating their products and their equipment instead of doing it themselves, data center-as-a-service reduces the challenges with knowledge thinning and reduces the issue of optimizing products. We have our eyes on all those products on their behalf.

And so, through the connectivity of the product data and the data lakes we are building, we are better at predicting what should be done. Increasingly, our customers can partner with us to deliver a better performing data center.

Gardner: It seems quite compelling. Modernizing data centers means a lot of return on investment (ROI), of doing more with less, and becoming more predictive about understanding requirements and then fulfilling them.

Why are people still stuck? What holds organizations back? I know it will vary from site to site, but why the inertia? Why don’t people run to improve their data centers seeing as they are so integral to every business?

Adoption takes time

Lalla: Well, these are big, complex pieces of equipment. They are not the kind of equipment that every year you decide to change. One of the key factors that affects the rate at which connectivity, technology, processing capability, and data liberation capability gets adopted is the speed at which customers are able to change out the equipment that they currently have in their data centers.

Now, I think that we, as a manufacturer, have a responsibility to do what we can to improve those products over time and make new technology solutions backward compatible. That can be through updating communication cards, building adjunct solutions like we do with Liebert® ICOM™-S and gateways, and figuring out how to take equipment that is going to be there for 15 or 20 years and make it as productive and as modern as you can, given that it’s going to be there for so long.

So number one, the duration of product in the environment is certainly one of the headwinds, if you will.

Another is the concept of connectivity. And again, different customers have different comfort levels with connectivity inside and outside of the firewall. Clearly the more connected we can be with the equipment, the more we can update the equipment and assess its performance. Importantly, we can assess that performance against a big data lake of other products operating in an ecosystem. So, I think connectivity, and having the right solutions to provide for great connectivity, is important.

And there are cultural elements to our business, as in, “Hey, if it works, why change it, right?” If it’s performing the way you need it to perform and it’s delivering on the power and cooling needs of the business, why make a change? Again, it’s our responsibility to work with our customers to help them best understand that when new technology gets added — when new cards get added and when new assistants, I call them digital assistants, get added — that technology will have a differential effect on the business.

So I think there is a bit of reality that gets in the way of that sometimes.

Gardner: I suppose it’s imperative for organizations like Vertiv to help organizations move over that hump to get to the higher-level solutions and overcome the obstacles because there are significant payoffs. It also sets them up to be much more able to adapt to the future when it comes to edge computing, which you mentioned, and also being a data-driven organization.

How is Vertiv differentiating itself in the industry? How does combining services and products amount to a solution approach that helps organizations modernize?

Three steps that make a difference

Lalla: I think we have a differentiated perspective on this. When we think about service, and we think about technology and product, we don’t think about them as separate. We think about them altogether. My responsibility is to combine those software and service ecosystems into something more efficient that gives our customers more uptime and moves them from break/fix toward predictive, just-in-time types of service.

And the way we do that is through three steps. Number one, we have to continue to work closely with our product teams to ensure early in the product definition cycle which products need to be interconnected into an as-a-service or a self-service ecosystem.

We spend quite a bit of time impacting the roadmaps and putting requirements into the product teams so that they have a better understanding of what, in fact, we can do once data and information gets liberated. A great strategy always starts with great product, and that’s core to our solution.

The next step is a clear understanding that some of our customers want to service equipment themselves. But many of our customers want us to do that for them, whether it’s physically servicing equipment or monitoring and managing the equipment remotely, such as with our LIFE™ management solution.

We are increasingly looking at that as a continuum. Where does self-service end, and where do delivered services begin? In the past it’s been relatively different in what we do — from a self-service and delivered service perspective. But increasingly, you see those being blended together because customers want a seamless handover. When they discover something needs to be done, we at Vertiv can pick up from there and perform that service.

So the connective tissue between self-service and Vertiv-delivered service is something we are bringing increasing clarity to.

And then finally, as we talked about earlier, we are very active in building a data lake fed by all the ecosystems I just described. We have billions of rows of normalized data in our data lake to benefit our customers as we speak.

Gardner: Steve, when you service a data center at that solution-level through an ecosystem of players, it reminds me of when IT organizations started to manage their personal computers (PCs) remotely. They didn’t have to be on-site. You could bring the best minds and the best solutions to bear on a problem regardless of where the problem was — and regardless of where the expertise was. Is that what we are seeing at the data center level?

Self-awareness remotely and in-person

Lalla: Let’s be super clear: upgrading the software on an uninterruptible power supply (UPS) is a lot harder than upgrading software on a PC. But the analogy of understanding what must be done in-person and what can be done remotely is a good one. And you are correct. Over years and years of improvement in the IT ecosystems, we went from a very much in-person type of experience, fixing PCs, to one where, very much like mobile phones, they are self-aware and self-healing.

This is why I talked about the connectivity imperative earlier, because if they are not connected then they are not aware. And if they are not aware, they don’t know what they need to do. And so connectivity is a super important trend. It will allow us to do more things remotely versus always having to do things in-person, which will reduce the amount of interference we, as a provider of services, have on our customers. It will allow them to have better uptime, better ongoing performance, and even over time allow tuning of their equipment.

We are at the early stages of that journey. You could argue that the mobile phone and the PC are at the very late stages of their journey of automation. We are in the very early stages of it, but the things we talked about earlier — smarter products, connectivity, and data — are all important factors influencing that.

Gardner: Another evolution in all of this is that there is more standardization, even at the data center level. We saw standardization as a necessary step at the server and storage level — when things became too chaotic, too complex. We saw standardization as a result of virtualization as well. Is there a standardization taking place within the ecosystem and at that infrastructure foundation of data centers?

Standards and special sauce

Lalla: There has been a level of standardization in what I call the self-service layer, with protocols like BACnet, Modbus, and SNMP. Those at least allow a monitoring system to ingest information and data from a variety of diverse devices so you can, at a minimum, monitor how that equipment is performing.
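One practical consequence of that protocol diversity is that a monitoring layer has to normalize readings from different sources into a common shape before it can compare or analyze them. The sketch below shows the idea in Python; the payload field names for each protocol are invented for illustration and do not come from real BACnet, Modbus, or SNMP drivers.

```python
# Hypothetical sketch: map protocol-specific readings into one common record.

def normalize(protocol, raw):
    """Return a common (device, metric, value) record for a raw reading."""
    if protocol == "snmp":
        return {"device": raw["sys_name"], "metric": raw["oid_label"],
                "value": raw["value"]}
    if protocol == "modbus":
        return {"device": raw["unit_id"], "metric": raw["register_name"],
                "value": raw["register_value"]}
    if protocol == "bacnet":
        return {"device": raw["device_instance"], "metric": raw["object_name"],
                "value": raw["present_value"]}
    raise ValueError(f"unsupported protocol: {protocol}")

reading = normalize("modbus", {"unit_id": "UPS-12",
                               "register_name": "battery_temp_c",
                               "register_value": 25.3})
print(reading)   # {'device': 'UPS-12', 'metric': 'battery_temp_c', 'value': 25.3}
```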

I don’t disagree that there is an opportunity for even more standardization, because that will make that whole self-service, delivered-as-a-service ecosystem more efficient. But what we see in that control plane is really Vertiv’s unique special sauce. We are able to do things between our products with solutions like Liebert ICOM-S that allow our thermal products to work better together than if they were operating independently.

You are going to see an evolution of continued innovation in peer-to-peer networking in the control plane that probably will not be open and standard. But it will provide advances in how our products work together. You will see in that self-service, as-a-service, and delivered-service plane continued support for open standards and protocols so that we can manage more than just our own equipment. Then our customers can manage and monitor more of their own equipment.

And this special sauce, which includes the data lakes and algorithms — a lot of intellectual property and capital goes into building those algorithms and those outcomes — helps customers operate better. We will probably keep that close to the vest in the short term, and then we’ll see where it goes over time.

Gardner: You earlier mentioned moving data centers to the edge. We are hearing an awful lot architecturally about the rationale for not moving the edge data to the cloud or the data center, but instead moving the computational capabilities right out to the edge where that data is. The edge is where the data streams in, in massive quantities, and needs to be analyzed in real-time. That used to be the domain of the operational technology (OT) people.

As we think about data centers moving out to the edge, it seems like there’s a bit of an encroachment or even a cultural clash between the IT way of doing things and the OT way of doing things. How does Vertiv fit into that, and how does making data center-as-a-service help bring the OT and IT together — to create a whole greater than the sum of the parts?

OT and IT better together  

Lalla: I think maybe there was a clash. But with modular data centers and things like SmartAisle and SmartRow that we do today, they could be fully contained, standalone systems. Increasingly, we are working with strategic IT partners on understanding how that ecosystem has to work as a complete solution — not with power and cooling separate from IT performance, but asking how we can take the best of the OT world (power and cooling) and the best of the IT world and combine them with things like alarms and fire suppression. We can build a remote management and monitoring solution that can be outsourced, if you want to consume it as a service, or in-sourced if you want to do it yourself.

And there’s a lot of work to do in that space. As an industry, we are in the early stages, but I don’t think it’s hard to foresee a modular data center that should operate holistically as opposed to just the sum of its parts.

Gardner: I was thinking that the OT-IT thing was just an issue at the edge. But it sounds like you’re also referring to it within the data center itself. So flesh that out a bit. How do OT and IT together — managing all the IT systems, components, complexity, infrastructure, support elements — work in the intelligent, data center-as-a-service approach?

Lalla: There is the data center infrastructure management (DCIM) approach, which says, “Let’s bring it all together and manage it.” I think that’s one way of thinking about OT and IT, and certainly Vertiv has solutions in that space with products like Trellis™.

But I actually think about it as: Once the data is liberated, how do we take the best of computing solutions, data analytics solutions, and stuff that was born in other industries and apply that to how we think about managing, monitoring, and servicing all of the equipment in our industrial OT space?

It’s not necessarily that OT and IT are one thing, but how do we apply the best of all of technology solutions? Things like security. There is a lot of great stuff that’s emerged for security. How do we take a security-solutions perspective in the IT space if we are going to get more connected in the OT space? Well, let’s learn from what’s going on in IT and see how we can apply it to OT.

Just because DCIM has been tackled for years doesn’t mean we can’t take more of the best of each world and see how you can put those together to provide a solution that’s differentiated.

I go back to the Liebert ICOM-S solution, which uses desktop computing and gateway technology, and application development running on a high-performance IT piece of gear, connected to OT gear to get those products that normally would work separately to actually work more seamlessly together. That provides better performance and efficiency than if those products operated separately.

Liebert ICOM-S is a great example of where we have taken the best of the IT world (compute technology and connectivity) and the best of the OT world (power and cooling) and built a solution that makes the interaction differentiated in the marketplace.

Gardner: I’m glad you raised an example because we have been talking at an abstract level of solutions. Do you have any other use cases or concrete examples where your concept for infrastructure data center-as-a-service brings benefits? When the rubber hits the road, what do you get? Are there some use cases that illustrate that?

Real LIFE solutions

Lalla: I don’t have to point much further than our Vertiv LIFE Services remote monitoring solution. This solution came out a couple years ago, partly from our Chloride® Group acquisition many years ago. LIFE Services allows customers to subscribe to have us do the remote monitoring, remote management, and analytics of what’s happening — and whenever possible do the preventative care of their networks.

And so, LIFE is a great example of a solution with connectivity, with the right data flowing from the products, and with the right IT gear so our personnel take the workload away from the customer and allow us to deliver a solution. That’s one example of where we are delivering as-a-service for our customers.

We are also working with customers — and we can’t expose who they are — to bring their data into our large data lake so we can help them better predict how various elements of their ecosystem will perform. This helps them better understand when they need just-in-time service and maintenance versus break/fix service and maintenance.

These are two different examples where Vertiv provides services back to our customers. One is running a network operations center (NOC) on their behalf. Another uses the data lake that we’ve assimilated from billions of records to help customers who want to predict things and use the broad knowledge set to do that.

Gardner: We began our conversation with all the great things going on in modern data center infrastructure and solutions to overcome obstacles to get there, but economics plays a big role, too. It’s always important to be able to go to the top echelon of your company and say, “Here is the math, here’s why we think doing data center modernization is worth the investment.”

What is there about creating that data lake, the intellectual property, and the insights that help with data center economics? What’s the total cost of ownership (TCO) impact? How do you know when you’re doing this right, in terms of dollars and cents?

Uptime is money

Lalla: It’s difficult to generalize too much but let me give you some metrics we care about. Stuff is going to break, but if we know when it’s going to break — or even if it does break — we can understand exactly what happened. Then we can have a much higher first-time fix rate. What does that mean? That means I don’t have to come out twice, I don’t have to take the system out of commission more than once, and we can have better uptime. So that’s one.

Number two, by getting the data we can understand what’s going on with the network time-to-repair and how long it takes us from when we get on-site to when we can fix something. Certainly it’s better if you do it the first time, and it’s also better if you know exactly what you need when you’re there to perform the service exactly the way it needs to be done. Then you can get in and out with minimal disruption.

A third one that’s important — and one that I think will grow in importance — is we’re beginning to measure what we call service avoidance. The way we measure service avoidance is we call up a customer and say, “Hey, you know, based on all this information, based on these predictions, based on what we see from your network or your systems, we think these four things need to be addressed in the next 30 days. If not, our data tells us that we will be coming out there to fix something that has broken, as opposed to fixing it before it breaks.” So service avoidance, or service simplification, is another area that we’re looking at.

There are many more — I mean, meeting service level agreements (SLAs), uptime, and all of those — but when it comes to the tactical benefits of having smarter products, of being more connected, liberating data, and consuming that data and using it to make better decisions as a service — those are the things that customers should expect differently.
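For readers who want to see how metrics like these might be computed, here is a minimal Python sketch over a hypothetical list of service-visit records. The field names, figures, and the simple hours-on-site proxy for time to repair are illustrative assumptions, not Vertiv’s actual reporting.

```python
# Hypothetical service records; field names and values are made up.
visits = [
    {"ticket": 101, "fixed_first_visit": True,  "hours_on_site": 2.0, "avoided": False},
    {"ticket": 102, "fixed_first_visit": False, "hours_on_site": 5.5, "avoided": False},
    {"ticket": 103, "fixed_first_visit": True,  "hours_on_site": 1.5, "avoided": False},
    {"ticket": 104, "fixed_first_visit": True,  "hours_on_site": 0.0, "avoided": True},
]

dispatched = [v for v in visits if not v["avoided"]]
first_time_fix_rate = sum(v["fixed_first_visit"] for v in dispatched) / len(dispatched)
mean_time_to_repair = sum(v["hours_on_site"] for v in dispatched) / len(dispatched)
service_avoidance = sum(v["avoided"] for v in visits) / len(visits)

print(f"First-time fix rate: {first_time_fix_rate:.0%}")    # 67%
print(f"Mean time to repair: {mean_time_to_repair:.1f} h")  # 3.0 h
print(f"Service avoidance:   {service_avoidance:.0%}")      # 25%
```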

Gardner: And in order to enjoy those economic benefits through the Vertiv approach and through data center-as-a-service, does this scale down as well as up? It certainly makes sense for the larger data center installations, but what about a small- to medium-sized business (SMB)? What about a remote office, or a closet and a couple of racks? Does that make sense, too? Do the economic and productivity benefits scale down as well as they scale up?

Lalla: Actually, when we look at our data, the customers who really appreciate what we can do to help them are the ones who don’t have all the expertise or the skill set to manage and monitor their single-phase, small three-phase, or Liebert CRV [cooling] units. It doesn’t mean customers don’t appreciate it as you go up the stack; it’s just that what they appreciate changes. They may be more of a self-service-oriented customer, but what they are increasingly interested in is how we’re using data in our data lake to better predict things that they can’t predict by just looking at their own equipment.

So, the value shifts depending on where you are in the stack of complexity, maturity, and competency. It also varies based on hyperscale, colocation, enterprise, small enterprise, and point-of-sale. There are a number of variables so that’s why it’s difficult to generalize. But this is why the themes of productivity, smarter products, edge ecosystems, and data liberation are common across all those segments. How they apply the value that’s extracted in each segment can be slightly different.

Gardner: Suffice it to say data center-as-a-service is highly customizable to whatever organization you are and wherever you are on that value chain.

Lalla: That’s absolutely right. Not everybody needs everything. Self-service is on one side and as-a-service is on the other. But it’s not a binary conversation.

Customers who want to do most of the stuff themselves with technology, they may need only a little information or help from Vertiv. Customers who want most of their stuff to be managed by us — whether it’s storage systems or large systems — we have the capability of providing that as well. This is a continuum, not an either-or.

Gardner: Steve, before we close out, let’s take a look to the future. As you build data lakes and get more data, machine learning (ML) and artificial intelligence (AI) are right around the corner. They allow you to have better prediction capabilities, do things that you just simply couldn’t have ever done in the past.

So what happens as these products get smarter, as we are collecting and analyzing that data with more powerful tools? What do you expect in the next several years when it comes to the smarter data center-as-a-service?

Circle of knowledge gets smart 

Lalla: We are in the early stages, but it’s a great question, Dana. There are two outcomes that will benefit all of us. One, that information with the right algorithms, analysis, and information is going to allow us to build products that are increasingly smarter.

There is a circle of knowledge. Products produce information going to the data lake, we run the right algorithms, look for the right pieces of information, feed that back into our products, and continually evolve the capability of our products as time goes on. Those products will break less, need less service, and are more reliable. We should just expect that, just as you have seen in other industries. So that’s number one.

Number two, my hope and belief is that we move from a break/fix mentality, or an environment where we wait for something to show up on a screen as an alarm or an alert, to being highly predictive and just-in-time.

As an industry — and certainly at Vertiv — first-time fix, service avoidance, and time for repair are all going to get much better, which means one simple thing for our customers. They are going to have more efficient and well-tuned data centers. They are going to be able to operate with higher rates of uptime. All of those things are going to result in goodness for them — and for us.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Vertiv.


How Unisys and Dell EMC head off backup storage cyber security vulnerabilities

The next BriefingsDirect data security insights discussion explores how data — from one end of its life cycle to the other — needs new protection and a means for rapid recovery.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Stay with us as we examine how backup storage especially needs to be made safe and secure if companies want to quickly right themselves from an attack. To learn more, please welcome Andrew Peters, Stealth Industry Director at Unisys, and George Pradel, Senior Systems Engineer at Dell EMC. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What’s changed in how data is being targeted by cyber attacks? How are things different from three years ago?

Peters: Well, one major thing that’s changed in the recent past has been the fact that the bad guys have found out how to monetize and extort money from organizations to meet their own ends. This has been something that has caught a lot of companies flatfooted — the sophistication of the attacks and the ability to extort money out of organizations.

Gardner: George, why does all data — from one end of its life cycle to the other — now need to be reexamined for protection?

Pradel: Well, Andrew brings up some really good points. One of the things we have seen out in the industry is ransomware-as-a-service. Folks can just dial that in. There are service level agreements (SLAs) on it. So everyone’s data now is at risk.

Another of the things that we have seen with some of these attacks is that these people are getting a lot smarter. As soon as they go in to try and attack a customer, where do they go first? They go for the backups. They want to get rid of those, because that’s kind of like the 3D chess where you are playing one step ahead. So things have changed quite a bit, Dana.

Peters: Yes, it’s really difficult to put the squeeze on an organization knowing that they can recover themselves with their backup data. So, the heat is on the bad guys to go after the backup systems and pollute that with their malware, just to keep companies from having the capability to recover themselves.

Gardner: And that wasn’t the case a few years ago?

Pradel: The attacks were so much different a few years ago. They were what we call script kiddie attacks, where you basically get some malware or maybe you do a denial-of-service attack. But now these are programmatized, and the big thing about that is if you are a target once, chances are really good that the thieves are just going to keep coming back to you, because it’s easy money, as Andrew pointed out.

Gardner: How has the data storage topology changed? Are organizations backing up differently than they did a few years ago as well? We have more cloud use, we have hybrid, and different strategies for managing de-dupe and other redundancies. How has the storage topology and landscape changed in a way that affects this equation of being secure end to end?

The evolution of backup plans 

Pradel: Looking at how things have changed over the years, we started out with legacy systems, the physical systems that many of us grew up with. Then virtualization came into play, and so we had to change our backups. And virtualization offered up some great ways to do image-level backups and such.

Now, the big deal is cloud. Whether it’s one of the public cloud vendors, or a private cloud, how do we protect that data? Where is our data residing? Privacy and security are now part of the discussion when creating a hybrid cloud. This creates a lot of extra confusion — and confusion is what thieves zero in on.

We want to make sure that no matter where that data resides, it’s protected. We want to provide a pathway for bringing back data that is air gapped, or protected via one of our other technologies that helps keep the data in a place that allows for recoverability. Recoverability is the number one thing here, but it definitely has changed in these last few years.

Gardner: Andrew, what do you recommend to customers who may have thought that they had this problem solved? They had their storage, their backups, they protected themselves from the previous generations of security risk. When do you need to reevaluate whether you are secure enough?

Stay prepared 

Peters: There are a few things to take into consideration. One, they should have an operation that can recover their data and bring their business back up and running. You could get hit with an attack that turns into a smoking hole in the middle of your data center. So how do you bring your organization back from that without having policies, guidance, a process and actual people in place in the systems to get back to work?

Learn More About Cyber Recovery With Unisys Stealth

Another thing to consider is the efficacy of the data. Is it clean? If you are backing up data that is already polluted with malware, guess what happens when you bring it back out and you recover your systems? It rehydrates itself within your systems and you still have the same problem you had before. That’s where the bad guys are paying attention. That’s what they want to have happen in an organization. It’s a hand they can play.

If the malware can still come out of the backup systems, rehydrate itself, and re-pollute the systems while an organization is going through its recovery, it’s not only going to hamper the business, lengthen the time to recovery, and cost them money; it’s also going to force them to pay the ransoms that the bad guys are extorting.

Gardner: And to be clear, this is the case across both the public and the private sector. We are hearing about ransomware attacks in lots of cities and towns. This is an equal opportunity risk, isn’t it?

Peters: Malware and bad guys don’t discriminate.

Pradel: You are exactly right about that. One of the customers that I have worked with recently in a large city got hit with a ransomware attack. Now, one of the things about ransomware attacks is that they typically want you to pay in bitcoin. Well, who has $100,000 worth of bitcoin sitting around?

But let’s take a look at why it’s so important to eliminate these types of attacks. If you have a government attacked, one of the problems is that chaos ensues. In one particular situation, police officers in their cars were not able to pull up license plates on the computer to check on cars they were pulling over, to see if they had a couple of bad tickets or perhaps the person was wanted for some reason. And so it is a very dangerous situation you may put into play for all of these officers.

That’s one tiny example of how these things can proliferate. And like you said, whether it’s public sector or private sector, if you are a soft target, chances are at some point you are going to get hit with ransomware.

Secure the perimeter and beyond 

Gardner: What are we doing differently in terms of the solutions to head this off, especially to get people back and up and running and to make sure that they have clean and useable data when they do so?

Peters: A lot of security had been predicated on the concept of a perimeter, something where we can put up guards, gates, and guns, and put in a moat. There is an inside and an outside, and it’s generally recognized today that that perimeter doesn’t really exist.

And so, one of the new moves in security is to defend the endpoint and the application, and to do that using a technology called micro-segmentation. It’s becoming more popular because it allows us to have a security perimeter and a policy around each endpoint. And if it’s done correctly, you can scale from hundreds to thousands to hundreds of thousands, and potentially millions, of endpoint devices, applications, servers, and virtually anything else you have in an environment.

And so that’s one big change: Let’s secure the endpoint, the application, the storage, and each one comes with its own distinct security policy.
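As a toy illustration of the per-endpoint policy idea, the Python sketch below gives each endpoint its own membership set and only allows traffic between endpoints that share one. This is only a conceptual sketch of micro-segmentation in general; it is not how Unisys Stealth or any specific product is implemented, and the endpoint names are invented.

```python
# Toy micro-segmentation model: each endpoint carries its own policy, and a
# flow is allowed only when both sides share at least one membership group.
policies = {
    "backup-vault":   {"groups": {"cyber-recovery"}},
    "recovery-admin": {"groups": {"cyber-recovery", "it-ops"}},
    "web-server":     {"groups": {"dmz"}},
}

def flow_allowed(source, destination):
    """True when the two endpoints share at least one group."""
    return bool(policies[source]["groups"] & policies[destination]["groups"])

print(flow_allowed("recovery-admin", "backup-vault"))  # True: shared group
print(flow_allowed("web-server", "backup-vault"))      # False: vault stays invisible
```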

Gardner: George, how do you see the solutions changing, perhaps more toward the holistic infrastructure side and not just the endpoint issues?

Pradel: One of the tenets that Andrew related to is called security by obscurity. The basic tenet is, if you can’t see it, it’s much safer. Think about a safe in your house. If the safe is back behind the bookcase and you are the only person who knows it’s there, that’s an extra level of security. Well, we can do that with technology.

So you are seeing a lot of technologies being employed. Many of them are not new types of security technologies. We are going back to what’s worked in the past and building some of these new technologies on that. For example, we add on automation, and with that automation we can do a lot of these things without as much user intervention, and so that’s a big part of this.

Incidentally, if any type of security that you are using requires too much user intervention, then it’s very hard for the company to cost-justify those types of resources.

Gardner: Something that isn’t different from the past is having that Swiss Army knife approach of multiple layers of security. You use different tools, looking at this as a team sport where you want to bring as many solutions as possible to bear on the problem.

How have Unisys and Dell EMC brought different strengths together to create a whole greater than the sum of the parts?

Hide the data, so hackers can’t seek

Peters: One thing that’s fantastic that Dell has done is that they have put together a Cyber Recovery solution so when there is a meltdown you have gold copies of critical data required to reestablish the business and bring it back up and get into operation. They developed this to be automated, to contain immutable copies of data, and to assure the efficacy of the data in there.

Now, they have set this stuff up with air gapping, so it is virtually isolated from any other network operations. The bad guys hovering around in the network have a terrible time of trying to even touch this thing.

Learn More About the Dell EMC PowerProtect Cyber Recovery Solution

Unisys put what we call a cryptographic wrapper around that, using our micro-segmentation technology called Stealth. This creates a cryptographic air gap that effectively makes that vault and its recovery operations disappear from anything else on the network that doesn’t have an authorized cryptographic key. If they have an authorized key, they can talk to it. If they don’t, they can’t. So the bad guys and their malware can’t see it. If they can’t see it, they can’t touch it, and they can’t hack it. That turns this into an extraordinarily secure means of recovering an organization’s operations.

Gardner: The economics of this is critical. How does your technology combination take the economic incentive away from these nefarious players?

Pradel: Number one, you have a way to be able to recover from this. All of a sudden the bad guys are saying, “Oh, shoot, we are not going to get any money out of these guys.”

You are not going to be a constant target. They are going to go after your backups. Unisys Stealth can hide the targets that these people go after. Once you have this type of a Cyber Recovery solution in place, you can rest a lot easier at night.

As part of the Cyber Recovery solution, we actually expect malware to get into the Cyber Recovery vault. And people shake their head and they go, “Wait, George, what do you mean by that?”

Yes, we want to get malware into the Cyber Recovery vault. Then we have ways to do analytics to see whether our point-in times are good. That way, when we are doing that restore, as Andrew talked about earlier, we are restoring a nice, clean environment back to the production environment.
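
As a rough illustration of the analytics George describes, the sketch below flags point-in-time copies whose sampled bytes show unusually high Shannon entropy, a common heuristic for spotting mass encryption by ransomware. It is a simplified stand-in, not the actual Dell EMC analytics engine, and the 7.5 bits-per-byte threshold is an assumption.

```python
# Minimal sketch: flag suspicious point-in-time copies by Shannon entropy.
# Ransomware-encrypted data tends to look like random bytes (entropy near 8 bits/byte).
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def flag_suspicious(point_in_time_copies: dict, threshold: float = 7.5) -> dict:
    """point_in_time_copies maps a copy label to sampled raw bytes from that copy."""
    return {label: shannon_entropy(sample) > threshold
            for label, sample in point_in_time_copies.items()}

# Usage: check the last few daily copies before choosing one to restore.
samples = {"day-26": b"plain text records " * 100,
           "day-30": bytes(range(256)) * 8}          # this one looks encrypted
print(flag_suspicious(samples))   # {'day-26': False, 'day-30': True}
```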

Recovery requires commitment, investment

So, these types of solutions are an extra expense, but you have to weigh the risks for your organization and factor what it really costs if you have a cyber recovery incident.

Additionally, some people may not be totally versed on the difference between a disaster recovery situation and a cyber recovery situation. A disaster recovery may be from some sort of a physical problem, maybe a tornado hits and wipes out a facility or whatever. With cyber recovery, we are talking about files that have been encrypted. The only way to get that data back — and get back up and running — is by employing some sort of a cyber recovery solution, such as the Unisys and Dell EMC solution.

Gardner: Is this tag team solution between Unisys and Dell EMC appropriate and applicable to all kinds of business, including cloud providers or managed service providers?

Peters: It’s really difficult to measure the return on investment (ROI) in security, and it always has been. We have a tool that we can use to measure risk, probability, and financial exposure for an organization. You can actually use the same methodologies that insurance companies use to underwrite for things like cybersecurity and virtually anything else. It’s based on the reality that there is a strong likelihood that there is going to be a security breach. There is going to be perhaps a disastrous security breach, and it’s going to really hurt the organization.

Plan on the fact that it's probably going to happen. You need to invest in your systems and your recovery. If you think you can sustain a complete meltdown of your company and go out of operation for weeks to months, then you probably don't need to put money into it.

You also need to understand how exposed you potentially are, and the fact that the bad guys are staring at the low-hanging fruit — which may be state governments, cities, or other organizations that are less protected.

The fact is, the bad guys are extraordinarily patient. If your payoff is in the tens of millions of dollars, you might spend, as the bad guys did with Sony, years mapping systems, learning how an operation works, and understanding their complete operations before you actually take action, and in potentially the most disastrous way possible.

So ergo, it’s hard to put a number on that. An organization will have to decide how much they have to lose, how much they have at risk, and what the probability is that they are actually going to get hit with an attack.

Gardner: George, also important to where this is the right fit are automation and skills. What sorts of organizations typically take this on, and what skills are required?

Automate and simplify

Pradel: That’s been the basis for our Cyber Recovery solution. We have written a number of APIs to be able to automate different pieces of a recovery situation. If you have a cyber recovery incident, it’s not a matter of just, “Okay, I have the data, now I can restore it.” We have a lot of experts in the field. What they do is figure out exactly where the attack came from, how it came in, what was affected, and those types of things.

We make it as simple as possible for administrators. We have done a lot of work creating APIs that automate items such as recovering backup servers. We take point-in-time copies of the data. I don't want to go into it too deeply, but our Data Domain technology is the basis for this. And the reason it's important to note is that the replication we do is based on our variable-length deduplication.

Now, that may sound like a little gobbledygook, but what it means is that we have the smallest replication times you could have for a given amount of data. So when we are taking data into the Cyber Recovery vault, we are reducing what's called our dwell time. That is the window during which someone could see that you had a connection open.

Learn More About Cyber Recovery

With Unisys Stealth 

But a big part of this is that, on a day-to-day basis, I don't have to be concerned. I don't need a whole team of people maintaining this Cyber Recovery vault. Typically our customers already understand how our base technology works, so that part is very straightforward. And we have automation: policies set up in the Cyber Recovery vault will, on a regular basis, pull in whatever has changed from the production environment, typically once a day.

And as a rule of thumb, for people thinking, "This sounds really interesting, but how much data would I put in this?" Typically 10 to 15 percent of a customer's production environment might go into the Cyber Recovery vault. So we want to make this as simple as possible, and we want to automate as much as possible.

And on the other side, when there is an incident, we want to be able to also automate that part because that is when all heck is going on. If you’ve ever been involved in one of those situations, it’s not always your clearest thinking moment. So automation is your best friend and can help you get back up and running as quickly as possible.

Gardner: George, run us through an example, if you would, of how this works in the real-world.

One step at a time for complete recovery

Pradel: What will happen is that at some point somebody clicks on that doggone attachment that was on that e-mail that had a free trip to Hawaii or something and it had a link to some ransomware.

Once the security folks have determined that there has been an attack, sometimes it’s very obvious. There is one attack where there is a giant security skeleton that comes up on your screen and basically says, “Got you.” It then gives instructions on how you would go about sending them the money so that you can get your data back.

However, sometimes it’s not quite so obvious. Let’s say your security folks have determined there has been attack and then the first thing that you would want to do is access the cyber recovery provided by putting the Cyber Recovery vault with Stealth. You would go to the Cyber Recovery vault and lock down the vault, and it’s simple and straightforward. We talked about this a little earlier about the way we do the automation is you click on the lock, that locks everything down and it stops any future replications from coming in.

And while the security team is working to find out how bad it is and what was affected, one of the things the cyber recovery team does is go in and run some analysis, if that hasn't been done already. You can automate this type of analysis, but let's say you haven't. Say you have 30 point-in-time copies, one for each day throughout the last month. You might want to run an analysis against the last five of those to see whether they come up as suspicious or as okay.

The way that’s done is to look at the entropy of the different point-in-time backups. One thing to note is that you do not have to rehydrate the backup in order to analyze it. So let’s say you backed it up with Avamar and then you wanted to analyze that backup. You don’t have to rehydrate that in the vault in order to get it back up and running.

Once that’s done, then there’s a lot of different ways that you can decide what to do. If you have physical machines but they are not in great shape, they are suspect in that. But, if the physical parts of it are okay, you could then decide that at some point you’re going to reload those machines with the gold copies or very typical to have in the vault and then put the data and such on it.

If you have image-level backups in the vault, those are very easy to get back up and running on a VMware ESXi host or a Microsoft Hyper-V host in your production environment. So, there are a lot of different ways you can do that.

The whole idea, though, is that our typical Cyber Recovery solution is air-gapped and we recommend customers have a whole separate set of physical controls as well as the software controls.

Now, those physical controls may not be practical in all situations. That's why we looked at Unisys Stealth: installing the Stealth components provides a virtual air gap.

Remove human error

Peters: One of the things I learned in working with the United States Air Force’s Information Warfare Center was the fact that you can build the most incredibly secure operation in the world and humans will do things to change it.

With Stealth, we allow organizations to be able to get access into the vault from a management perspective to do analytics, and also from a recovery perspective, because anytime there’s a change to the way that vault operates, that’s an opportunity for bad guys to find a way in. Because, once again, they’re targeting these systems. They know they’re there; they could be watching them and they can be spending years doing this and watching the operations.

Unisys Stealth removes the opportunity for human error. We remove the visibility that any bad guys, or any malware, would have inside a network to observe a vault. They may see data flowing, but they don't know what it's going to, they don't know what it's for, and they can't read it because it's encrypted. They are not even going to be able to see the endpoints, because they will never be able to get an address on them. We are cryptographically disappearing or hiding or cloaking, whatever word you'd like to use — we are actively removing those endpoints from visibility to anything else on the network unless it is specifically authorized.
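
A conceptual sketch of that cloaking behavior follows: an endpoint answers only traffic signed with an authorized community-of-interest key and silently drops everything else, so an unauthorized scanner gets no response to map. The key handling here is purely illustrative and is not how Stealth is actually implemented.

```python
# Conceptual sketch of "cloaking": silently drop anything not signed by an
# authorized community-of-interest key, so unauthorized scanners get no
# response at all (no banner, no reset, nothing to map).
import hmac, hashlib

AUTHORIZED_KEYS = {b"community-of-interest-key"}   # hypothetical shared secrets

def handle_packet(payload: bytes, signature: bytes):
    for key in AUTHORIZED_KEYS:
        expected = hmac.new(key, payload, hashlib.sha256).digest()
        if hmac.compare_digest(expected, signature):
            return process(payload)        # authorized peer: respond normally
    return None                            # everyone else: drop silently, stay invisible

def process(payload: bytes) -> bytes:
    return b"ACK:" + payload

key, msg = b"community-of-interest-key", b"hello"
print(handle_packet(msg, hmac.new(key, msg, hashlib.sha256).digest()))  # b'ACK:hello'
print(handle_packet(msg, b"forged-signature"))                          # None: dropped silently
```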

Gardner: Let’s look to the future. As we pointed out earlier in our discussion, there is a sort of a spy versus spy, dog chasing the cat, whatever you want to use as a metaphor, one side of the battle is adjusting constantly and the other is reacting to that. So, as we move to the future, are there any other machine learning (ML)-enabled analytics on these attacks to help prevent them? How will we be able to always stay one step ahead of the threat?

Peters: With our technology we already embody ML. We can do responses called dynamic isolation. A device could be misbehaving and we could change its policy and be able to either restrict what it’s able to communicate with or cut it off altogether until it’s been examined and determined to be safe for the environment.

We can provide a lot of automation, a lot of visibility, and machine-speed reaction in response to threats as they are happening. Malware doesn't have to get that 20-second head start. We might be able to cut it off in 10 seconds and make a dynamic change to the threat surface.
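
Here is a hedged sketch of that dynamic-isolation idea: when a device's anomaly score crosses a threshold, its policy is swapped for a fully restricted one within seconds. The scoring input and policy structure are assumptions for illustration, not a specific Unisys ML feature.

```python
# Hedged sketch of "dynamic isolation": a misbehaving device's policy is
# replaced with a restricted one until a human clears it.
import time

RESTRICTED = {"allowed_peers": set(), "allowed_ports": set()}   # cut off entirely

def dynamic_isolation(device_id: str, anomaly_score: float, policies: dict,
                      threshold: float = 0.9) -> bool:
    """Returns True if the device was isolated."""
    if anomaly_score >= threshold:
        policies[device_id] = dict(RESTRICTED, isolated_at=time.time())
        return True
    return False

policies = {"hvac-sensor-17": {"allowed_peers": {"bms-gateway"}, "allowed_ports": {8883}}}
dynamic_isolation("hvac-sensor-17", anomaly_score=0.97, policies=policies)
print(policies["hvac-sensor-17"])   # now fully restricted at machine speed
```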

Gardner: George, what’s in the future that it’s going to allow you to stay always one step ahead of the bad guys? Also, is there is an advantage for organizations doing a lot of desktops-as-a-service (DaaS) or virtual desktops? Do they have an advantage in having that datacenter image of all of the clients?

Think like a bad guy 

Pradel: Oh, yes, definitely. How do we stay in front of the bad guys? You have to think like the bad guys. And so, one of the things that you want to do is reduce your attack surface. That’s a big part of it, and that’s why the technology that we use to analyze the backups, looking for malware, uses 100 different types of objects of entropy.

As we’re doing ML of that data, of what’s normal what’s not normal, we can figure out exactly where the issues are to stay ahead of them.

Now, an air gap on its own is extremely secure because it keeps the data in an environment where no one can get at it. We have also had situations where Unisys Stealth helped close an air gap, where a particular general might have three different networks they need to connect to, and Stealth is a fantastic solution for that.

If you’re doing DaaS, there are ways that it can help. We’re always looking at where the data resides, and most of the time in those situations the data is going to reside back at the corporate infrastructure. That’s a very easy place to be able to protect data. When the data is out on laptops and things like that, then it makes it a little bit more difficult, not impossible, but you have a lot of different end points that you’re pulling from. To be able to bring the system back up — if you’re using virtual desktops, that kind of thing, actually it’s pretty straightforward to be able to do that because that environment, chances are they’re not going to bring down the virtual desktop environment, they’re going to encrypt the data.

Learn More About Dell EMC PowerProtect

Cyber Recovery Solution 

Now, that said, these conversations are not as straightforward as they once were. We talk about how long you might be out of business depending on what you have implemented. We have to engineer for all the different types of malware attacks. And what's the common denominator? It's the data: keeping that data safe, and keeping that data so it can't be deleted.

We have a retention lock capability so you can lock that up for as many as 70 years and it takes two administrators to unlock it. That’s the kind of thing that makes it robust.

In the old days, we would do a WORM drive and copy stuff off to a CD to make something immutable. This is a great way to do it. And that’s one way to stay ahead of the bad guys as best as we can.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Unisys.

You may also be interested in:

How Unisys and Microsoft team up to ease complex cloud adoption for governments and enterprises

The path to cloud computing adoption persistently appears complex and risky to both government and enterprise IT leaders, recent surveys show.

This next BriefingsDirect managed cloud methodologies discussion explores how tackling complexity and security requirements upfront helps ease the adoption of cloud architectures. By combining managed services, security solutions, and hybrid cloud standardization, both public and private sector organizations are now making the cloud journey a steppingstone to larger business transformation success.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

We’ll now explore how cloud-native apps and services modernization benefit from prebuilt solutions with embedded best practices and automation. To learn how, we welcome Raj Raman, Chief Technology Officer (CTO) for Cloud at Unisys, and Jerry Rhoads, Cloud Solutions Architect at Microsoft. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Raj, why are we still managing cloud adoption expectations around complexity and security? Why has it taken so long to make the path to cloud more smooth — and even routine?

Raman: Well, Dana, I spend quite a bit of time with our customers. A common theme we see, be it a government agency or a commercial customer, is that many of them are driven by organizational mandates, and getting those mandates in place often proves more challenging than one might think.

Cloud adoption challenges

Raj Raman

Raman

The other part is that while Amazon Web Services (AWS) or Microsoft Azure may be very easy to get on to, the question then becomes how do you scale up? They have to either figure out how to develop in-house capabilities or they look to a partner like Unisys to help them out.

Cloud security continues to be an adoption challenge because enterprises still try to apply traditional security practices to the cloud. Having a sound security and risk posture on AWS or Azure means having a good understanding of the shared security model across the user, application, and infrastructure layers of the cloud.

And last, but not least, a very clear mandate, such as a digital transformation or a specific initiative with a core sponsor behind it, often eases the focus on some of these challenges.

These are some of the reasons we see for cloud complexity. The applications transformation can also be quite arduous for many of our clients.

Gardner: Jerry, what are you seeing for helping organizations get cloud-ready? What best practices make for a smoother on-ramp?

Rhoads: One of the best practices beforehand is to determine what your endgame is going to look like. What is your overall cloud strategy going to look like?

Jerry Rhoads

Rhoads

Instead of just lifting and shifting a workload, what is the life cycle of that workload going to look like? It means a lot of in-depth planning — whether it’s a government agency or private enterprise. Once we get into the mandates, it’s about, “Okay, I need this application that’s running in my on-premises data center to run in the cloud. How do I make it happen? Do I lift it and shift it or do I re-architect it? If so, how do I re-architect for the cloud?”

That’s a big common theme I’m seeing: “How do I re-architect my application to take better advantage of the cloud?”

Gardner: One of the things I have seen is that a lot of organizations do well with their proof of concepts (POCs). They might have multiple POCs in different parts of the organization. But then, getting to standardized comprehensive cloud adoption is a different beast.

Raj, how do you make that leap from spotty cloud adoption, if you will, to more holistic?

One size doesn’t fit all 

Raman: We advise customers to avoid taking it on as one-size-fits-all. For example, we have one client who is trying, all at once, to lift and shift thousands of applications.

Now, they did a very detailed POC and got good results from it. But when it came to the actual migration and transformation, they felt confident they could take it on en masse, with thousands of applications.

The thing is, not all applications are the same size. One needs a phased approach to application discovery and application assessment. Then, based on that, you can determine which applications are well worth the effort to move to the cloud.

So we recommend to customers that they think of migrations as a phased approach. Be very clear in terms of what you want to accomplish. Start small, gain the confidence, and then have a milestone-based approach of accomplishing it all.

Learn More About 

Unisys CloudForte 

Gardner: These mandates are nonetheless coming down from above. For the US federal government, for example, cloud has become increasingly important. We are expecting somewhere in the vicinity of $3.3 billion to be spent for federal cloud in 2021. Upward of 60 percent of federal IT executives are looking to modernization. They have both the cloud and security issues to face. Private sector companies are also seeing mandates to rapidly become cloud-native and cloud-first.

Jerry, when you have that pressure on an IT organization — but you have to manage the complexity of many different kinds of apps and platforms — what do you look for from an alliance partner like Unisys to help make those mandates come to fruition?

Rhoads: In working with partners such as Unisys, they know the customer. They are there on the ground with the customer. They know the applications. They hear the customers. They understand the mandates. We also understand the mandates and we have the cloud technology within Azure. Unisys, however, understands how to take our technology and integrate it in with their end customer’s mission.

Gardner: And security is not something you can just bolt on, or think of, after the fact in such migrations. Raj, are we seeing organizations trying to both tackle cloud adoption and improve their security? How do Unisys and Microsoft come together to accomplish those both as a tag team rather than a sequence, or even worse, a failure?

Secure your security strategy

Raman: We recently conducted a survey of our stakeholders, including some of our customers. And, to no surprise, security, be it as part of migrations or in scaling up current cloud initiatives, is by far a top area of focus and concern.

We are already partnering with Microsoft and others with our flagship security offering, Unisys Stealth. We are not just collaborating but leapfrogging in terms of innovation. The Azure cloud team has released a specific API to make products like Stealth available. This gives customers more choice and allows Unisys to meet customers where they are.

Also, earlier this year we worked very closely with the Microsoft cloud team to release Unisys CloudForte for Azure. These are foundational elements that help both governments as well as commercial customers leverage Azure as a platform for doing their digital transformation.

The Microsoft team has also stepped up and worked very closely with the Unisys team developers and architects to make these services native on Azure, as well as help customers understand how they can better consume Azure services.

Those are very specific examples in which we see the Unisys collaboration with Microsoft scaling really well.

Gardner: Jerry, it is, of course, about more than just the technology. These are, after all, business services. So whether a public or private organization is making the change to an operations model — paying as you consume and budgeting differently — financially you need to measure and manage cloud services differently.

How is that working out? Why is this a team sport when it comes to adopting cloud services as well as changing the culture of how cloud-based business services are consumed?

Keep pay-as-you-go under control 

Rhoads: One of the biggest challenges I hear from our customers is around going from a CAPEX model to an OPEX model. They don’t really understand how it works.

CAPEX is a longtime standard — here is the price and here is how long it is good for, until you have to re-up and buy a new piece of hardware or renew the license. Using cloud, it's pay-as-you-go.

If I launch 400 servers for an hour, I’m paying for 400 virtual machines running for one hour. So if we don’t have a governance strategy in place to stop something like that, we can wind up going through one year’s worth of budget in 30 days — if it’s not governed, if it’s not watched.
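
The back-of-the-envelope math behind that example shows why governance matters; the $0.20 per VM-hour rate and the annual budget are assumptions for illustration, not Azure prices.

```python
# Sketch of how ungoverned pay-as-you-go spend runs away. Rates are assumed.
vm_count = 400
hourly_rate = 0.20                      # assumed cost per VM-hour, not an Azure price
hours_per_month = 24 * 30

monthly_cost = vm_count * hourly_rate * hours_per_month
print(f"${monthly_cost:,.0f} per month")          # $57,600 per month if nobody shuts them down

annual_budget = 100_000                 # assumed annual cloud budget
days_to_burn_budget = annual_budget / (vm_count * hourly_rate * 24)
print(f"Budget gone in {days_to_burn_budget:.0f} days")   # roughly 52 days
```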

And that’s why, for instance, working with Unisys CloudForte there are built-in controls to where you can go through and ping the Azure cloud backend — such as Azure Cost Management or our Cloudyn product — where you can see how much your current charges are, as well as forecast what those charges are going to look like. Then you can get ahead of the eight ball, if you will, to make sure that you are actually burning through your budget correctly — versus getting a surprise at the end of the month.

Gardner: Raj, how should organizations better manage that cultural shift around cloud consumption governance?

Raman: Adding to Jerry’s point, we see three dimensions to help customers. One is what Unisys calls setting up a clear cloud architecture, the foundations. We actually have an offering geared around this. And, again, we are collaborating with Microsoft on how to codify those best practices.

In going to the cloud, we see five pillars that customers have to contend with: cost, security, performance, availability, and operations. Each of these can be quite complex and very deep.

Rather than have customers figure these out themselves, we have combined product and framework. We have codified it, saying, "Here are the top 10 best practices you need to be aware of in terms of cost, security, performance, availability, and operations."

It makes it very easy for Unisys consultants, architects, and customers, at any given point — be it pre-migration or post-migration — to have clear visibility into where they stand on cost in the cloud.
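
One way to picture "codified best practices" is a checklist that scores every workload against the same five pillars before and after migration. The checks below are placeholder examples, not Unisys CloudForte content.

```python
# Minimal sketch of a five-pillar review checklist with placeholder checks.
PILLAR_CHECKS = {
    "cost":         ["budgets and alerts configured", "right-sizing review scheduled"],
    "security":     ["encryption at rest enabled", "network segmentation in place"],
    "performance":  ["autoscaling rules defined"],
    "availability": ["multi-zone deployment", "backup and restore tested"],
    "operations":   ["monitoring and runbooks in place"],
}

def review(workload: str, passed_checks: set) -> dict:
    """Score a workload: fraction of checks passed per pillar."""
    return {pillar: sum(c in passed_checks for c in checks) / len(checks)
            for pillar, checks in PILLAR_CHECKS.items()}

print(review("payroll-app", {"budgets and alerts configured", "encryption at rest enabled"}))
```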

We are also thinking about security and compliance upfront — not as an afterthought. Oftentimes customers go deep into the journey and they realize they may not have the controls and the security postures, and the next thing you know they start to lose confidence.

So rather than wait for that, the thinking is we arm them early. We give them the governance and the policies on all things security and compliance. And Azure has very good capabilities toward this.

The third bit, and Jerry touched on this, is overall financial governance: the ability to think about cost not just as a matter of spinning a few Azure resources up and down, but in a holistic way, in a governance model. That way you can break it down in terms of which resources are actually utilized. You can do chargebacks and governance and gain the ability to optimize cost on an ongoing basis.
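
A minimal sketch of the chargeback idea: group resource costs by an owning-team tag so spend can be shown back per department. The cost records here are made up; in practice they would come from a billing export or cost-management API.

```python
# Sketch of chargeback: total spend per team tag, with untagged resources surfaced.
from collections import defaultdict

cost_records = [
    {"resource": "vm-web-01",   "tags": {"team": "retail"},  "cost": 212.40},
    {"resource": "sql-db-02",   "tags": {"team": "finance"}, "cost": 981.10},
    {"resource": "vm-batch-07", "tags": {},                  "cost": 154.00},   # untagged
]

def chargeback_by_team(records):
    totals = defaultdict(float)
    for r in records:
        totals[r["tags"].get("team", "untagged")] += r["cost"]
    return dict(totals)

print(chargeback_by_team(cost_records))
# {'retail': 212.4, 'finance': 981.1, 'untagged': 154.0}
```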

These are distinct foundational elements that we are trying to arm customers with, to make them a lot more comfortable and to build trust, as well as process, around cloud adoption.

Gardner: The good news about cloud offerings like Azure and hybrid cloud offerings like Azure Stack is you gain a standardized approach. Not necessarily one-size-fits-all, but an important methodological and technical consistency. Yet organizations are often coming from a unique legacy, with years and years of everything from mainframes to n-tier architectures, and applications that come and go.

How do Unisys and Microsoft work together to make the best of standardization for cloud, but also recognize specific needs that each organization has?

Different clouds, same experience

Rhoads: We have Azure Stack for on-premise Azure deployments. We also have Azure Commercial Cloud as well as Azure Government Cloud and Department of Defense (DoD) Cloud. The good news is that they use the same portal, same APIs, same tooling, and same products and services across all three clouds.

Now, as services roll out, they roll out in our Commercial Cloud first, and then we will roll them out into Azure Government as well as into Azure Stack. But, again, the good news is these products are available, and you don’t have to do any special configuration or anything in the backend to make it work. It’s the same experience regardless of which product the customer wants to use.

What's more, Unisys CloudForte works with Azure Stack, with Commercial, and with Azure for Government. For the end customer it's the same cloud services they expect to use. The difference really is just where those cloud services live: with Azure Stack on-premises, on a cruise ship or in a mine; with Azure Commercial Cloud; or, if you need a regulated workload such as a FedRAMP High workload or an IC4 or IC5 workload, with Azure Government. But there are no different skills required to use any of those clouds.

Same skill set. You don't have to do any retraining; it's the same products and services. And if the products and services aren't in that region, you can work with Unisys or with me to engage the product teams to put those products into Azure Stack or Azure for Government.

Gardner: How does Unisys CloudForte managed services complement these multiple Azure cloud environments and deployment models?

Rhoads: CloudForte really further standardizes it. There are different levels of CloudForte, for instance, and the underlying cloud really doesn’t matter, it’s going to be the same experience to roll that out. But more importantly, CloudForte is really an on-ramp. A lot of times I am working with customers and they are like, “Well, gee, how do I get started?”

Whether it’s setting up that subscription in-tenant, getting them on-board with that, as well as how to roll out that POC, how do they do that, and that’s where we leverage Unisys and CloudForte as the on-ramp to roll out that first POC. And that’s whether that POC is a bare-bones Azure virtual network or if they are looking to roll out a complete soup-to-nuts application with application services wrapped around it. CloudForte and Unisys can provide that functionality.

Do it your way, with support 

Raman: Unisys CloudForte has been designed as an offering on top of Azure. There are two key themes. One, meet customers where they are. It’s not about what Unisys is trying to do or what Azure is trying to do. It’s about, first and foremost, being customer obsessed. We want to help customers do things on their terms and do it the right way.

So CloudForte has been designed to meet those twin objectives. The way we go about doing it is — imagine, if you will, a flywheel. The flywheel has four parts. One, the whole consumption part, which is the ability to consume Azure workloads at any given point.

Learn More About 

Unisys CloudForte 

Next is the ability to run commands, or the operations piece. Then you follow that up with the ability to accelerate transformations, so data migrations or app modernization.

Last, but not least, is to transform the business itself, be it on a new technology, such as artificial intelligence (AI), machine learning (ML), or blockchain, or anything else that can wrap on top of Azure cloud services.

The beauty of the model is a customer does not have to buy all of these en masse; they could be fitting into any of this. Some customers come and say, “Hey, we just want to consume the cloud workloads, we really don’t want to do the whole transformation piece.” Or some customers say, “Thank you very much, we already have the basic consumption model outlined. But can you help us accelerate and transform?”

So the ability to provide flexibility on top of Azure helps us to meet customers where they are. That’s the way CloudForte has been envisioned, and a key part of why we are so passionate and bullish in working with Microsoft to help customers meet their goals.

Gardner: We have talked about technology, we have talked about process, but of course people and human capital and resources of talent and skills are super important as well. So Raj, what does the alliance between Unisys and Microsoft do to help usher people from being in traditional IT to be more cloud-native practitioners? What are we doing about the people element here?

Expert assistance available

Raman: In order to be successful, one of the big focus areas with Unisys is to arm and equip our own people, be it at the consulting level, a sales-facing level, either doing cloud architectures or even doing cloud delivery, across the stripe, rank and file. There is an absolute mandate to increase the number of certifications, especially the Azure certifications.

In fact, I can also share that at Unisys, as we speak, every month we are doubling the number of people who hold the Azure 300 and 900 certifications. These are the two popular certifications across the whole Azure stack. We now have north of 300 trained people, and my number may be at the lower end. We expect that number to double.

So we have absolute commitment, because customers look to us to not only come in and solve the problems, but to do it with the level of expertise that we claim. So that’s why our commitment to getting our people trained and certified on Azure is a very important piece of it.

Gardner: One of the things I love to do is to not just tell, but to show. Do we have examples of where the Unisys and Microsoft alliance — your approach and methodologies to cloud adoption, tackling the complexity, addressing the security, and looking at both the unique aspect of each enterprise and their skills or people issues — comes all together? Do you have some examples?

Raman: The California State University is a longstanding customer of ours, a good example where they have transformed their own university infrastructure using Unisys CloudForte with a specific focus on all things hybrid cloud. We are pleased to see that not only is the customer happy but they are quite eager to get back to us in terms of making sure that their mandates are met on a consistent basis.

Our federal agencies are usually reluctant to be in the spotlight. That said, what I can share are representative examples. We have some very large defense establishments working with us. We have some state agencies close to the Washington, DC area, agencies responsible for the roll-out of cloud consumption across the mandates.

We are well on our way in not only working with the Microsoft Azure cloud teams, but also with state agencies. Each of these agencies is citywide or region-wide, and within that they have a health agency or an agency focused on education or social services.

In our experience, we are seeing an absolute interest in adopting the public clouds for them to achieve their citizens’ mandates. So those are some very specific examples.

Gardner: Jerry, when we look to both public and private sector organizations, how do you know when you are doing cloud adoption right? Are there certain things you should look to, that you should measure? Obviously you would want to see that your budgets are moving from traditional IT spending to cloud consumption. But what are the metrics that you look to?

The measure of success

Rhoads: One of the metrics that I look at is cost. You may do a lift and shift and maybe you are a little bullish when you start building out your environments. When you are doing cloud adoption right, you should see your costs start to go down.

So your consumption will go up, but your costs will go down, and that’s because you are taking advantage of either platform as a service (PaaS) in the cloud, and being able to auto-scale out, or looking to move to say Kubernetes and start using things like Docker containers and shutting down those big virtual machines (VMs), and clusters of VMs, and then running your Kubernetes services on top of them.

When you see those costs go down and your services going up, that’s usually a good indicator that you are doing it right.

Gardner: Just as a quick aside, Jerry, we have also seen that Microsoft Azure is becoming very container- and Kubernetes-oriented, is that true?

Rhoads: Yes, it is. We actually have Brendan Burns, as a matter of fact, who was one of the co-creators of Kubernetes during his time at Google.

Gardner: Raj, how do you know when you are getting this right? What do you look to as chief metrics from Unisys’s perspective when somebody has gone beyond proof of concept and they are really into a maturity model around cloud adoption?

Raman: One of the things we take very seriously is our mandate to customers to do cloud on your terms and do it right. And what we mean by that is something very specific, so I will break it in two.

One is from a customer-led metric perspective. We rank ourselves very seriously in terms of Net Promoter Score. We have one of the highest in the industry relative to our competition. That's something that's hard-earned, but we keep striving to raise the bar on how our customers talk to each other and how they feel about us.

The other part is the ability to retain customers, so retention. So those are two very specific customer-focused benchmarks.

Now, building upon some of the examples that Jerry was talking about, from a cloud metric perspective, besides cost and cost optimization, we also look at some very specific metrics, such as how many net-new workloads are under management. What are some of the net-new services being launched? We are especially curious to see whether there is a focus on Kubernetes or AI and ML adoption — are there any trends toward that?

One of the very interesting ones that I will share, Dana, is that some of our customers are starting to come and ask us, “Can you help set up an Azure Cloud center of excellence within our organization?” So that oftentimes is a good indicator that the customer is looking to transform the business beyond the initial workload movement.

And last, but not the least, is training, and absolute commitment to getting their own organization to become more cloud-native.

Gardner: I will toss another one in, and I know it's hard to get organizations to talk about it: fewer security breaches, and fewer days or instances of downtime because of a ransomware attack. It's hard to prove a negative when you don't get attacked, but certainly a better security posture compared with two or three years ago would be a strong indicator on my map as to whether cloud is being successful for you.

All right, we are almost out of time, so let’s look to the future. What comes next when we get to a maturity model, when organizations are comprehensive, standardized around cloud, have skills and culture oriented to the cloud regardless of their past history? We are also of course seeing more use of the data behind the cloud, in operations and using ML and AI to gain AIOps benefits.

Where can we look to even larger improvements when we employ and use all that data that’s now being generated within those cloud services?

Continuous cloud propels the future 

Raman: One of the things that’s very evident to us is, as customers start to come to us and use the cloud at significant scale, is it is very hard for any one organization. Even for Unisys, we see this, which is how do you get scaled up and keep up with the rate of change that the cloud platform vendors such as Azure are bringing to the table; all good innovations, but how do you keep on top of that?

So that’s where a focus on what we are calling as “AI-led operations” is becoming very important for us. It’s about the ability to go and look at the operational data and have these customers go from a reactive, from a hindsight-led model, to a more proactive and a foresight-driven model, which can then guide, not only their cloud operations, but also help them think about where they can now leverage this data and use that Azure infrastructure to then launch more innovation or new business mandates. That’s where the AIOps piece, the AI-led operations piece, of it kicks in.

Learn More About 

Unisys CloudForte 

There is a reason why cloud is called continuous. You gain the ability to have continuous visibility into compliance and security, and to have constant optimization in terms of best practices: reviewing the cloud workloads on a constant basis and making sure their architectures are being assessed against Azure best practices.

And last, but not least, one other trend I would surface, Dana, as part of this: we are starting to see an increase in the use of conversational bots. Many customers are interested in getting to a self-service mode. That's where we see conversational bots built on Azure or Cortana becoming more mainstream.

Gardner: Jerry, how do organizations recognize that the more cloud adoption and standardization they have, the more benefits they will get in terms of AIOps, and that a virtuous adoption pattern kicks in?

Rhoads: To expand on what Raj talked about with AIOps, we actually have built in a lot of AI into our products and services. One of them is with Advanced Threat Protection (ATP) on Azure. Another one is with anti-phishing mechanisms that are deployed in Office 365.

As more folks move into the cloud, we are seeing a lot of adoption of these products and services. We are also able to bring in a lot of feedback and do a lot of learning from the behaviors we are seeing, to make the products even better.

DevOps integrated in the cloud

So one of the things that I do in working with my customers is DevOps: how do we employ DevOps in the cloud? A lot of folks are doing DevOps on-premises, and they are doing it from an application point of view. I am rolling out my application on infrastructure that is either virtualized or physical, sitting in my data center. How do I do that in the cloud, and why do I do that in the cloud?

Well, in the cloud everything is software, including infrastructure. Yes, it sits on a server at the end of the day; however, it is software-defined. Being software-defined, it has an API, and I can write code against it. So if I want to roll out a suite of VMs, or roll out Kubernetes clusters and put my application on top of them, I can create definable, repeatable code, if you will, that I check into a repository someplace, press the button, and roll out that infrastructure with my application on top of it.

So now, when deploying applications with DevOps in the cloud, it's not about having an operations team and then a DevOps team that rolls the application out on top of existing infrastructure. Instead, I bundle it all together. I have tighter integration, which means I have repeatable deployments. And instead of doing deployments every quarter or annually, I can do 20, 30, or 1,000 a day if I like — if I do it right.
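
The "definable, repeatable code" idea can be sketched as a declarative desired state kept in version control plus an idempotent apply step that reconciles reality with it on every run. The provision and teardown calls below are placeholders for whatever SDK or infrastructure-as-code tool a team actually uses.

```python
# Conceptual sketch of infrastructure as code: desired state lives in version
# control as data; apply() reconciles current state with it and is safe to rerun.
desired_state = {
    "vm-web": {"count": 3, "size": "small"},
    "k8s-cluster": {"count": 1, "size": "standard"},
}

current_state = {"vm-web": {"count": 1, "size": "small"}}

def apply(desired: dict, current: dict):
    for name, spec in desired.items():
        have = current.get(name, {"count": 0})
        delta = spec["count"] - have["count"]
        if delta > 0:
            print(f"provision {delta} x {name} ({spec['size']})")   # placeholder for an SDK call
        elif delta < 0:
            print(f"tear down {-delta} x {name}")
    for name in set(current) - set(desired):
        print(f"tear down all of {name} (no longer declared)")

apply(desired_state, current_state)   # safe to run 20, 30, or 1,000 times a day
```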

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Unisys and Microsoft.

You may also be interested in:

How containers are the new basic currency for pay as you go hybrid IT

Container-based deployment models have rapidly gained popularity across a full spectrum of hybrid IT architectures — from edge, to cloud, to data center.

This next edition of the BriefingsDirect Voice of the Innovator podcast series examines how IT operators are now looking to increased automation, orchestration, and compatibility benefits to further exploit containers as a mainstay across their next-generation hybrid IT estate.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

Stay with us to explore the escalating benefits that come from broad container use with Robert Christiansen, Evangelist in the Office of the Chief Technology Officer at Hewlett Packard Enterprise (HPE). The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Containers are being used in more ways and in more places. What was not that long ago a fringe favorite for certain developer use cases is becoming broadly disruptive. How disruptive has the container model become?

Christiansen: Well, it’s the new change in the paradigm. We are looking to accelerate software releases. This is the Agile motion. At the end of the day, software is consuming the world, and if you don’t release software quicker — with more features more often — you are being left behind.

robert-christiansen

Christiansen

The mechanism to do that is to break them out into smaller consumable segments that we can manage. Typically that motion has been adopted at the public cloud level on containers, and that is spreading into the broader ecosystem of the enterprises and private data centers. That is the fundamental piece — and containers have that purpose.

Gardner: Robert, users are interested in that development and deployment advantage, but are there also operational advantages, particularly in terms of being able to move workloads more freely across multiple clouds and hybrid clouds?

Christiansen: Yes, the idea is twofold. First off was to gain agility and motion, and then people started to ask themselves, "Well, I want to have choice, too." So as we start abstracting away the dependencies of what runs a container, such as a very focused one that might be on a particular cloud provider, I can actually start saying, "Hey, can I write my container platform services and APIs to be compatible across multiple platforms? How do I go between platforms; how do I go between on-prem and the public cloud?"

Gardner: And because containers can be tailored to specific requirements needed to run a particular service, can they also extend down to the edge and in resource-constrained environments?

Adjustable containers add flexibility 

Christiansen: Yes, and more importantly, they can adjust to sizing issues, too. So think about pushing a container that’s very lightweight into a host that needs to have high availability of compute but may be limited on storage.

There are lots of different use cases. As you collapse the virtualization of an app — that’s what a container really does, it virtualizes app components, it virtualizes app parts and dependencies. You only deploy the smallest bit of code needed to execute that one thing. That works in niche uses like a hospital, telecommunications on a cell tower, on an automobile, on the manufacturing floor, or if you want to do multiple pieces inside a cloud platform that services a large telco. However you structure it, that’s the type of flexibility containers provide.

Gardner: And we know this is becoming a very popular model, because the likes of VMware, the leader in virtualization, is putting a lot of emphasis on containers. They don’t want to lose their market share, they want to be in the next game, too. And then Microsoft with Azure Stack is now also talking about containers — more than I would have expected. So that’s a proof-point, when you have VMware and Microsoft agreeing on something.

Christiansen: Yes, that was really interesting actually. I just saw this little blurb that came up in the news about Azure Stack switching over to a container platform, and I went, “Wow!” Didn’t they just put in three- or five-years’ worth of R and D? They are basically saying, “We might be switching this over to another platform.” It’s the right thing to do.

How to Modernize IT Operations

And Accelerate App Performance 

With Container Technology 

And no one saw Kubernetes coming, or maybe OpenShift did. But the reality now is containers suddenly came out of nowhere. Adoption has been there for a while, but it’s never been adopted like it has been now.

Gardner: And Kubernetes is an important part because it helps to prevent complexity and sprawl from getting out of hand. It allows you to have a view over these different disparate deployment environments. Tell us why Kubernetes is an accelerant to container adoption.

Christiansen: Kubernetes fundamentally is an orchestration platform that allows you to take containers and put them in the right place, manage them, shut them down when they are not working or having behavior problems. We need a place to orchestrate, and that’s what Kubernetes is meant to be.
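
To make that orchestration role concrete, here is a hedged sketch of a Kubernetes Deployment declared in Python and submitted with the kubernetes client: you state how many replicas you want, how to health-check them, and how much of a host each copy may use, and Kubernetes handles the placing and restarting. It assumes the kubernetes Python package is installed and a kubeconfig points at a cluster; the image name is hypothetical.

```python
# Hedged sketch: declare replicas, a liveness probe, and resource limits, then
# let Kubernetes place and restart the containers to match. Image is hypothetical.
from kubernetes import client, config

deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "edge-inference"},
    "spec": {
        "replicas": 3,                                   # keep three copies running
        "selector": {"matchLabels": {"app": "edge-inference"}},
        "template": {
            "metadata": {"labels": {"app": "edge-inference"}},
            "spec": {"containers": [{
                "name": "edge-inference",
                "image": "registry.example.com/edge-inference:1.4",           # hypothetical image
                "resources": {"limits": {"cpu": "250m", "memory": "128Mi"}},   # fits a small host
                "livenessProbe": {                        # restart a misbehaving container
                    "httpGet": {"path": "/healthz", "port": 8080},
                    "periodSeconds": 10},
            }]},
        },
    },
}

config.load_kube_config()
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```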

It basically displaced a number of other private, what we call opinionated, orchestrators. There were a number of them out there being worked on. And then Google released Kubernetes, which was fundamentally the platform they had been running their world on for 10 years. They are doing for this ecosystem what the Android system did for cell phones. They released and open sourced the operating model, which is an interesting move.

Gardner: It’s very rapidly become the orchestration mechanism to rule them all. Has Kubernetes become a de facto industry standard?

Christiansen: It really has. We have not seen a technology platform gain acceptance in the ecosystem as fast as this. I personally in all my years or decades have not seen something come up this fast.

Gardner: I agree, and the fact that everyone is all-in is very powerful. How far will this orchestration model go? Beyond containers, perhaps into other aspects of deployment infrastructure management?

Christiansen: Let’s examine the next step. It could be a code snippet. Or if you are familiar with functions, or with Amazon Web Services (AWS) Lambda [serverless functions], you are talking about that. That would be the next step of how orchestration – it allows anyone to just run code only. I don’t care about a container. I don’t care about the networking. I don’t care about any of that stuff — just execute my code.

Gardner: So functions-as-a-service (FaaS) and serverless get people used to that idea. Maybe you don’t want to buy into one particular serverless approach versus another, but we are starting to see that idea of much more flexibility in how services can be configured and deployed — not based on a platform, an OS, or even a cloud.

Containers’ tipping point 

Christiansen: Yes, you nailed it. If you think about the stepping stones to get across this space, it’s a dynamic fluid space. Containers are becoming, I bet, the next level of abstraction and virtualization that’s necessary for the evolution of application development to go faster. That’s a given, I think, right now.

Malcolm Gladwell talked about tipping points. Well, I think we have hit the tipping point on containers. This is going to happen. It may take a while before the ecosystem changes over. If you put the strategy together, if you are a decision-maker, you are making decisions about what to do. Now your container strategy matters. It matters now, not a year from now, not two years from now. It matters now.

Gardner: The flexibility that containers and Kubernetes give us refocuses the emphasis of how to do IT. It means that you are going to be thinking about management and you are going to be thinking about economics and automation. As such, you are thinking at a higher abstraction than just developing and deploying applications and then attaching and integrating data points to them.

Learn More About 

Cloud and Container Trends 

How does this higher abstraction of managing a hybrid estate benefit organizations when they are released from the earlier dependencies?

Christiansen: That’s a great question. I believe we are moving into a state where you cannot run platforms with manual systems, or ticketing-based systems. That type of thing. You cannot do that, right? We have so many systems and so much interoperability between the systems, that there has to be some sort of anatomic or artificial intelligence (AI)-based platforms that are going to make the workflow move for you.

There will still be someone to make a decision. Let’s say a ticket goes through, and it says, “Hey, there is the problem.” Someone approves it, and then a workflow will naturally happen behind that. These are the evolutions, and containers allow you to continue to remove the pieces that cause you problems.

Right now we have big, hairy IT operations problems. We have a hard time nailing down where they are. The more you break these apart and look at the hotspots, the areas that have problems, the more specifically you can focus on solving them. Then you can start using some intelligence behind it, some actual workload intelligence, to make that happen.

Gardner: The good news is we have lots of new flexibility, with microservices, very discrete approaches to combining them into services, workflows, and processes. The bad news is we have all that flexibility across all of those variables.

Auspiciously we are also seeing a lot of interest and emphasis in what's called AIOps, AI-driven IT operations. How do we now rationalize what containers do, but keep that from getting out of hand? Can we start using programmatic and algorithmic approaches? What are you seeing when we combine AIOps and containers?

Simplify your stack 

Christiansen: That’s like what happens when I mix oranges with apples. It’s kind of an interesting dilemma. But I can see why people want to say, “Hey, how does my container strategy help me manage my asset base? How do I get to a better outcome?”

One reason is because these approaches enable you to collapse the stack. When you take complexity out of your stack — meaning, what are the layers in your stack that are necessary to operate in a given outcome — you then have the ability to remove complexity.

We are talking about dropping the containers all the way to bare metal. And if you drop to bare metal, you have taken not only cost out of the equation, you have taken some complexity out of the equation. You have taken operational dependencies out of the equation, and you start reducing those. So that's number one.

Number two is you have to have some sort of service mesh across this thing. With containers come a whole bunch of little hotspots all over the place, and a service manager must know where those hotspots are. If you don't have an operating model that's intelligent enough to know where they are (that's the service mesh, which connects all of these things), you are not going to have autonomous behaviors on top of that to help you.

So yes, you can connect the dots between your containers to get autonomous behavior, but you have to have that layer in between that tells you where the problems are — and then you have intelligence above that which decides how to handle them.

Gardner: We have been talking, Robert, at an abstract level. Let’s go a bit more to the concrete. Are there use cases examples that HPE is working through with customers that illustrate the points we have been making around containers and Kubernetes?

Practice, not permanence 

Christiansen: I just met with the team, and they are working with a client right now, a very large Fortune 500 company, where they are applying the exact functions that I just talked to you about.

First thing that needed to happen is a development environment where they are actually developing code in a solid continuous integration, continuous development, and DevOps practice. We use the word “practice,” it’s like medicine and law. It’s a practice, nothing is permanent.

So we put that in place for them. The second thing is they're trying to release code at speed. This is the first goal. Once you start releasing code at speed, with containers as the mechanism by which you are doing it, then you start saying, "Okay, now the platform I'm dropping onto is going through development, quality assurance, integration, and then finally into production."

By the time you get to production, you need to know how you’re managing your platform. So it’s a client evolution. We are in that process right now — from end-to-end — to take one of their applications that is net new and another application that’s a refactor and run them both through this channel.

More Enterprises Are Choosing 

Containers — Here’s Why 

Now, most clients we engage with are in that early stage. They're doing proofs of concept. There are a couple of companies out there with very large Kubernetes installations in production, but they are born-in-the-cloud companies. And those companies have an advantage: they can build the whole thing I just talked about from scratch. But 90 percent of the people out there today, the ones with what I call the crown jewels of applications, have to deal with legacy IT. They have to deal with what's going on today; their data sources have gravity, and they still have to deal with that existing infrastructure.

Those are the people I really care about. I want to give them a solution that goes to that agile place. That’s what we’re doing with our clients today, getting them off the ground, getting them into a container environment that works.

Gardner: How can we take legacy applications and deployments and then use containers as a way of giving them new life — but not lifting and shifting?

Improve past, future investments 

Christiansen: Organizations have to make some key decisions on investment. This is all about where the organization is in its investment lifecycle. Which ones are they going to make bets on, and which ones are they going to build new?

We are involved with clients going through that process. What we say to them is, “Hey, there is a set of applications you are just not going to touch. They are done. The best we can hope for is put the whole darn thing in a container, leave it alone, and then we can manage it better.” That’s about cost, about economics, about performance, that’s it. There are no release cycles, nothing like that.

The next set are legacy applications where I can do something to help. Maybe I can take a big, beefy application and break it into four parts, make a service group out of it. That's called a refactor. That gives them a little bit of agility because they can release code for each segment on its own.

And then there are the applications that we are going to rewrite. These are subject to what we call app entanglement. They may have multiple dependencies on other applications to give them data feeds, to give them connections that are probably services. They have API calls, or direct calls right into them, that allow them to do this and that. There are all sorts of middleware pieces in between, and it's just a gnarly mess.

If you try to move those applications to public cloud and try to refactor them there, you introduce what I call data gravity issues or latency issues. You have to replicate data. Now you have all sorts of cost problems and governance problems. It just doesn’t work.

You have to keep those applications in the datacenters. You have to give them a platform to do it there. And if you can’t give it to them there, you have a real problem. What we try to do is break those applications into part in ways where the teams can work in cloud-native methodologies — like they are doing in public cloud — but they are doing it on-premises. That’s the best way to get it done.

Gardner: And so the decision about on-premises or in a cloud, or to what degree a hybrid relationship exists, isn’t so much dependent upon cost or ease of development. We are now rationalizing this on the best way to leverage services, use them together, and in doing so, we attain backward compatibility – and future-proof it, too.

Christiansen: Yes, you are really nailing it, Dana. The key is thinking about where the app appropriately needs to live. And you have laws of physics to deal with, you have legacy issues to deal with, and you have cultural issues to deal with. And then you have all sorts of data issues, what we call data nationalization. That means dealing with GDPR and with where all of this stuff is going to live. And then you have edge issues. And this goes on and on, and on, and on.

So getting that right — or at least having the flexibility to get it right — is a super important aspect. It’s not the same for every company.

Gardner: We have been addressing containers mostly through an applications discussion. Is there a parallel discussion about data? Can we begin thinking about data as a service, and not necessarily in a large traditional silo database, but perhaps more granular, more as a call, as an API? What is the data lifecycle and DataOps implications of containers?

Everything as a service

Christiansen: Well, here is what I call the Achilles heel of the container world: It doesn't handle persistent data well at all. One of the things that HPE has been focused on is providing stateful, legacy, highly persistent data stores that live in containers. That is a unique piece of intellectual property that we offer, and I think it is really groundbreaking for the industry.

Kubernetes is a stateless container platform, which is appropriate for cloud-native microservices and those fast and agile motions. But the legacy IT world is stateful, with highly persistent data stores, and those don't work well in that stateless environment.

Through the work we’ve been doing over the last several years, specifically with an HPE-acquired company called BlueData, we’ve been able to solve that legacy problem. We put that platform into the AI, machine learning (ML), and big data areas first to really flesh that all out. We are joining those two systems together and offering a platform that is going to be really useful out in marketplace.

Gardner: Another aspect of this is the economics. One of the big pushes from HPE these days is everything as a service — being able to consume and pay for things as you want regardless of the deployment model, whether it's on-premises, hybrid, in public clouds, or multicloud. How does the container model we have been describing align with the idea of as-a-service from an economic perspective?

Christiansen: As-a-service is really how I want to get my services when I need them. And I only want to pay for what I need at the time I need it. I don’t want to overpay for it when I don’t use it. I don’t want to be stuck without something when I do need it.

Top Trends — Stateful Apps Are Key 

To Enterprise Container Strategy 

Solving that problem in various places across the ecosystem is a different equation; it comes up differently in each one. Some clients want to buy the gear outright; they want to capitalize it and just put it on the books. So we have to deal with that.

You have other people who say, "Hey, I'm willing to take on this hardware burden as a financier, and you can rent it from me." You can consume all the pieces you need, and then you've got the cloud providers offering as-a-service. But more importantly, let's go back to how containers allow you much finer granularity about what it is you're buying. If you want to deploy an app, maybe you are paying for that app to be deployed, as opposed to paying for the container. But the container is still the encapsulation of the app and of where you want it to run.

So you still have to get to what I call the basic currency. The basic currency is a container. Where does that container run? It has to run either on premises, in the public cloud, or on the edge. If people are going to agree on that basic currency model, then we can agree on an economic model.
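To make the "basic currency" idea concrete, here is a toy metering sketch in which container hours are the unit and only the rate varies by where the container happened to run. The rates and records are invented for illustration; they are not HPE GreenLake pricing or any real consumption model:

```python
# Toy chargeback model: the container hour is the common currency across
# on-premises, public cloud, and edge placements. All numbers are made up.
from datetime import timedelta

RATE_PER_HOUR = {"on-prem": 0.04, "public-cloud": 0.09, "edge": 0.06}

runs = [
    ("checkout", "on-prem", timedelta(hours=30)),
    ("checkout", "public-cloud", timedelta(hours=6)),
    ("inference", "edge", timedelta(hours=12)),
]

def invoice(records):
    total = 0.0
    for name, placement, duration in records:
        hours = duration.total_seconds() / 3600
        cost = hours * RATE_PER_HOUR[placement]
        total += cost
        print(f"{name:10s} {placement:12s} {hours:6.1f} h  ${cost:6.2f}")
    print(f"{'total':31s} ${total:6.2f}")

if __name__ == "__main__":
    invoice(runs)
```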

Gardner: Even if people are risk-averse, I don't think they're in trouble if they make some big bets on containers as their new currency and build capabilities and skills around both containers and Kubernetes. Recognizing that this is not a big leap of faith, what do you advise people to do right now to get ready?

Christiansen: Get your arms around the Kubernetes installations you already have, because you know they’re happening. This is just like when the public cloud was arriving and there was shadow IT going on. You know it’s happening; you know people are writing code, and they’re pushing it into a Kubernetes cluster. They’re not talking to the central IT people about how to manage or run it — or even if it’s going to be something they can handle. So you’ve got to get a hold of them first.

Teamwork works

Christiansen: Go find your hand raisers. That's what I always say. Who are the volunteers? Who has their hands in the air? Openly say, "Hey, come in. I'm forming a containers, Kubernetes, and new development model team." Give it a name. Call it the Michael Jordan team of containers. I don't care. But go get them. Go find out who they are, right?

And then form and coalesce that team around that knowledge base. Learn how they think, and find the best of what is going on inside your own culture. This is about culture, culture, culture, right? And do it in public so people can see it. This is why people got such black eyes when they did their first work in public cloud: They snuck off and did it, and then they were really reluctant to say anything about it. Bring it out in the open. Let's start talking about it.

The next thing is to look for applications that you are either going to build net new or going to refactor. Then decide on your container strategy around that Kubernetes platform, and work it as a program. Be open and transparent about what you're doing. Make sure you're funded.

And most importantly, above all things, know where your data lives. If your data lives on-premises and that application you’re talking about is going to need data, you’re going to need to have an on-premises solution for containers, specifically those that handle legacy and public cloud at the same time. If that data decides it needs to go to public cloud, you can always move it there.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

HPE strategist Mark Linesch on the surging role of containers in advancing the hybrid IT estate

Openness, flexibility, and speed to distributed deployments have been top drivers of the steady growth of container-based solutions. Now, IT operators are looking to increase automation, built-in intelligence, and robust management as they seek container-enabled hybrid cloud and multicloud approaches for data and workloads.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

This next edition of the BriefingsDirect Voice of the Innovator podcast series examines the rapidly evolving containers innovation landscape with Mark Linesch, Vice President of Technology Strategy in the CTO Office and Hewlett Packard Labs at Hewlett Packard Enterprise (HPE). The composability strategies interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Let’s look at the state of the industry around containers. What are the top drivers for containers adoption now that the technology has matured?

Linesch: The history of computing, as far back as I can remember, has been about abstraction; abstraction of the infrastructure and then a separation of concern between the infrastructure and the applications.

It used to be all bare metal, and then about a decade ago we went on the journey to virtualization. And virtualization is great; it's an abstraction that allows for a certain amount of agility. But it's fairly expensive because you are virtualizing the entire infrastructure, if you will, and dragging along a unique operating system (OS) each time you do that.

So the industry for the last few years has been saying, “Well, what’s next, what’s after virtualization?” And clearly things like containerization are starting to catch hold.

Why now? Well, because we are living in a hybrid cloud world, and we are moving pretty aggressively toward a more distributed edge-to-cloud world. We are going to be computing, analyzing, and driving intelligence in all of our edges — and all of our clouds.

Things such as performance- and developer-aware capabilities, DevOps, the ability to run an application in a private cloud and then move it to a public cloud, and being able to drive applications to edge environments on a harsh factory floor — these are all aspects of this new distributed computing environment that we are entering into. It’s a hybrid estate, if you will.

Containers have advantages for a lot of different constituents in this hybrid estate world. First and foremost are the developers. If you think about development and developers in general, they have moved from the older, monolithic and waterfall-oriented approaches to much more agile and continuous integration and continuous delivery models.

And containers give developers a predictable environment wherein they can couple not only the application but the application dependencies, the libraries, and all that they need to run an application throughout the DevOps lifecycle. That means from development through test, production, and delivery.

Containers carry and encapsulate all of the app’s requirements to develop, run, test, and scale. With bare metal or virtualization, as the app moved through the DevOps cycle, I had to worry about the OS dependencies and the type of platforms I was running that pipeline on.

Developers’ package deal

A key thing for developers is they can package the application and all the dependencies together into a distinct manifest. It can be version-controlled and easily replicated. And so the developer can debug and diagnose across different environments and save an enormous amount of time. So developers are the first beneficiaries, if you will, of this maturing containerized environment.
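As a small, hedged example of that packaging step, the sketch below uses the Docker SDK for Python to turn a hypothetical ./myapp directory (its Dockerfile and pinned dependency list being the "manifest") into a versioned image; the paths and tags are invented:

```python
# Minimal sketch of building a versioned, replicable image from an app
# directory that contains its Dockerfile, code, and pinned dependencies.
# Assumes the Docker SDK for Python and a local Docker daemon.
import docker

def build(version: str):
    client = docker.from_env()
    image, logs = client.images.build(
        path="./myapp",          # illustrative directory holding the manifest
        tag=f"myapp:{version}",  # the version-controlled artifact developers share
        rm=True,                 # clean up intermediate containers
    )
    for entry in logs:
        if "stream" in entry:
            print(entry["stream"], end="")
    return image

if __name__ == "__main__":
    build("1.4.2")
```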

How to Modernize Your IT 

With Container Technology 

But next are the IT operations folks because they now have a good separation of concern. They don’t have to worry about reconfiguring and patching all these kinds of things when they get a hand-off from developers into a production environment. That capability is fundamentally encapsulated for them, and so they have an easier time operating.

And increasingly in this more hybrid distributed edge-to-cloud world, I can run those containers virtually anywhere. I can run them at the edge, in a public cloud, in a private cloud, and I can move those applications quickly without all of these prior dependencies that virtualization or bare metal required. It contains an entire runtime environment and application, plus all the dependencies, all the libraries, and the like.

The third area that’s interesting for containers is around isolation. Containers virtualize the CPU, the memory, storage network resources – and they do that at the OS level. So they use resources much more efficiently for that reason.

Unlike virtualization, which includes your entire OS as well as the application, containers run on a single OS. Each container shares the OS kernel with other containers, so it’s lightweight, uses much fewer resources, and spins up almost instantly — in seconds versus virtual machines (VMs) that spin up in minutes.
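A quick, unscientific way to see that spin-up difference for yourself, assuming the Docker SDK for Python, a local Docker daemon, and network access to pull the public alpine image:

```python
# Rough timing of a full container round trip: create, run a trivial command,
# exit, and clean up. The image is pulled beforehand so only startup is timed.
import time
import docker

client = docker.from_env()
client.images.pull("alpine", tag="3.19")   # keep the pull out of the timed section

start = time.perf_counter()
client.containers.run("alpine:3.19", "true", remove=True)
print(f"container round trip: {time.perf_counter() - start:.2f} s")
```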

When you think about this fast-paced, DevOps world we live in — this increasingly distributed hybrid estate from the many edges and many clouds we compute and analyze data in — that’s why containers are showing quite a bit of popularity. It’s because of the business benefits, the technical benefits, the development benefits, and the operations benefits.

Gardner: It’s been fascinating for me to see the portability and fit-for-purpose containerization benefits, and being able to pass those along a DevOps continuum. But one of the things that we saw with virtualization was that too much of a good thing spun out of control. There was sprawl, lack of insight and management, and eventually waste.

How do we head that off with containers? How do containers become manageable across that entire hybrid estate?

Setting the standard 

Linesch: One way is standardizing the container formats, and that has been coming along fairly nicely. There is an initiative called the Open Container Initiative, part of the Linux Foundation, which develops the industry standard so that the container formats, and the runtime software associated with them, are standardized across the different platforms. That helps a lot.

Number two is using a standard deployment option. And the one that seems to be gripping the industry is Kubernetes. Kubernetes is an open source capability that provides mechanisms for deploying, maintaining, and scaling containerized applications. Now, the combination of the standard formats from a runtime perspective with the ability to manage them through capabilities like Mesosphere or Kubernetes has provided the tooling needed to move this forward.
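For readers who have not seen it, that "standard deployment option" boils down to handing Kubernetes a declarative description and letting it keep the containers running and scaled. A minimal sketch with the official Python client, assuming a cluster reachable from ~/.kube/config and an invented image name:

```python
# Declarative deployment: ask for three replicas of an illustrative image and
# let Kubernetes maintain them, replacing any container that fails.
from kubernetes import client, config

deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "web"},
    "spec": {
        "replicas": 3,
        "selector": {"matchLabels": {"app": "web"}},
        "template": {
            "metadata": {"labels": {"app": "web"}},
            "spec": {"containers": [{"name": "web", "image": "myorg/web:1.0"}]},
        },
    },
}

if __name__ == "__main__":
    config.load_kube_config()
    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```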

Gardner: And the timing couldn’t be better, because as people are now focused on as-a-service for so much — whether it’s an application, infrastructure, and increasingly, entire data centers — we can focus on the business benefits and not the underlying technology. No one really cares whether it’s running in a virtualized environment, on bare metal, or in a container — as long as you are getting the business benefits.

Linesch: You mentioned that nobody really cares what they are running on, and I would postulate that they shouldn’t care. In other words, developers should develop, operators should operate. The first business benefit is the enormous agility that developers get and that IT operators get in utilizing standard containerized environments.

How to Extend the Cloud Experience 

Across Your Enterprise 

Not only do they get an operations benefit (faster development, lower cost to operate, and those types of things), but they also use fewer resources. Containers, because of their shared and abstracted environment, take far fewer resources out of a server and storage complex, out of a cluster, so you can run your applications faster, with fewer resources, and at lower total cost.

This is very important when you think about IT composability in general because the combination of containerized environments with things like composable infrastructure provides the flexibility and agility to meet the needs of customers in a very time sensitive and very agile way.

Gardner: How are IT operators making a tag team of composability and containerization? Are they forming a whole greater than the sum of the parts? How do you see these two spurring innovation?

Linesch: I have managed some of our R&D centers. These are usually 50,000-square-foot data centers where all of our developers and hardware and software writers are off doing great work.

And we did some interesting things a few years ago. We were fully virtualized, a kind of private cloud environment, so we could deliver infrastructure-as-a-service (IaaS) resources to these developers. But as hybrid cloud hit and became more of a mature and known pattern, our developers were saying, “Look, I need to spin this stuff up more quickly. I need to be able to run through my development-test pipeline more effectively.”

And containers-as-a-service was just a super hit for these guys. They are under pressure every day to develop, build, and run these applications with the right security, portability, performance, and stability. The containerized systems — and being able to quickly spin up a container, to do work, package that all, and then move it through their pipelines — became very, very important.

From an infrastructure operations perspective, it provides a perfect marriage between the developers and the operators. The operators can use composition and things like our HPE Synergy platform and our HPE OneView tooling to quickly build container image templates. These then allow those developers to populate that containers-as-a-service infrastructure with the work that they do — and do that very quickly.

Gardner: Another hot topic these days is understanding how a continuum will evolve between the edge deployments and a core cloud, or hybrid cloud environment. How do containers help in that regard? How is there a core-to-cloud and/or core-to-cloud-to-edge benefit when containers are used?

Gaining an edge 

Linesch: I mentioned that we are moving to a much more distributed computing environment, where we are going to be injecting intelligence and processing through all of our places, people, and things. And so when you think about that type of an environment, you are saying, “Well, I’m going to develop an application. That application may require more microservices or more modular architecture. It may require that I have some machine learning (ML) or some deep learning analytics as part of that application. And it may then need to be provisioned to 40 — or 400 — different sites from a geographic perspective.”

When you think about edge-to-cloud, you might have a set of factories in different parts of the United States. For example, you may have 10 factories all seeking to develop inferencing and analyzed actions on some type of an industrial process. It might be video cameras attached to an assembly line looking for defects and ingesting data and analyzing that data right there, and then taking some type of a remediation action.

How to Optimize Your IT Operations 

With Composable Infrastructure 

And so as we think about this edge-to-cloud dance, one of the things that’s critical there is continuous integration and continuous delivery — of being able to develop these applications and the artificial intelligence (AI) models associated with analyzing the data on an ongoing basis. The AI models, quite frankly, drift and they need to be updated periodically. And so continuous integration and continuous delivery types of methodologies are becoming very important.

Then, how do I package up all of those application bits, analytics bits, and ML bits? How do I provision that to those 10 factories? How do I do that in a very fast and fluid way?

That’s where containers really shine. They will give you bare-metal performance. They are packaged and portable – and that really lends itself to the fast-paced delivery and delivery cycles required for these kinds of intelligent edge and Internet of Things (IoT) operations.

Gardner: We have heard a lot about AIOps and injecting more intelligence into more aspects of IT infrastructure, particularly at the June HPE Discover conference. But we seem to be focusing on the gathering of the data and the analysis of the data, and not so much on what you do with that analysis — the execution based on the inferences.

It seems to me that containers provide a strong means when it comes to being able to exploit recommendations from an AI engine and then doing something — whether to deploy, to migrate, to port.

Am I off on some rough tangent? Or is there something about containers — and being able to deftly execute on what the intelligence provides — that might also be of benefit?

Linesch: At the edge, you are talking about many applications where a large amount of data needs to be ingested. It needs to be analyzed, and then a real-time action taken from a predictive maintenance, classification, or remediation perspective.

And so containers spin up very quickly, and they use very few resources. The whole cycle time of ingesting data, analyzing that data through a container framework, and taking some action back to the thing you are analyzing is made a whole lot easier and a whole lot more performant, with fewer resources, when you use containers.
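The shape of that loop is simple enough to sketch; below, a fake sensor and a fake threshold model stand in for the real camera feed and inference engine, and the whole loop is the kind of workload you would package into a container at the edge:

```python
# Toy ingest -> analyze -> act cycle. Everything here is a stand-in: the
# "sensor" is a random number generator and the "model" is a tolerance check.
import random
import time

def read_sensor() -> float:
    """Stand-in for ingesting a reading from a line-side device."""
    return random.gauss(mu=50.0, sigma=10.0)

def analyze(value: float) -> bool:
    """Stand-in for the inference step: flag readings outside tolerance."""
    return abs(value - 50.0) > 15.0

def act(value: float) -> None:
    """Stand-in for the remediation signal sent back to the equipment."""
    print(f"defect suspected (reading={value:.1f}); signalling line controller")

if __name__ == "__main__":
    for _ in range(20):
        reading = read_sensor()
        if analyze(reading):
            act(reading)
        time.sleep(0.1)  # real deployments run at the device's data rate
```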

Now, virtualization still has a very solid set of constituents, both at the hybrid cloud and at the intelligent edge. But we are seeing the benefits of containers really shine in these more distributed edge-to-cloud environments.

Gardner: Mark, we have chunked this out among the developer to operations and deployment, or DevOps implications. And we have talked about the edge and cloud.

But what about at the larger abstraction of impacting the IT organization? Is there a benefit for containerization where IT is resource-constrained when it comes to labor and skills? Is there a people, skills, and talent side of this that we haven’t yet tapped into?

Customer microservices support 

Linesch: There definitely is. One of the things that we do at HPE is try to help customers move into these new models like containers, DevOps, and continuous integration and delivery. We offer a set of services that help customers, whether they are medium-sized customers or large customers, to think differently about development of applications. As a result, they are able to become more agile and microservices-oriented.

Microservice-oriented development really lends itself to this idea of containers, and the ability of containers to interact with each other as a full-set application. What you see happening is that you have to have a reason not to use containers now.

How to Simplify and Automate 

Across Your Datacenter 

That’s pretty exciting, quite frankly. It gives us an opportunity to help customers to engage from an education perspective, and from a consulting, integration, and support perspective as they journey through microservices and how to re-architect their applications.

Our customers are moving to a more continuous integration-continuous development approach. And we can show them how to manage and operate these types of environments with high automation and low operational cost.

Gardner: A lot of the innovation we see along the lines of digital transformation at a business level requires taking services and microservices from different deployment models — oftentimes multi-cloud, hybrid cloud, software-as-a-service (SaaS) services, on-premises, bare metal, databases, and so forth.

Are you seeing innovation percolating in that way? If you have any examples, I would love to hear them.

Linesch: I am seeing that. You see that every day when you look at the Internet. It’s a collaboration of different services based on APIs. You collect a set of services for a variety of different things from around these Internet endpoints, and that’s really as-a-service. That’s what it’s all about — the ability to orchestrate all of your applications and collections of service endpoints.

Furthermore, beyond containers, there are new function-based, or serverless, types of computing. These innovators basically say, "Hey, I want to consume a service from someplace, from an HTTP endpoint, and I want to do that very quickly." They are very effectively using service-oriented methodologies and the model of containers.

We are seeing a lot of innovation in these function-as-a-service (FaaS) capabilities that some of the public clouds are now providing. And we are seeing a lot of innovation in the overall operations at scale of these hybrid cloud environments, given the portability of containers.
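From the developer's side, the FaaS shape is just a stateless handler behind an HTTP endpoint, with the platform owning scaling and lifecycle. A self-contained toy version using only the Python standard library (the event fields and port are invented, and a real FaaS platform provides the gateway for you):

```python
# Toy function-as-a-service shape: a stateless handler plus a minimal HTTP
# gateway. On a real platform only handle() would be yours to write.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def handle(event: dict) -> dict:
    """The 'function': no servers to manage, no state, just input to output."""
    name = event.get("name", "world")
    return {"greeting": f"hello, {name}"}

class _Gateway(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or b"{}")
        result = json.dumps(handle(event)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(result)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), _Gateway).serve_forever()
```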

At HPE, we believe the cloud isn’t a place — it’s an experience. The utilization of containers provides a great experience for both the development community and the IT operations community. It truly helps better support the business objectives of the company.

Investing in intelligent innovation

Gardner: Mark, for you personally, as you are looking for technology strategy, how do you approach innovation? Is this something that comes organically, that bubbles up? Or is there a solid process or workflow that gets you to innovation? How do you foster innovation in your own particular way that works?

Linesch: At HPE, we have three big levers that we pull on when we think about innovation.

The first is we can do a lot of organic development — and that’s very important. It involves understanding where we think the industry is going, and trying to get ahead of that. We can then prove that out with proof of concepts and incubation kinds of opportunities with lead customers.

We also, of course, have a lever around inorganic innovation. For example, you saw recently an acquisition by HPE of Cray to turbocharge the next generation of high-performance computing (HPC) and to drive the next generation of exascale computing.

The third area is our partnerships and investments. We have deep collaboration with companies like Docker, for example. They have been a great partner for a number of years, and we have, quite frankly, helped to mature some of that container management technology.

We are an active member of the standards organizations around the containers. Being able to mature the technology with partners like Docker, to get at the business value of some of these big advancements is important. So those are just three ways we innovate.

Longer term, with other HPE core innovations, such as composability and memory-driven computing, we believe that containers are going to be even more important. You will be able to hold the containers in memory-driven computing systems, in either dynamic random-access memory (DRAM) or storage-class memory (SCM).

You will be able to spin them up instantly or spin them down instantly. The composition capabilities that we have will increasingly automate a very significant part of bringing up such systems, of bringing up applications, and really scaling and moving those applications to where they need to be.

One of the principles that we are focused on is moving the compute to the data — as opposed to moving the data to the compute. And the reason for that is when you move the compute to the data, it’s a lot easier, simpler, and faster with less resources.

This next generation of distributed computing, memory-driven computing, and composability is really ripe for what we call containers in microseconds. And we will be able to do that all with the composability tooling we already have.

Gardner: When you get to that point, you’re not just talking about serverless. You’re talking about cloudless. It doesn’t matter where the FaaS is being generated as long as it’s at the right performance level that you require, when you require it. It’s very exciting.

Before we break, I wonder what guidance you have for organizations to become better prepared to exploit containers, particularly in the context of composability and leveraging a hybrid continuum of deployments? What should companies be doing now in order to be getting better prepared to take advantage of containers?

Be prepared, get busy

Linesch: If you are developing applications, think deeply about agile development principles; developing applications with a microservices bent is very, very important.

If you are in IT operations, it’s all about being able to offer bare metal, virtualization, and containers-as-a-service options — depending on the workload and the requirements of the business.

How to Manage Your Complex 

Hybrid Cloud More Effectively 

I recommend that companies not stand on the sidelines but get busy and get to a proof of concept with containers-as-a-service. We have a lot of expertise here at HPE. We have a lot of great partners, such as Docker, and so we are happy to help and engage.

We have quite a bit of on-boarding and helpful services along the journey. And so jump in and crawl, walk, and run through it. There are always some sharp corners on advanced technology, but containers are maturing very quickly. We are here to help our customers on that journey.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

The venerable history of IT systems management meets the new era of AIOps-fueled automation over hybrid and multicloud complexity

The next edition of the BriefingsDirect Voice of the Innovator podcast series explores the latest developments in hybrid IT management.

IT operators have for decades been playing catch-up to managing their systems amid successive waves of heterogeneity, complexity, and changing deployment models. IT management technologies and methods have evolved right along with the challenge, culminating in the capability to optimize and automate workloads to exacting performance and cost requirements.

But now automation is about to get an AIOps boost from new machine learning (ML) and artificial intelligence (AI) capabilities — just as multicloud and edge computing deployments become more common, and more demanding.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Stay with us as we explore the past, present, and future of IT management innovation with a 30-year veteran of IT management, Doug de Werd, Senior Product Manager for Infrastructure Management at Hewlett Packard Enterprise (HPE). The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Management in enterprise IT has for me been about taking heterogeneity and taming it, bringing varied and dynamic systems to a place where people can operate over more, using less. And that’s been a 30-year journey.

Yet heterogeneity these days, Doug, includes so much more than it used to. We’re not just talking about platforms and frameworks – we’re talking about hybrid cloud, multicloud, and many Software as a service (SaaS) applications. It includes working securely across organizational boundaries with partners and integrating business processes in ways that never have happened before.

With all of that new complexity, with an emphasis on intelligent automation, where do you see IT management going next?

Managing management 

Doug de Werd

de Werd

de Werd: Heterogeneity is known by another term, and that's chaos. In trying to move from traditional silos and tools to more agile, flexible things, IT management is all about your applications — human resources and finance, for example — that run the core of your business. There's also software development and other internal work. The models for those can be very different, and trying to manage them all in a single manner is difficult because you have widely varying endpoints.

Gardner: Sounds like we are now about managing the management.

de Werd: Exactly. Trying to figure out how to do that in an efficient and economically feasible way is a big challenge.

Gardner: I have been watching the IT management space for 20-plus years and every time you think you get to the point where you have managed everything that needs to be managed — something new comes along. It’s a continuous journey and process.

But now we are bringing intelligence and automation to the problem. Will we ever get to the point where management becomes subsumed or invisible?

de Werd: You can automate tasks, but you can’t automate people. And you can’t automate internal politics and budgets and things like that. What you do is automate to provide flexibility.

How to Support DevOps, Automation,

And IT Management Initiatives 

But it’s not just the technology, it’s the economics and it’s the people. By putting that all together, it becomes a balancing act to make sure you have the right people in the right places in the right organizations. You can automate, but it’s still within the context of that broader picture.

Gardner: When it comes to IT management, you need a common framework. For HPE, HPE OneView has been core. Where does HPE OneView go from here? How should people think about the technology of management that also helps with those political and economic issues?

de Werd: HPE OneView is just an outstanding core infrastructure management solution, but it’s kind of like a car. You can have a great engine, but you still have to have all the other pieces.

And so part of what we are trying to do with HPE OneView, and we have been very successful at it, is extending that capability out into the other tools that people use. This can be through more traditional routes, such as our Microsoft or VMware partnerships, exposing and bringing HPE OneView functionality into those established tools.

But it also has a lot to do with DevOps and the continuous integration development types of things with Docker, Chef, and Puppet — the whole slew of at least 30 partners we have.

That integration allows the confidence of using HPE OneView as a core engine. All those other pieces can still be customized to do what you need to do — yet you still have that underlying core foundation of HPE OneView.

Gardner: And now with HPE increasingly going to an as-a-service orientation across many products, how does management-as-a-service work?

Creativity in the cloud 

de Werd: It’s an interesting question, because part of management in the traditional sense — where you have a data center full of servers with fault management or break/fix such as a hard-drive failure detection – is you want to be close, you want to have that notification immediately.

As you start going up into the cloud with deployments, you have connectivity issues and latency issues, so it becomes a little bit trickier. As you move up the stack, where the software can be more flexible, you can do more coordination. There the cloud makes a lot of sense.

Management in the cloud can mean a lot of things. If it’s the infrastructure, you tend to want to be closer to the infrastructure, but not exclusively. So, there’s a lot of room for creativity.

Gardner: Speaking of creativity, how do you see people innovating, both within HPE and within your installed base of users? How do people innovate with management now that it's both on- and off-premises? It seems to me that there is an awful lot you could do with management beyond red-light, green-light and seeking out those optimization and efficiency goals. Where is the innovation happening now with IT management?

de Werd: The foundation of it begins with automation, because if you can automate you become repeatable, consistent, and reliable, and those are all good in your data center.

Transform Compute, Storage, and Networking

Into Software-Defied Infrastructure 

You can free up your IT staff to do other things. The truth is if you can do that reliably, you can spend more time innovating and looking at your problems from a different angle. You gain the confidence that the automation is giving you.

Automation drives creativity in a lot of different ways. You can be faster to market, have quicker releases, those types of things. I think automation is the key.

Gardner: Any examples? I know sometimes you can’t name customers, but can you think of instances where people are innovating with management in ways that would illustrate its potential?

Automation innovation 

de Werd: There’s a large biotech genome sequencing company, an IT group that is very innovative. They can change their configuration on the fly based on what they want to do. They can flex their capacity up and down based on a task — how much compute and storage they need. They have a very flexible way of doing that. They have it all automated, all scripted. They can turn on a dime, even as a very large IT organization.

And they have had some pretty impressive ways of repurposing their IT. Today we are doing X and tonight we are doing Y. They can repurpose that literally in minutes — versus days for traditional tasks.

Gardner: Are your customers also innovating in ways that allow them to get a common view across the entire lifecycle of IT? I’m thinking from requirements, through development, deployment, test, and continuous redeployment.

de Werd: Yes, they can string all of these processes together using different partner tools, yet at the core they use HPE OneView and HPE Synergy underneath the covers to provide that real, raw engine.

By using the HPE partner ecosystem integrated with HPE OneView, they have that visibility. Then they can get into things like Docker Swarm. It may not be HPE OneView providing that total visibility. At the hardware and infrastructure level it is, but because we are feeding into upper-level and broader applications, they can see what’s going on and determine how to adjust to meet the needs across the entire business process.

Gardner: In terms of HPE Synergy and composability, what’s the relationship between composability and IT management? Are people making the whole greater than the sum of the parts with those?

de Werd: They are trying to. I think there is still a learning curve. Traditional IT has been around a long time. It just takes a while to change the mentality, skill sets, and internal politics. It takes a while to get to that point of saying, "Yeah, this is a good way to go."

But once they dip their toes into the water and see the benefits — the power, flexibility, and ease of it — they are like, “Wow, this is really good.” One step leads to the next and pretty soon they are well on their way on their composable journey.

Gardner: We now see more intelligence brought to management products. I am thinking about how HPE InfoSight is being extended across more storage and server products.

How to Eliminate Complex Manual Processes 

And Increase Speed of IT Delivery 

We used to access log feeds from different IT products and servers. Then we had agents and agent-less analysis for IT management. But now we have intelligence as a service, if you will, and new levels of insight. How will HPE OneView evolve with this new level of increasingly pervasive intelligence?

de Werd: HPE InfoSight is a great example. You see it being used in multiple ways: taking the human element out, for instance, or customer advisories coming out that say, "Such-and-such product has a problem," and showing how that affects your other products.

If you are sitting there looking at 1,000 or 5,000 servers in your data center, you're wondering, "How am I affected by this?" There are still a lot of manual spreadsheets out there, and you may find yourself poring through a list.

Today, you have the capability of getting an [intelligent alert] that says, “These are the ones that are affected. Here is what you should do. Do you want us to go fix it right now?” That’s just an example of what you can do.
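As a highly simplified illustration of that advisory matching — this is not HPE InfoSight or its data model, and the inventory rows and advisory are invented:

```python
# Cross-reference an advisory against a server inventory instead of poring
# through a spreadsheet. All records below are fabricated for the example.
inventory = [
    {"name": "dl380-001", "model": "DL380 Gen10", "firmware": "2.30"},
    {"name": "dl380-002", "model": "DL380 Gen10", "firmware": "2.65"},
    {"name": "dl560-001", "model": "DL560 Gen10", "firmware": "2.30"},
]

advisory = {"model": "DL380 Gen10", "affected_below": "2.62",
            "action": "update management firmware"}

def affected(servers, adv):
    # String comparison is fine for this toy version scheme only.
    return [s for s in servers
            if s["model"] == adv["model"] and s["firmware"] < adv["affected_below"]]

if __name__ == "__main__":
    for server in affected(inventory, advisory):
        print(f"{server['name']}: {advisory['action']} (currently {server['firmware']})")
```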

It makes you more efficient. You begin to understand how you are using your resources, where your utilization is, and how you can then optimize that. Depending on how flexible you want to be, you can design your systems to respond to those inputs and automatically flex [deployments] to the places that you want to be.

This leads to autonomous computing. We are not quite there yet, but we are certainly going in that direction. You will be able to respond to different compute, storage, and network requirements and adjust on the fly. There will also be self-healing and self-morphing into a continuous optimization model.

Gardner: And, of course, that is a big challenge these days … hybrid cloud, hybrid IT, and deploying across on-premises cloud, public cloud, and multicloud models. People know where they want to go with that, but they don’t know how to get there.

How does modern IT management help them achieve what you’ve described across an increasingly hybrid environment?

Manage from the cloud down 

de Werd: They need to understand what their goals are first. Just running virtual machines (VMs) in the cloud isn’t really where they want to be. That was the initial thing. There are economic considerations involved in the cloud, CAPEX and OPEX arguments.

Simply moving your infrastructure from on-premises up into the cloud isn't going to get you where you really need to be. You need to look at it from a cloud-native-application perspective, where you are using microservices, containers, and cloud-enabled programming languages — your Javas and .NETs and all the other stateless types of things — all of which give you new flexibility to flex performance-wise.

From the management side, you have to look at different ways to do your development and different ways to do delivery. That’s where the management comes in. To do DevOps and exploit the DevOps tools, you have to flip the way you are thinking — to go from the cloud down.

Cloud application development on-premises, that’s one of the great things about containers and cloud-native, stateless types of applications. There are no hardware dependencies, so you can develop the apps and services on-premises, and then run them in the cloud, run them on-premises, and/or use your hybrid cloud vendor’s capabilities to burst up into a cloud if you need it. That’s the joy of having those types of applications. They can run anywhere. They are not dependent on anything — on any particular underlying operating system.

But you have to shift and get into that development mode. And the automation helps you get there, and then helps you respond quickly once you do.

Gardner: Now that hybrid deployment continuum extends to the edge. There will be increasing data analytics, measurement, and making deployment changes dynamically from that analysis at the edge.

It seems to me that the way you have designed and architected HPE IT management is ready-made for such extensibility out to the edge. You could have systems run there that can integrate as needed, when appropriate, with a core cloud. Tell me how management as you have architected it over the years helps manage the edge, too.

de Werd: Businesses need to move their processing further out to the edge, and gain the instant response, instant gratification. You can’t wait to have an input analyzed on the edge, to have it go all the way back to a data source or all the way up to a cloud. You want to have the processing further and further toward the edge so you can get that instantaneous response that customers are coming to expect.

But again, being able to automate how to do that, and having the flexibility to respond to differing workloads and moving those toward the edge, I think, is key to getting there.

Gardner: And Doug, for you, personally, do you have some takeaways from your years of experience about innovation and how to make innovation a part of your daily routine?

de Werd: One of the big impacts on the team that I work with is in our quality assurance (QA) testing. It’s a very complex thing to test various configurations; that’s a lot of work. In the old days, we had to manually reconfigure things. Now, as we use an Agile development process, testing is a continuous part of it.

We can now respond very quickly and keep up with the Agile process. It used to be that testing was always the tail-end and the longest thing. Development testing took forever. Now because we can automate that, it just makes that part of the process easier, and it has taken a lot of stress off of the teams. We are now much quicker and nimbler in responses, and it keeps people happy, too.

How to Get Simple, Automated Management 

Of Your Hybrid Infrastructure 

Gardner: As we close out, looking to the future, where do you see management going, particularly how to innovate using management techniques, tools, and processes? Where is the next big green light coming from?

Set higher goals 

de Werd: First, get your house in order in terms of taking advantage of the automation available today. Really think about how not to just use the technology as the end-state. It’s more of a means to get to where you want to be.

Define where your organization wants to be. Where you want to be can have a lot of different aspects; it could be about how the culture evolves, or what you want your customers’ experience to be. Look beyond just, “I want this or that feature.”

Then, design your full IT and development processes. Get to that goal, rather than just saying, “Oh, I have 100 VMs running on a server, isn’t that great?” Well, if it’s not achieving the ultimate goal of what you want, it’s just a technology feat. Don’t use technology just for technology’s sake. Use it to get to the larger goals, and define those goals, and how you are going to get there.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

You may also be interested in:

How the Catalyst UK program seeds the next generations of HPC, AI, and supercomputing

The next BriefingsDirect Voice of the Customer discussion explores a program to expand the variety of CPUs that support supercomputer and artificial intelligence (AI)-intensive workloads.

The Catalyst program in the UK is seeding the advancement of the ARM CPU architecture for high performance computing (HPC) as well as establishing a vibrant software ecosystem around it.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

Stay with us to learn about unlocking new choices and innovation for the next generations of supercomputing with Dr. Eng Lim Goh, Vice President and Chief Technology Officer for HPC and AI at Hewlett Packard Enterprise (HPE), and Professor Mark Parsons, Director of the Edinburgh Parallel Computing Centre (EPCC) at the University of Edinburgh. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Mark, why is there a need now for more variety of CPU architectures for such use cases as HPC, AI, and supercomputing?

Mark Parsons

Parsons

Parsons: In some ways this discussion is a bit odd because we have had huge variety over the years in supercomputing with regard to processors. It’s really only the last five to eight years that we’ve ended up with the majority of supercomputers being built from the Intel x86 architecture.

It’s always good in supercomputing to be on the leading edge of technology and getting more variety in the processor is really important. It is interesting to seek different processor designs for better performance for AI or supercomputing workloads. We want the best type of processors for what we want to do today.

Gardner: What is the Catalyst program? Why did it come about? And how does it help address those issues?

Parsons: The Catalyst UK program is jointly funded by a number of large companies and three universities: The University of Bristol, the University of Leicester, and the University of Edinburgh. It is UK-focused because Arm Holdings is based in the UK, and there is a long history in the UK of exploring new processor technologies.

Through Catalyst, each of the three universities hosts a 4,000-core ARM processor-based system. We are running them as services. At my university, for example, we now have a number of my staff using this system. But we also have external academics using it, and we are gradually opening it up to other users.

Catalyst for change in processor

We want as many people as possible to understand how difficult it will be to port their code to ARM. Or, rather — as we will explore in this podcast — how easy it is.

You only learn by breaking stuff, right? And so, we are going to learn which bits of the software tool chain, for example, need some work. [Such porting is necessary] because ARM predominantly sat in the mobile phone world until recently. The supercomputing and AI world is a different space for the ARM processor to be operating in.

Gardner: Eng Lim, why is this program of interest to HPE? How will it help create new opportunity and performance benchmarks for such uses as AI?

Dr. Eng Lim Goh

Goh

Goh: Mark makes a number of very strong points. First and foremost, we are very keen as a company to broaden the reach of HPC among our customers. If you look at our customer base, a large portion of them come from the commercial HPC sites, the retailers, banks, and across the financial industry. Letting them reach new types of HPC is important and a variety of offerings makes it easier for them.

The second thing is the recent reemergence of more AI applications, which also broadens the user base. There is also a need for greater specialization in certain areas of processor capabilities. We believe in this case, the ARM processor — given the fact that it enables different companies to build innovative variations of the processor – will provide a rich set of new options in the area of AI.

Gardner: What is it, Mark, about the ARM architecture and specifically the Marvell ThunderX2 ARM processor that is so attractive for these types of AI workloads?

Expanding memory for the future 

Parsons: It’s absolutely the case that all numerical computing — AI, supercomputing, and desktop technical computing — is controlled by memory bandwidth. This is about getting data to the processor so the processor core can act on it.

What we see in the ThunderX2 now, as well as in future iterations of this processor, is the strong memory bandwidth capabilities. What people don’t realize is a vast amount of the time, processor cores are just waiting for data. The faster you get the data to the processor, the more compute you are going to get out with that processor. That’s one particular area where the ARM architecture is very strong.
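A crude way to see how dominant memory traffic is, if you want to try it yourself, is a STREAM-style triad: each element costs one multiply-add but drags several array streams through memory, so the wall-clock time is set almost entirely by bandwidth. The sketch below uses NumPy with an arbitrary array size:

```python
# Rough effective-bandwidth estimate with a STREAM-style triad. This is a
# back-of-the-envelope illustration, not a calibrated benchmark; it ignores
# the temporary array NumPy allocates for the right-hand side.
import time
import numpy as np

N = 50_000_000                      # ~400 MB per float64 array; adjust to taste
a = np.zeros(N)
b = np.random.rand(N)
c = np.random.rand(N)

start = time.perf_counter()
a[:] = b + 2.5 * c                  # triad: two reads and one write per element
elapsed = time.perf_counter() - start

bytes_moved = 3 * N * 8             # three float64 streams, at minimum
print(f"effective bandwidth: {bytes_moved / elapsed / 1e9:.1f} GB/s")
```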

Goh: Indeed, memory bandwidth is the key. Not only in supercomputing applications, but especially in machine learning (ML) where the machine is in the early phases of learning, before it does a prediction or makes an inference.

How UK universities

Collaborate with HPE

To Advance ARM-Based Supercomputing 

It has to go through the process of learning, and this learning is a highly data-intensive process. You have to consume massive amounts of historical data and examples in order to tune itself to a model that can make good predictions. So, memory bandwidth is utmost in the training phase of ML systems.

And related to this is the fact that the ARM processor’s core intellectual property is available to many companies to innovate around. More companies therefore recognize they can leverage that intellectual property and build high-memory bandwidth innovations around it. They can come up with a new processor. Such an ability to allow different companies to innovate is very valuable.

Gardner: Eng Lim, does this fit in with the larger HPE drive toward memory-intensive computing in general? Does the ARM processor fit into a larger HPE strategy?

Goh: Absolutely. The ARM processor together with the other processors provide choice and options for HPE’s strategy of being edge-centric, cloud-enabled, and data-driven.

Across that strategy, the commonality is data movement. And as such, the ARM processor allowing different companies to come in to innovate will produce processors that meet the needs of all these various kinds of sectors. We see that as highly valuable and it supports our strategy.

Gardner: Mark, Arm Holdings controls the intellectual property, but there is a budding ecosystem both on the processor design as well as the software that can take advantage of it. Tell us about that ecosystem and why the Catalyst UK program is facilitating a more vibrant ecosystem.

The design-to-build ecosystem 

Parsons: The whole Arm story is very, very interesting. The company grew out of home computing about 30 to 40 years ago. The interesting thing is that, at the end of the day, they are an intellectual property company. Arm Holdings itself doesn't make processors. It designs processors and sells those designs to other people to make.

So, we’ve had this wonderful ecosystem of different companies making their own ARM processors or making them for other people. With the wide variety of different ARM processors in mobile phones, for example, there is no surprise that it’s the most common processor in the world today.

Now, people think that x86 processors rule the roost, but actually they don't. The most common processor you will find is an ARM processor. As a result, there is a whole load of development tools, both from Arm and from the developer community, that support people who want to develop code for these processors.

In the context of Catalyst UK, in talking to Arm, it's quite clear that many of their tools are designed to meet their predominant market today, the mobile phone market. As they move into the higher-end computing space, it's clear we may find cases where the compiler isn't optimized. Certain libraries may be difficult to compile, and things like that. And this is what excites me about the Catalyst program. We are getting to play with leading-edge technology and show that it is easy to use and to do all sorts of interesting stuff with it.

Gardner: And while the ARM CPU is being purpose-focused for high-intensity workloads, we are seeing more applications being brought in, too. How does the porting process of moving apps from x86 to ARM work? How easy or difficult is it? How does the Catalyst UK program help?

Parsons: All three of the universities are porting various applications that they commonly use. At the EPCC, we run the national HPC service for the UK called ARCHER. As part of that we have run national [supercomputing] services since 1994, but as part of the ARCHER service, we decided for the first time to offer many of the common scientific applications as modules.

You can just ask for the module that you want to use. We did this because we saw users compiling their own copies of code; we ended up with multiple copies, some of them identically compiled, others not compiled particularly well.

So, we have a model of offering about 40 codes on ARCHER precompiled, where we try to keep them up to date, patch them, and so on. We have 100 staff at EPCC who look after code. I have asked those staff to get an account on the Catalyst system, take that code across, and spend an afternoon trying to compile it. We already know that some codes just compile and run. Others may have some problems, and it's those that we're passing on to Arm and HPE, saying, “Look, this is what we found out.”

The important thing is that we found there are very few programs [with such problems]. Most code simply recompiles very, very smoothly.
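
The afternoon-per-code exercise Parsons describes is easy to script. Below is a minimal sketch of a porting smoke test in Python; the build commands are hypothetical placeholders, not EPCC's actual ARCHER module recipes.

```python
# Minimal porting smoke test: try to build each code on the ARM system and
# record what fails, so problems can be reported back to Arm and HPE.
# The build commands are hypothetical placeholders, not EPCC's real recipes.
import subprocess

codes = {
    "code_a": ["make", "-C", "code_a", "ARCH=aarch64"],
    "code_b": ["make", "-C", "code_b", "ARCH=aarch64"],
}

report = {}
for name, cmd in codes.items():
    proc = subprocess.run(cmd, capture_output=True, text=True)
    report[name] = "ok" if proc.returncode == 0 else proc.stderr.strip()[:200]

for name, status in report.items():
    print(f"{name}: {status}")
```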

Gardner: How does HPE support that effort, both in terms of its corporate support but also with the IT systems themselves?

ARM’s reach 

Goh: We are very keen about the work that Mark and the Catalyst program are doing. As Mark mentioned, the ARM processor came more from the edge-centric side of our strategy, in mobile phones, for example.

Now we are very keen to see how far these ARM systems can go. We have already shipped a large ARM processor-based supercomputer, called Astra, to the US Department of Energy's Sandia National Laboratories. These efforts are ongoing in the area of HPC applications. We are very keen to see how this processor and the compilers for it work with various HPC applications in the UK and the US.

Gardner: And as we look to the larger addressable market, with the edge and AI being such high-growth markets, it strikes me that supercomputing — something that has been around for decades — is not fully mature. We are entering a whole new era of innovation.

Mark, do you see supercomputing as in its heyday, sunset years, or perhaps even in its infancy?

Parsons: I absolutely think that supercomputing is still in its infancy. There are so many bits in the world around us that we have never even considered trying to model, simulate, or understand on supercomputers. It’s strange because quite often people think that supercomputing has solved everything — and it really hasn’t. I will give you a direct example of that.

A few years ago, a European project I was running won an award for the highest-accuracy simulation of water flowing through a piece of porous rock. It took over a day on the whole of the national service [to run the simulation]. We won a prize for this, and we had only simulated 1 cubic centimeter of rock.

People think supercomputers can solve massive problems — and they can, but the universe and the world are complex. We’ve only scratched the surface of modeling and simulation.

This is an interesting moment in time for AI and supercomputing. For a lot of data analytics, we have at our fingertips for the very first time very, very large amounts of data. It’s very rich data from multiple sources, and supercomputers are getting much better at handling these large data sources.

The reason the whole AI story is really hot now, with lots of people involved, is not actually the AI itself. It's our ability to move data around and to use our data to train AI algorithms. The direct link to supercomputing is that, in our world, we are good at moving large amounts of data around. The synergy now between supercomputing and AI is not to do with supercomputing or AI; it is to do with the data.

Gardner: Eng Lim, how do you see the evolution of supercomputing? Do you agree with Mark that we are only scratching the surface?

Top-down and bottom-up data crunching 

Goh: Yes, absolutely, and it’s an early scratch. It’s still very early. I will give you an example.

Solving games is important for developing methods and strategies for cyber defense. Take the most recent game in which machines have beaten the best human players, the game of Go. It is much more complex than chess in terms of the number of potential combinations. The number of combinations is actually 10^171 if you comprehensively go through all the different combinations of that game.

You know how big that number is? Well, if we took all the computers in the world, all the supercomputers, all of the computers in the data centers of the Internet companies, put them all together and ran them for 100 years, all you could do is about 10^30, which is still very far from 10^171. So, you can see just by this one game example alone that we are very early in that scratch.
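
The scale of that gap is easy to check with back-of-the-envelope arithmetic. The sketch below assumes an aggregate worldwide rate of 10^20 game states evaluated per second, a deliberately generous, made-up figure.

```python
# Back-of-the-envelope check of the Go example: even a generous guess at the
# world's combined computing rate barely dents 10^171 positions.
ASSUMED_WORLD_RATE = 10**20              # assumed states evaluated per second, worldwide
SECONDS_PER_CENTURY = 100 * 365 * 24 * 3600

states_in_a_century = ASSUMED_WORLD_RATE * SECONDS_PER_CENTURY
magnitude = len(str(states_in_a_century)) - 1   # order of magnitude of the result
print(f"States evaluated in 100 years: about 10^{magnitude}")
print(f"Shortfall versus Go: a factor of about 10^{171 - magnitude}")
```

That lands in the neighborhood of the 10^30 figure quoted above, more than 10^140 times short of exhaustively exploring the game.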

A second group of examples relates to new ways that supercomputers are being used. From ML to AI, there is now a new class of applications changing how supercomputers are used. Traditionally, most supercomputers have been used for simulation. That’s what I call top-down modeling. You create your model out of physics equations or formulas and then you run that model on a supercomputer to try and make predictions.

The new way of making predictions uses the ML approach. You do not begin with physics. You begin with a blank model and you keep feeding it data, the outcomes of history and past examples. You keep feeding data into the model, which is written in such a way that for each new piece of data that is fed in, a new prediction is made. If the accuracy is not high, you keep tuning the model. Over time — with thousands, hundreds of thousands, and even millions of examples — the model gets tuned to make good predictions. I call this the bottom-up approach.
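
A one-parameter toy version of that bottom-up loop, written in Python, shows the shape of the process: no physics, just a blank model repeatedly nudged by examples. The data and learning rate are invented for illustration; real training runs at scale are what make memory bandwidth matter.

```python
# Bottom-up modeling in miniature: a blank model tuned against historical
# examples until its predictions improve. Data and settings are invented.
examples = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]  # (input, observed outcome)

weight = 0.0           # the "blank" model: prediction = weight * input
learning_rate = 0.01

for _ in range(1000):                        # many passes over past examples
    for x, observed in examples:
        error = weight * x - observed
        weight -= learning_rate * error * x  # nudge the model toward better predictions

print(f"Tuned weight: {weight:.2f}; prediction for input 5.0: {weight * 5.0:.2f}")
```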

Now we have people applying both approaches. Supercomputers used traditionally in a top-down simulation are also employing the bottom-up ML approach. They can work in tandem to make better and faster predictions.

Supercomputers are therefore now being employed for a new class of applications in combination with the traditional or gold-standard simulations.

Gardner: Mark, are we also seeing a democratization of supercomputing? Can we extend these applications and uses? Is what’s happening now decreasing the cost, increasing the value, and therefore opening these systems up to more types of uses and more problem-solving?

Cloud clears the way for easy access

Parsons: Cloud computing is having a big impact on everything that we do, to be quite honest. We have all of our photos in the cloud, our music in the cloud, et cetera. That's why EPCC got rid of its file server last year. All of the data used to run the organization itself is in the cloud.

The cloud model is great inasmuch as it allows people who don’t want to operate and run a large system 100 percent of the time the ability to access these technologies in ways they have never been able to do before.

The other side of that is that there are fantastic software frameworks now that didn’t exist even five years ago for doing AI. There is so much open source for doing simulations.

It doesn’t mean that an organization like EPCC, which is a supercomputing center, will stop hosting large systems. We are still great aggregators of demand. We will still have the largest computers. But it does mean that, for the first time through the various cloud providers, any company, any small research group and university, has access to the right level of resources that they need in a cost-effective way.

Gardner: Eng Lim, do you have anything more to offer on the value and economics of HPC? Does paying based on use rather than a capital expenditure change the game?

More choices, more innovation 

Goh: Oh, great question. There are some applications and institutions with processes that work very well with the cloud, and there are some applications and processes that don't. That's part of the reason why you embrace both. And, in fact, we at HPE embrace the cloud and we also build on-premises solutions for our customers, like the ones in the Catalyst UK program.

We also have something that is a mix of the two. We call that HPE GreenLake, which is the ability for us to acquire the system the customer needs, while the customer pays per use. This is a software-defined experience built on consumption-based economics.

These are some of the options we put together to allow choice for our customers, because needs and processes vary. Some customers are more CAPEX-oriented in the way they acquire resources, and others are more OPEX-oriented.
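
The CAPEX-versus-consumption choice usually turns on utilization. Here is a toy comparison with made-up numbers; it is not GreenLake pricing, just a way to see where each model tends to win.

```python
# Toy break-even comparison between buying a system outright (CAPEX) and
# paying per use. All figures are illustrative, not HPE GreenLake pricing.
capex_total = 1_000_000                    # assumed purchase + operations cost, USD
pay_per_node_hour = 0.50                   # assumed consumption rate, USD per node-hour
node_hours_available = 100 * 4 * 365 * 24  # 100 nodes over a 4-year term

for utilization in (0.2, 0.5, 0.8):
    consumption_cost = pay_per_node_hour * node_hours_available * utilization
    better = "consumption" if consumption_cost < capex_total else "CAPEX"
    print(f"Utilization {utilization:.0%}: consumption cost ${consumption_cost:,.0f} -> {better} wins")
```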

Gardner: Do you have examples of where some of the fruits of Catalyst, and some of the benefits of the ecosystem approach, have led to applications, use cases, and demonstrated innovation?

Parsons: What we are trying to do is show how easy ARM is to use. We have taken some really powerful, important codes that run every day on our big national services and have simply moved them across to ARM. Users don't really notice, and don't need to notice, that they are running on a different system. It's that boring.

We have picked up one or two problems with code that probably exist in the x86 versions as well, but running on a new processor exposes them more, and we are fixing those. But in general — and this is absolutely the wrong message for an interview — we are proceeding in a very boring way. The reason I say that is that it's really important that this is boring, because if we don't show this is easy, people won't put ARM on their next procurement list. They will think that it's too difficult, that it's going to be too much trouble to move codes across.

One of the aims of Catalyst, and I am joking, is definitely to be boring. And I think at this point in time we are succeeding.

More interestingly, though, another aim of Catalyst is about storage. The ARM systems around the world today still tend to do storage on x86. The storage will be running on Lustre or BeeGFS servers, all sitting on x86 boxes.

We have made a decision to do everything on ARM, if we can. At the moment, we are looking at different storage software on ARM servers. We are looking at Ceph, at Lustre, and at BeeGFS, because unless you have the ecosystem running on ARM as well, people won't think it's as pervasive a solution as x86, or Power, or whatever.

The benefit of being boring 

Goh: Yes, in this case boring is good. Seamless movement of code across different platforms is the key. It's very important for an ecosystem to be successful. It needs to be easy to develop code for, and it needs to be easy to port to. And those things are just as important with our commercial HPC systems for the broader HPC customer base.

In addition to customers writing their own code and compiling it well and easily to ARM, we also want to make it easy for the independent software vendors (ISVs) to join and strengthen this ecosystem.

Parsons: That is one of the key things we intend to do over the next six months. We have good relationships, as does HPE, with many of the big and small ISVs. We want to get them on a new kind of system, let them compile their code, and get some help to do it. It’s really important that we end up with ISV code on ARM, all running successfully.

Gardner: If we are in a necessary, boring period, what will happen when we get to a more exciting stage? Where do you see this potentially going? What are some of the use cases using supercomputers to impact business, commerce, public services, and public health?

Goh: It’s not necessarily boring, but it is brilliantly done. There will be richer choices coming to supercomputing. That’s the key. Supercomputing and HPC need to reach a broader customer base. That’s the goal of our HPC team within HPE.

Over the years, we have increased our reach to the commercial side, such as the financial industry and retailers. Now there is a new opportunity coming with the bottom-up approach of using HPC. Instead of building models out of physics, we train the models with example data. This is a new way of using HPC. We will reach out to even more users.

So, the success of our supercomputing industry lies in getting more users, with high diversity, to come on board.

Gardner: Mark, what are some of the exciting outcomes you anticipate?

Parsons: As we get more experience with ARM, it will become a serious player. If you look around the world today, in Japan, for example, they have a big new ARM-based supercomputer that's going to be similar to the ThunderX2 when it's launched.

I predict that in the next three or four years we are going to see some very significant supercomputers at the ThunderX2 level, built from ARM processors. Based on what I hear, the next generations of these processors will make for a really exciting time.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

HPE and PTC join forces to deliver best manufacturing outcomes from the OT-IT productivity revolution

The next BriefingsDirect Voice of the Customer edge computing trends discussion explores the rapidly evolving confluence of operational technology (OT) and the Internet of Things (IoT).

New advances in data processing, real-time analytics, and platform efficiency have prompted innovative and impactful OT approaches at the edge. We’ll now explore how such data analysis platforms bring manufacturers data-center caliber benefits for real-time insights where they are needed most.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To hear more about the latest capabilities in gaining unprecedented operational insights, we sat down with Riaan Lourens, Vice President of Technology in the Office of the Chief Technology Officer at PTC, and Tripp Partain, Chief Technology Officer of IoT Solutions at Hewlett Packard Enterprise (HPE). The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Riaan, what kinds of new insights are manufacturers seeking into how their operations perform?

Lourens: We are in the midst of the Fourth Industrial Revolution, which is really an extension of the third, in which we used electronics and IT to automate manufacturing. The fourth is the digital revolution, a fusion of technologies and capabilities that blurs the lines between the physical and digital worlds.

With the influx of these technologies, both hardware and software, our customers — and manufacturing as a whole, as well as the discrete process industries — are finding opportunities to either save or make more money. The trend is focused on looking at technology as a business strategy, as opposed to just pure IT operations.

There are a number of examples of how our customers have leveraged technology to drive their business strategy.

Gardner: Are we entering a golden age by combining what OT and IT have matured into over the past couple of decades? If we call this Industrial Revolution 4.0 (I4.0) there must be some kind of major opportunities right now.

Lourens: There are a lot of initiatives out there, whether it's I4.0, Made in China 2025, or the Smart Factory Initiative in the US. By democratizing the process of providing value — be it with cloud capabilities, edge computing, or anything in between — we are inherently providing options for manufacturers to solve problems that they were not able to solve before.

If you look at it from a broader technology standpoint, in the past we had very large, monolith-like deployments of technology. If you look at it from the ISA-95 model, at Level 3 or Level 4, your manufacturing execution system (MES) deployments or large-scale enterprise resource planning (ERP) were very large deployments that took many years. And the return on investment (ROI) the manufacturers saw would potentially pay off only over many years.

The opportunity that exists for manufacturers today, however, allows them to solve problems that they face almost immediately. There is quick time-to-value by leveraging technology that is consumable. Then they can lift and drop and so scale [those new solutions] across the enterprise. That does make this an era the likes of which nobody has seen before.

Gardner: Tripp, do you agree that we are in a golden age here? It seems to me that we are able to both accommodate a great deal of diversity and heterogeneity of the edge, across all sorts of endpoints and sensors, but also bring that into a common-platform approach. We get the best of efficiency and automation.

Partain: There is a combination of two things. One, due to the smartphone evolution over the last 10 years, the types of sensors and chips that have been created to drive that at the consumer level are now at such reasonable price points that you are able to apply them to industrial areas.

To Riaan’s point, the price points of these technologies have gotten really low — but the capabilities are really high. A lot of existing equipment in a manufacturing environment that might have 20 or 30 years of life left can be retrofitted with these sensors and capabilities to give insights and compute capabilities at the edge. The capability to interact in real-time with those sensors provides platforms that didn’t exist even five years ago. That combines with the right software capabilities so that manufacturers and industrials get insights that they never had before into their processes.
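
As a concrete picture of that retrofit idea, here is a hypothetical edge-monitoring loop in Python. The sensor driver, threshold, and polling interval are all assumptions for illustration, not a specific HPE or PTC interface.

```python
# Hypothetical edge loop: poll a retrofit vibration sensor on a legacy machine
# and raise a local alert the moment readings drift out of range.
import random
import time

VIBRATION_LIMIT_MM_S = 7.1    # assumed alarm threshold, mm/s velocity

def read_vibration_mm_s() -> float:
    """Stand-in for a real sensor driver; returns a simulated reading."""
    return random.gauss(4.0, 1.5)

def monitor(samples: int = 10, interval_s: float = 0.1) -> None:
    for _ in range(samples):
        value = read_vibration_mm_s()
        if value > VIBRATION_LIMIT_MM_S:
            print(f"ALERT: vibration {value:.1f} mm/s exceeds {VIBRATION_LIMIT_MM_S} mm/s")
        time.sleep(interval_s)

monitor()
```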

Gardner: How is the partnership between PTC and HPE taking advantage of this new opportunity? It seems you are coming from different vantage points but reinforcing one another. How is the whole greater than the sum of the parts when it comes to the partnership?

Partnership for progress, flexibility

Lourens: For some context, PTC is a software vendor. Over the last 30 years we targeted our efforts at helping manufacturers either engineer software with computer-aided design (CAD) or product lifecycle management (PLM). We have evolved to our growth areas today of IoT solution platforms and augmented reality (AR) capabilities.

The challenge that manufacturers face today is not just a software problem. It requires a robust ecosystem of hardware vendors, software vendors, and solutions partners, such as regional or global systems integrators.

The reason we work very closely with HPE as an alliance partner is because HPE is a leader in the space. HPE has a strong offering of compute capabilities — from very small gateway-level compute all the way through to hybrid technologies and converged infrastructure technologies.

Ultimately our customers need flexible options to deploy software at the right place, at the right time, and throughout any part of their network. We find that HPE is a strong partner on this front.

Gardner: Tripp, not only do we have lower cost and higher capability at the edge, we also have a continuum of hybrid IT, with on-premises micro-datacenters, converged infrastructure, private cloud, and public cloud options to choose from. Why is that also accelerating the benefits for manufacturers? Why is a continuum of hybrid IT — edge to cloud — an important factor?

Partain: That flexibility is required if you look at the industrial environments where these problems are occurring for our joint customers. If you look at any given product line where manufacturing takes place — no two regions are the same and no two factories are the same. Even within a factory, a lot of times, no two production lines are the same.

There is a wide diversity in how manufacturing takes place. You need to be able to meet those challenges with the customers and give them deployment options that fit each of those environments.

It’s interesting. Factories don’t do enterprise IT-like deployments, where every factory takes on new capabilities at the same time. It’s much more balanced in the way that products are made. You have to be able to have that same level of flexibility in how you deploy the solutions, to allow it to be absorbed the same way the factories do all of their other types of processes.

We have seen the need for different levels of IT to match up to the way they are implemented in different types of factories. That flexibility meets them where they are and allows them to get to the value much quicker — and not wait for some huge enterprise rollout, like what Riaan described earlier with ERP systems that take multiple years.

By leveraging new, hybrid, converged, and flexible environments, we allow a single plant to deploy multiple solutions and get results much quicker. We can also still work that into an enterprise-wide deployment — and get a better balance between time and return.

Gardner: Riaan, you earlier mentioned democratization. That jumped out at me. How are we able to take these advances in systems, software, and access and availability of deployments and make that consumable by people who are not data scientists? How are we able to take the results of what the technology does and make it actionable, even using things like AR?

Lourens: As Tripp described, every manufacturing facility is different. There are typically different line configurations, different programmable logic controller (PLC) configurations, different heterogeneous systems — be it legacy IT systems or homegrown systems — so the ability to leverage what is there is inherently important.

From a strategic perspective, PTC has two core platforms; one being our ThingWorx Platform that allows you to source data and information from existing systems that are there, as well as from assets directly via the PLC or by embedding software into machines.

We also have the ability to simplify and contextualize all of that information and make sense of it. We can then drive analytical insights out of the data that we now have access to. Ultimately we can orchestrate with end users in their different personas — be that the maintenance operator, supervisor, or plant manager — enabling and engaging with these different users through AR.
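
To illustrate what contextualizing a raw reading can look like, here is a small Python sketch that joins a bare tag value with asset metadata before anyone sees it. The data structures and roles are invented for illustration, not ThingWorx model definitions.

```python
# Turning a raw PLC tag reading into contextualized information for a persona.
# The metadata below is illustrative, not a ThingWorx model definition.
asset_metadata = {
    "PLC7/TAG42": {"asset": "Injection press 3", "line": "Line B", "unit": "degC", "max": 85},
}

def contextualize(tag: str, value: float) -> dict:
    meta = asset_metadata[tag]
    return {
        "asset": meta["asset"],
        "line": meta["line"],
        "reading": f"{value} {meta['unit']}",
        "status": "over limit" if value > meta["max"] else "normal",
    }

# A maintenance operator sees an asset name and a status, not a raw tag ID.
print(contextualize("PLC7/TAG42", 91.0))
```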

Four capabilities for value

There are four capabilities that allow you to derive value. Ultimately our strategy is to bring that up a level and to provide capabilities and solutions to our end customers across four different areas.

One, we look at it from an enterprise operational intelligence perspective; the second is intelligent asset optimization; the third, digital workforce productivity; and the fourth, scalable production management.

So across those four solution areas we can apply our technology together with that of our partners. We allow our customers to find use cases within those four solution areas that provide them a return on investment.

One example of that would be leveraging augmented work instructions. Instead of an operator going through a maintenance procedure by opening a folder of hundreds of pages of instructions, they can leverage new technology such as AR to guide them through the procedure, in process and in situ.

There are many use cases across those four solution areas that leverage the core capabilities across the IoT platform, ThingWorx, as well as the AR platform, Vuforia.

Gardner: Tripp, it sounds like we are taking the best of what people can do and the best of what systems and analytics can do. We also move from batch processing to real time. We have location-based services so we can tell where things and people are in new ways. And then we empower people in ways that we hadn’t done before, such as AR.

Are we at the point where we’re combining the best of cognitive human capabilities and machine capabilities?

Partain: I don’t know if we have gotten to the best yet, but probably the best of what we’ve had so far. As we continue to evolve these technologies and find new ways to look at problems with different technology — it will continue to evolve.

We are getting to the new sweet spot, if you will, of putting the two together and being able to drive advancements forward. One of the things that’s critical has to do with where our current workforce is.

A number of manufacturers I talk to — and I've heard similar from PTC's customers and our joint customers — say we are at a tipping point in terms of the current talent pool, with many of those currently employed getting close to retirement age.

The next generation that’s coming in is not going to have the same longevity and the same skill sets. Having these newer technologies and bringing these pieces together, it’s not only a new matchup based on the new technology – it’s also better suited for the type of workers carrying these activities forward. Manufacturing is not going away, but it’s going to be a very different generation of factory workers and types of technologies.

The solutions are now available to really enhance those jobs. We are starting to see all of the pieces come together. That’s where both IoT solutions — but even especially AR solutions like PTC Vuforia — really come into play.

Gardner: Riaan, in a large manufacturing environment, even small iterative improvements can make a big impact on the economics, the bottom line. What sort of future categorical improvements in value are we looking at? To what degree do we have an opportunity to make manufacturing more efficient, more productive, more economically powerful?

Tech bridges skills gap, talent shortage

Lourens: If you look at it from the angle that Tripp just referred to, there are a number of increasing pressures across the board in the industrial markets, starting with the workers' skills gap. Products are also becoming more complex. Workspaces are becoming more complex. There are also increasing customer demands and expectations. Markets are just becoming more fiercely competitive.

But if you leverage capabilities such as AR — which provides augmented 3-D work instructions, expert guidance, and remote assistance, training, and demonstrations — that’s one area. If you combine that, to Tripp’s point, with the new IoT capabilities, then I think you can look at improvements such as reducing waste in processes and materials.

We have seen customers reducing by 30 percent unplanned downtime, which is a very common use case that we see manufacturers target. We also see reducing energy consumption by 3 to 7 percent at a very large ship manufacturer, a customer of PTC’s. And we’re generally looking at improving productivity by 20 to 30 percent.

By leveraging this technology in a meaningful way to get iterative improvements, you can then scale it across the enterprise very rapidly, and multiple use cases can become part of the solution. In these areas of opportunity, very rapidly you get that ROI.

Gardner: Do we have concrete examples to help illustrate how those general productivity benefits come about?

Joint solutions reduce manufacturing pains 

Lourens: A joint customer of HPE and PTC focuses on manufacturing and distributing reusable and recyclable food packaging containers. The company, CuBE Packaging Solutions, targeted predictive maintenance in manufacturing. Their goal is to have the equipment notify them when attention is needed. That allows them to service what they need when they need to and focus on reducing unplanned downtime.

In this particular example, there are a number of technologies that play across both of our two companies. The HPE Nimble Storage capability and HPE Synergy technology were leveraged, as well as a whole variety of HPE Aruba switches and wireless access points, along with PTC’s ThingWorx solution platform.

The CuBE Packaging solution ultimately was pulled together through an ecosystem partner, Callisto Integration, which we both worked with very closely. In this use case, we not only targeted the plastic molding assets that they were monitoring, but the peripheral equipment, such as cooling and air systems, that may impact their operations. The goal is to avoid anything that could pause their injection molding equipment and plants.

Gardner: Tripp, any examples of use-cases that come to your mind that illustrate the impact?

Partain: Another joint customer that comes to mind is Texmark Chemicals in Galena Park, Texas. They are using a number of HPE solutions, including HPE Edgeline, our micro-datacenter. They are also using PTC ThingWorx and a number of other solutions.

They have very large pumps that are critical to the operation, moving chemicals and fluids around their plant at various stages of the refining process. Being able to monitor those in real time, predict potential failures before they happen, and use a combination of live data and algorithms to predict wear and tear allows them to determine the optimal time to make replacements and minimize downtime.
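
A minimal sketch of that pump-monitoring idea, in Python: project the recent drift in a bearing temperature forward and flag the pump before it reaches its limit. The limit, readings, and projection window are assumptions, not Texmark's actual models.

```python
# Simple trend-based early warning: project recent drift in bearing temperature
# forward and flag the pump before it reaches the limit. Figures are assumptions.
TEMP_LIMIT_C = 95.0
PROJECTION_WINDOW_HOURS = 24

def hours_to_limit(hourly_history_c: list[float]) -> float | None:
    """Estimate hours until the limit is reached, or None if not trending up."""
    drift_per_hour = (hourly_history_c[-1] - hourly_history_c[0]) / (len(hourly_history_c) - 1)
    if drift_per_hour <= 0:
        return None
    return (TEMP_LIMIT_C - hourly_history_c[-1]) / drift_per_hour

readings = [78.0, 79.5, 81.0, 82.4, 84.1]   # last five hourly readings
eta = hours_to_limit(readings)
if eta is not None and eta < PROJECTION_WINDOW_HOURS:
    print(f"Schedule maintenance: limit projected in about {eta:.0f} hours")
```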

Such use cases are one of the advantages when customers come and visit our IoT Lab in Houston. From an HPE standpoint, not only do they see our joint solutions in the lab, but we can actually take them out to the Texmark location, and Texmark will host them and let them see these technologies working in real time at their facility.

Similar to what Riaan mentioned, we started at Texmark with condition monitoring, and now the solutions have moved into additional use cases — whether it's mechanical integrity, video as a sensor, or employee-safety-related use cases.

We started with condition monitoring, proved that out, got the technology working, then took that framework — including best-in-class hardware and software — and continued to build and evolve on top of that to solve expanded problems. Texmark has been a great joint customer for us.

Gardner: Riaan, when organizations hear about these technologies and the opportunity for some very significant productivity benefits, when they understand that more and more of their organization is going to be data-driven and that real-time analysis benefits could be delivered to people in their actionable context, perhaps using such things as AR, what should they be doing now to get ready?

Start small

Lourens: Over the last eight years of working with ThingWorx, I have noticed the initial trend of looking at the technology versus looking at specific use-cases that provide real business value, and of working backward from the business value.

My recommendation is to target use cases that provide quick time-to-value. Apply the technology in a way that allows you to start small, and then iterate from there, versus trying to prove your ROI based on the core technology capabilities.

Ultimately understand the business challenges and how you can grow your top line or your bottom line. Then work backward from there, starting small by looking at a plant or operations within a plant, and then apply the technology across more people. That helps create a smart connected people strategy. Apply technology in terms of the process and then relative to actual machines within that process in a way that’s relevant to use cases — that’s going to drive some ROI.

Gardner: Tripp, what should the IT organization be newly thinking? Now, they are tasked with maintaining systems across a continuum of cloud-to-edge. They are seeing micro-datacenters at the edge; they’re doing combinations of data-driven analytics and software that leads to new interfaces such as AR.

How should the IT organization prepare itself to take on what goes into any nook and cranny in almost any manufacturing environment?

IT has to extend its reach 

Partain: It's about doing all of that IT in places where IT has typically had little or no involvement. In many industrial and manufacturing organizations, as we go in and start having conversations, IT usually has stopped at the datacenter back end. Now there's lots of technology on the manufacturing side, too, but it has not typically involved the IT department.

One of the first steps is to get educated on the new edge technologies and how they fit into the overall architecture. They need to have existing support frameworks and models in place that are instantly usable, but they also need to work with the business side and frame up the problems they are trying to solve.

As Riaan mentioned, being able to say, “Hey, here are the types of technologies we in IT can apply to this that you [OT] guys haven’t necessarily looked at before. Here’s the standardization we can help bring so we don’t end up with something completely different in every factory, which runs up your overall cost to support and run.”

It’s a new world. And IT is going to have to spend much more time with the part of the business they have probably spent the least amount of time with. IT needs to get involved as early as possible in understanding what the business challenges are and getting educated on these newer IoT, AR, virtual reality (VR), and edge-based solutions. These are becoming the extension points of traditional technology and are the new ways of solving problems.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

IT and HR: Not such an odd couple

How businesses perform has always depended on how well their employees perform. Yet never before has the relationship between how well employees work and the digital technology that they use been so complex.

At the same time, companies are grappling with the transition to increasingly data-driven and automated processes. What’s more, the top skills at all levels are increasingly harder to find — and hold onto — for supporting strategic business agility.

As a result, business leaders must enhance and optimize today’s employee experience so that they in turn can optimize the customer experience and — by extension — better support the success of the overall business.

Stay with us as BriefingsDirect explores how those writing the next chapters of human resources (HR) and information technology (IT) interactions are finding common ground to significantly improve the modern employee experience.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

We’re now joined by two leaders in this area who will share their thoughts on how intelligent workspace solutions are transforming work — and heightening worker satisfaction. Please welcome Art Mazor, Principal and Global Human Resources Transformation Practice Leader at Deloitte, and Tim Minahan, Executive Vice President of Strategy and Chief Marketing Officer at Citrix. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Art, is there more of a direct connection now between employee experience and overall business success?

Mazor: There has been a longstanding sense on the part of leaders intuitively that there must be a link. For a long time people have said, “Happy employees equal happy customers.” It’s been understood.

But now, what's really powerful is that we have true evidence that demonstrates the linkage. For example, in our Deloitte Global Human Capital Trends Report 2019, in its ninth year running, we noticed a very important finding in this regard: Purpose-focused companies outperformed their S&P 500 peers by a factor of eight. And when you think about, “Well, how do you get to purpose for people working in an organization?” it's about creating that strong experience.

What’s more, I was really intrigued when MIT recently published a study that demonstrated the direct linkage between positive employee experience and business performance. They showed that those with really strong employee experiences have twice the innovation, double the satisfaction of customers, and 25 percent greater profitability.

So those kinds of statistics tell me pretty clearly that it matters — and it’s driving business results.

Gardner: It's seemingly common sense, and an inevitable outcome, that employees and their positive experiences impact the business. But reflecting on my own experiences, some companies will nonetheless talk the talk but not always walk the walk on building better employee experiences, unless they are forced to.

Do you sense, Art, that there are some pressures on companies now that hadn’t been there before?

Purposeful employees produce profits 

Mazor: Yes, I think there are. Some of those pressures, appropriately, are coming from the market. Customers have a very high bar with which they measure their experience with an organization. We know that if the employee or workforce experience is not up to par, the customers feel it.

That demand, that pressure, is coming from customers who have louder voices now than ever before. They have the power of social media, the ability to make their voices known, and their perspectives heard.

There is also a tremendous amount of competition among a variety of customers. As a result, leaders recognize that they have to get this right. They have to get their workers in a place where those workers feel they can be highly productive and in the service of customer outcomes.

Minahan: Yes, I totally agree with Art. In addition, there is an added pressure going on in the market today and that is the fact that there is a huge talent crunch. Globally McKinsey estimates there is a shortage of 95 million medium- to high-skilled workers.

We are beginning to see even forward-thinking digital companies like Amazon saying, “Hey, look, we can’t go out and hire everyone we need; certainly not in one location.” So that’s why you have the HQ2 competition, and the like.

Just in July, Amazon committed to investing more than $700 million to retrain a third of their workforce with the skills that they need to continue to advance. This is part of that pressure companies are feeling: “Hey, we need to drive growth. We need to digitize our businesses. We need to provide a greater customer experience. But we need these new skills to do it, and there just is not enough talent in the market.”

So companies are rethinking that whole employee engagement model to advance.

Gardner: Tim, the concept of employee experience was largely in the domain of people like Art and those that he supports in the marketplace — the human resources and human capital management (HCM) people.

How does IT now have more of a role? Why do IT and HR leaders need to be more attached at the hip?

Minahan: Much of what chief human resources officers (CHROs) and chief people officers (CPOs) have done to advance the culture and physical environment with which to attract and retain the right talent has gone extremely far. That includes improving benefits, ensuring there is a purpose, and ensuring that the work environment is very pleasurable.

However, we just conducted a study together with The Economist, a global study into employee experience and how companies are prioritizing it. And one of the things that we found is that organizations have neglected to take a look at the tools and the access to information that they give their employees to get their jobs done. And that seems to be a big gap.

This gap was reaffirmed by a recent global Gallup study in which, right behind the manager, the number one indicator of employee engagement was whether employees feel they have the right access to the information and tools they need to do their best work.

So technology — the digital workspace, if you will — plays an increasingly important role, particularly in how we work today. We don’t always work at a desk or in a physical environment. In fact, most of us work in multiple locations throughout the day. And so our digital workspace needs to travel with us, and it needs to simplify our day — not make it more complex.

Gardner: Art, as part of The Economist study that Tim cited, “ease of access to information required to get work done” was one of the top things those surveyed identified as being part of a world-class employee experience.

That doesn’t surprise me because we are asking people to be more data-driven. But to do so we have to give them that data in a way they can use it.

Are you seeing people thinking more about the technology and the experience of using and accessing technology when it comes to HR challenges and improvement?

HR plus IT gets the job done 

Mazor: Yes, for sure. And in the HR function, technology has been front and center for many years. In fact, HR executives, their teams, and the workers they serve have been at an advantage in that technology investments have been quite rich. The HR space was one of the first to move to the cloud. That’s created lots of opportunities beyond those that may have been available even just a few short years ago.

To your point, though, and building on Tim’s comments, [employee experience requirements] go well beyond the traditional HR technologies. They are focused around areas like collaboration, knowledge sharing, interaction, and go into the toolsets that foster those kinds of necessities. They are at the heart of being able to drive work in the way that work needs to get done today.

The days of traditional hierarchies — where your manager tells you what to do and you go do it — are quickly dwindling. Now, we still have leaders and they tell us to do things and that’s important; I don’t mean to take away from that. Yet, we are moving to a world where, in order to act with speed, teams are forming in a more agile way. Networked groups are operating together cross-functionally and across businesses, and geographies — and it’s all demanding, to your point, new toolsets.

Fortunately, there are a lot of tools that are out there for that. Like with any new area of innovation, though, it can be overwhelming because there are just so many technologies coming into the marketplace to take advantage of.

The trick we are finding is for organizations to be able to separate the noise from the impactful technologies and create a suite of tools that are easy to navigate and remove that kind of friction from the workplace.

Gardner: Tim, a fire hose of technology is certainly not the way to go. From The Economist survey we heard that making applications simple — with a consumer-like user experience — and with the ability to work from anywhere are all important. How do you get the right balance between the use of technology, but in a simplified and increasingly automated way?

A workspace to unify work

Minahan: Art hit on exactly the right word. All this choice of and access to technology that we use to get our jobs done has actually created a lot more complexity. The typical employee now uses a dozen or more apps throughout the day, and oftentimes needs to navigate four or more applications just to complete a single task or find a bit of information that they are looking for. As a result, they need to navigate a whole bunch of different environments, remember a whole bunch of different usernames and passwords, and it's creating a lot of noise in their day.

To Art’s point, there is an emergence of a new category of technology, a digital workspace that unifies everything for an employee, gives them single sign-on access to everything they need to be productive, and one unified experience, so they don’t need to have as much noise in their day.

Certainly, it also provides an added layer of security around things. And then the third component that gets very, very exciting is that forward-thinking companies are beginning to infuse things like machine learning (ML) and simplified workflows or micro apps that connect some of these technologies together so that the employee can be guided through their day — very much like they are in their personal lives, where Facebook might guide you and curate your day for the news and social interactions you want.

Netflix, for example, will make recommendations based on your historical behaviors and preferences. And that's beginning to work its way into the workplace. The study we just did with The Economist clearly points to bringing that consumer-like experience into the workplace as a priority among IT and HR leaders.
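
A hedged sketch of what that unified, curated feed can look like in code: pull tasks from several back-end systems and present one prioritized list. The connectors and sorting rule are hypothetical, not Citrix Workspace APIs.

```python
# Hypothetical unified-feed aggregator: gather tasks from several systems of
# record and present one prioritized list. Connectors are stand-ins, not Citrix APIs.
from datetime import datetime, timedelta

def fetch_hr_tasks():
    return [{"source": "HR", "title": "Approve PTO request",
             "due": datetime.now() + timedelta(hours=4)}]

def fetch_expense_tasks():
    return [{"source": "Expenses", "title": "Approve expense report",
             "due": datetime.now() + timedelta(days=2)}]

def unified_feed():
    tasks = fetch_hr_tasks() + fetch_expense_tasks()
    return sorted(tasks, key=lambda task: task["due"])   # soonest-due first

for task in unified_feed():
    print(f"[{task['source']}] {task['title']} (due {task['due']:%Y-%m-%d %H:%M})")
```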

Gardner: Art, you have said that a positive employee experience requires removing friction from work. What do you mean by friction and is that related to this technology issue, or is it something even bigger?

Remove friction, maximize productivity 

Mazor: I love that you are asking that, Dana. I think it is something bigger than technology — yet technology plays a massively important role.

When we think about friction, what I love about that word in this context is that it's a plain English word. We know what friction means. It's what causes something to slow down.

And so it's bigger than just technology in the sense that, to create that positive worker experience, we need to think about a broader construct, which is the human experience overall. And elevating that human experience is about, first and foremost, recognizing that everyone wakes up every morning as a human. We might play the role of a worker, we might play the role of customer, or some other role. But in our day-to-day life, anything that slows us down from being as productive as possible is, in my view, this friction.

So that could be process-oriented, it could be policy and bureaucracy that gets in the way. It could be managers who may be struggling with empowerment of their teams. It might even be technology, to your point, that causes it to be more difficult to, as Tim was rightly saying, navigate through to all the different apps or tools.

And so this idea of friction and removing it is really about enabling that workforce to be focused myopically on delivering results for customers, the business, and the other workers in the enterprise. Whatever it may be, anything that stands in the way should be evaluated as a potential cause of friction.

Sometimes that friction is good in the sense of slowing things down for purposes like compliance or risk management. In other cases, it’s bad friction that just gets in the way of good results.

Minahan: I love what Art's talking about. That is the next wave we will see in technology. When we talk about these digital workspaces, we are moving on from traditional enterprise applications built around given functions, and from modern collaboration tools focused on team-based collaboration. Still, individuals need to navigate all of these environments — and oftentimes work in different ways.

And so this idea of people-centric computing, in which you put the person at the center, makes it easy for them to interact with all of these different channels and remove some of the noise from their day. They can do much more meaningful work — or in some cases, as one person put it to me, “Get the job done that I was hired to do.” I really believe this is where we are now going.

And you have seen it in consumer technologies. The web came about to organize the world’s information, and apps came about to organize the web. Now you have this idea of the workspace coming about to organize all of those apps so that we can finally get all the utility that had been promised.

Gardner: If we return to our North Star concept, the guiding principle, that this is all about the customer experience, how do we make a connection between solidifying that employee experience as Tim just described but to the benefit of the customer experience?

Art, who in the organization needs to make sure that there isn’t a disconnect or dissonance between that guiding principle of the customer experience and buttressing it through the employee experience?

Leaders emphasize end-customer experience 

Mazor: We are finding this is one of the biggest challenges, because there isn’t a clear-cut owner for the workforce experience. That’s probably a good thing in the long run, because there are way too many individual groups, teams, and leaders who must be involved to have only one accountable leader.

That said, we are finding a number of organizations achieving great success by at least appointing an existing function — and in many cases we are finding that happens to be HR — or, in some organizations, finding a different way of assigning accountability for orchestrating the experience. What that really means is bringing together a variety of groups — HR, IT, real estate, marketing, finance, and certainly the business leaders — to all play their roles inside of that experience.

Delivering on that end-customer experience as the brass ring, or the North Star to mix metaphors, becomes a way of thinking. It requires a different mindset that enterprises are shaping for themselves — and their leaders can model that behavior.

I will share with you one great example of this. In the typical world of an airline, you would expect that flight attendants are there — as you hear on the announcements — for your safety first, and then to provide services. But one particular major airline recognized that those flight attendants are also the ones who can create the greatest stickiness to customer relationships because they see their top customers in flight, where it matters the most.

And they have equipped that group of flight attendants with data in the form of a mobile app that they use to see who is on board and where those passengers rank in importance as customers, in terms of revenue and other important factors. That provides triggers to those flight attendants, and others on the flight staff, to help recognize those customers and to ensure that they are having a great experience. And when things don't go as well as possible, perhaps due to Mother Nature, those flight attendants are there to keep watch over their most important customers.

That’s a very new kind of construct in a world where the typical job was not focused on customers. Now, in an unwitting way, those flight attendants are playing a critical role in fostering and advancing those relationships with key customers.

There are many, many examples like that that are the outcome of leaders across functions coming together to orchestrate an experience that ultimately is centered around creating a rich customer experience where it matters the most.

Minahan: Two points. One, what Art said is absolutely consistent with the findings of the study we conducted jointly with The Economist. There is no clear-cut leader on employee experience today. In fact, both CHROs and CIOs equally indicated that they were on-point as the lead for driving that experience.

We are beginning to see a digital employee experience officer emerging at some organizations to help drive the coordination that Art is talking about.

But the second point to your question, Dana, around how we keep employees focused on the customer experience, goes back to your opening question around purpose. Increasingly, as Art indicated, there is clear evidence that companies with a clear purpose perform better — and that's because that purpose tends to be tied to some business outcome. It drives some greater experience, innovation, or business outcome for their customers.

If we can ensure that employees have the right tools, information, skills, and training to deliver that customer experience, then they are clearly aligned. I think it all ties very well together.

Gardner: Tim, when I heard Art talking about the flight attendants, it occurred to me that there is a whole class of such employees that are in that direct-interaction-with-the-customer role. It could be retail, the person on the floor of a clothing seller; or it could be a help desk operator. These are the power users that need to get more data, help, and inference knowledge delivered to them. They might be the perfect early types of users that you provide a digital workspace to.

Let’s focus on that workspace. What sort of qualities does that workspace need to have? Why are we in a better position, when it comes to automation and intelligence, than ever before to empower those employees, the ones on the front lines interacting with the customers?

Effective digital workspace requirements

Minahan: Excellent question. There are three, and an emerging fourth, capabilities required for an effective digital workspace. The first is that it needs to be unified. We talked about all of the complexity and noise that bogs down an employee's day, and all of the applications they need to navigate. Well, the digital workspace must unify that by giving a single sign-on experience into the workspace to access all the apps and content an employee needs to be productive and to do engaging work, whether they are at the office on the corporate network, on their tablet at home, or on their smartphone on a train or a plane.

The second is that — in this day and age, especially for those front-line employees touching customer information — it all needs to be secure. The apps and content need to be more secure within the workspace than when accessed natively. That means dynamically applying security policies and perhaps asking for a second layer of authentication, based on that employee's behavior.

The third is intelligence: bringing things like machine learning and simplified workflows into the workspace to create a consumer-like experience, where the employee is presented with the right information and the right task within the workspace so they can act on them quickly — rather than needing to log in to multiple applications and go four layers deep.

The fourth capability that's emerging, and that we hear a lot about, is the assurance that those applications — especially for front-line employees who are engaged with customers — are performing at their very best within the workspace. [Such high-level performance needs to be delivered] whether that employee is at a corporate office or, more likely, at a remote retail branch.

Bringing some of the historical infrastructure like networking technology to bear in order to ensure those applications are always on and reliable is the fourth pillar of what’s making new digital workspace strategies emerge in the enterprise.
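As a concrete illustration of the second capability — dynamically applying security policy and stepping up authentication based on an employee's behavior — here is a minimal sketch. The risk signals, thresholds, and function names are illustrative assumptions, not Citrix's API or product logic.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    device_managed: bool       # is the device enrolled and managed?
    network: str               # "corporate", "home", or "public"
    geo_velocity_kmh: float    # implied travel speed since the last login
    resource_sensitivity: int  # 1 (low) .. 3 (customer data)

def risk_score(req: AccessRequest) -> int:
    """Crude additive risk score; real products weight far more signals."""
    score = 0
    if not req.device_managed:
        score += 2
    if req.network == "public":
        score += 2
    elif req.network == "home":
        score += 1
    if req.geo_velocity_kmh > 800:   # "impossible travel" signal
        score += 3
    score += req.resource_sensitivity - 1
    return score

def access_decision(req: AccessRequest) -> str:
    """Map risk to a policy outcome: allow via SSO, step up auth, or block."""
    score = risk_score(req)
    if score <= 2:
        return "allow_sso"            # single sign-on, no extra friction
    if score <= 5:
        return "require_second_factor"
    return "deny"

# Example: a front-line agent opening a CRM app from an unmanaged tablet at home
print(access_decision(AccessRequest("agent42", False, "home", 0.0, 3)))
```

The design intent is the one Minahan describes: keep the sign-on frictionless in the common case and add friction only when the behavior or the sensitivity of the content warrants it.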

The Employee Experience is Broken: Learn How IT and HR Together Can Fix It

Gardner: Art, for folks like Tim and me, we live in this IT world and we sometimes get lost in the weeds and start talking in acronyms and techy-talk. At Deloitte, you are widely regarded as the world’s number-one HR transformation consultancy.

First, tell us about the HR consultancy practice at Deloitte. And then, is explaining what technology does and is capable of a big part of what you do? Are you trying to explain the tech to the HR people, and then perhaps HR to the tech people?

Transforming HR with technology 

Mazor: First, thanks for the recognition. We are truly humbled, and yet proud, to be the world's leading HR transformation firm. The opportunity to partner with the world's leading enterprises to shape and influence the future of HR gives us a really interesting window into exactly what you are describing.

At a lot of the organizations we work with, the HR leaders and their teams are increasingly well-versed in the various technologies out there. The biggest challenge we find is being able to harness the value of those technologies, to find the ones that are going to produce impact at a pace and at a cost and return that really is valued by the enterprise overall.

For sure, the technology elements are critical enablers. We recently published a piece on the future of HR-as-a-function that’s based on a combination of our research and field experience. What we identified is that the future of HR requires a shift in four big areas:

  • The mindset, meaning the culture and the behaviors of the HR function.

  • The focus, meaning focusing on the customers themselves.

  • The lens through which the HR function operates, meaning the operating model and the shift toward a more agile-network kind of enterprise HR function.

  • The enablers, meaning the wide array of technologies from core HR platform technologies to collaboration tools to automation, ML, artificial intelligence (AI), and so on.

The combination of these four areas enables HR-as-a-function to shift into what we’re referring to as a world that is exponential. I will give you one quick example though where all this comes together.

There is a solution set that we are finding incredibly powerful in driving employee experiences, which we refer to as a unified engagement platform — the blend of all of these technologies into a simple-to-navigate experience that empowers workers across an enterprise.

Deloitte has created one of the platforms that leads this space, called ConnectMe, and there are certainly others. What we are finding is that HR leaders want that simple-to-navigate, frictionless environment where people can get their jobs done — and enjoy doing them — using technology that empowers them.

The premise that you described is spot-on. HR leaders are navigating this complex set of technologies out there that are terrific because they’re providing advantages for the business functions. A lot of the technology firms are investing heavily in worker-facing technology platforms, for exactly the reason we have been chatting about here.

Gardner: Tim, when it comes to the skills gap, it is an employee’s market. Unemployment rates are very low, and the types of skills in demand are hard to find. And so the satisfaction of that top-tier worker is essential.

It seems to me that the better tools you can give them, the more they want to work. If I were a top-skilled employee, I would want to go with the place that has the best information that empowers me in the best way and brings contextual information with security to my fingertips.

But that’s really difficult to do. How do businesses then best enhance and entice employees by giving them the best intelligence tools?

Intelligent tools support smart workers 

Minahan: If you think about your top-performing employees, they want to do their most meaningful work and to perform at their best. As a result, they want to eliminate a lot of the noise from their day, and, as Art mentioned before, that friction.

And that friction is not solely technological; it's often manifested through technology — tasks or requirements we need to do that may not pertain to our core jobs.

So, last time I checked, neither Art nor I was hired to review and approve expense reports, or to spend a good chunk of our time approving vacations or doing full-scale performance reviews. Yet those types of applications and processes, which may not be pertinent to our jobs, tend to take up a good part of our time.

What digital workspaces or digital work platforms do in the first phase is remove that noise from your day so that your best-performing employees can do their best work. The second phase uses those same platforms to help employees do better work through making sure that information is pushed to them as they need it.

Citrix campusThat’s information that is pertinent to their jobs. In a salesperson’s environment that might be a change in pipeline status, or a change in a prospect or customer activity. Not only do they get information at their fingertips, they can take action.

And what gets very exciting about that is you now have the opportunity to elevate the skills of every employee. We talk about the skills gap, but this is one way to retrain everybody.

Another way is to make sure you're giving them an unfair advantage within the work platforms you are using to guide them through the right process. A great example is sales force productivity. A typical company takes 9 to 12 months to get a salesperson up to full productivity, and the average tenure of a salesperson is somewhere around 36 months. So a company is getting a year-and-a-half of productivity out of a salesperson.

What if, by eliminating all that noise and by using this digital work platform to push the right information, tasks, actions, and customer sales pitches to them at the right time, you could cut that time to full productivity in half?
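As a back-of-the-envelope reading of that arithmetic (the figures below are illustrative assumptions, not Citrix data, and they ignore partial output during the ramp): if ramp-up takes 12 months over a 36-month tenure, halving the ramp adds half a year of fully productive time per salesperson.

```python
def productive_months(tenure_months: float, ramp_months: float) -> float:
    """Months at full productivity, ignoring partial output during the ramp."""
    return max(tenure_months - ramp_months, 0.0)

baseline = productive_months(36, 12)   # 24 months
improved = productive_months(36, 6)    # 30 months
print(f"Gain: {improved - baseline} months ({(improved / baseline - 1):.0%} more)")
```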

Think about the real business value that comes from using technology to actually elevate the skill set of the entire workforce, rather than bog it down.

Gardner: Tim, do you have any examples that illustrate what you just described? Any named or use case types of examples that show how what you’re doing at Citrix has been a big contributor?

Minahan: One example that’s top-of-mind not only helps improve employee experiences to elevate the experience for customers, but also allows companies to rethink work models in ways they probably haven’t since the days of Henry Ford. And the example that comes to mind is eBay.

We are all familiar with eBay, one of the world’s largest online digital marketplaces. Like many other companies, they have a large customer call center where buyers and sellers ask questions. These call center employees have to have the right information at their fingertips to get things done.

Well, the challenge they faced was the talent gap and labor shortage. Traditionally they would build a big call center, hire a bunch of employees, and train them at the call center. But now it's harder to do that; they are competing with the likes of Amazon, Google, and others who are all trying to do the same thing.

And so they used technology to break the traditional mold and create a new work model. Now they go to where the talent is — the stay-at-home parent in Montana, the retiree in Florida, or the gig worker in Boston or New York. They can arm them with a digital workspace and push the right information and toolsets to them. By doing so, they ensure the job gets done — even though, if you or I call in, we have no idea the agent isn't sitting in a centralized call center.

This is just one example of how we can change work models as we begin to harness and unify this technology. We can create not just a better employee experience, but entirely new ways to work.

How to Harness Technology to Inspire Workers to Perform at Their Unencumbered Best

Gardner: Art, it’s been historically difficult to measure productivity, and especially to find out what contributes to that productivity. The same unfortunately is the case with technology. It’s very difficult to measure quantitatively and qualitatively what technology directly does for both employee productivity and overall organizational productivity.

Are there ways for us to try to measure how new workspaces and good HR contribute to good employee satisfaction — and ultimately customer satisfaction? How do we know when we are doing this all right?

Success, measured 

Mazor: This is the holy grail in many ways, right? You get what you measure, and this whole space of workforce experience is a newer discipline. Customer experience has been around for a while and has gained great traction and measurement. We can measure customer feedback. We can measure net promoter scores and a variety of other indicators — not the least of which may be revenue, or even profitability relative to the customer base. We are now equally starting to see the emergence of measurement in the workforce experience arena.

At the top line, we can see measurements like workforce engagement. As that rises, there is likely a connection to positive worker experience. We can measure productivity. We can even measure the growth of capabilities within the workforce gained as a result of — as we like to say — learning in the flow of work.

That path is really important to chart because it has similarities to the tools, methods, and approaches used in the customer space. We think about it in very simple terms: we first need to look, listen, and understand to sense what's happening with the workforce.

We need to generate and prioritize different ideas for how the workforce experience can be improved. Then we need to iterate, test, refine, and plan the kinds of changes you might prototype — that provides the foundation to measure. And in the workforce experience space, we are starting to see a variety of measures at the granular levels below those top-line measures I mentioned.

What comes to mind for me are things like measuring the user experience for all of the workers. How effective is the product or service they are being asked to use? How quickly can they deliver their work? What feedback do we get from workers? That makes for a worker feedback category.

And then there is a set of operational measures that track inputs and outputs from various processes and portions of the experience. That categorization into those three buckets really seems to be working well for many of our clients in measuring workforce experience and answering your question: "Did we get it right?"
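One way to picture those three buckets working together is a simple weighted composite score tracked over time. The metrics, scales, and weights below are illustrative assumptions for the sake of the sketch, not Deloitte's methodology.

```python
# Each metric is normalized to a 0..1 scale before weighting
buckets = {
    "user_experience": {"task_success_rate": 0.86, "app_satisfaction": 0.72},
    "worker_feedback": {"engagement_survey": 0.68, "enps_normalized": 0.61},
    "operational":     {"time_to_resolve_requests": 0.75, "adoption_rate": 0.80},
}
weights = {"user_experience": 0.4, "worker_feedback": 0.35, "operational": 0.25}

def bucket_score(metrics: dict) -> float:
    """Average the normalized metrics within one bucket."""
    return sum(metrics.values()) / len(metrics)

composite = sum(weights[name] * bucket_score(m) for name, m in buckets.items())
print(f"Workforce experience index: {composite:.2f}")  # track alongside customer metrics
```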

But in the end, as I shared at the beginning, I think it’s really critical that organizations measure that workforce experience through the ultimate lens, which is, “How are we dealing with our customers?” When that’s performing well, chances are pretty good, based on the research that we have seen, that the connection is there to the employee or workforce experience.

Minahan: When we talk about the employee experience, we should be careful — it's not synonymous with productivity alone. It's a balance of productivity and employee engagement that together drives greater business results, customer experience, satisfaction, and improved profitability. Employee experience has often been equated with productivity; productivity is certainly a key input, but it's not the only one.

Gardner: Tim, how should IT people be thinking differently about how they view their own success? It was not that long ago that simply the performance of the systems — when all the green lights were on and the networks were not down — was the gauge of success. Should IT be elevating how it perceives itself, and therefore how it rates its own success, within these larger digital transformation endeavors?

Information, technology, and better business

Minahan: Yes, absolutely. I think this could be the revitalization of IT as it moves beyond the items that you mentioned: keeping the networks up, keeping the applications performing well. IT can now drive better business outcomes and results.

Those forward-thinking companies looking to digitize their business realize that it's very hard to ask an employee base to drive a greater digital customer experience without arming them with the right tools, information, and experience in their own right. IT plays a major role here, locking arms with the CHRO, to move the needle and turn employee experience into a competitive edge — not just for attracting and retaining talent, but ultimately for driving better business results.

Gardner: I hope, if anything, this conversation prompts more opportunity for the human resources leadership and the IT leadership to spend time together and brainstorm and find commonality.

Before we sign off, let's take a quick look to the future. Art, what might change soon to help remove even more friction for employees? What's down the pike over the next three to five years — technologies, processes, market forces — that might accelerate removing friction? Are there bright spots in your thinking about the future?

Bright symphony ahead

Mazor: I think the future is really bright. We are optimistic by nature, and we see enterprises making terrific, bold moves to embrace their future, as challenging as that future is.

One of the biggest opportunities is the recognition that executives and their teams must operate in a more symphonic way. By that I mean working together to achieve a common set of results, moving away from the historical silos that emerged from a zeal for efficiency — silos that led to departments working within themselves and struggling to create integration.

We are seeing a huge unlocking of that, in the spirit of creating more cross-functional teams and more agile ways of working — truly operating in the digital age. As we discussed in one of our recent capital trends reports, the driver of this is a more symphonic C-suite, which has a cascading effect on teams across the enterprise, all working better together.

And then, secondly, there is a big recognition by enterprises of the imperative to create meaning in the work that workers are doing. Increasingly, we are seeing this as a demand. This is not a single-generation demand; it's not that only the younger generation needs meaning, or anything that fits into stereotypes.

Rather, it’s a recognition that when we create purpose and meaning for the workers in an enterprise, they are more committed. They are more focused on outcomes, as opposed to activities. They begin to recognize the outcomes’ linkage to their own personal purpose, meaning for the enterprise, and for the work itself.

And so, I think those two things will continue to emerge fairly rapidly: embracing the need for symphonic operations and collaboration, and the imperative to create meaning and purpose for the workers of an enterprise. Together they will unlock and unleash capabilities focused on the customer by creating terrific employee and workforce experiences.

Gardner: Tim, last word to you. How do you foresee over the next several years technology evolving to support and engender the symphonic culture that Art just described?

Minahan: We have gotten to the point where employees are asking for a simplification of their environment, a unified access to everything, and to remove noise from their days so they can do that meaningful, purposeful work.

But what’s exciting is that same platform can be enabled to elevate the skill sets of all employees, giving them the right information, and the right task at the right time so they can perform at their very best.

But what gets me very excited about the future is the technology and a lot of the new thinking that's going on. In the next few years, we're going to see work models similar to the example I shared about eBay. We will see change in the ways we work that we haven't seen in the past 100 years, where the lines between different functions and different organizations begin to evaporate.

Instead, we will have work models where companies organize around pools of talent — where they know who has the right skills and knowledge, regardless of whether they are full-time employees or contractors. Technology will pull them together into workgroups, no matter where they are in the world, to solve a given problem or produce a given outcome, and then dissolve those groups very quickly again. So I am very excited about what we are going to see in just the next five years.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Citrix.


How rapid machine learning at the racing edge accelerates Venturi Formula E Team to top-efficiency wins

The next BriefingsDirect Voice of the Customer advanced motorsports efficiency innovation discussion explores some of the edge computing and deep analytics technology behind the Formula E auto racing sport.

Our interview focuses on how data-driven technology and innovation make high-performance electric racing cars an example for all endeavors where limits are continuously tested and bested.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To learn more about the latest in Formula E efficiency strategy and refinement, please welcome Susie Wolff, Team Principal at Venturi Formula E Team in Monaco. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Aside from providing a great viewing and fan experience, what are the primary motivators for Formula E racing? Why did it start at all?

Wolff: It’s a really interesting story, because Formula E is like a startup. We are only in our fifth season, and Formula E and the management of Formula E disrupted the world of motorsport because it brought to the table a new concept of growing racing.

Susie Wolff

Wolff

We race in city centers. That means that the tracks are built up just for one-day events, right in the heart of some of the most iconic capitals throughout the world. Because it’s built up within a city center and it’s usually only a one-day event, you get very limited track time, which is quite unusual in motorsport. In the morning we get up, we test, we go straight into qualifying, and then we race.

Yet, it’s attracting a new audience because people don’t need to travel to a race circuit. They don’t need to buy an expensive ticket. The race comes to the people, as opposed to the people going out to see racing.

Obviously, the technology is something completely new for people. There is very little noise; mostly you hear the whooshing of the cars going past. It's a showcase for new technologies, which we are all going to see appearing on the road in the next three to five years.

Race down to Electric Avenue 

The automotive industry is going through a massive change with electric mobility and motorsport is following suit with Formula E.

We already see some of the applications on the roads, and I think that will increase year on year. What motorsport is so good at is testing and showcasing the very latest technology.

Gardner: I was going to ask you about the noise because I had the privilege and joy of watching a Formula One event in Monaco years ago, and the noise was a big part of it. Aside from these cars being so quiet, what is also different in terms of an electric Formula E race compared to traditional Formula One?

Wolff: The noise is the biggest factor, and that takes a bit of getting used to. It’s the roaring of the engines that creates emotion and passion. Obviously, in the Formula E cars you are missing any kind of noise.

Even the cars we are starting to drive on the roads now have a little electric start, and every time I switch it on I think, "Oh, the car is not working, I have a problem." I forget that there is no noise when you switch an electric car on.

Also, in Formula E, the way that technology is developing and how quickly it’s developing is very clear through the racing. Last season, the drivers had two cars and they had to switch cars in the middle of the race because the battery wouldn’t last long enough for a complete race distance. Now, because the battery technology has advanced so quickly, we are doing one race with one car and one battery. So I think that’s really the beauty of what Formula E is. It’s showcasing this new technology and electric mobility. Add to this the incredible racing and the excitement that brings, and you have a really enticing offering.

Gardner: Please tell us about Venturi, as a startup, and how you became Team Principal. You have been involved with racing for quite some time.

A new way to manage a driving career

Wolff: Yes, my background is predominately in racing. I started racing cars when I was only eight years old, and I made it through the ranks as a racing driver, all the way to becoming a test driver in Formula One.

Then I stepped away and decided to do something completely different and started a second career. I was pretty sure it wouldn’t be in motorsport, because my husband, Toto Wolff, works in motorsport. I didn’t want to work for him and didn’t want to work against him, so I was very much looking for a different challenge and then Venturi came along.

The President of Venturi, a great gentleman, Gildo Pastor, is a pioneer in electric mobility. He was one of the first to see the possibility of using batteries in cars, and he set a number of land speed records — all electric. He joined Formula E from the very beginning, realizing the potential it had.

The team is based in Monaco, which is a very small principality, but one with a very rich history in racing because of the Grand Prix. Gildo had approached me previously, when I was still racing, to drive for his team in Formula E. I was one of the cynics, not sure Formula E was going to be for the long term. So I said, "Thank you, but no thank you."

But then he contacted me last year and said, “Look, I think we should work together. I think you will be fantastic running the team.” We very quickly found a great way to work together, and for me, it was just the perfect challenge. It’s a new form of racing, it’s breaking new ground and it’s at such an exciting stage of development. So, it was the perfect step for me into the business and management side of motorsports.

Gardner: For me, the noise difference is not much of an issue because the geek factor gets me jazzed about automobiles, and I don't think I am alone in that. I love the technology. I love the idea of the tiny refinements that improve things, and the interaction between the best of what people can do and what machines can do.

Tell us about your geek factor. What is new and fascinating for you about Formula E cars? What’s different from the refinement process that goes on with traditional motorsport and the new electric version?

The software challenge 

Wolff: It’s a massively different challenge than what we are used to within traditional forms of motorsport.

The new concept behind Formula E has worked really well. Just this season, for example, we had eight races with eight different winners. In other categories, such as Formula One, you just don't get that. There is only the possibility for three teams to win a race, whereas in Formula E the competition is very different.

Also, as a team, we don’t build the cars from scratch. A Formula One team would be responsible for the design and build of their whole car. In Formula E, 80 percent of the car is standardized. So every team receives the same car up to that 80 percent. The last part is the power train, the rear suspension, and some of the rear-end design of the car.

The big challenge within Formula E, then, is in the software. It's ultimately a software race: who can develop, upgrade, and react quickly enough on the software side. And obviously, as soon as you deal with software, you are dealing with a lot of data.

That’s one of the biggest challenges in Formula E — it’s predominantly a software race as opposed to a hardware race. If it’s hardware, it’s set at the beginning of the season, it’s homologated, and it can’t be changed.

In Formula E, the performance differentiators are the software and how quickly you can analyze, use, and redevelop your data — finding the weak points and correcting them quickly enough to improve on-track performance.
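To give a flavor of that analysis loop, here is a toy sketch that compares energy used per track sector against a simulator baseline and flags where the software settings should be revisited. The numbers and field names are invented for illustration and are not Venturi telemetry.

```python
# Energy used per sector (kWh) from a race stint vs. the simulator's target
telemetry = {
    "sector_1": {"actual_kwh": 0.92, "target_kwh": 0.88},
    "sector_2": {"actual_kwh": 1.05, "target_kwh": 1.06},
    "sector_3": {"actual_kwh": 0.99, "target_kwh": 0.90},
}

def weak_sectors(data, tolerance=0.03):
    """Flag sectors where the energy overshoot exceeds the tolerance (kWh)."""
    return sorted(
        ((name, round(d["actual_kwh"] - d["target_kwh"], 3))
         for name, d in data.items()
         if d["actual_kwh"] - d["target_kwh"] > tolerance),
        key=lambda item: item[1],
        reverse=True,
    )

print(weak_sectors(telemetry))   # [('sector_3', 0.09), ('sector_1', 0.04)]
```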

Gardner: It’s fascinating to me that this could be the ultimate software development challenge, because the 80/20 rule applies to a lot of other software development, too. The first 80 percent can be fairly straightforward and modular; it’s the last 20 percent that can make or break an endeavor.

Tell us about the real-time aspects. Are you refining the software during the race day? How does that possibly happen?

Winning: When preparation meets data 

Wolff: Well, the preparation work is a big part of race performance. We have a simulator based back at our factory in Monaco, and that's where the bulk of the preparation work is done. Because we are dealing with only a one-day event, we have to cram everything into an eight-hour window, which leaves us very little time between sessions to analyze and use the data.

The bulk of the preparation work is done in the simulator back at the factory. Each driver does between four and six days in the simulator preparing for a race. That's where we do all of the coding and try to find the most efficient ways to get from the start to the finish of the race. That's where we do the bulk of the analytical work.

When we arrive at the actual race, we are just doing the very fine tweaks because the race day is so compact. It means that you need to be really efficient. You need to minimize the errors and maximize the opportunities, and that's something that is hugely challenging.

If you had a team of 200 engineers, it would be doable. But in Formula E, the regulations limit you to 20 people on your technical team on a race day. So that means that efficiency is of the utmost importance to get the best performance.

Gardner: I’m sure in the simulation and modeling phase you leverage high-performance computing (HPC) and other data technologies. But I’m particularly curious about that real-time aspect, with a limit of 20 people and the ability to still make some tweaks. How did you solve the data issues in a real-time, intensive, human-factor-limited environment like that?

Wolff: First of all, it’s about getting the right people on-board and being able to work with the right people to make sure that we have the knowhow on the team. The data is real-time, so in a race situation we are aware if there is a problem starting to arise in the car. It’s very much up to the driver to control that themselves, from within the car, because they have a lot of the controls. The very important data numbers are on their steering wheel.

They have the ability to change settings within the car — and that’s also what makes it massively challenging for the driver. This is not just about how fast you can go, it’s also how much extra capacity you have to manage in your car and your battery — to make sure that you are being efficient.
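A very simplified version of the energy management the driver does from the wheel is dividing the remaining usable battery by the remaining laps to get a per-lap budget. The figures and reserve value below are made up for illustration.

```python
def lap_energy_budget(battery_remaining_kwh: float,
                      laps_remaining: int,
                      reserve_kwh: float = 0.5) -> float:
    """Energy the driver can spend per lap while keeping a small reserve."""
    usable = max(battery_remaining_kwh - reserve_kwh, 0.0)
    return usable / max(laps_remaining, 1)

budget = lap_energy_budget(battery_remaining_kwh=18.0, laps_remaining=12)
print(f"Target: {budget:.2f} kWh per lap")  # drive to this number, adjust every lap
```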

The data is of utmost importance — how it's created, and then how quickly it can be analyzed and used to improve performance. That's an area where Hewlett Packard Enterprise (HPE) has been a huge benefit to us. First of all, HPE has increased the speed at which we can send data from the factory to the race track, between engineers. That technology has also raised the level of our simulator and what it's able to crunch through in the preparation work.

And that was just the start. We are now looking at all the different areas where we can apply that ability to crunch the numbers more quickly. It allows us to look at every different aspect, and it will all come down to those marginal gains in the end.

Gardner: Given that this is a team sport on many levels, you are therefore working with a number of technology partners. What do you look for in a technology partner?

Partner for performance 

Wolff: In motorsport, you very quickly realize if you are doing a good job or not. Every second weekend you go racing, and the results are shown on the track. It’s brutal because if you are at the top step of the podium, you have done a great job. If you are at the end, you need to do a better job. That’s a reality check we get every time we go racing.

For us to be the best, we need to work with the best. We’re obviously very keen to always work with the best-in-field, but also with companies able to identify the exact needs we have and build a product or a package that helps us. Within motorsports, it’s very specific. It’s not like a normal IT company or a normal business where you can plug-and-play. We need to redefine what we can do, and what will bring added performance.

We need to work with companies that are agile. Ideally they have experience within motorsports. They know what you need, and they are able to deliver. They know what's not needed in motorsports because everything is very time sensitive. We need to make sure we are working on the areas that bring performance — and not wasting resources and time in areas that ultimately are not going to help our track performance.

Gardner: A lot of times with motorsports it's about eking out the most performance and the highest numbers on variables like skidpad grip and the trade-off between friction and acceleration. But I can see that Formula E is more about the interplay between the driver, the performance, and the efficiency of the electrical systems.

Is there something we can learn from Formula E and apply back to the more general electric automobile industry? It seems to me they are also fighting the battle to make the batteries last longest and make the performance so efficient that every electron is used properly.

Wolff: Absolutely. That’s why we have so many manufacturers in Formula E … the biggest names in the industry, like BMW, AudiJaguar and now Mercedes and Porsche. They are all in Formula E because they are all using it as a platform to develop and showcase their technology. And there are huge sums of money being spent within the automotive industry now because there is such a race on to get the right technology in the next generation of electric cars. The technology is advancing so quickly. The beauty of Formula E is that we are at the very pinnacle of that.

We are purely performance-based, which means those race cars and power trains need to be the most efficient and the quickest. All of the technology and everything that's learned from the manufacturers doing Formula E eventually filters back into their organizations. It helps them understand where they can improve and what the main challenges are for their electrification and electric mobility efforts.

Gardner: There is also an auspicious timing element here. You are pursuing the refinement and optimization of electric motorsports at the same time that artificial intelligence (AI) and machine learning (ML) technologies are becoming more pervasive, more accessible, and brought right to the very edge … such as on a steering wheel.

Is there an opportunity for you to also highlight the use of such intelligence technologies? Will data analytics start to infer what should be happening next, rather than just people analyzing data? Is there a new chapter, if you will, in how AI can come to bear on your quest for the Formula E best?

AI accelerates data 

Wolff: A new chapter is just beginning. Certainly, in some of the conversations we’ve had with our partners — and particularly with HPE — it’s like opening up a treasure chest, because the one thing we are very good at in motorsports is generating lots of data.

The one thing we are limited in — purely down to manpower, time, and resources — is analyzing the data. There is only so much we have capacity for. And with AI there are a couple of examples that I wouldn't even want to share, because I wouldn't want my competitors to know what's possible.

There are a couple of examples where we have seen that AI can crunch the numbers in a matter of seconds and spit out the results. I can't even comprehend how long it would take us to get to those numbers otherwise. It's a clear example of how much AI is going to accelerate our learning on the data side — and, particularly because it's software, there's so much analysis of the data needed to bring new levels of performance. For us it's going to be a game changer, and we are only at the start.

It’s incredibly exciting but also so important to make sure that we are getting it right. There is so much possibility that if we don’t get it right, there could be big areas that we could end up losing on.

Gardner: Perhaps soon, race spectators will not only be watching the cars and how fast they are going. Perhaps there will be a dashboard that provides views of the AI environment's performance, too. It could be a whole new type of viewer experience — when you're looking at what the AI can do as well as the car. Whoever thought that AI would be a spectator sport?

Wolff: It’s true and it’s not far away. It’s very exciting to think that that could be coming.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


The budding storage relationship between HPE and Cohesity brings the best of startup innovation to global enterprise reach

The next BriefingsDirect enterprise storage partnership innovation discussion explores how the best of startup culture and innovation can be married to the global reach, maturity, and solutions breadth of a major IT provider.

Stay with us to unpack the budding relationship between an upstart in the data management space, Cohesity, and venerable global IT provider Hewlett Packard Enterprise (HPE).

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about the latest in total storage efficiency strategies and HPE’s Pathfinder program we welcome Rob Salmon, President and Chief Operating Officer at Cohesity in San Jose, California, and Paul Glaser, Vice President and Head of the Pathfinder Program at HPE. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Paul, how have technology innovation, the nature of startups, and the pace of business today made a deep relationship between HPE and Cohesity the right fit for your mutual customers?

Paul Glaser

Glaser

Glaser: That’s an interesting question, Dana. To start, the technology ecosystem and the startup ecosystem in Silicon Valley, California — as well as other tech centers on a global basis — fuel the fire of innovation. And so, the ample funding that’s available to startups, the research that’s coming out of top tier universities such as Stanford, Carnegie Mellon, or MIT out on the East Coast, fuels a lot of interesting ideas — disruptive ideas that lead their way into small startups.

The challenge for HPE, as a large, global technology player, is to figure out how to tap into the ecosystem of startups and the new disruptive technologies coming out of the universities and from serial entrepreneurs; to foster and embrace that; and to deliver those solutions and technologies to our customers.

Gardner: Paul, please describe the Pathfinder thesis and approach. What does it aim to do?

Insight, investment, and solutions

Glaser: Pathfinder, at the top level, is the venture capital (VC) program of HPE, and it can be subdivided into three core functions: insight, investments, and solutions. The insight component acts like a center of excellence; it keeps a finger on the pulse, if you will, of disruptive innovation in the startup community. It helps HPE as a whole interact with the startup and VC communities, and it identifies and curates leading technology innovations that we can ultimately deliver to our customers.

The second component is investments. It's fairly straightforward: we act like a VC firm, taking small equity stakes in some of these startup companies.

And third, solutions. For the companies that are in our portfolio, we work with them to make introductions to product and technical organizations inside of HPE, fostering dialogue from a product evolution perspective and a solution perspective. We intertwine HPE’s products and technologies with the startup technology to create one-plus-one-equals-three. And we deliver that solution to customers and solve their challenges from a digital transformation perspective.

Gardner: How many startup companies are we talking about? How many in a year have typically been included in Pathfinder?

Glaser: We are a very focused program, so we align around the strategies for HPE. Because of that close collaboration with our portfolio companies and the business units, we are limited to about eight investments or new portfolio companies on an annual basis.

Today, the four-and-a-half-year-old program has about two dozen companies in the portfolio. We expect to add another eight over the next 12 months.

Gardner: Rob, tell us about Cohesity and why it’s such a good candidate, partner, and success story when it comes to the Pathfinder program.

Rob Salmon

Salmon

Salmon: Cohesity is a five-year-old company focused on data management for about 70 to 80 percent of all the data in an enterprise today. This is for large enterprises trying to figure out the next great thing to make them more operationally efficient, and to give them better access to data.

Companies like HPE are doing exactly the same thing, looking to figure out how to bring new conversations to their customers and partners. We are a software-defined platform. The company was founded by Dr. Mohit Aron, who has spent his entire 20-plus-year career working on distributed file systems. He is one of the key architects of the Google File System and co-founder of Nutanix. The hyperconverged infrastructure (HCI) movement, really, was his brainchild.

He started Cohesity five years ago because he realized there was a new, better way to manage large sets of data — not only in the data protection space, but for file services, test/dev, and analytics. The company has been selling the product for more than two and a half years, and we've been partners with Paul and the HPE Pathfinder team for more than three years now. It's been quite a successful partnership between the two companies.

Gardner: As I mentioned in my set-up, Rob, speed-to-value is the name of the game for businesses today. How have HPE and Cohesity together been able to help each other be faster to market for your customers?

One plus one equals three

Salmon: The partnership is complementary. What HPE brings to Cohesity is experience and reach. We get a lot of value from working with Paul, his team, and the entire executive team at HPE to bring our product and solutions to market.

When we think about the combination of products from HPE and Cohesity, one plus one equals three — or more. That's what customers are seeing as well. The largest customers we have in the world running Cohesity solutions run them on HPE's platform.

HPE brings credibility to a company of our size, in all areas of the world, and with large customers. We just could not do that on our own.

Gardner: And how does working with HPE specifically get you into these markets faster?

Salmon: In fact, we just announced an original equipment manufacturer (OEM) relationship with HPE whereby they are selling our solutions. We’re very excited about it.

Simplify Secondary Storage with HPE and Cohesity

I can give you a great example. I met with one of the largest healthcare providers in the world a year ago. They loved hearing about the solution. The question they had was, “Rob, how are you going to handle us? How will you support us?” And they said, “You are going to let us know, I’m sure.”

They immediately introduced me to the general manager of their account at HPE. We took that support question right off the table. Everything has been done through HPE. It’s our solution, wrapped around the broad support services and hardware capabilities of HPE. That made for a total solution for our customers, because that’s ultimately what these kinds of customers are looking for.

They are not just looking for great, new innovative solutions. They are looking for how they can roll that out at scale in their environments and be assured it’s going to work all the time.

Gardner: Paul, HPE has had huge market success in storage over the past several years, being on the forefront of flash and of bringing intelligence to how storage is managed on a holistic basis. How does the rest of storage, the so-called secondary level, fit into that? Where do you see this secondary storage market’s potential?

Glaser: HPE’s internal product strategy has been around the primary storage capability. You mentioned flash, so such brands as 3PAR and Nimble Storage. That’s where HPE has a lot of its own intellectual property today.

On the secondary storage side, we’ve looked to partners to round out our portfolio, and we will continue to do so going forward. Cohesity has become an important part of that partner portfolio for us.

But we think about more than just secondary storage from Cohesity. It’s really about data management. What does the data management lifecycle of the future look like? How do you get more insights on where your data is? How do you better utilize that?

Cohesity and that ecosystem will be an important part of how we think about rounding out our portfolio and addressing what is a tens of billions of dollars market opportunity for both companies.

Gardner: Rob, let’s dig into that total data management and lifecycle value. What are the drivers in the market making a holistic total approach to data necessary?

Cohesity makes data searchable, usable 

Salmon: When you look at the sheer size of the datasets enterprises are dealing with today, there is an enormous data management and copy problem. You have islands of infrastructure set up for different secondary data and storage use cases. Oftentimes the end users don't know where to look, and the data may be in the wrong place. After a time, the data has to be moved.

The Cohesity platform indexes the data on ingest. We therefore have Google-like search capabilities across the entire platform, regardless of the use-case and how you want to use the data.

When we think about the legacy storage solutions out there for data protection, for example, all you can do is protect the data. You can't do anything else. You can't glean any insights from that data. Because we index on ingest, we are able to provide insights into the data and metadata in ways customers and enterprises have never seen before. As we think about the opportunity, the larger the datasets running on the Cohesity platform, the more insight customers can have into their data.
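Conceptually, "indexing on ingest" means building the search structures as data lands rather than scanning it later. The minimal inverted-index sketch below illustrates that general idea only; it is not Cohesity's implementation, and the object and field names are hypothetical.

```python
from collections import defaultdict

class IngestIndex:
    """Toy index: as each object is ingested, its metadata terms become searchable."""
    def __init__(self):
        self.postings = defaultdict(set)   # term -> set of object ids
        self.objects = {}

    def ingest(self, object_id: str, metadata: dict):
        """Store the object's metadata and index every term it contains."""
        self.objects[object_id] = metadata
        for value in metadata.values():
            for term in str(value).lower().split():
                self.postings[term].add(object_id)

    def search(self, query: str):
        """Return objects matching every term in the query (AND semantics)."""
        terms = query.lower().split()
        if not terms:
            return []
        hits = set.intersection(*(self.postings.get(t, set()) for t in terms))
        return [self.objects[i] for i in hits]

idx = IngestIndex()
idx.ingest("backup-001", {"source": "vm-finance-01", "type": "backup", "owner": "finance"})
idx.ingest("share-113", {"source": "nas-eng", "type": "file share", "owner": "engineering"})
print(idx.search("finance backup"))   # finds backup-001 without re-scanning the data
```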

And it’s not just about our own applications. We recently introduced a marketplace where applications such as Splunk reside and can sit on top and access the data in the Cohesity platform. It’s about bringing compute, storage, networking, and the applications all together to where the data is, versus moving data to the compute and to the applications.

Gardner: It sounds like a solution tailor-made for many of the new requirements we’re seeing at the edge. That means massive amounts of data generated from the Internet of things (IoT) and the industrial Internet of things (IIoT). What are you doing with secondary storage and data management that aligns with the many things HPE is doing at the edge?

Seamless at the edge

Salmon: When you think about both the edge and the public cloud, the beauty of a next-generation solution like Cohesity is we are not redesigning something to take advantage of the edge or the public clouds. We can run a virtual edition of our software at the edge, and in public cloud. We have a multiple-cloud offering today.

So, from the edge all the way to on-premises and into public clouds it’s a seamless look at all of your data. You have access and visibility to all of the data without moving the data around.

Gardner: Paul, it sounds like there’s another level of alignment here, and it’s around HPE’s product strategies. With HPE InfoSightOneView — managing core-to-edge issues across multiple clouds as well as a hybrid cloud — this all sounds quite well-positioned. Tell us more about the product strategy synergy between HPE and Cohesity.

Glaser: Dana, I think you hit it spot-on. HPE CEO Antonio Neri talks about a strategy for HPE that's edge-centric, cloud-enabled, and data-driven. As we think about building our infrastructure capabilities — both for on-premises data centers and extending out to the edge — we are looking for partners that can provide that software layer, in this case the data management capability, that extends our product portfolio across that hybrid cloud experience for our customers.

As you think about a product strategy for HPE, you really step up to the macro strategy, which is, how do we provide a solution for our customers that allows us to span from the edge all the way to the core data center? We look at partners that have similar capabilities and similar visions. We work through the OEMs and other types of partnership arrangements to embed that down into the product portfolio.

Gardner: Rob, anything to offer additionally on the alignment between Cohesity and HPE, particularly when it comes to the data lifecycle management?

Salmon: The partnership started with Pathfinder, and we are absolutely thrilled with the partnership we have with HPE’s Pathfinder group. But when we did the recent OEM partnership with HPE, it was actually with HPE’s storage business unit. That’s really interesting because as you think about competing or not, we are working directly with HPE’s storage group. This is very complementary to what they are doing.

We understand our swim lane. They understand our swim lane. And yet this gives HPE a far broader portfolio into environments where they are looking at what the competitors are doing. They are saying, “We now have a better solution for what we are up to in this particular area by working with Cohesity.”

We are excited not just to work with the Pathfinder group but by the opportunity we have with Antonio Neri’s entire team. We have been welcomed into the HPE family quite well over the last three years, and we are just getting started with the opportunity as we see it.

Gardner: Another area that is top-of-mind for businesses is not just the technology strategy, but the economics of IT and how it’s shifted given the cloud, Software as a Service (SaaS), and pay-on-demand models. Is there something about what HPE is doing with its GreenLake Flex Capacity approach that is attractive to Cohesity? Do you see the reception in your global market improved because of the opportunity to finance, acquire, and consume IT in a variety of different ways?

Flexibility increases startups’ strength 

Salmon: Without question! Large enterprises want to buy it the way they want to buy it, whether that's perpetual licenses or a subscription model. They want to dictate how it will be used in their environments. By working with HPE and GreenLake, we are able to offer the flexible options required to win in this market today.

Gardner: Paul, any thoughts about the economics of consuming IT and how Pathfinder might be attractive to more startups because of that?

Glaser: There are two points Rob touched on that are important. One, working with HPE as a large company, it’s a journey. As a startup you are looking for that introduction or that leg up that gives you visibility across the global HPE organization. That’s what Pathfinder provides. So, you start working directly with the Pathfinder organization, but then you have the ability to spread out across HPE.

For Cohesity, it’s led to the OEM agreement with the storage business unit. It is the ability to leverage different consumption models utilizing GreenLake, and some of our flexible pricing and flexible consumption offers.

The second point is that Amazon Web Services has conditioned customers to think about pay-per-use. Customers are asking for that, and they are looking for flexibility. As a startup, it can be hard to figure out how to provide that capability economically. Being able to partner with HPE and Pathfinder, and to utilize GreenLake or some of our other tools, really gives them a leg up in the conversation with customers. It helps customers trust that the solution will be there and that somebody will stand behind it in the coming years.

Gardner: Before we close out, I would like to peek in the crystal ball for the future. When you think about the alignment between Cohesity and HPE, and when we look at what we can anticipate — an explosion of activity at the edge and rapidly growing public cloud market — there is a gorilla in the room. It’s the new role for inference and artificial intelligence (AI), to bring more data-driven analytics to more places more rapidly.

Any thoughts about where the relationship between HPE and Cohesity will go when it comes to an AI product strategy?

AI enhances data partnership

Salmon: You touched earlier, Dana, on HPE InfoSight, and we are really excited about the opportunity to partner even closer with HPE on it. That’s an incredibly successful product in its own right. The opportunity for us to work closer and do some things together around InfoSight is exciting.

On the Cohesity side, we talk a lot about not just AI but machine learning (ML) and where we can go proactively to give customers insights into not only the data, but also the environment itself. It can be very predictive. We are working incredibly hard on that right now. And again, I think this is an area that is really just getting started in terms of what we are going to be able to do over a long period of time.

Gardner: Paul, anything to offer on the AI future?

Glaser: Rob touched on the immediate opportunity for the two companies to work together, which is around HPE InfoSight and marrying our capabilities in terms of predictability and ML around IT infrastructure and creative solutions around that.

As you extend the vision to being edge-centric, as you look into the future where applications become more edge-centric and compute is going to move toward the data at the edge, the lifecycle of what that data looks like from a data management perspective at the edge — and where it ultimately resides — is going to become an interesting opportunity. Some of the AI capabilities can provide insight on where the best place is for that computation, and for that data, to live. I think that will be interesting down the road.

As you extend the vision to being edge-centric, compute is going to move toward the data at the edge. The lifecycle of what that data looks like from a data management perspective at the edge is an interesting opportunity.

Gardner: Rob, for other startups that might be interested in working with a big vendor like HPE through a program like Pathfinder, any advice that you can offer?

Salmon: As a startup, you know you are good at something, and it’s typically around the technology itself. You may have a founder like Mohit Aron, who is absolutely brilliant in his own right in terms of what he has already done in the industry and what we are going to continue to do. But you have got to do all the building around that brilliance and that technology and turn it into a true solution.

And again, back to this notion of a solution: the solution needs global scale. It's about giving customers not just one experience with you, but the support they expect from the enterprises that stand behind them. You can learn a lot from working with large enterprises. They may not be the ones to tell you exactly how to code your product — we have that figured out with the brilliance of Mohit and the engineering team around him. But as we think about getting to scale, and scaling the operation, leaning on someone like the Pathfinder group at HPE has helped us an awful lot.

Salmon: The other great thing about working with the Pathfinder group is, as Paul touched on earlier, they work with other portfolio companies. They are working with companies that may be in a slightly different space than we are, but that are facing similar challenges to ours.

How do you grow? How do you open up a market? How do you look at bringing the product to market in different ways? We talked about consumption pricing and the new consumption models. Since the Pathfinder team is experiencing that with other companies, on top of what HPE has already done itself, we can benefit from that experience. So leveraging a large enterprise like HPE and the Pathfinder group, for what they know and what they are good at, has been invaluable to Cohesity.

Gardner: Paul, for those organizations that might want to get involved with Pathfinder, where should they go and what would you guide them to in terms of becoming a potential fit?

Glaser: I’d just point them to hewlettpackardpathfinder.com. You can find information on the program there, contact information, portfolio companies, and that type of thing.

We also put out a set of perspectives that describe some of our investment theses, so you can see our areas of interest. At a high level, we look for companies that are aligned to HPE's core strategies, which center on building up the hybrid IT business as well as the intelligent edge.

So we have those specific swim lanes from a strategic perspective. Second, we are looking for companies that have demonstrated success from a product perspective, perhaps with a couple of initial customer wins, and that now need help to scale the business. Those are the types of opportunities we are looking for.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

HPE’s Erik Vogel on what’s driving success in hybrid cloud adoption and optimization

The next BriefingsDirect Voice of the Innovator discussion explores the latest insights into hybrid cloud success strategies.

As with the often ad hoc adoption of public cloud services by various groups across an enterprise, getting the right mix and operational coordination required of true hybrid cloud cannot be successful if it’s not well managed. While many businesses recognize there’s a hybrid cloud future, far fewer are adopting a hybrid cloud approach with due diligence, governance, and cost optimization.

Stay with us as we examine the innovation maturing around hybrid cloud models and operations and learn how proper common management of hybrid cloud can make or break the realization of its promised returns.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

Here to explain how to safeguard successful hybrid cloud deployments and operations is Erik Vogel, Global Vice President of Hybrid IT and Cloud at Hewlett Packard Enterprise (HPE). The interview is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: The cloud model was very attractive, people jumped into it, but like with many things, there are unintended consequences. What’s driving cloud and hybrid cloud adoption, and what’s holding people back?

Vogel: All enterprises are hybrid at this point, and whether they have accepted that realization depends on the client. But pretty much all of them are hybrid. They are all using a combination of on-premises, public cloud, and software-as-a-service (SaaS) solutions. They have brought all of that into the enterprise. There are very few enterprises we talk to that don’t have some hybrid mix already in place.

Hybrid is here, but needs rationalization

But when we ask them how they got there, most have done it in an ad hoc fashion. Most have had developers who went out to one or multiple hyperscale cloud providers, or business units that went out and started to consume SaaS solutions, or IT organizations that built their own on-premises solutions, whether that's an open private cloud or a Microsoft Azure Stack environment.

They have done all of this in pockets within the organization. Now, they are seeing the challenge of how to start managing and operating it in a consistent, common fashion. There are a lot of different solutions and technologies, yet each one has its own operating model, its own consoles, and its own rules to work within.

And that is where we see our clients struggling. They don’t have a holistic strategy or approach to hybrid, but rather they’ve done it in this bespoke or ad hoc fashion. Now they realize they are going to have to take a step back to think this through and decide what is the right approach to enforce common governance and gain common management and operating principles, so that they’re not running 5, 6, 8 or even 10 different operating models. Rather, they need to ask, “How do we get back to where we started?” And that is a common operating model across the entire IT estate.

Gardner: IT traditionally over the years has had waves of adoption that led to heterogeneity that created complexity. Then that had to be managed. When we deal with multicloud and hybrid cloud, how is that different from the UNIX wars, or distributed computing, and N-tier computing? Why is cloud a more difficult heterogeneity problem to solve than the previous ones?

Vogel: It’s more challenging. It’s funny, we typically referred to what we used to see in the data center as the  Noah’s Ark data center. You would typically walk into a data center and you’d see two of everything, two of every vendor, just about everything within the data center.

How to Better Manage Multicloud Sprawl

And it was about 15 years ago when we started to consolidate all of that into common infrastructures, common platforms to reduce the operational complexity. It was an effort to reduce total cost of ownership (TCO) within the data center and to reduce that Noah’s Ark data center into common, standardized elements.

Now that pendulum is starting to swing back. It’s becoming more of a challenge because it’s now so easy to consume non-standard and heterogeneous solutions. Before there was still that gatekeeper to everything within the data center. Somebody had to make a decision that a certain piece of infrastructure or component would be deployed within the data center.

Now, developers can go to a cloud and consume, with just the swipe of a credit card, any of the three or four hyperscale cloud solutions and literally thousands of SaaS solutions. Just look at the Salesforce.com platform and all of the different options that surround it.

All of a sudden, we lost the gatekeeper. Now we are seeing sprawl toward more heterogeneous solutions occurring even much faster than what we saw 10 or 15 years ago with the Noah’s Ark data center.

The pendulum is definitely shifting back toward consuming lots of different solutions with lots of different capabilities and services. And we are seeing it moving much faster than it did before because of that loss of a gatekeeper.

Gardner: Another difference is that we're talking mostly about services. By consuming things as services, we're acquiring them not as a capital expenditure with a three- to five-year renewal cycle; this is on-demand consumption, paid as you use it.

That makes it more complicated, but it also makes it a problem that can be solved more easily. Is there something about the nature of an all-services hybrid and multicloud environment on an operations budget that makes it more solvable?

Services become the norm 

Vogel: Yes, absolutely. The economics definitely play into this. I have this vision that within the next five years, we will no longer call things “as a service” because it will be the norm, the standard. We will only refer to things that are not as a service, because as an industry we are seeing a push toward everything being consumed as a service.

From an operating standpoint, the idea of consuming and paying for only what we use is very, very attractive. Again, if you look back 10 or 15 years, typically within a data center, we’d be buying for a three- or four-year lifespan. That forced us to make predictions as to what type of demand we would be placing on capital expenditures.

And what would happen? We would always overestimate. If you looked at utilization of CPU, of disk, of memory, they were always 20 to 25 percent; very low utilization, especially pre-virtualization. We would end up overbuying, pay the full load, and still pay for full maintenance and support, too.

There was very little ability to dial that up or down. The economic capability of being able to consume everything as a service is definitely changing the game, even for things you wouldn’t think of as a service, such as buying a server. Our enterprise customers are really taking notice of that because it gives them the ability to flex the expenditures as their business cycles go up and down.

Rarely do we see enterprises with constant demand for compute capacity. So, it's very nice for them to be able to flex that up and down, adjust to the normal seasonal effects within a business, and let that operating expense move as the business fluctuates.

That is a key driver of moving everything to an as-a-service model, giving flexibility that just a few years ago we did not have.

Gardner: The good news is that these are services — and we can manage them as services. The bad news is these are services coming from different providers with different economic and consumption models. There are different application programming interfaces (APIs), stock keeping unit (SKU) definitions, and management definitions that are unique to their own cloud organization. So how do we take advantage of the fact that it’s all services but conquer the fact that it’s from different organizations speaking, in effect, different languages?

Vogel: You’re getting to the heart of the challenge in terms of managing a hybrid environment. If you think about how applications are becoming more and more composed now, they are built with various different pieces, different services, that may or may not be on-premises solutions.

One of our clients, for example, has built an application for their sales teams that provides real-time client data and client analytics before a seller goes in and talks to a customer. And when you look at the complexity of that application, they are using Salesforce.com, they have an on-premises customer database, and they get point of sales solutions from another SaaS provider.

Why You Need Everything as a Service

They also have analytics engines they get from one of the cloud hyperscalers. And all of this comes together to drive a mobile app that presents all of this information seamlessly to their end-user seller in real-time. They become better armed and have more information when they go meet with their end customer.

These new applications, or services (I don't even call them applications because they are really services built from multiple applications), cross multiple service providers, multiple SaaS providers, and multiple hyperscalers.

And as you look at how we interface and connect with those, how we pass data, exchange information across these different service providers, you are absolutely right, the taxonomies are different, the APIs are different, the interfaces and operations challenges are different.

When that seller goes to make that call, and they bring up their iPad app and all of a sudden, there is no data or it hasn’t been refreshed in three months, who do you call? How do you start to troubleshoot that? How do you start to determine if it’s a Salesforce problem, a database problem, a third-party service provider problem? Maybe it’s my encrypted connection I had to install between Salesforce and my on-premises solution. Maybe it’s the mobile app. Maybe it’s a setting on the iPad itself.

All of that complexity adds up, and that's what creates the problem. We don't have consistent APIs, consistent taxonomies, or even a consistent way of looking at billing and the underlying components of billing. And when we break that out, it varies greatly between service providers.

This is where we see the complexity of hybrid IT. We have all of these different service providers, all working and operating independently, yet we're trying to bring them together to provide end-customer services. Composing those different services creates one of the biggest challenges we have today within hybrid cloud environments.
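To make that composition problem more concrete, here is a minimal sketch, in Python, of how an operations team might poll each piece of a composed service to narrow down which provider is at fault. The component names and health-check URLs are hypothetical placeholders, not endpoints from Cerner, HPE, or any specific provider.

```python
import urllib.request
import urllib.error

# Hypothetical health endpoints for each piece of a composed sales application.
# Real endpoints, authentication, and response formats would differ per provider.
COMPONENTS = {
    "salesforce_crm":        "https://status.example-salesforce.invalid/health",
    "onprem_customer_db":    "https://dbgateway.internal.example/health",
    "pos_saas_feed":         "https://pos-provider.example.invalid/health",
    "hyperscaler_analytics": "https://analytics.example-cloud.invalid/health",
}

def check(url: str, timeout: float = 3.0) -> str:
    """Return a coarse status string for one component endpoint."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return "ok" if resp.status == 200 else f"degraded (HTTP {resp.status})"
    except (urllib.error.URLError, TimeoutError) as exc:
        return f"unreachable ({exc.__class__.__name__})"

if __name__ == "__main__":
    # Print a one-line status per component so the failing provider stands out.
    for name, url in COMPONENTS.items():
        print(f"{name:24s} {check(url)}")
```

Even a crude report like this narrows the "who do you call?" question to one provider before anyone opens a support ticket.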

Gardner: Even if we solve the challenge on the functional level — of getting the apps and services to behave as we want — it seems as much or more a nightmare for the chief financial officer (CFO) who has to determine whether you're getting a good deal or buying redundancy across different cloud providers. A lot of times in procurement you cut a deal on volume. But how do you do that if you don't know what you're buying from whom?

How do we pay for these aggregate cloud services in some coordinated framework with the least amount of waste?

How to pay the bills

Vogel: That is probably one of the most difficult jobs within IT today, the finance side of it. There are a lot of challenges in putting that bill together. What does that bill really look like? And not just at an individual component level. I may be able to see what I'm paying Amazon Web Services (AWS) or what Azure Stack is costing me. But how do we aggregate that? What is the cost to provide a service? This has been a challenge for IT forever; it's always been difficult to slice costs by service.

We knew what compute cost, what the network cost, and what storage cost. But it was always difficult to make that vertical slice across the budget. And now we have made that problem worse, because we have all of these different bills coming in from all of these different service providers.

The procurement challenge is even more acute because now we have all of these different service providers. How do we know what we are really paying? Developers swipe credit cards, so IT doesn't even see the bill or get a true accounting of what's being spent across the public clouds. It comes through as a credit card expense and never really reaches IT.
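As a rough illustration of that missing "vertical slice," the following sketch normalizes billing line items from several providers into a common shape and totals spend by the business service they support. The field names and figures are invented for the example; real exports such as cloud billing reports use different schemas and would need mapping first.

```python
from collections import defaultdict

# Hypothetical, already-normalized billing line items. Real exports (AWS cost
# reports, Azure cost exports, on-premises metering) use different field names.
raw_line_items = [
    {"provider": "aws",      "tag_service": "sales-insights", "usd": 1250.40},
    {"provider": "azure",    "tag_service": "sales-insights", "usd": 310.00},
    {"provider": "onprem",   "tag_service": "sales-insights", "usd": 980.25},
    {"provider": "aws",      "tag_service": "data-warehouse", "usd": 4100.10},
    {"provider": "saas-pos", "tag_service": "sales-insights", "usd": 499.00},
]

def cost_by_service(items):
    """Aggregate spend per business service, regardless of which provider billed it."""
    totals = defaultdict(float)
    for item in items:
        totals[item["tag_service"]] += item["usd"]
    return dict(totals)

if __name__ == "__main__":
    for service, total in sorted(cost_by_service(raw_line_items).items()):
        print(f"{service:20s} ${total:,.2f}")
```

The hard part in practice is not the arithmetic but the tagging discipline that makes the `tag_service` field trustworthy across every provider.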

We need to get our hands around these different expenses, where we are spending money, and think differently about our procurement models for these services.

In the past, we talked about this as a brokerage but it’s a lot more than that. It’s more about strategic sourcing procurement models for cloud and hybrid cloud-related services.

Our IT procurement models have to change to address the problem of how we really know what we are paying for. Are we getting the strategic value out of the expenses within hybrid that we had expected?

It’s less about brokerage and looking for that lowest-cost provider and trying to reduce the spend. It’s more about, are we getting the service-level agreements (SLAs) we are paying for? Are we getting the services we are paying for? Are we getting the uptime we are paying for?

Gardner: In business over the years, when you have a challenge, you can try to solve it yourself and employ intelligence technologies to tackle complexity. Another way is to find a third-party that knows the business better than you do, especially for small- to medium-sized businesses (SMBs).

Are we starting to see an ecosystem develop where the consumption model for cloud services is managed more centrally, and then those services are repurposed and resold to the actual consumer business?

Third-parties help hybrid manage costs 

Vogel: Yes, I am definitely starting to see that. A lot is being developed to help customers consume and buy these services and be smarter about it. I always joke that the cheapest thing you can buy is somebody else's experience, and that is absolutely the case when it comes to hybrid cloud services providers.

The reality is no enterprise can have expertise in all three of the hyperscalers, in all of the hundreds of SaaS providers, for all of the on-premises solutions that are out there. It just doesn’t exist. You just can’t do it all.

It really becomes important to look for people who can aggregate this capability and bring the collective experience back to you. You have to reduce overspend and make smarter purchasing decisions. Buying via these third-party services can help you prevent things like lock-in and reduce risk. There is tremendous value being created by the firms that are jumping into that model and helping clients address these challenges.

The third-parties have people who have actually gone out and consumed and purchased within the hyperscalers, who have run workloads within those environments, and who can help predict what the true cost should be — and, more importantly, maintain that optimization going forward.

How to Remove Complexity From Multicloud and Hybrid IT

It’s not just about going in and buying anymore. There is ongoing optimization that has to incur, ongoing cost optimization where we’re continuously evaluating about the right decisions. And we are finding that the calculus changes over time.

So, while it might have made a lot of sense to put a workload, for example, on-premises today, based on the demand for that application and on pricing changes, it may make more sense to move that same workload off-premises tomorrow. And then in the future it may also make sense to bring it back on-premises for a variety of reasons.

You have to constantly be evaluating that. That's where a lot of the firms playing in this space can add value now, helping with ongoing optimization and making sure that we are always making the smart decision. It's a very dynamic ecosystem, and the calculus, the metrics, are constantly changing. We have the ability to constantly reevaluate. That's the beauty of cloud: the ability to flex between these different providers.

Gardner: Erik, for those organizations interested in getting a better handle on this, are there any practical approaches available now?

The right mix of data and advice 

Vogel: We have a tool, called HPE Right Mix Advisor, which lets us go in and assess very large application portfolios. The nice thing is, it scales up and down very nicely. It is delivered in a service model, so we are able to go in, assess a set of applications against the weighted decision variables I mentioned, and come up with a concrete list of recommendations as to what our clients should do right now.

In fact, we like to talk not about the thousand things they could do, but about the 10 or 20 things they should start on tomorrow morning, the ones that are most impactful for their business.

The Right Mix Advisor tool helps identify those things that matter the most for the business right now, and provides a tactical plan to say, “This is what we should start on.”

And it’s not just the tool, we also bring our expertise, whether that’s from our Cloud Technology Partners (CTP) acquisition, RedPixie, or our existing HPE business where we have done this for years and years. So, it’s not just the tool, but also experts, looking at that data, helping to refine that data, and coming up with a smart list that makes sense for our clients to get started on right now.

And of course, once they have accomplished those things, we can come back and look at it again and say, “Here is your next list, the next 10 or 20 things.” And that’s really how Right Mix Advisor was designed to work.

Gardner: It seems to me there would be a huge advantage if you were able to get enough data at the market level, that is, to aggregate how the cloud providers are selling and charging, and what the consumption patterns are.

If you were in a position to gather all of the data about enterprise consumption among and between the cloud providers, you would have a much better idea of how to procure properly, manage properly, and optimize. Is such a data well developing? Is there anyone in the right position to be able to gather the data and start applying machine learning (ML) technologies to develop predictions about the best course of action for a hybrid cloud or hybrid IT environment?

Vogel: Yes. In fact, we have started down that path. HPE has started to tackle this by developing an expert system, a set of logic rules that helps make those decisions. We did it by combining a couple of fairly large datasets that we have developed over the last 15 years, primarily with HPE’s history of doing a lot of application migration work. We really understand on the on-premises side where applications should reside based on how they are architected and what the requirements are, and what type of performance needs to be derived from that application.

We have combined that with other datasets from some of our recent cloud acquisitions, CTP and RedPixie, for example. That has brought us a huge wealth of information based on a tremendous number of application migrations to the public clouds. And we are able to combine those datasets and develop this expert system that allows us to make those decisions pretty quickly as to where applications should reside based on a number of factors. Right now, we look at about 60 different variables.

But what’s really important when we do that is to understand from a client’s perspective what matters. This is why I go back to that strategic sourcing discussion. It’s easy to go in and assume that every client wants to reduce cost. And while every client wants to do that — no one would ever say no to that — usually that’s not the most important thing. Clients are worried about performance. They also want to drive agility, and faster time to market. To them that is more important than the amount they will save from a cost-reduction perspective.

The first thing we do when we run our expert system is go in and weight the variables based on what's important to that specific client, aligned to their strategy. This is where it gets challenging for any enterprise trying to make smart decisions. In order to make strategic sourcing decisions, you have to understand strategically what's important to your business. You have to make intelligent decisions about where workloads should go across the hybrid IT options that you have. So we run an expert system to help make those decisions.
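A toy version of that weighting step might look like the sketch below. It is not the HPE Right Mix Advisor logic, only an illustration of scoring placement options against criteria weighted by what matters to a given client; the option names, criteria, and scores are made up, and the real system reportedly considers about 60 variables.

```python
# Hypothetical placement options scored on a handful of normalized criteria (0 to 1).
# The names and values are illustrative only.
OPTIONS = {
    "on_premises":  {"cost": 0.6, "agility": 0.4, "performance": 0.9, "compliance": 0.9},
    "public_cloud": {"cost": 0.7, "agility": 0.9, "performance": 0.7, "compliance": 0.6},
    "saas":         {"cost": 0.8, "agility": 0.8, "performance": 0.6, "compliance": 0.7},
}

def recommend(client_weights: dict) -> list:
    """Rank placement options by the weighted sum of criteria, highest score first."""
    scored = {
        option: sum(client_weights.get(k, 0.0) * v for k, v in criteria.items())
        for option, criteria in OPTIONS.items()
    }
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

# A client that values agility and time to market over raw cost savings.
weights = {"cost": 0.15, "agility": 0.40, "performance": 0.30, "compliance": 0.15}
for option, score in recommend(weights):
    print(f"{option:14s} {score:.2f}")
```

Changing the weights, not the option data, is what tailors the recommendation to each client's strategy, which is the point being made above.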

Now, as we collect more data, this will move toward more artificial intelligence (AI). I am sure everybody is aware that AI requires a lot of data, and since we are still in the very early stages of true hybrid cloud and hybrid IT, we don't yet have a massive enough dataset to make these decisions in a truly automated, learning-type model.

We started with an expert system to help us do that, to move down that path. But very quickly we are learning, and we are building those learnings into our models that we use to make decisions.

So, yes, there is a lot of value in people who have been there and done that. Being able to bring that data together in a unified fashion is exactly what we have done to help our clients. On your own, these decisions can take a year to figure out, yet you have to be able to make them quickly because it's a very dynamic model. A lot of things are constantly changing. You have to keep loading the models with the latest and greatest data so you are always making the best, smartest decision and always optimizing the environment.

Innovation, across the enterprise 

Gardner: Not that long ago, innovation in a data center was about speeds and feeds. You would innovate on technology and pass along those fruits to your consumers. But now we have innovated on economics, management, and understanding indirect and direct procurement models. We have had to innovate around intelligence technologies and AI. We have had to innovate around making the right choices — not just on cost but on operations benefits like speed and agility.

How has innovation changed such that it used to be a technology innovation but now cuts across so many different dynamic principles of business?

HPE BugVogel: It’s a really interesting observation. That’s exactly what’s happening. You are right, even as recently as five years ago we talked about speeds and feeds, trying to squeeze a little more out of every processor, trying to enhance the speed of the memory or the storage devices.

But now, as we have pivoted toward a services mentality, nobody asks when you buy from a hyperscaler — Google Cloud, for example — what central processing unit (CPU) chips they are running or what the chip speeds are. That’s not really relevant in an as-a-service world. So, the innovation then is around the service sets, the economic models, the pricing models, that’s really where innovation is being driven.

At HPE, we have moved in that direction as well. We provide our HPE GreenLake model and offer a flex-capacity approach where clients can buy capacity on-demand. And it becomes about buying compute capacity. How we provide that, what speeds and feeds we are providing becomes less and less important. It’s the innovation around the economic model that our clients are looking for.

We are only going to continue to see that type of innovation going forward, where it’s less about the underlying components. In reality, if you are buying the service, you don’t care what sort of chips and speeds and feeds are being provided on the back end as long as you are getting the service you have asked for, with the SLA, the uptime, the reliability, and the capabilities you need. All of what sits behind that becomes less and less important.

Think about how you buy electricity. You just expect 110 volts at 60 hertz coming out of the wall, and you expect it to be on all the time. You expect it to be consistent, reliable, and safely delivered to you. How it gets generated, where it gets generated — whether it’s a wind turbine, a coal-burning plant, a nuclear plant — that’s not important to you. If it’s produced in one state and transferred to another over the grid, or if it’s produced in your local state, that all becomes less important. What really matters is that you are getting consistent and reliable services you can count on.

How to Leverage Cloud, IoT, Big Data, and Other Disruptive Technologies

And we are seeing the same thing within IT as we move to that service model. The speeds and feeds, the infrastructure, become less important. All of the innovation is now being driven around the as-a-service model and what it takes to provide that service. We innovate at the service level, whether that’s for flex capacity or management services, in a true as-a-service capability.

Gardner: What do the organizations consuming these services need to think about to be innovative on their side? How can they be in a better position to consume services such as hybrid IT management-as-a-service, hybrid cloud decision making, and right-mix decisions-as-a-service?

What comes next when it comes to how the enterprise IT organization needs to shift?

Business cycles speed IT up 

Vogel: At a business level, within almost every market or industry, we are moving from what used to be slow-cycle business to standard-cycle business, and in a lot of cases from standard-cycle to fast-cycle business. Even businesses that were traditionally slow-cycle or standard-cycle are accelerating, and the underlying technology is driving that.

So every company is a technology company. That is becoming more and more true every day. As a result, it’s driving business cycles faster and faster. So, IT, in order to support those business cycles, has to move at that same speed.

And we see enterprises moving away from a traditional IT model when those enterprises’ IT cannot move at the speed the business is demanding. We will still see IT, for example, take six months to provide a platform when the business says, “I need it in 20 minutes.”

We will see a split between traditional IT and a digital innovation group within the enterprise. This group will be owned by the business unit as opposed to core IT.

So, businesses are responding to IT not being able to move fast enough, and not being able to provide the responsiveness and level of service they need, by looking outside and consuming services externally.

As we move forward, how can clients start to move in this direction? At HPE, as we look at some of the services we have announced and will be rolling out in the next six to 12 months, they are designed to help our clients move faster. They provide operational support and management for hybrid environments and take that burden away from IT, especially where IT may not have the skill sets or capability, while delivering a seamless operating experience to our IT customers. Those customers need to focus on the things that accelerate their business; that is what the business units are demanding.

To stay relevant, IT is going to have to do that, too. They are going to have to look for help and support so that they can move at the same speed and pace that businesses are demanding today. And I don’t see that slowing down. I don’t think anybody sees that slowing down; if anything, we see the pace continuing to accelerate.

When I talk about fast-cycle, services or solutions we put into the market may once have had a shelf life of two to three years; we are seeing that compressed to six months. It's amazing how fast competition comes in, even when we are delivering innovative types of solutions. So, IT has to accelerate at that speed as well.

The HPE GreenLake hybrid cloud offering, for example, gives our clients the ability to operate at that speed by providing managed services capabilities across the hybrid estate. It provides a consistent platform, and then allows them to innovate on top of it. It takes away the management operation from their focus and lets them focus on what matters to the business today, which is innovation.

Gardner: For you personally, Erik, where do you get inspiration for innovation? How do you think out of the box when we can now see that that’s a necessary requirement?

Inspired by others

Vogel: One of the best parts about my job is the time I get to spend with our customers, really understanding what their challenges are and what they are doing. One of the things we look at is adjacent businesses.

We try to learn what is working well in retail, for example. What innovation is there, and what lessons learned can we apply elsewhere? A lot of times the industry shifts so quickly that we don't have all of the answers. We can't take a product-out approach any longer; we have to start from the customer and work back. Having that broad view and looking outside is really helping us. It's where we are getting a lot of our inspiration.

For example, we are really focused on the overall experience that our clients have with HPE, and trying to drive a very consistent, standardized, easy-to-choose type of experience with us as a company. And it’s interesting as an engineering company, with a lot of good development and engineering capabilities, that we tend to look at it from a product-out view. We build a portal that they can work within, we create better products, and we get that out in front of the customer.

But by looking outside, we are saying, “Wait a minute, what is it, for example, about Uber that everybody likes?” It's not necessarily that their app is good. It's about the clean car, about not having to pay when you get out of the car or fumble for a credit card. It's about seeing a map and knowing where the driver is. It's about a predictable cost, where you know what the ride is going to cost. That overall experience is what makes Uber, Uber. It's not just creating an app and saying, “Well, the app is the experience.”

We are learning a lot from adjacent businesses and industries and incorporating that into what we are doing. It's part of that as-a-service mentality: we have to think about the experience our customers are asking for and how we start building solutions that meet that experience requirement, not just the technical requirement. We are very good at meeting the technical requirement, but we also have to meet the experience requirement.

How to Develop Hybrid Cloud Strategies With Confidence

And this has been a real eye-opener for me personally. It has been a really fun part of the job, to look at the experience we are trying to create. How do we think differently? Rather than producing products and putting them out into the market, how do we think about creating that experience first and then designing and creating the solutions that sit underneath it?

When you talk about where we get inspiration, it’s really about looking at those adjacencies. It’s understanding what’s happening in the broader as-a-service market and taking the best of what’s happening and saying, “How can we employ those types of techniques, those tricks, those lessons learned into what we are doing?” And that’s really driving a lot of our development and inspiration in terms of how we are innovating as a company within HPE.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

How total deployment intelligence overcomes the growing complexity of multicloud management

The next BriefingsDirect Voice of the Innovator discussion focuses on the growing complexity around multicloud management and how greater accountability is needed to improve business impacts from all-too-common haphazard cloud adoption.

Stay with us to learn how new tools, processes, and methods are bringing insights and actionable analysis that help regain control over the increasing challenges from hybrid cloud and multicloud sprawl.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to explore a more pragmatic path to modern IT deployment management is Harsh Singh, Director of Product Management for Hybrid Cloud Products and Solutions at Hewlett Packard Enterprise (HPE). The interview is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What is driving the need for multicloud at all? Why are people choosing multiple clouds and deployments?

Singh: That’s a very interesting question, especially today. However, you have to step back and think about why people went to the cloud in the first place – and what were the drivers – to understand how sprawl expanded to a multicloud environment.

Initially, when people began moving to public cloud services, the idea was speed, agility, and quick access to resources. IT was in the way of getting on-premises resources quickly. People said, “Let me get the work going and let me deploy things faster.”

And they were able to quickly launch applications, which increased their velocity and time-to-market. Cloud helped them get there very fast. However, we now have a choice of multiple public cloud environments, as well as private cloud environments where people can do similar things on-premises. There came a time when people realized, “Oh, certain applications fit in certain places better than others.”

From cloud sprawl to cloud smart

For example, if I want to run a serverless environment, I might want to run in one cloud provider versus another. But if I want to run more machine learning (ML), artificial intelligence (AI) kinds of functionality, I might want to run that somewhere else. And if I have a big data requirement, with a lot of data to crunch, I might want to run that on-premises.

So you now have more choices to make. People are thinking about where’s the best place to run their applications. And that’s where multicloud comes in. However, this doesn’t come for free, right?

How to Determine Ideal Workload Placement

As you add more cloud environments and different tools, it leads to what we call tool sprawl. You now have people tying all of these tools together trying to figure out the cost of these different environments. Are they in compliance with the various norms we have within our organization? Now it becomes very complex very fast. It becomes a management problem in terms of, “How do I manage all of these environments together?”

Gardner: It’s become too much of a good thing. There are very good reasons to do cloud, hybrid cloud, and multicloud. But there hasn’t been a rationalization about how to go about it in an organizational way that’s in the best interest of the overall business. It seems like a rethinking of how we go about deploying IT in general needs to be part of it.

Singh: Absolutely right. I see three pillars that need to be addressed in terms of looking at this complexity and managing it well: people, process, and technology. The technology exists, but unless you have the right skill sets in the people, and the right processes in place, it's going to be the Wild West. Everything is just going to be crazy, and in the end you falter, not achieving what you really want to achieve.

I look at people, process, and technology as the three pillars for getting this tool sprawl under control, and addressing all three is absolutely necessary for any company as it traverses its multicloud journey.

Gardner: This is a long-term, thorny problem. And it’s probably going to get worse before it gets better.

Singh: I do see it getting worse, but I also see a lot of people beginning to address these problems. Vendors, including we at HPE, are looking at this problem. We are trying to get ahead of it before a lot of enterprises crash and burn. We have experience with our customers, and we have engaged with them to help them on this journey.

It is going to get worse and people are going to realize that they need professional help. It requires that we work with these customers very closely and take them along based on what we have experienced together.

Gardner: Are you taking the approach that the solution for hybrid cloud management and multicloud management can be done in the same way? Or are they fundamentally different?

Singh: Fundamentally, it’s the same problem set. You must deploy the applications to the right places that are right for your business — whether it’s multicloud or hybrid cloud. Sometimes the terminology blurs. But at the end of the day, you have to manage multiple environments.

You may be connecting private or off-premises hybrid clouds, and maybe they are different clouds. The problem will be the same: you have multiple tools and multiple environments, and the people need training and the processes need to be in place for them to operate properly.

Gardner: What makes me optimistic about the solution is there might be a fourth leg on that stool. People, process, and technology, yes, but I think there is also economics. One of the things that really motivates a business to change is when money is being lost and the business people think there is a way to resolve that.

The economics issue — about cost overruns and a lack of discipline around procurement — is both a part of the problem and the solution.

Economics elevates visibility 

Singh: I am laughing right now because I have talked to so many customers about this. A CIO at an entertainment media company, for example, recently told me she had a problem. They had a cloud-first strategy, but they didn't look at the economics piece of it. She told me she didn't even know where their virtual machines (VMs) and workloads were running.

“At the end of the month, I’m seeing hundreds of thousands of dollars in bills. I am being surprised by all of this stuff,” she said. “I don’t even know whether they are in compliance. The overhead of these costs — I don’t know how to get a handle on it.”

So this is a real problem that customers are facing. I have heard this again and again: They don’t have visibility into the environment. They don’t know what’s being utilized. Sometimes they are underutilized, sometimes they are over utilized. And they don’t know what they are going to end up paying at the end of the day.

A common example: in a public cloud, people will launch a very large number of VMs because that's what they are used to doing, but they consume maybe 10 to 20 percent of that capacity. What they don't realize is that they are paying the whole bill. More visibility is going to be key to getting a handle on the economics of these things.
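The visibility gap he describes can be illustrated with a small report like the following sketch, which flags instances whose average utilization falls well below what is being paid for. The inventory data and the 20 percent threshold are hypothetical, not drawn from any particular cloud's metering API.

```python
# Hypothetical inventory with average CPU utilization over the billing period.
vms = [
    {"name": "web-01",   "provider": "aws",   "monthly_usd": 220.0, "avg_cpu_pct": 12},
    {"name": "batch-07", "provider": "azure", "monthly_usd": 540.0, "avg_cpu_pct": 71},
    {"name": "dev-db",   "provider": "aws",   "monthly_usd": 380.0, "avg_cpu_pct": 8},
]

UNDERUTILIZED_THRESHOLD = 20  # percent; a policy choice, not a hard rule

def underutilized(inventory, threshold=UNDERUTILIZED_THRESHOLD):
    """Return instances that likely cost more than the value they deliver."""
    return [vm for vm in inventory if vm["avg_cpu_pct"] < threshold]

for vm in underutilized(vms):
    print(f"{vm['name']}: {vm['avg_cpu_pct']}% CPU, ${vm['monthly_usd']:.2f}/month "
          f"on {vm['provider']} -- candidate for rightsizing or shutdown")
```

A monthly run of something like this, fed by real metering data, is often the first concrete step toward the visibility described above.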

Gardner: We have seen these kinds of problems before in general business procurement. Many times it's the Wild West, but then they bring it under control. Then they can negotiate better rates as they combine services and look for redundancies. But you can't do that until you know what you're using and what it costs.

So, is the first step getting an inventory of where your cloud deployments are, what the true costs are, and then start to rationalize them?

Guardrails reduce risk, increase innovation

Singh: Absolutely, right. That’s where you start, and at HPE we have services to do that. The first thing is to understand where you are. Get a base level of what is on-premises, what is off-premises, and which applications are required to run where. What’s the footprint that I require in these different places? What is the overall cost I’m incurring, and where do I want to be? Answering those questions is the first step to getting a mixed environment you can control — and get away from the Wild West.

Put in the compliance guardrails so that IT can stay ahead of the problems we are seeing today.

Gardner: As a counterpoint, I don’t think that IT wants to be perceived as the big bad killjoy that comes to the data scientists and says, “You can’t get those clusters to support the data environment that you want.” So how do you balance that need for governance, security, and cost control with not stifling innovation and allowing creative freedom?

How to Transform the Traditional Datacenter

Singh: That’s a very good question. When we started building out our managed cloud solutions, a key criterion was to provide the guardrails yet not stifle innovation for the line of business managers and developers. The way you do that is that you don’t become the man in the middle. The idea is you allow the line of businesses and developers to access the resources they need. However, you put guardrails around which resources they can access, how much they can access, and you provide visibility into the budgets. You still let them access the direct APIs of the different multicloud environments.

You don’t say, “Hey, you have to put in a request to us to do these things.” You have to be more behind-the-scenes, hidden from view. At the same time, you need to provide those budgets and those controls. Then they can perform their tasks at the speed they want and access to the resources that they need — but within the guardrails, compliance, and the business requirements that IT has.

Gardner: Now that HPE has been on the vanguard of creating the tools and methods to get the necessary insights, make the measurements, recognize the need for balance between control and innovation — have you noticed changes in organizational patterns? Are there now centers of cloud excellence or cloud-management bureaus? Does there need to be a counterpart to the tools, of management structure changes as well?

Automate, yet hold hands, too

Singh: These are the process and people parts that you want to address. How do you align your organization, and what are the things that you need to do there? Some of our customers are beginning to make those changes, but organizations are difficult to change on this journey. Some of them are early; some are at a much later stage. A lot of customers, frankly, are still in the early phases of multicloud and hybrid cloud. We are working with them to make sure they understand the changes they'll need to make in order to function properly in this new world.

Gardner: Unfortunately, these new requirements come at a time when cloud management skills — of understanding data and ops, IT and ops, and cloud and ops — are hard to find and harder to keep. So one of the things I’m seeing is the adoption of automation around guidance, strategy, and analysis. The systems start to do more for you. Tell me how automation is coming to bear on some of these problems, and perhaps mitigate the skill shortage issues.

Singh: The tools can only do so much. So you automate. You make sure the infrastructure is automated. You make sure your access to public cloud — or any other cloud environment — is automated.

That can mitigate some of the problems, but I still see a need for hand-holding from time to time in terms of the process and people. That will still be required. Automation will help tie together storage, network, and compute. This [composability] reduces the need for, and dependency on, some of the process and people. Automation mitigates the physical labor and the need for someone to take days to do it. However, you still need the expertise to understand what needs to be done, and this is where HPE is helping.

You might have heard about our HPE GreenLake managed cloud services offerings. We are moving toward an as-a-service model for a lot of our software and tooling. We are using the automation to help customers fill the expertise gap. We can offer more of a managed service by using automation tools underneath it to make our tasks easier. At the end of the day, the customer only sees an outcome or an experience — versus worrying about the details of how these things work.

Gardner: Let’s get back to the problem of multicloud management. Why can’t you just use the tools that the cloud providers themselves provide? Maybe you might have deployments across multiple clouds, but why can’t you use the tools from one to manage more? Why do we need a neutral third-party position for this?

Singh: Take a hypothetical case: I have deployments in Amazon Web Services (AWS) and I have deployments in Google Cloud Platform (GCP). And to make things more complicated, I have some workloads on premises as well. How would I go about tying these things together?

Now, if I go to AWS, they are very, very opinionated on AWS services. They have no interest in looking at builds coming out of GCP or Microsoft Azure. They are focused on their services and what they are delivering. The reality is, however, that customers are using these different environments for different things.

The multiple public cloud providers don't have an interest in managing other clouds or in looking at other environments. So third parties come in to tie everything together, so that no one customer is locked into one environment.

If they go to AWS, for example, they can only look at the billing, services, and performance metrics of that one provider. And each of these cloud providers does a very good job of exposing their own services and providing visibility into them. But they don't tie it across multiple environments. And especially if you throw the on-premises piece into the mix, it's very difficult to look at and compare costs across these multiple environments.

Gardner: When we talk about on-premises, we are not just talking about the difference between your data center and a cloud provider’s data center. We are also taking about the difference between a traditional IT environment and the IT management tools that came out of that. How has HPE crossed the chasm between a traditional IT management automation and composability types of benefits and the higher-level, multicloud management?

Tying worlds together

Singh: It’s a struggle to tie these worlds together from my experience, and I have been doing this for some time. I have seen customers spend months and sometimes years, putting together a solution from various vendors, tying them together, and deploying something on premises and also trying to tie that to an off-premises environment.

At HPE, we fundamentally changed how on-premises and off-premises environments are managed by introducing our own software-as-a-service (SaaS) management environment, which customers do not have to manage. That SaaS environment, a portal, connects the on-premises environments. Since we have a native, programmable, API-driven infrastructure, we were able to connect that. And because the portal is driven from the cloud itself, it was very easy to hook up to other cloud providers like AWS, Azure, and GCP. This capability ties the two worlds together. As you build out the tools, the key is automating the infrastructure piece and being able to connect and manage everything from a centralized portal that ties all of these things together with a click.

Through this common portal, people can onboard their multicloud environments, get visibility into their costs, get visibility into compliance — look at whether they are HIPAA compliant or not, PCI compliant or not — and get access to resources that allow them to begin to manage these environments.
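That compliance visibility could be approximated with checks like the hypothetical sketch below, which scans a unified inventory for resources that violate a single simple control, in this case unencrypted storage inside a scope that is supposed to be HIPAA- or PCI-aligned. Real compliance frameworks involve far more controls than this.

```python
# Hypothetical unified inventory collected from on-premises and public cloud accounts.
resources = [
    {"id": "vol-123", "env": "aws",    "scope": "pci",   "encrypted": True},
    {"id": "vol-456", "env": "azure",  "scope": "hipaa", "encrypted": False},
    {"id": "lun-9",   "env": "onprem", "scope": "pci",   "encrypted": False},
    {"id": "vol-789", "env": "aws",    "scope": "none",  "encrypted": False},
]

REGULATED_SCOPES = {"pci", "hipaa"}  # scopes where encryption at rest is required here

def violations(inventory):
    """List resources in a regulated scope that fail the encryption control."""
    return [r for r in inventory
            if r["scope"] in REGULATED_SCOPES and not r["encrypted"]]

for r in violations(resources):
    print(f"NON-COMPLIANT: {r['id']} ({r['env']}, scope={r['scope']}) is not encrypted")
```

The value of a common portal is precisely that one check like this can run across every environment instead of being rebuilt per provider.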

How to Better Manage Hybrid and Multicloud Economics

For example, onboarding into any public cloud is very, very complex. Setting up a private cloud is very complex. But today, with the software that we are building, and some of our customers are using, we can set up a private cloud environment for people within hours. All you have to do is connect with our tools like HPE OneView and other things that we have built for the infrastructure and automation pieces. You then tie that together to a public cloud-facing tenant portal and onboard that with a few clicks. We can connect with their public cloud accounts and give them visibility into their complete environment.

And then we can bring in cost analytics. We have consumption analytics as part of our HPE GreenLake offering, which allows us to look at cost for on-premises as well as off-premises resources. You can get a dashboard that shows you what you are consuming and where.

Gardner: That level of management and the capability to be distributed across all these different deployment models strikes me as a gift that could keep on giving. Once you have accomplished this and get control over your costs, you are next able to rationalize what cloud providers to use for which types of workloads. It strikes me that you can then also use that same management and insight to start to actually move things around based on a dynamic or even algorithmic basis. You can get cost optimization on the fly. You can react to market forces and dynamics in terms of demand on your servers or on your virtual machines anywhere.

Are you going to be able to accelerate the capability for people to move their fungible workloads across different clouds, both hybrid and multicloud?

Optimizing for the future

Singh: Yes, absolutely right. There is more complexity in moving workloads here and there, because there are data proximity requirements and various other requirements. But the optimization piece is absolutely something we can do on the fly, especially if you start throwing AI into the mix.

You will be learning over time what needs to be deployed where, and where your data gravity might be, and where you need applications closer to the data. Sometimes it’s here, sometimes it’s there. You might have edge environments that you might want to manage from this common portal, too. All that can be brought together.

And then with those insights, you can make optimization decisions: “Hey, this application is best deployed in this location for these reasons.” You can even automate that. You can make that policy-driven.

Think about it this way — you are a person who wants to deploy something. You request a resource, and that gets deployed for you based on the algorithm that has already decided where the optimal place to put it is. All of that works behind the scenes without you having to really think about it. That’s the world we are headed to.
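In simplified form, that request-and-let-the-algorithm-decide flow might look like the sketch below. The placement rules, target names, and request fields are invented for illustration; a production system would draw on cost, compliance, and data-gravity signals rather than a short rule list.

```python
# Hypothetical placement policy: rules are evaluated in order; first match wins.
PLACEMENT_RULES = [
    (lambda req: req.get("data_gravity") == "onprem",  "onprem-private-cloud"),
    (lambda req: req.get("needs_serverless", False),   "public-cloud-a"),
    (lambda req: req.get("gpu_hours", 0) > 100,        "public-cloud-b"),
]
DEFAULT_TARGET = "public-cloud-a"

def place(request: dict) -> str:
    """Return the deployment target chosen by the policy for this request."""
    for rule, target in PLACEMENT_RULES:
        if rule(request):
            return target
    return DEFAULT_TARGET

# A requester only describes the workload; placement happens behind the scenes.
print(place({"app": "claims-ml-training", "gpu_hours": 400}))       # public-cloud-b
print(place({"app": "billing-reports", "data_gravity": "onprem"}))  # onprem-private-cloud
```

Making the rules data-driven, and eventually learned, is the step the discussion points toward.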

Gardner: We have talked about some really interesting subjects at a high level, even some thought leadership involved. But are there any concrete examples that illustrate how companies are already starting to do this? What kinds of benefits do they get?

Singh: I won’t name the company, but there was a business in the UK that was able to deploy VMs within minutes on their on-premises environment, as well as gain cost benefits out of their AWS deployments.

We were able to go in, connect to their VMware environment, in this case, and allow them to deploy VMs. We were up and running in two hours. Then they could optimize for their developers to deploy VMs and request resources in that environment. They saved 40 percent in operational efficiency. So now they were mostly cost optimized, their IT team was less pressured to go and launch VMs for their developers, and they gained direct self-service access through which they could go and deploy VMs and other resources on-premises.

At the same time, IT had the visibility into what was being deployed in the public cloud environments. They could then optimize those environments for the size of the VMs and assets they were running there and gain some cost advantages there as well.

How to Solve Cost and Utilization Challenges of Hybrid Cloud

Gardner: For organizations that recognize they have a sprawl problem when it comes to cloud, that their costs are not being optimized, but that they are still needing to go about this at sort of a crawl, walk, run level — what should they be doing to put themselves in an advantageous position to be able to take advantage of these tools?

Are there any precursor activities that companies should be thinking about to get control over their clouds, and then be able to better leverage these tools when the time comes?

Watch your clouds

Singh: Start with visibility. You need an inventory of what you are doing. And then you need to ask the question, “Why?” What benefit are you getting from these different environments? Ask that question, and then begin to optimize. I am sure there are very good reasons for using multicloud environments; I have seen many customers use them for the right reasons.

However, there are other people who have struggled because there was no governance and guardrails around this. There were no processes in place. They truly got into a sprawled environment, and they didn’t know what they didn’t know.

So first and foremost, get an idea of what you want to do and where you are today — get a baseline. And then, understand the impact and what are the levers to the cost. What are the drivers to the efficiencies? Make sure you understand the people and process — more than the technology, because the technology does exist, but you need to make sure that your people and process are aligned.

And then lastly, call me. My phone is open. I am happy to have a talk with any customer that wants to have a talk.
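As a rough illustration of that first baseline step, the sketch below rolls an exported multi-environment inventory up into spend per environment and per owner; the record fields and figures are made up for illustration.

```python
# Minimal sketch of building a cost baseline from an exported inventory.
# The field names and numbers are illustrative only.
from collections import defaultdict

inventory = [
    {"env": "aws",     "type": "vm",      "owner": "team-a", "monthly_cost": 120.0},
    {"env": "azure",   "type": "storage", "owner": "team-b", "monthly_cost": 45.0},
    {"env": "on-prem", "type": "vm",      "owner": "team-a", "monthly_cost": 60.0},
]

def baseline(records):
    """Summarize monthly spend per environment and per owner."""
    by_env = defaultdict(float)
    by_owner = defaultdict(float)
    for r in records:
        by_env[r["env"]] += r["monthly_cost"]
        by_owner[r["owner"]] += r["monthly_cost"]
    return dict(by_env), dict(by_owner)

env_totals, owner_totals = baseline(inventory)
print("Spend by environment:", env_totals)
print("Spend by owner:", owner_totals)
```

With that in hand, the “why” question becomes answerable per environment and per team, which is where the governance and guardrails conversation can start.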

How to Achieve Composability

Across Your Datacenter 

Gardner: On that note of the personal approach, people who are passionate in an organization around things like efficiency and cost control are looking for innovation. Where do you see the innovation taking place for cloud management? Is it the IT Ops people, the finance people, maybe procurement? Where is the innovative thinking around cloud sprawl manifesting itself?

Singh: All three are good places for innovation. I see IT Ops at the center of the innovation. They are the ones who will be effecting change.

Finance and procurement could benefit from these changes, and they could be drivers of the requirements. They are going to be saying, “I need to do this differently because it doesn’t work for me.” And the innovation also comes from developers and line-of-business managers who have been doing this for a while and who understand what they really need.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.

Posted in AIOps, artificial intelligence, Cloud computing, containers, data center, Data center transformation, DevOps, Enterprise architect, enterprise architecture, Hewlett Packard Enterprise, machine learning, multicloud, Security

How an agile focus for Enterprise Architects builds competitive advantage for digital transformation

The next BriefingsDirect business trends discussion explores the reinforcing nature of Enterprise Architecture (EA) and agile methods.

We’ll now learn how Enterprise Architects can embrace agile approaches to build competitive advantages for their companies.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about retraining and rethinking for EA in the Digital Transformation (DT) era, we are joined by Ryan Schmierer, Director of Operations at Sparx Services North America, and Chris Armstrong, President at Sparx Services North America. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Ryan, what’s happening in business now that’s forcing a new emphasis for Enterprise Architects? Why should Enterprise Architects do things any differently than they have in the past?

Ryan Schmierer

Schmierer

Schmierer: The biggest thing happening in the industry right now is around DT. We’ve been hearing about DT for the last couple of years, and most companies have embarked on some sort of a DT initiative, modernizing their business processes.

But now companies are looking beyond the initial transformation and asking, “What’s next?” We are seeing them focus on real-time, data-driven decision-making, with the ultimate goal of enterprise business agility — the capability for the enterprise to be aware of its environments, respond to changes, and adapt quickly.

For Enterprise Architects, that means learning how to be agile both in the work they do as individuals and how they approach architecture for their organizations. It’s not about making architectures that will last forever, but architectures that are nimble, agile, and adapt to change.

Gardner: Ryan, we have heard the word, agile, used in a structured way when it comes to software development — Agile methodologies, for example. Are we talking about the same thing? How are they related?

Agile, adaptive enterprise advances 

Schmierer: It’s the same concept. The idea is that you want to deliver results quickly, learn from what works, adapt, change, and evolve. It’s the same approach used in software development over the last few years. Look at how you develop software that delivers value quickly. We are now applying those same concepts in other contexts.

First is at the enterprise level. We look at how the business evolves quickly, learn from mistakes, and adapt the changes back into the environment.

Second, in the architecture domain, instead of waiting months or quarters to develop an architecture, vision, and roadmap, how do we start small, iterate, deliver quickly, accelerate time-to-value, and refine it as we go?

Gardner: Many businesses want DT, but far fewer of them seem to know how to get there. How does the role of the Enterprise Architect fit into helping companies attain DT?

The core job responsibility for Enterprise Architects is to be an extension of the company leadership and its executives. They need to look at where a company is trying to go … and develop a roadmap on how to get there.

Schmierer: The core job responsibility for Enterprise Architects is to be an extension of company leadership and its executives. They need to look at where a company is trying to go, all the different pieces that need to be addressed to get there, establish a future-state vision, and then develop a roadmap on how to get there.

This is what company leadership is trying to do. The EA is there to help them figure out how to do that. As the executives look outward and forward, the Enterprise Architect figures out how to deliver on the vision.

Gardner: Chris, tools and frameworks are only part of the solution. It’s also about the people and the process. There’s the need for training and best practices. How should people attain this emphasis for EA in that holistic definition?

Change is good 

Chris Armstrong

Armstrong

Armstrong: We want to take a step back and look at how Ryan was describing the elevation of value propositions and best practices that seem to be working for agile solution delivery. How might that work for delivering continual, regular value? One of the major attributes, in our experience, of the goodness of any architecture is how well it responds to change.

In some ways, agile and EA are synonyms. If you’re doing good Enterprise Architecture, you must be agile, because responding to change is one of those quality attributes. That’s part of the traditional approach of architecture – to be concerned with interoperability and integration.

As it relates to the techniques, tools, and frameworks we want to exploit — the experiences that we have had in the past – we try to push those forward into more of an operating model for Enterprise Architects and how they engage with the rest of the organization.

Learn About Agile Architecture

At The Open Group July Denver Event 

So not starting from scratch, but trying to embrace the concept of reuse, particularly reuse of knowledge and information. It’s a good best practice, obviously. That’s why in 2019 you certainly don’t want to be inventing your own architecture method or your own architecture framework, even though there may be various reasons to adapt them to your environment.

Starting with things like the TOGAF® Framework, particularly its Architecture Development Method (ADM) and reference models — those are there for individuals or vertical industries to accelerate the adding of value.

The challenge I’ve seen for a lot of architecture teams is they get sucked into the methodology and the framework, the semantics and concepts, and spend a lot of time trying to figure out how to do things with the tools. What we want to think about is how to enable the architecture profession in the same way we enable other people to do their jobs — with instant-on service offerings, using modern common platforms, and the industry frameworks that are already out there.

We are seeing people more focused on not just what the framework is but helping to apply it to close that feedback loop. The TOGAF standard, a standard of The Open Group, makes perfect sense, but people often struggle with, “Well, how do I make this real in my organization?”

Partnering with organizations that have had that kind of experience helps close that gap and accelerates the use in a valuable fashion. It’s pretty important.

Gardner: It’s ironic that I’ve heard of recent instances where Enterprise Architects are being laid off. But it sounds increasingly like the role is a keystone to DT. What’s the mismatch there, Chris? Why do we see in some cases the EA position being undervalued, even though it seems critical?

EA here to stay 

Armstrong: You have identified something that has happened multiple times. Pendulum swings happen in our industry, particularly when there is a lot of change going on. People get a little conservative. We’ve seen this before in the context of economic downturns.

But to me, it really points to the irony of what we perceive in the architecture profession based on successes that we have had. Enterprise Architecture is an essential part of running your business. But if executives don’t believe that and have not experienced that then it’s not surprising when there’s an opportunity to make changes in investment priorities that Enterprise Architecture might not be at the top of the list.

We need to be mindful of where we are in time with the architecture profession. A lot of organizations struggle with the glass ceiling of Enterprise Architecture. It’s something we have encountered pretty regularly, where executives say, “I really don’t get what this EA thing is, and what’s in it for me? Why should I give you my support and resources?”

But what’s interesting about that, of course, is if you take a step back you don’t see executives saying the same thing about human resources or accounting. Not to suggest that they aren’t thinking about ways to optimize those as a core competency or as strategic. We still do have an issue with acceptance of enterprise architecture based on the educational and developmental experiences a lot of executives have had.

We’re very hopeful that that trend is going to be moving in a different direction, particularly as it relates to new master’s programs and doctorate programs, for example, in the Enterprise Architecture field. Those elevate and legitimize Enterprise Architecture as a profession. When people are going through an MBA program, they will have heard of Enterprise Architecture as an essential part of delivering upon strategy.

Gardner: Ryan, looking at what prevents companies from attaining DT, what are the major challenges? What’s holding up enterprises from getting used to real-time data, gaining agility, and using intelligence about how they do things?

Schmierer: There are a couple of things going on. One of them ties back to what Chris was just talking about — the role of Enterprise Architects, and the role of architects in general. DT requires a shift in the relationship between business and IT. With DT, business functions and IT functions become entirely and holistically integrated and inseparable.

There are no separate IT processes and no separate business processes — there are just processes, because the two are intertwined. As we use more real-time data and as we leverage Enterprise Architecture, how do we move beyond the traditional relationship between business and IT? How do we look at such functions as data management and data architecture? How do we bring them into an integrated conversation with the folks who were part of the business and IT teams of the past?

A good example of how companies can do this comes in a recent release from The Open Group, the Digital Practitioner Body of Knowledge™ (DPBoK™). It says that there’s a core skill set that is general and describes what it means to be such a practitioner in the digital era, regardless of your job role or focus. It says we need to classify job roles more holistically and that everyone needs to have both a business mindset and a set of technical skills. We need to bring those together, and that’s really important.

As we look at what’s holding up DT we need to take functions that were once considered centralized assets like EA and data management and bring them into the forefront. … Enterprise Architects need to be living in the present.

As we look at what’s holding up DT — taking the next step to real-time data, broadening the scope of DT – we need to take functions that were once considered centralized assets, like EA and data management, and bring them into the forefront, and say, “You know what? You’re part of the digital transformation story as well. You’re key to bringing us along to the next stage of this journey, which is looking at how to optimize, bring in the data, and use it more effectively. How do we leverage technology in new ways?”

The second thing we need to improve is the mindset. It’s particularly an issue with Enterprise Architects right now. And it is that Enterprise Architects — and everyone in digital professions — need to be living in the present.

You asked why some EAs are getting laid off. Why is that? Think about how they approach their job in terms of the questions that would be asked in a performance review.

Those might be, “What have you done for me over the years?” If your answer focuses on what you did in the past, you are probably going to get laid off. What you did in the past is great, but the company is operating in the present.

What’s your grand idea for the future? Some ideal situation? Well, that’s probably going to get you shoved in a corner some place and probably eventually laid off because companies don’t know what the future is going to bring. They may have some idea of where they want to get to, but they can’t articulate a 5- to 10-year vision because the environment changes so quickly.

What have you done for me lately? That’s a favorite thing to ask in performance-review discussions. You got your paycheck because you did your job over the last six months. That’s what companies care about, and yet that’s not what Enterprise Architects should be supporting.

Instead, the EA emphasis should be what can you do for the business over the next few months? Focus on the present and the near-term future.

That’s what gets Enterprise Architects a seat at the table. That’s what gets the entire organization, and all the job functions, contributing to DT. It helps them become aligned to delivering near-term value. If you are entirely focused on delivering near-term value, you’ve achieved business agility.

Gardner: Chris, because nothing stays the same for very long, we are seeing a lot more use of cloud services. We’re seeing composability and automation. It seems like we are shifting from building to assembly.
Doesn’t that fit in well with what EAs do, focusing on the assembly and the structure around automation? That’s an abstraction above putting in IT systems and configuring them.

Reuse to remain competitive 

Armstrong: It’s ironic that the profession that’s often been coming up with the concepts and thought-leadership around reuse struggles with how to internalize that within their own organizations. EAs have been pretty successful at the implementation of reuse on an operating level, with code libraries, open source, cloud, and SaaS.

There is no reason to invent a new method or framework. There are plenty of them out there. Better to figure out how to exploit those to competitive advantage and focus on understanding the business organization, strategy, culture, and vision — and deliver value in the context of those.

For example, one of the common best practices in Enterprise Architecture is to create things called reference architectures, basically patterns that represent best practices, many of which can be created from existing content. If you are doing cloud or microservices, elevate that up to different types of business models. There’s a lot of good content out there from standards organizations that give organizations a good place to start.

But one of the things that we’ve observed is a lot of architecture communities tend to focus on building — as you were saying — those reference architectures, and don’t focus as much on making sure the organization knows that content exists, has been used, and has made a difference.

We have a great opportunity to connect the dots among different communities that are often not working together. We can provide that architectural leadership to pull it together and deliver great results and positive behaviors.

Gardner: Chris, tell us about Sparx Services North America. What do you all do, and how you are related to and work in conjunction with The Open Group?

Armstrong: Sparx Services is focused on helping end-user organizations be successful with Enterprise Architecture and related professions such as solution architecture, solution delivery, and systems engineering. We do that by taking advantage of the frameworks and best practices that standards organizations like The Open Group create, helping make those standards real, practical, and pragmatic for end-user organizations. We provide guidance on how to adapt and tailor them and provide support while they use those frameworks for doing real work.

And we provide a feedback loop to The Open Group to help understand what kinds of questions end-user organizations are asking. We look for opportunities for improving existing standards, areas where we might want to invest in new standards, and to accelerate the use of Enterprise Architecture best practices.

Gardner: Ryan, moving onto what’s working and what’s helping foster better DT, tell us what’s working. In a practical sense, how is EA making those shorter-term business benefits happen?

One day at a time 

Schmierer: That’s a great question. We have talked about some of the challenges. It’s important to focus on the right path as well. So, what’s working that an enterprise architect can do today in order to foster DT?

Number one, embrace agile approaches and an agile mindset in both architecture development (how you do your job) and the solutions you develop for your organizations. A good way to test whether you are approaching architecture in an agile way is to look at your first iteration. Can you go through the entire process of the Architecture Development Method (ADM) on a cocktail napkin in the time it takes you to have a drink with your boss? If so, great. It means you are focused on that first simple iteration and then able to build from there. (A sketch of such a napkin-sized first iteration appears below.)

Number two, solve problems today with the components you have today. Don’t just look to the future. Look at what you have now and how you can create the most value possible out of those. Tomorrow the environment is going to change, and you can focus on tomorrow’s problems and tomorrow’s challenges tomorrow. So, solve today’s problems today.

Third, look beyond your current DT initiative and what’s going on today, and talk to your leaders. Talk to your business clients about where they need to go in the future. That goal is enterprise business agility, which is helping the company become more nimble. DT is the first step, then start looking at steps two and three.

Architects need to understand technology better, such things as new cloud services, IoT, edge computing, ML, and AI. These are going to have disruptive effects on your businesses. You need to understand them to be a trusted advisor to your organization.

Fourth, Architects need to understand technology better, such things as fast-moving, emerging technologies like new cloud services, Internet of Things (IoT), edge computing, machine learning (ML), and artificial intelligence (AI) — these are more than just buzzwords and initiatives. They are real technology advancements. They are going to have disruptive effects on your businesses and the solutions that support those businesses. You need to understand the technologies; you need to start playing with them so you can truly be a trusted advisor to your organization about how to apply those technologies in a business context.
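Returning to the first point above, here is a minimal sketch of what a deliberately small, napkin-sized first architecture iteration could capture. The phase names follow the TOGAF ADM; the prompts, data structure, and sample answers are illustrative and not part of the standard.

```python
# Minimal sketch of a "cocktail napkin" first ADM iteration: one line per phase,
# gaps left as TBD for the next pass. Phase names follow the TOGAF ADM; the
# prompts and sample answers are illustrative only.
ADM_PHASES = [
    ("A", "Architecture Vision",            "What outcome does the business need next?"),
    ("B", "Business Architecture",          "Which capabilities and processes are affected?"),
    ("C", "Information Systems",            "Which applications and data are involved?"),
    ("D", "Technology Architecture",        "Which platforms and infrastructure carry it?"),
    ("E", "Opportunities and Solutions",    "What is the smallest useful increment?"),
    ("F", "Migration Planning",             "What ships in the next few months?"),
    ("G", "Implementation Governance",      "How will we check it stays on course?"),
    ("H", "Architecture Change Management", "What will we revisit next iteration?"),
]

def first_iteration(answers: dict) -> list:
    """Render the napkin: one line per phase, unanswered phases marked TBD."""
    return [f"{code} {name}: {answers.get(code, 'TBD')}" for code, name, _prompt in ADM_PHASES]

napkin = first_iteration({
    "A": "Cut customer onboarding from 10 days to 2",
    "F": "Pilot with one product line this quarter",
})
print("\n".join(napkin))
```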

Gardner: Chris, we hear a lot about AI and ML these days. How do you expect Enterprise Architects to help organizations leverage AI and ML to get to that DT? It seems really essential to me to become more data-driven and analytics-driven, and then to repurpose and reuse those analytics over and over again to attain an ongoing journey of efficiency and automation.

Better business outcomes 

Armstrong: We are now working with our partners to figure out how to best use AI and ML to help run the business, to do better product development, to gain a 360-degree view of the customer, and so forth.

It’s one of those weird things where we see the shoemaker’s children not having any shoes because they are so busy making shoes for everybody else. There is a real opportunity, when we look at some of the infrastructure that’s required to support the agile enterprise, to exploit those same technologies to help us do our jobs in enterprise architecture.

It is an emerging part of the profession. We and others are beginning to do some research on that, but when I think of how much time we and our clients have spent on the nuts and bolts collection of data and normalization of data, it sure seems like there is a real opportunity to leverage these emerging technologies for the benefit of the architecture practice. Then, again, the architects can be more focused on building relationships with people, understanding the strategy in less time, and figuring out where the data is and what the data means.

Obviously humans still need to be involved, but I think there is a great opportunity to eat your own dog food, as it were, and see if we can exploit those learning tools for the benefit of the architecture community and its consumers.

Gardner: Chris, do we have concrete examples of this at work, where EAs have elevated themselves and exposed their value for business outcomes? What’s possible when you do this right?

Armstrong: A lot of organizations are working things from the bottom up, and that often starts in IT operations and then moves to solution delivery. That’s where there has been a lot of good progress, in improved methods and techniques such as scaled agile and DevOps.

But a lot of organizations struggle to elevate it higher. The DPBoK™  from The Open Group provides a lot of guidance to help organizations navigate that journey, particularly getting to the fourth level of the learning progression, which is at the enterprise level. That’s where Enterprise Architecture becomes essential. It’s great to develop software fast, but that’s not the whole point of agile solution delivery. It should be about building the right software the right way to meet the right kind of requirements — and do that as rapidly as possible.

We need an umbrella over different release trains, for example, to make sure the organization as a whole is marching forward. We have been working with a number of Fortune 100 companies that have made good progress at the operational implementation levels. Nonetheless, they are now finding it particularly challenging to connect that to business architecture.

There have been some great advancements from the Business Architecture Guild, and those have been influencing the TOGAF framework, to connect the dots across those agile communities so that the learnings of a particular release train — or the strategy of the enterprise — are clearly understood and delivered to all of those different communities.

Gardner: Ryan, looking to the future, what should organizations be doing with the Enterprise Architect role and function?

EA evolution across environments 

Schmierer: The next steps don’t just apply to Enterprise Architects but really to all types of architects. So look at the job role and how your job role needs to evolve over the next few years. How do you need to approach it differently than you have in the past?

For example, we are seeing Enterprise Architects increasingly focus on issues like security, risk, reuse, and integration with partner ecosystems. How do you integrate with other companies and work in the broader environments?

We are seeing Business Architects who have been deeply engaged in DT discussions over the last couple of years start looking forward and shifting the role to focus on how we light up real-time decision-making capabilities. Solution Architects are shifting from building and designing components to designing assembly and designing the end systems that are often built out of third-party components instead of things that were built in-house.

Look at the job role and understand that the core need hasn’t changed. Companies need Enterprise Architects, Business Architects, and Solution Architects more than ever right now to get them where they need to be. But the people serving those roles need to do that in a new way — one focused on the future, on what the business needs are over the next 6 to 18 months, and that’s different from what they have done in the past.

Gardner: Where can organizations and individuals go to learn more about Agile Architecture as well as what The Open Group and Sparx Services are offering?

Schmierer: The Open Group has some great resources available. We have a July event in Denver focused on Agile Architecture, where they will discuss some of the latest thoughts coming out of The Open Group Architecture Forum, the Digital Practitioners Work Group, and more. It’s a great opportunity to learn about those things, network with others, and discuss how other companies are approaching these problems. I definitely point them there.

I mentioned the DPBoK™. This is a recent release from The Open Group, looking at the future of IT and the roles for architects. There’s some great, forward-looking thinking in there. I encourage folks to take a look at that, provide feedback, and get involved in that discussion.

And then Sparx Services North America, we are here to help architects be more effective and add value to their organizations, be it through tools, training, consulting, best practices, and standards. We are here to help, so feel free to reach out at our website. We are happy to talk with you and see how we might be able to help.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: The Open Group.

Posted in application transformation, Cloud computing, Data center transformation, Enterprise architect, enterprise architecture, Enterprise transformation, machine learning, Microsoft, multicloud, The Open Group

For a UK borough, solving security issues leads to operational improvements and cost-savings across its IT infrastructure

The next BriefingsDirect enterprise IT productivity discussion focuses on solving tactical challenges around security to unlock strategic operational benefits in the public sector.

For a large metropolitan borough council in South Yorkshire, England, an initial move to thwart recurring ransomware attacks ended up being a catalyst for wider IT infrastructure performance, cost, operations, and management benefits.

This security innovations discussion then examines how the Barnsley Metropolitan Borough Council information and communications technology (ICT) team rapidly deployed malware protection across 3,500 physical and virtual workstations and servers.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to share the story of how that one change in security software led to far higher levels of user satisfaction — and a heightened appreciation for the role and impact of small IT teams — is Stephen Furniss, ICT Technical Specialist for Infrastructure at Barnsley Borough Council. The interview was conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Stephen, tell us about the Barnsley Metropolitan Borough. You are one of 36 metropolitan boroughs in England, and you have a population of about 240,000. But tell us more about what your government agencies provide to those citizens.

Stephen Furniss

Furniss

Furniss: As a Council, we provide wide-ranging services to all the citizens here, from weekly refuse collection, to maintaining roads and fixing potholes, to making sure that we look after the vulnerable in society. There is a big raft of things that we have to deliver, and every year we are challenged to deliver those same services with less money from central government.

So it does make our job harder, because then there is not just a squeeze across a specific department in the Council when we have these pressures, there is a squeeze across everything, including IT. And I guess one of our challenges has always been how we deliver more or the same standard of service to our end users, with less budget.

So we turn to products that provide single-pane-of-glass interfaces, to make the actual management and configuration of things a lot easier. And [we turn to] things that are more intuitive, that have automation. We try to drive toward making everything that we do easier and simpler for us as an IT service.

Gardner: So that boils down to working smarter, not harder. But you need to have the right tools and technology to do that. And you have a fairly small team, 115 or so, supporting 2,800-plus users. And you have to be responsible for all aspects of ICT — the servers, networks, storage, and, of course, security. How does being a small team impact how you approach security?

Furniss: We are even smaller than that. In IT, we have around 115 people, and that’s the whole of IT. But just in our infrastructure team, we are only 13 people. And our security team is only three or four people.

In IT, we have around 115 people, but just in infrastructure we are only 13 people. It can become a hindrance when you get overwhelmed with security incidents, yet it’s great to have  a small team to bond and come up with solutions.

It can become a hindrance when you get overwhelmed with security incidents or issues that need resolving. Yet sometimes it’s great to have that small team of people. You can bond together and come up with really good solutions to resolve your issues.

Gardner: Clearly with such a small group you have to be automation-minded to solve problems quickly or your end users will be awfully disappointed. Tell us about your security journey over the past year-and-a-half. What’s changed?

Furniss: A year-and-a-half ago, we were stuck in a different mindset. With our existing security product, every year we went through a process of saying, “Okay, we are up for renewal. Can we get the same product for a cheaper price, or the best price?”

We didn’t think about which security issues we were hitting the most, or what new technologies were coming out, or whether there were any new products that could mitigate all of these issues and make our jobs — especially being a smaller team — a lot easier.

But we had a mindset change about 18 months back. We said, “You know what? We want to make our lives easier. Let’s think about what’s important to us from a security product. What issues have we been having that potentially the new products that are out there can actually mitigate and make our jobs easier, especially with us being a smaller team?”

Gardner: Were recurring ransomware attacks the straw that broke the camel’s back?

Staying a step ahead of security breaches

Furniss: We had been suffering with ransomware attacks. Every couple of years, some user would be duped into clicking on a file, email, or something that would cause chaos and mayhem across the network, infecting file-shares, and not just that individual user’s file-share, but potentially the files across 700 to 800 users all at once. Suddenly they found their files had all been encrypted.

From an IT perspective, we had to restore from the previous backups, which obviously takes time, especially when you start talking about terabytes of data.

That was certainly one of the major issues we had. And the previous security vendor would come to us and say, “All right, you have this particular version of ransomware. Here are some settings to configure and then you won’t get it again.” And that’s great for that particular variant, but it doesn’t help us when the next version or something slightly different shows up, and the security product doesn’t detect it.

That was one of our real worries and pains that we suffered — that every so often we were just going to get hit with ransomware. So we had to change our mindset to want something that’s actually going to be able to do things like machine learning (ML) and have ransomware protection built in, so that we are not in that position. We could actually get on with our day-to-day jobs and be more proactive — rather than reactive — in the environment. That was a big thing for us.

Also, we need to have a lot of certifications and accreditations, being a government authority, in order to connect back to the central government of the UK for such things as pensions. So there were a lot of security things that would get picked up. The testers would do a penetration test on our network and tell us we needed to think about changing stuff.

Gardner: It sounds like you went from a tactical approach to security to more of an enterprise-wide security mindset. So let’s go back to your thought process. You had recurring malware and ransomware issues, you had an audit problem, and you needed to do more with less. Tell us how you went from that point to get to a much better place.

Safe at home, and at work 

Furniss: As a local authority, with any large purchase, usually over 2,500 pounds (US$3,125), we have to go through a tender process. We write in our requirements, what we want from the products, and that goes on a tender website. Companies then bid for the work.

It’s a process I’m not involved in. I am purely involved in the techie side of things, the deployment, and managing and looking after the kit. That tender process is all done separately by our procurement team.

So we pushed out this tender for a new security product that we wanted, and obviously we got responses from various different companies, including Bitdefender. When we do the scoring, we work on the features and functionality required. Some 70 percent of the scoring is based on the features and functionality, with 30 percent based on the cost.

What was really interesting was that Bitdefender scored the highest on all the features and functionalities — everything that we had put down as a must-have. And when we looked at the actual costs involved — what they were going to charge us to procure their software and also provide us with deployment with their consultants — it came out at half of what we were paying for our previous product.

Bitdefender scored the highest on all the features and functionalities — everything that we had put down as must-have. And the actual costs were half of what we were paying.

So you suddenly step back and you think, “I wish that we had done this a long time ago, because we could have saved money as well as gotten a better product.”
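As a simple illustration of the 70/30 weighting described above, the sketch below scores two hypothetical bids; the numbers are invented and do not reflect the actual tender.

```python
# Minimal sketch of the 70 percent features / 30 percent cost tender scoring.
# Inputs are assumed to be normalized to a 0-100 scale; higher is better.
def tender_score(feature_score: float, cost_score: float,
                 feature_weight: float = 0.70, cost_weight: float = 0.30) -> float:
    return feature_weight * feature_score + cost_weight * cost_score

bids = {
    "Vendor A": tender_score(feature_score=92, cost_score=85),
    "Vendor B": tender_score(feature_score=78, cost_score=95),
}
winner = max(bids, key=bids.get)
print(bids, "-> winner:", winner)
```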

Gardner: Had you been familiar with Bitdefender?

Furniss: Yes, a couple of years ago my wife had some malware on her phone, and we started to look at what we were running on our personal devices at home. And I came up with Bitdefender as one of the best products after I had a really good look around at different options.

I went and bought a family pack, so effectively I deployed Bitdefender at home on my own personal mobile, my wife’s, my kids’, on the tablets, on the computers in the house, and what they used for doing schoolwork. And it’s been great at protecting us from anything. We have never had any issues with an infection or malware or anything like that at home.

It was quite interesting to find out, once we went through the tender process, that it was Bitdefender. I didn’t even know at that stage who was in the running. When the guys told me we are going to be deploying Bitdefender, I was thinking, “Oh, yeah, I use that at home and they are really good.”

Monday, Monday, IT’s here to stay 

Gardner: Stephen, what was the attitude of your end users around their experiences with their workstations, with performance, at that time?

Furniss: We had had big problems with end users’ service desk calls to us. Our previous security product required a weekly scan that would run on the devices. We would scan their entire hard drives every Friday around lunchtime.

You try to identify when the quiet periods are, when you can run an end-user scan on their machine, and we had come up with Friday’s lunchtime. In the Council we can take our lunch between noon and 2 p.m., so we would kick it off at 12 and hope it would finish in time for when users came back and did some work on the devices.

And with the previous product — no matter what we did, trying to change dates, trying to change times — we couldn’t get anything that would work in a quick enough time frame and complete the scans rapidly. It could be running for two to three hours, taking high resources on their devices. A lot of that was down to the spec of the end-user devices not being very good. But, again, when you are constrained with budgets, you can only put so many resources into buying kit for your users.

So, we would end up with service desk calls, with people complaining, saying, “Is there any chance you can change the date and time of the scan? My device is running slow. Can I have a new device?” And so, we received a lot of complaints.

And we also noticed, usually Monday mornings, that we would also have issues. The weekend was when we did our server scans and our full backup. So we would have the two things clashing, causing issues. Monday morning, we would come in expecting those backups to have completed, but because it was trying to fight with the scanning, neither was fully completed. We worried if we were going to be able to recover back to the previous week.

Our backups ended up running longer and longer as the scans took longer. So, yes, it was a bit painful for us in the past.

Gardner: What happened next?

Smooth deployer 

Furniss: Deployment was a really, really good experience. In the past, we have had suppliers come along and provide us a deployment document of some description, and it would be their standard document — nothing customized. They wouldn’t speak with us to find out what’s actually deployed and how their product fit in. It was just, “We are going to deploy it like this.” And we would then have issues trying to get things working properly, and we’d have to go backward and forward with a third party to get things resolved.

In this instance, we had Bitdefender’s consultants. They came on-site to see us, and we had a really good meeting. They were asking us questions: “Can you tell us about your environment? Where are your DMZs? What applications have you got deployed? What systems are you using? What hypervisor platforms have you got?” And all of that information was taken into account in the design document that they customized completely to best fit their best practices and what we had in place.

We ended up with something we could deploy ourselves, if we wanted to. We didn’t do that. We took their consultancy as a part of the deployment process. We had the Bitdefender guys on-site for a couple of days working with us to build the proper infrastructure services to run GravityZone.

And it went really well. Nothing was missed from the design. They gave us all the ports and firewall rules needed, and it went really, really smoothly.

We initially thought we were going to have a problem with deploying out to the clients, but we worked with the consultants to come up with a way around impacting our end-users during the deployment.

One of our big worries was that when you deploy Bitdefender, the first thing it does is see if there is a competitive vendor’s product on the machine. If it finds that, it will remove it, and then restart the user’s device to continue the installation. Now, that was going to be a concern to us.

So we came up with a scripted solution that we pushed out through Microsoft System Center Configuration Manager. We were able to run the uninstall command for the third-party product, and then Bitdefender triggered the install straightaway. The devices didn’t need rebooting, and it didn’t impact any of our end users at all. They didn’t even know anything was happening. The only thing they would see is the little icon in the taskbar changing from the previous vendor’s icon to Bitdefender’s.

It was really smooth. We got the automation to run and push out the client to our end users, and they just didn’t know about it.
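The sketch below shows the shape of that scripted swap as it might be pushed to each workstation by a deployment tool such as SCCM. The uninstall and install command lines are placeholders, not the actual vendor commands the Council used.

```python
# Minimal sketch of a silent endpoint-protection swap with no reboot.
# Both command lines are placeholders; real silent-uninstall and silent-install
# switches come from the respective vendors' documentation.
import subprocess
import sys

OLD_AV_UNINSTALL = ["old_av_uninstaller.exe", "/silent"]  # placeholder
NEW_AV_INSTALL   = ["new_av_installer.exe", "/quiet"]     # placeholder

def swap_endpoint_protection() -> int:
    """Remove the previous agent silently, then install the new one straightaway,
    so the user never sees a reboot prompt."""
    removal = subprocess.run(OLD_AV_UNINSTALL)
    if removal.returncode != 0:
        print("Uninstall failed; leaving existing protection in place.", file=sys.stderr)
        return removal.returncode
    install = subprocess.run(NEW_AV_INSTALL)
    return install.returncode

if __name__ == "__main__":
    sys.exit(swap_endpoint_protection())
```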

Gardner: What was the impact on the servers?

Environmental change for the better 

Furniss: Our server impact has completely changed. The full scanning that Bitdefender does might take 15 minutes, which is far less than the two to three hours it took before on some of the bigger file servers.

And then once it’s done with that full scan, we have it set up to do more frequent quick scans that take about three minutes. The resource utilization of this new scan setup has just totally changed the environment.

Because we use virtualization predominantly across our server infrastructure, we have even deployed the Bitdefender scan servers, which allow us to do separate scans on each of our virtualized server hosts. It does all of the offloading of the scanning of files and malware and that kind of stuff.

It’s a lightweight agent, it takes less memory, less footprint, and less resources. And the scan is offloaded to the scan server that we run.

The impact from a server perspective is that you no longer see spikes in CPU or memory utilization with backups. We don’t have any issues with that kind of thing anymore. It’s really great to see a vendor come up with a solution to issues that people seem to have across the board.

Gardner: Has that impacted your utilization and ability to get the most virtual machines (VMs) per CPU? How has your total cost equation been impacted?

Furniss: The fact that we are not getting all these spikes across the virtualization platform means we can squeeze more VMs onto each host without an issue. It means we can get more bang for the buck, if you like.

Gardner: When you have a mixed environment — and I understand you have Nutanix hyperconverged infrastructure (HCI), Hyper-V and vSphere VMs, some Citrix XenServer, and a mix of desktops — how does managing such heterogeneity with a common security approach work? It sounds like that could be kind of a mess.

Furniss: You would think it would be a mess. But from my perspective, Bitdefender GravityZone is really good because I have this all on a single pane of glass. It hooks into Microsoft Active Directory, so it pulls back everything in there. I can see all the devices at once. It hooks into our Nutanix HCI environment. I can deploy small scan servers into the environment directly from GravityZone.

If I decide on an additional scan server, it automatically builds that scan server in the virtual environment for me, and it’s another box that we’ve got for scanning everything on the virtual servers.

Bitdefender GravityZone is really good because I have this all on a single pane of glass. I can see all the devices at once. I can deploy small scan servers into the environment directly from GravityZone.

It’s nice that it hooks into all these various things. We currently have some legacy VMware. Bitdefender lets me see what’s in that environment. We don’t use the VMware NSX platform, but it gives me visibility across an older platform even as I’m moving to get everything to the Nutanix HCI.

So it makes our jobs easier. The additional patch management module that we have in there is one of the big things for us.

For example, we have always been really good at keeping our Windows updates on devices and servers up to the latest level. But we tended to have problems keeping updates ongoing for all of our third-party apps, such as Adobe Reader, Flash, and Java, across all of the devices.

You can get lost as to what is out there unless you do some kind of active scanning across your entire infrastructure, and the Bitdefender patch management allows us to see where we have different versions of apps and updates on client devices. It allows us to patch them up to the latest level and install the latest versions.

From that perspective, I am again using just one pane of glass, but I am getting so much benefit and extra features and functionality than I did previously in the many other products that we use.

Gardner: Stephen, you mentioned a total cost of ownership (TCO) benefit when it comes to server utilization and the increased VMs. Is there another economic metric when it comes to administration? You have a small number of people. Do you see a payback in terms of this administration and integration value?

Furniss: I do. We only have 13 people on the infrastructure team, but only two or three of us actively go into the Bitdefender GravityZone platform. And on a day-to-day basis, we don’t have to do that much. If we deploy a new system, we might have to monitor and see if there is anything that’s needed as an exception if it’s some funky application.

But once our applications are deployed and our servers are up and running, we don’t have to make any real changes. We only have to look at patch levels with third parties, or see if there are any issues on our endpoints that need our attention.

The actual amount of time we need to be in the Bitdefender console is quite reduced, so it’s really useful to us.

Gardner: What’s been the result this last year that you have had Bitdefender running in terms of the main goal — which is to be free of security concerns?

Proactive infection protection 

Furniss: That’s just been the crux of it. We haven’t had any malware or ransomware attacks on our network. We have not had to spend days, weeks, or hours restoring files or anything like that — or rebuilding hundreds of machines because they have something on them. So that’s been a good thing.

Another interesting thing for us: when we began looking at the Bitdefender reports from day one, it actually found that there was malware, or some sort of virus, still out there in our systems going back five, six, or seven years.

And the weird thing is, our previous security product had never even seen this stuff. It had obviously let it through to start with. It got through all our filtering and everything, and it was sitting in somebody’s mailbox ready — if they clicked on it – to launch and infect the entire network.

Straightaway from day one, we were detecting stuff that sat for years in people’s mailboxes. We just didn’t even know about it.

So, from that perspective, it’s been fantastic. We’ve not had any security outbreaks that we had to deal with, or anything like that.

And just recently, we had our security audit from our penetration testers. One of the things they try to do is actually put some malware on to a test device. They came back and said they had not been able to do that. They have been unable to infect any of our devices. So that’s been a really, really good thing from our perspective.

Gardner: How is that translated into the perception from your end users and your overseers, those people managing your budgets? Has there been a sense of getting more value? What’s the satisfaction quotient, if you will, from your end users?

Furniss: A really good, positive thing has been that they have not come back and said that there’s anything that we’ve lost. There are no complaints about machines being slow.

We even had one of our applications guys say that their machine was running faster than it normally does on Fridays. When we explained that we had swapped out the old version of the security product for Bitdefender, it was like, “Oh, that’s great, keep it up.”

There are no complaints about machines being slow. One of our apps guys said that his machine was running faster than normal. From IT, we are really pleased.

For the people higher up, at the minute, I don’t think they appreciate what we’ve done. That will come in the next month as we start presenting to them our security reports and the report from the audit showing how the testers were unable to infect an end-user device.

From our side, from IT, we are really, really pleased with it. We understand what it does and how much it’s saving us from the pains of having to restore files. We are not being seen as one of these councils or entities that’s suddenly plastered across the newspaper and has had its reputation tarnished because it has suddenly lost all of its systems or been infected.

Gardner: Having a smoothly running organization is the payoff.

Before we close out, what about the future? Where would you like to see your security products go in terms of more intelligence, using data, and getting more of a proactive benefit?

Cloud on the horizon 

Furniss: We are doing a lot more now with virtualization. We have only about 50 physical servers left. We are also thinking about the cloud journey. So we want the security products working with all of that stuff up in the cloud. It’s going to be the next big thing for us. We want to secure that area of our environment if we start moving infrastructure servers up there.

Can we protect stuff up in the cloud as well as what we have here?

Gardner: Yes, and Stephen, you mentioned that at home you use Bitdefender down to your mobile devices. Is that also the case with your users in the council? Is there a bring-your-own-device benefit, or some way you are looking to let people use more of their own devices in the context of work? How does that mobile edge work in the future?

Furniss: Well, I don’t know. I think mobile devices are quite costly for councils to actually deploy, but we have taken the approach of: if you need it for work, then you get one. We currently have a project to look at deploying the mobile version of Bitdefender to our existing Android users.

Gardner: Now that you have 20/20 hindsight with using this type of security environment over the course of a year, any advice for folks in a similar situation?

Furniss: Don’t be scared of change. I think one of the things that always used to worry me was that we knew what we were doing with a particular vendor. We knew what our difficulties were. Are we going to be able to remove it from all the devices?

Don’t worry about that. If you are getting the right product, it’s going to take care of lot of the issues that you currently have. We found that deploying the new product was relatively easy and didn’t cause any pain to our end-users. It was seamless. They didn’t even know we had done it.

Some people might be thinking that they have a massive estate and it’s going to be a real headache. But with automation and a bit of thinking about how and what you are going to do, it’s fairly straightforward to deploy a new antivirus product to your end users. Don’t be afraid of change and moving into something new. Get the best use of the new products that are out there.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Bitdefender

Posted in Bitdefender, Cloud computing, Cyber security, machine learning, risk assessment, Security, User experience, Virtualization

Financial stability, a critical factor for choosing a business partner, is now much easier to assess

The next BriefingsDirect digital business risk remediation discussion explores new ways companies can gain improved visibility, analytics, and predictive indicators to better assess the financial viability of partners and global supply chains.

Businesses are now heavily relying upon their trading partners across their supply chains — and no business can afford to be dependent on suppliers that pose risks due to poor financial health.

We will now examine new tools and methods that create a financial health rating system to determine the probability of bankruptcy, default, or disruption for both public and private companies — as much as 36 months in advance.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To learn more about the exploding sophistication around gaining insights into supply-chain risk of a financial nature, please welcome Eric Evans, Managing Director of Business Development at RapidRatings in New York, and Kristen Jordeth, Go-to-Market Director for Supplier Management Solutions, North America at SAP Ariba. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Eric, how do the technologies and processes available now provide a step-change in managing supplier risk, particularly financial risk?

Eric Evans

Evans

Evans: Platform-to-platform integrations enabled by application programming interfaces (APIs), which we have launched over the past few years, allow us to partner with SAP Ariba Supplier Risk. It has become a nice way for our clients to combine actionable data with their workflow in procurement processes to better manage suppliers end to end — from sourcing to on-boarding to continuous monitoring.
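As an illustration of that platform-to-platform pattern, the sketch below pulls a supplier’s scores over a REST API and flags low scorers for review. The endpoint, field names, and threshold are hypothetical; a real integration would use the vendors’ published APIs and license keys.

```python
# Minimal sketch of pulling a financial health score into a supplier-risk workflow.
# The base URL, paths, response fields, and threshold are hypothetical placeholders.
import requests

API_BASE = "https://api.example-ratings.com/v1"   # hypothetical base URL
API_TOKEN = "REPLACE_WITH_LICENSE_KEY"

def fetch_supplier_score(supplier_id: str) -> dict:
    """Fetch the latest scores for one supplier."""
    resp = requests.get(
        f"{API_BASE}/suppliers/{supplier_id}/score",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()          # e.g., {"fhr": 62, "core_health": 48}

def flag_for_review(score: dict, threshold: int = 40) -> bool:
    """Route low short-term scores into the risk team's review queue."""
    return score.get("fhr", 100) < threshold

if __name__ == "__main__":
    score = fetch_supplier_score("SUPPLIER-123")
    print("Needs review:", flag_for_review(score))
```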

Gardner: The old adage of “garbage in, garbage out” still applies to the quality and availability of the data. What’s new about access to better data, even in the private sector?

Dig deep into risk factors

Evans: We go directly to the source, the suppliers our customers work with. They introduce us to those suppliers and we get the private company financial data, right from those companies. It’s a quantitative input, and then we do a deeper “CAT scan,” if you will, on the financials, using that data together with our predictive scoring.

Gardner: Kristen, procurement and supply chain integrity trends have been maturing over the past 10 years. How are you able to focus now on more types of risk? It seems we are getting better and deeper at preventing unknown unknowns.

Jordeth: Exactly, and what we are seeing is customers managing risk from all aspects of the business. The most important thing is to bring it all together through technology.

Within our platform, we enable a Controls Framework that identifies key areas of risk that need to be addressed for a specific type of engagement. For example, do they need to pull a financial rating? Do they need to do a background check? We use the technology to manage the controls across all of the different aspects of risk in one system.
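A minimal sketch of that idea: a controls framework expressed as a mapping from engagement type to the checks it requires. The engagement types and control names are illustrative, not the actual SAP Ariba configuration.

```python
# Minimal sketch of a controls framework: which checks a given type of supplier
# engagement requires before on-boarding. All names here are illustrative.
REQUIRED_CONTROLS = {
    "software_vendor":   ["financial_rating", "data_privacy_assessment"],
    "onsite_contractor": ["financial_rating", "background_check", "insurance_certificate"],
    "raw_materials":     ["financial_rating", "sustainability_audit"],
}

def controls_for(engagement_type: str) -> list:
    """Return the checks to run for this kind of engagement (financial rating by default)."""
    return REQUIRED_CONTROLS.get(engagement_type, ["financial_rating"])

print(controls_for("onsite_contractor"))
```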

Gardner: And because many companies are reliant on real-time logistics and supplier services, any disruption can be catastrophic.

Kristen Jordeth

Jordeth

Jordeth: Absolutely. We need to make sure that the information gets to the system as quickly as it’s available, which is why the API connect to RapidRatings is extremely important to our customers. On top of that, we also have proactive incidents tracking, which complements the scores.

If you see a medium-risk business from a financial perspective, you can look into the incident data to see if they are under investigation, or if there are things going on where they might be laying off departments.

It’s fantastic to have it all in one place with one view. You can then slice and dice the data and roll it up into scores. It’s very helpful for our customers.

Gardner: And this is a team sport, with an ecosystem of partners, because there is such industry specialization. Eric, how important is it being in an ecosystem with other specialists examining other kinds of risk?

Evans: It’s really important. We listen to our customers and prospects. It’s about the larger picture of bringing data into an end-to-end procurement and supplier risk management process.


We feel really good about being part of SAP PartnerEdge and an app extension partner to SAP Ariba. It’s exciting to see our data and the integration for clients.

Gardner: Rapid Ratings International, Inc. is the creator of the proprietary Financial Health Rating (FHR), also known as RapidRatings. What led up to the solution? Why didn’t it exist 30 years ago?

Rate the risk over time

Evans: The company was founded by someone with a background in econometrics and modeling. We have 24 industry models that drive the analysis. It’s that kind of deep, precise, and accurate modeling — plus the historical database of more than 30 years of data that we have. When you combine those, the result is much more accurate and predictive; it’s really forward-looking data.

Gardner: You provide a 0 to 100 score. Is that like a credit rating for an individual? How does that score work in being mindful of potential risk?

Evans: The FHR is a short-term score, from 0 to 100, that looks at the next 12 months with a probability of default. Then a Core Health Score, which is around 24 to 36 months out, looks at operating efficiency and other indicators of how well a company is managing the business and operationalizing.

We can identify companies that are maybe weak short-term, but look fine long-term, or vice versa. Having industry depth — and the historical data behind it — that’s what drives the go-forward assessments.

When you combine the two, or look at them individually, you can identify companies that are maybe weak short-term, but look fine long-term, or vice versa. Even if they don’t look good in the long term, they may still carry less short-term risk because they have cash on hand. And that’s happening out in the marketplace these days with a lot of the initial public offerings (IPOs) such as Pinterest or Lyft. They have a medium-risk FHR because they have cash, but their long-term operating efficiency needs to be improved because they are not yet profitable.
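As a rough illustration of how the two scores might be read together, here is a small Python sketch. The 0-to-100 scales come from the discussion above, but the band thresholds and labels are invented for the example.

```python
# Illustrative only: cut-offs and labels below are made up for the example.
def classify_supplier(fhr: int, core_health: int) -> str:
    short_term_weak = fhr < 40         # hypothetical short-term risk cut-off
    long_term_weak = core_health < 40  # hypothetical longer-term risk cut-off

    if short_term_weak and long_term_weak:
        return "high risk: weak now and structurally"
    if short_term_weak:
        return "watch closely: short-term stress, sound fundamentals"
    if long_term_weak:
        return "medium risk: cash on hand today, efficiency problems ahead"
    return "low risk"

# A cash-rich but not-yet-profitable company, like the IPO example above,
# might land in the "medium risk" band.
print(classify_supplier(fhr=65, core_health=35))
```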

Gardner: How are you able to determine risk going 36 months out when you’re dealing mostly with short-term data?

Evans: It’s because of the historical nature and the discrete modeling underneath, which is precise about the industry each company is in. Having 24 unique industry models is very different from taking all of the companies out there and stuffing them into a plain-vanilla industry template. A software company is very different from a pharmaceutical company, which is very different from a manufacturer.

Having that industry depth — and the historical data behind it — is what drives the go-forward assessments.

Gardner: And this is global in nature?

Evans: Absolutely. We have gone out to more than 130 countries to get data from those sources, those suppliers. It is a global data set that we have built on a one-to-one basis for our clients.

Gardner: Kristen, how does somebody in the Ariba orbit take advantage of this? How is this consumed?

Jordeth: As with everything at SAP Ariba, we want to simplify how our customers get access to information. The PartnerEdge program works with our third parties and partners to create an API whereby all our customers need to do is get a license key from RapidRatings and apply it to the system.

The infrastructure and connection are already there. Our deployment teams don’t have to do anything, just add that user license and the key within the system. So, it’s less touch, and easy to access the data.

Gardner: For those suppliers that want to be considered good partners with low financial risk, do they have access to this information? Can they work to boost up their scores?

To reduce risk, discuss data details 

Evans: Our clients actually own the subscription and the license, and they can share the data with their suppliers. The suppliers can also foster a dialogue with our tool, called the Financial Dialogue, and ask questions around areas of concern. That can be used to foster a better relationship and build transparency, and it doesn’t have to be a negative conversation; it can be a positive one.


A client may want to invest in that supplier, extend payment terms or credit, work with them on service-level agreements (SLAs), or send in people to help manage. So, it can be a good way to build up a deeper relationship with that supplier and use it as a better foundation.

Gardner: Kristen, when I put myself in the position of a buyer, I need to factor in lots of other issues, such as sustainability, compliance, and availability. So how do you see the future unfolding for a holistic approach to risk mitigation, one that takes advantage of not only financial risk assessments but the whole compendium of other risks? It’s not a simple, easy task.

Jordeth: When you look at financial data, you need to understand the whole story behind it. Why does that financial data look the way it does today? What I love about RapidRatings is that their financial scores are about more than today’s numbers; they speak to the health of the company in the future.

But in our SAP Ariba solution, we provide insights on other factors such as sustainability, information security, and whether they are funding causes such as women’s rights in Third World countries. Once you build proactive awareness of what’s going on, all the good and the bad together, you can weigh the suppliers in a total sense.

Their financials may not be up to par, but they are not high risk because they are funding women’s rights or doing a lot of things with the youth in America. To me, that may be more important. So I might put them on a tracker to address their financials more often, but I am not going to stop doing business with them because one of my goals is sustainability. That holistic picture helps tell the true story, a story that connects to our customers, and not just the story we want them to have. So, it creates and crafts that full picture for them.

Gardner: Empirical data that can then lead to a good judgment that takes into full account all the other variables. How does this now get to the SAP Ariba installed base? When is the general availability?

Customize categories, increase confidence 

Jordeth: It’s available now. Our supplier risk module is the entryway for all of these APIs, and within that module we connect to the companies that provide financial data, compliance screening, and information on forced labor, among others. We are heavily expanding in this area for categories of risk with our partners, so it’s a fantastic approach.

Within the supplier risk module, customers have the capability to not only access the information but also create their own custom scores on that data. Because we are a technology organization, we give them the keys so an administrator can go in and alter that the way they want. It is very customizable.

It’s all in our SAP Ariba Supplier Risk solution, and we recently released the connection to RapidRatings.
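To illustrate the kind of custom scoring an administrator might configure on top of those risk categories, here is a hypothetical weighted-composite sketch in Python. The category names and weights are placeholders, not SAP Ariba’s built-in scoring model.

```python
# Hypothetical custom composite score; weights and categories are placeholders
# an administrator might configure, not a vendor's actual model.
CATEGORY_WEIGHTS = {
    "financial": 0.4,
    "compliance": 0.25,
    "sustainability": 0.2,
    "operational": 0.15,
}

def composite_risk(category_scores: dict) -> float:
    """Weighted average of per-category risk scores (0 = low risk, 100 = high)."""
    total = sum(
        CATEGORY_WEIGHTS[name] * score
        for name, score in category_scores.items()
        if name in CATEGORY_WEIGHTS
    )
    weight_used = sum(
        CATEGORY_WEIGHTS[name] for name in category_scores if name in CATEGORY_WEIGHTS
    )
    return total / weight_used if weight_used else 0.0

# Missing categories are simply left out of the weighted average.
print(composite_risk({"financial": 70, "compliance": 20, "sustainability": 10}))
```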

Evans: Our logo is right in there, built in, under the hood, and visible. In terms of getting it enabled, there are no professional services or implementation wait times. Once the data set is built out on our end (for a new client, that happens through our implementation team), we simply give the API key credentials to our client. They take it and enable it in SAP Ariba Supplier Risk, and they can instantly pull up the scores. So there is no wait time and no future development needed to get at the data.

Once the data set is built on our end, we just give the API key to our client. They take it and enable it in SAP Ariba Supplier Risk and they can instantly pull up the scores. There is no wait time.

Jordeth: That helps us with security, too, because everybody wants to ensure that any data going in and out of a system is secure, given all of the compliance concerns we have. Our partner team also ensures a secure connection back and forth between their data system and our technology, which is very important for customers.

Gardner: Are there any concrete examples, ones you can name or perhaps can’t, where your rating system has proven its worth? How does this work in the real world?

Evans: GE Healthcare did a joint webinar with our CEO last year, explained their program, and showed how they were able to de-risk their supply base using RapidRatings. They reduced the number of financially unhealthy companies in that base and put mitigation plans and corrective actions in place. So it was an across-the-board win-win.


Oftentimes, it’s not about the return on investment (ROI) on the platform, but the fact that a company thwarted a disruption: an event did not happen because we were able to address it before it happened.

On the flip side, you can see how resilient companies are regardless of all the disruptions out there. They can use the financial health scores to observe the capability of a company to be resilient and bounce back from a cyber breach, a regulatory issue, or maybe a sustainability issue.

By looking at all of these risks inside of SAP Ariba Supplier Risk, a buyer who is examining other risks, such as operational risks, may decide to order or review an FHR for a company they hadn’t thought about before. So that’s another way to tie it in.

Another interesting example involves a large international retailer. A supplier got flagged as high risk because it had just filed for bankruptcy, which alerted the buyer. The buyer had already signed a contract and needed the product on the shelf, so the item had to be re-sourced and a new supplier found. They mitigated the risk, but it took quick action and some scrambling to secure another product. By acting on the alert, they avoided damage to their brand reputation. They hadn’t looked at that company before; it was a new supplier, and the alert caught it. So it’s not just about running the rating at the time of contract; it’s also about running it when you’re going to market.

Identify related risks 

Gardner: It also seems logical that if a company is suffering on the financial aspects of doing business, then it might be an indicator that they’re not well-managed in general. It may not just be a cause, but an effect. Are there other areas, you could call them adjacencies, where risks to quality, delivery times, logistics are learned from financial indicators?

Evans: It’s a really good point. What’s interesting is we took a look at some data our clients had around timeliness, quality, performance, and delivery, and overlaid it with the financial data on those suppliers. The companies that were weak financially were more than two times as likely to ship a defective product, and more than 2.5 times as likely to ship the wrong product or ship late.

The whole just-in-time shipping or delivery value went out the window. To your point, it can be construed that companies — when they are stressed financially — may be cutting corners, with things getting a little shoddy. They may not have replaced someone. Maybe there are infrastructure investments that should have been made but weren’t. So, all of those things have a reverberating effect in other operational risk areas.
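As a back-of-the-envelope illustration of how such a comparison could be computed once delivery performance is overlaid with financial health data, here is a small Python sketch. The counts are invented purely to show the arithmetic behind a "more than two times as likely" figure.

```python
# Illustrative arithmetic only: the counts are invented for the example.
def defect_rate(defective: int, total: int) -> float:
    return defective / total

weak_rate = defect_rate(defective=120, total=1_000)    # financially weak suppliers
strong_rate = defect_rate(defective=50, total=1_000)   # financially healthy suppliers

relative_risk = weak_rate / strong_rate
print(f"Weak suppliers were {relative_risk:.1f}x as likely to ship a defect")
```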

Gardner: Kristen, now that we know that more data is good, and that you have more services like at RapidRatings, how will a big platform and network like SAP Ariba be able to use machine learning (ML) and artificial intelligence (AI) to further improve risk mitigation?

Jordeth: The opportunity exists for this to impact not only the assessment of a supplier but the full source-to-pay process, because it is embedded into the full SAP Ariba suite. So even though you’re accessing it through risk, it’s visible when you’re sourcing, when you’re contracting, and when you’re paying. That direct connection is very important.

We want our customers to have it all. So I don’t cringe at the fact that they ask for it all, because they should have it all. It’s just a matter of visualizing it in a manner that makes sense and is clear to them.

Gardner: And specifically on your set of solutions, Eric, where do you see things going in the next couple years? How can the technology get even better? How can the risk be reduced more?

Evans: We will be innovating products so our clients can bring more of their supply base into scope, not just the critical vendors but the longer tail, and look at scores across different segments of suppliers. That can extend to sub-tiers, traversing third and fourth parties, particularly in the banking and manufacturing industries.

That, coupled with more intelligence, enhanced APIs, and data visualization, is what we are looking into, along with additional scoring capabilities.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: SAP Ariba.


Using AI to solve data and IT complexity — and thereby better enable AI

The next BriefingsDirect data disruption discussion focuses on why the rising tidal wave of data must be better managed, and how new tools are emerging to bring artificial intelligence (AI) to the rescue.

Stay with us to explore how the latest AI innovations improve both data and services management across a cloud deployment continuum — and in doing so set up an even more powerful way for businesses to exploit AI.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn how AI will help conquer complexity to allow for higher abstractions of benefits from across all sorts of analysis, we welcome Rebecca Lewington, Senior Manager of Innovation Marketing at Hewlett Packard Enterprise (HPE). The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: We have been talking about massive amounts of data for quite some time. What’s new about data buildup that requires us to look to AI for help?

Lewington: Partly it is the sheer amount of data. IDC’s Data Age Study predicts the global datasphere will reach 175 zettabytes by 2025, which is a rather large number; a zettabyte is a 1 followed by 21 zeros. But we have always been in an era of exploding data.

 
Rebecca Lewington
Lewington

Yet, things are different. One, it’s not just the amount of data; it’s the number of sources the data comes from. We are adding in things like mobile devices, and we are connecting factories’ operational technologies to information technology (IT). There are more and more sources.

Also, the time we have to do something with that data is shrinking to the point where we expect everything to be real-time or you are going to make a bad decision. An autonomous car, for example, might do something bad. Or we are going to miss a market or competitive intelligence opportunity.

So it’s not just the amount of data — but what you need to do with it that is challenging.

Gardner: We are also at a time when Al and machine learning (ML) technologies have matured. We can begin to turn them toward the data issue to better exploit the data. What is new and interesting about AI and ML that make them more applicable for this data complexity issue?

Data gets smarter with AI

Lewington: A lot of the key algorithms for AI were actually invented long ago in the 1950s, but at that time, the computers were hopeless relative to what we have today; so it wasn’t possible to harness them.

For example, you can train a deep-learning neural net to recognize pictures of kittens. To do that, you need to run millions of images to train a working model you can deploy. That’s a huge, computationally intensive task that only became practical a few years ago. But now that we have hit that inflection point, things are just taking off.
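For readers who want a sense of what that training loop involves, here is a minimal sketch using PyTorch. The folder layout, backbone choice, and training length are assumptions for illustration, not a production recipe, and real training would need far more data and compute, which is exactly the inflection point described above.

```python
# Minimal sketch of the "kitten classifier" idea; paths and sizes are placeholders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
# Assumes an image folder laid out as data/cat and data/not_cat.
train_set = datasets.ImageFolder("data/", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=None)          # small CNN backbone, trained from scratch
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: cat / not cat

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):                         # real training needs far more epochs and data
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```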

Gardner: We can begin to use machines to better manage data that we can then apply to machines. Does that change the definition of AI?

Lewington: The definition of AI is tricky. It’s malleable, depending on who you talk to. For some people, it’s anything that a human can do. To others, it means sophisticated techniques, like reinforcement learning and deep learning.

How to Remove Complexity From Multicloud and Hybrid IT

One useful definition is that AI is what you use when you know what the answer looks like, but not how to get there.

Traditional analytics effectively does at scale what you could do with pencil and paper. You could write the equations to decide where your data should live, depending on how quickly you need to access it.

But with AI, it’s like the kittens example. You know what the answer looks like, it’s trivial for you to look at the photograph and say, “That is a cat in the picture.” But it’s really, really difficult to write the equations to do it. But now, it’s become relatively easy to train a black box model to do that job for you.

Gardner: Now that we are able to train the black box, how can we apply that in a practical way to the business problem that we discussed at the outset? What is it about AI now that helps better manage data? What’s changed that gives us better data because we are using AI?

The heart of what makes AI work is good data; the right data, in the right place, with the right properties you can use to train a model, which you can then feed new data into to get results that you couldn’t get otherwise.

Lewington: It’s a circular thing. The heart of what makes AI work is good data; the right data, in the right place, with the right properties you can use to train a model, which you can then feed new data into to get results that you couldn’t get otherwise.

Now, there are many ways you can apply that. You can apply it to the trivial case of the cat we just talked about. You can apply it to helping a surgeon review many more MRIs, for example, by allowing him to focus on the few that are borderline, and to do the mundane stuff for him.

But, one of the other things you can do with it is use it to manipulate the data itself. So we are using AI to make the data better — to make AI better.

Gardner: Not only is it circular, and potentially highly reinforcing, but when we apply this to operations in IT — particularly complexity in hybrid cloud, multicloud, and hybrid IT — we get an additional benefit. You can make the IT systems more powerful when it comes to the application of that circular capability — of making better AI and better data management.

AI scales data upward and outward

Lewington: Oh, absolutely. I think the key word here is scale. When you think about data — and all of the places it can be, all the formats it can be in — you could do it yourself. If you want to do a particular task, you could do what has traditionally been done. You can say, “Well, I need to import the data from here to here and to spin up these clusters and install these applications.” Those are all things you could do manually, and you can do them for one-off things.

But once you get to a certain scale, you need to do them hundreds of times, thousands of times, even millions of times. And you don’t have the humans to do it. It’s ridiculous. So AI gives you a way to augment the humans you do have, to take the mundane stuff away, so they can get straight to what they want to do, which is coming up with an answer instead of spending weeks and months preparing to start to work out the answer.


Gardner: So AI directed at IT, what some people call AIOps, could be an accelerant to this circular, advantageous relationship between AI and data? And is that part of what you are doing within the innovation and research work at HPE?

Lewington: That’s true, absolutely. The mission of Hewlett Packard Labs in this space is to assist the rest of the company to create more powerful, more flexible, more secure, and more efficient computing and data architectures. And for us in Labs, this tends to be a fairly specific series of research projects that feed into the bigger picture.

For example, we are now doing the Deep Learning Cookbook, which allows customers to find out ahead of time exactly what kind of hardware and software they are going to need to get to a desired outcome. We are automating the experimenting process, if you will.

And, as we talked about earlier, there is the shift to the edge. As we make more and more decisions — and gain more insights there, to where the data is created — there is a growing need to deploy AI at the edge. That means you need a data strategy to get the data in the right place together with the AI algorithm, at the edge. That’s because there often isn’t time to move that data into the cloud before making a decision and waiting for the required action to return.

Once you begin doing that, once you start moving from a few clouds to thousands and millions of endpoints, how do you handle multiple deployments? How do you maintain security and data integrity across all of those devices? As researchers, we aim to answer exactly those questions.

And, further out, we are looking to move the learning phase itself to the edge, to do what we call swarm learning, where devices learn from their environment and from each other, using a distributed model that doesn’t rely on a central cloud at all.
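As a conceptual sketch of that swarm idea, the toy Python below has each edge node take a local training step on its own data and then average parameters with its peers, with no central aggregator. It is a plain-Python stand-in for the concept under those assumptions, not HPE’s actual swarm learning implementation.

```python
# Conceptual sketch: peer-to-peer parameter averaging, no central cloud.
from statistics import mean

def local_update(weights: list[float], local_gradient: list[float], lr: float = 0.1) -> list[float]:
    """One local training step on a node's own data (gradient is a stand-in)."""
    return [w - lr * g for w, g in zip(weights, local_gradient)]

def swarm_average(peer_weights: list[list[float]]) -> list[float]:
    """Peers exchange parameters and agree on the element-wise average."""
    return [mean(column) for column in zip(*peer_weights)]

# Three edge nodes start from the same model but see different local data.
nodes = [[0.5, -0.2], [0.5, -0.2], [0.5, -0.2]]
gradients = [[0.1, 0.0], [0.0, 0.2], [-0.1, 0.1]]  # differs per node's local data

updated = [local_update(w, g) for w, g in zip(nodes, gradients)]
shared_model = swarm_average(updated)
print(shared_model)
```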

Gardner: Rebecca, given your title is Innovation Marketing Lead, is there something about the very nature of innovation that you have come to learn personally that’s different than what you expected? How has innovation itself changed in the past several years?

Innovation takes time and space 

Lewington: I began my career as a mechanical engineer. For many years, I was offended by the term innovation process, because that’s not how innovation works. You give people the space and you give them the time and ideas appear organically. You can’t have a process to have ideas. You can have a process to put those ideas into reality, to wean out the ones that aren’t going to succeed, and to promote the ones that work.

How to Better Understand What AI Can Do for Your Business

But the term innovation process to me is an oxymoron. And that’s the beautiful thing about Hewlett Packard Labs. It was set up to give people the space where they can work on things that just seem like a good idea when they pop up in their heads. They can work on these and figure out which ones will be of use to the broader organization — and then it’s full steam ahead.

Gardner: It seems to me that the relationship between infrastructure and AI has changed. It wasn’t that long ago when we thought of business intelligence (BI) as an application — above the infrastructure. But the way you are describing the requirements of management in an edge environment — of being able to harness complexity across multiple clouds and the edge — this is much more of a function of the capability of the infrastructure, too. Is that how you are seeing it, that only a supplier that’s deep in its infrastructure roots can solve these problems? This is not a bolt-on benefit.

Lewington: I wouldn’t say it’s impossible as a bolt-on; it’s impossible to do efficiently and securely as a bolt-on. One of the problems with AI is that you are using a black box; you don’t know how it works. There were a number of news stories recently about AIs becoming corrupted, biased, and even racist, for example. Those kinds of problems are going to become more common.

And so you need to know that your systems maintain their integrity and cannot be breached by bad actors. If you are only working at the very top layers of the software, it’s going to be very difficult to attest that the integrity of what’s underneath hasn’t been violated.

If you are someone like HPE, which has its fingers in lots of pies, either directly or through our partners, it’s easier to make a more efficient solution.

You need to know that your systems maintain their integrity and cannot be breached by bad actors. If you are only working at the very top layers of the software, it’s going to be very difficult to attest that the integrity of what’s underneath hasn’t been violated.

Gardner: Is it fair to say that AI should be a new core competency, for not only data scientists and IT operators, but pretty much anybody in business? It seems to me this is an essential core competency across the board.

Lewington: I think that’s true. Think of AI as another layer of tools that, as we go forward, becomes increasingly sophisticated. We will add more and more tools to our AI toolbox. And this is one set of tools that you just cannot afford not to have.

Gardner: Rebecca, it seems to me that there is virtually nothing within an enterprise that won’t be impacted in one way or another by AI.

Lewington: I think that’s true. Anywhere in our lives where there is an equation, there could be AI. There is so much data coming from so many sources. Many things are now overwhelmed by the amount of data, even if it’s just as mundane as deciding what to read in the morning or what route to take to work, let alone how to manage my enterprise IT infrastructure. All things that are rule-based can be made more powerful, more flexible, and more responsive using AI.

Gardner: Returning to the circular nature of using AI to make more data available for AI — and recognizing that the IT infrastructure is a big part of that — what are you doing in your research and development to make data services available and secure? Is there a relationship between things like HPE OneView and HPE OneSphere and AI when it comes to efficiency and security at scale?

Let the system deal with IT 

Lewington: Those tools historically have been rules-based. We know that if a storage disk gets to a certain percentage full, we need to spin up another disk — those kinds of things. But to scale flexibly, at some point that rules-based approach becomes unworkable. You want to have the system look after itself, to identify its own problems and deal with them.

Including AI techniques in things like HPE InfoSight, Aruba ClearPass, and the network user behavior analytics software on the HPE Aruba side allows those tools to become more powerful and more efficient.

You can think of AI here as another class of analytics tools. It’s not magic, it’s just a different and better way of doing IT analytics. The AI lets you harness more difficult datasets, more complicated datasets, and more distributed datasets.
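As a toy contrast of the two approaches described above, the sketch below pairs a fixed threshold rule with a simple statistical check that flags behavior unusual for that particular disk. The numbers and the three-sigma cutoff are made up for illustration; real AIOps tooling is far more sophisticated.

```python
# Toy contrast: a fixed rule fires at a set threshold, while a simple
# statistical check flags readings unusual for this disk's own history.
from statistics import mean, stdev

def rule_based_alert(disk_used_pct: float, threshold: float = 85.0) -> bool:
    return disk_used_pct >= threshold

def anomaly_alert(history: list[float], latest: float, z_cutoff: float = 3.0) -> bool:
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and abs(latest - mu) / sigma > z_cutoff

history = [61.0, 62.5, 60.8, 63.1, 61.9, 62.2]
print(rule_based_alert(79.0))          # False: still under the fixed threshold
print(anomaly_alert(history, 79.0))    # True: far outside this disk's normal pattern
```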

Gardner: If I’m an IT operator in a global 2000 enterprise, and I’m using analytics to help run my IT systems, what should I be thinking about differently to begin using AI — rather than just analytics alone — to do my job better?

Lewington: If you are that person, you don’t really want to think about the AI. You don’t want the AI to intrude upon your consciousness. You just want the tools to do your job.

For example, I may have 1,000 people starting a factory in Azerbaijan, or somewhere, and I need to provision for all of that. I want to be able to put on my headset and say, “Hey, computer, set up all the stuff I need in Azerbaijan.” You don’t want to think about what’s under the hood. Our job is to make those tools invisible and powerful.

Composable, invisible, and insightful 

Gardner: That sounds a lot like composability. Is that another tangent that HPE is working on that aligns well with AI?

Lewington: It would be difficult to have AI be part of the fabric of an enterprise without composability, and without extending composability into more dimensions. It’s not just about being able to define the amount of storage and computer networking with a line of code, it’s about being able to define the amount of memory, where the data is, where the data should be, and what format the data should be in. All of those things – from the edge to cloud – need to be dimensions in composability.

How to Achieve Composability Across Your Datacenter

You want everything to work behind the scenes for you in the best way with the quickest results, with the least energy, and in the most cost-effective way possible. That’s what we want to achieve — invisible infrastructure.

Gardner: We have been speaking at a fairly abstract level, but let’s look to some examples to illustrate what we’re getting at when we think about such composability sophistication.

Do you have any concrete examples or use cases within HPE that illustrate the business practicality of what we’ve been talking about?

Lewington: Yes, we have helped a tremendous number of customers either get started with AI in their operations or move from pilot to volume use. A couple of them stand out. One particular manufacturing company makes electronic components. They needed to improve the yields in their production lines, and they didn’t know how to attack the problem. We were able to partner with them to use such things as vision systems and photographs from their production tools to identify defects that could only have been picked up by people if a whole lot of them were watching everything all of the time.

This gets back to the notion of augmenting human capabilities. Their machines produce terabytes of data every day, and much of it just gets thrown away because they don’t know what to do with it.


We began running some research projects with them to use some very sophisticated techniques, visual autoencoders, that allow you, without a labeled training set, to characterize a production line that is performing well versus one that is on the verge of moving away from the sweet spot. Those techniques can fingerprint a good line and also identify when a line has gone just slightly bad, at a point where a human looking at the line would think it was still working perfectly.

This takes the idea of predictive maintenance further into what we call prescriptive maintenance, where we have a much more sophisticated view into what represents a good line and what represents a bad line. Those are a couple of examples from manufacturing that I think are relevant.
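To make the autoencoder idea more tangible, here is a minimal PyTorch sketch of reconstruction-error fingerprinting: the model is trained only on images of a line running well, and frames it reconstructs poorly are flagged as drift. The architecture, image size, and threshold are illustrative assumptions, not the system described above.

```python
# Sketch of reconstruction-error anomaly detection for production-line frames.
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    def __init__(self, n_features: int = 64 * 64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 128), nn.ReLU(), nn.Linear(128, 16))
        self.decoder = nn.Sequential(nn.Linear(16, 128), nn.ReLU(), nn.Linear(128, n_features))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinyAutoencoder()  # in practice, trained only on "good line" frames
loss_fn = nn.MSELoss()

def reconstruction_error(frame: torch.Tensor) -> float:
    """Higher error means the frame looks unlike the 'good line' the model learned."""
    with torch.no_grad():
        return loss_fn(model(frame), frame).item()

def line_drifting(frame: torch.Tensor, threshold: float = 0.05) -> bool:
    # The threshold would be calibrated on held-out "good" frames in practice.
    return reconstruction_error(frame) > threshold

frame = torch.rand(1, 64 * 64)  # stand-in for a flattened camera frame
print(line_drifting(frame))
```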

Gardner: If I am an IT strategist, a Chief Information Officer (CIO) or a Chief Technology Officer (CTO), for example, and I’m looking at what HPE is doing — perhaps at the HPE Discover conference — where should I focus my attention if I want to become better at using AI, even if it’s invisible? How can I become more capable as an organization to enable AI to become a bigger part of what we do as a company?

The new company man is AI

Lewington: For CIOs, their most important customers these days may be developers and increasingly data scientists, who are basically developers working with training models as opposed to programs and code. They don’t want to have to think about where that data is coming from and what it’s running on. They just want to be able to experiment, to put together frameworks that turn data into insights.

It’s very much like the programming world, where we’ve gradually abstracted things from bare-metal, to virtual machines, to containers, and now to the emerging paradigm of serverless in some of the walled-garden public clouds. Now, you want to do the same thing for that data scientist, in an analogous way.

Today, it’s a lot of heavy lifting, getting these things ready. It’s very difficult for a data scientist to experiment. They know what they want. They ask for it, but it takes weeks and months to set up a system so they can do that one experiment. Then they find it doesn’t work and move on to do something different. And that requires a complete re-spin of what’s under the hood.

Now, using things like software from the recent HPE BlueData acquisition, we can make all of that go away. And so the CIO’s job becomes much simpler because they can provide their customers the tools they need to get their work done without them calling up every 10 seconds and saying, “I need a cluster, I need a cluster, I need a cluster.”

That’s what a CIO should be looking for, a partner that can help them abstract complexity away, get it done at scale, and in a way that they can both afford and that takes the risk out. This is complicated, it’s daunting, and the field is changing so fast.

Gardner: So, in a nutshell, they need to look to the innovation that organizations like HPE are doing in order to then promulgate more innovation themselves within their own organization. It’s an interesting time.

Containers contend for the future 

Lewington: Yes, that’s very well put. Because it’s changing so fast they don’t just want a partner who has the stuff they need today, even if they don’t necessarily know what they need today. They want to know that the partner they are working with is working on what they are going to need five to 10 years down the line — and thinking even further out. So I think that’s one of the things that we bring to the table that others can’t.

Gardner: Can you give us a hint as to what some of those innovations four or five years out might be? How should we avoid limiting ourselves in our thinking about that circular relationship between AI, data, and innovation?

Lewington: It was worth coming to HPE Discover in June, because we talked about some exciting new things around many different options. The discussion about increasing automation abstractions is just going to accelerate.

We are going to get to the point where using containers seems as complicated as bare metal does today, and that’s really going to help simplify the whole data-pipeline picture.

For example, containers have fairly small penetration across enterprises, at about 10 percent adoption today, because they are not the simplest thing in the world. But we are going to get to the point where using containers seems as complicated as bare metal does today, and that’s really going to help simplify the whole data-pipeline picture.

Beyond that, the elephant in the room for AI is that model complexity is growing incredibly fast. The compute requirements are going up, something like 10 times faster than Moore’s Law, even as Moore’s Law is slowing down.

We are already seeing an AI compute gap between what we can achieve and what we need to achieve — and it’s not just compute, it’s also energy. The world’s energy supply can only grow slowly, and if we need exponentially more data, exponentially more compute, and exponentially more energy, that’s just not going to be sustainable.
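Taking the figures in that statement at face value, hardware doubling roughly every two years under Moore’s Law and AI compute demand growing about ten times faster, a quick back-of-the-envelope projection shows how fast such a gap would widen. The numbers are purely illustrative.

```python
# Back-of-the-envelope projection of the "compute gap"; numbers are illustrative.
def growth_factor(years: float, doubling_time_years: float) -> float:
    return 2 ** (years / doubling_time_years)

moore_doubling = 2.0                    # years per doubling of hardware capability
demand_doubling = moore_doubling / 10   # "10 times faster," per the discussion

for years in (1, 3, 5):
    supply = growth_factor(years, moore_doubling)
    demand = growth_factor(years, demand_doubling)
    print(f"{years}y: demand outgrows supply by ~{demand / supply:,.0f}x")
```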

So we are also working on something called Emergent Computing, a super-energy-efficient architecture that moves data to wherever it needs to be — or, rather than moving the data around, brings the compute to the data. That will help us close that gap.

How to Transform the Traditional Datacenter

And that includes some very exciting new accelerator technologies: special-purpose compute engines designed specifically for certain AI algorithms. Not only are we using regular transistor logic, we are also using analog computing, and even optical computing, to do some of these tasks, and to do them hundreds of times more efficiently and with hundreds of times less energy. This is all very exciting stuff, for a little further out in the future.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


How IT can fix the broken employee experience

The next BriefingsDirect intelligent workspaces discussion explores how businesses are looking to the latest digital technologies to transform how employees work.

There is a tremendous amount of noise, clutter, and distraction in the scattershot, multi-cloud workplace of today — and it’s creating confusion and frustration that often pollute processes and hinder innovative and impactful work. 

We’ll now examine how IT can elevate its game by sorting through apps, services, and data, and by delivering simpler, more intelligent experiences that enable people — in any context — to focus on what’s relevant and consistently operate at their informed best.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. 

To illustrate new paths to the next generation of higher productivity work, please welcome Marco Stalder, Team Leader of Citrix Workspace Services at Bechtle AG, one of Europe’s leading IT providers, and Tim Minahan, Executive Vice President of Strategy and Chief Marketing Officer at Citrix. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tim, improving the employee experience has become a hot topic, with billions of productivity dollars at stake. Why has how workers do or don’t do their jobs well become such a prominent issue?

Minahan: The simple answer is the talent crunch. Just about everywhere you look, workforce management and talent acquisition have become a C-suite-level, if not board-level, priority.

Minahan

And this really boils down to three things. Number one, demographically there is not enough talent. You have heard the numbers from McKinsey that within the next year there will be a shortage of 95 million medium- to high-skilled workers around the globe. And that’s being exacerbated by the fact that our traditional work models — where we build a big office building or a call center and try to hire people around it — are fundamentally broken.

The second key reason is a skills gap. Many companies are reengineering their business to drive digital transformation and new digital business or engagement models with their customers. But oftentimes their employee base doesn’t have the right skills and they need to work on developing them. 

The third issue exacerbating the talent crunch is the fact that if you are fortunate enough to have the talent, it’s highly likely they are disengaged at work. Gallup just did its global Future of Work Study and found that 85 percent of employees are either disengaged or highly disengaged at work. A chief reason is they don’t feel they have access to the information and the tools they need to get their jobs done effectively.

Gardner: We have dissatisfaction, we have a hard time finding people, and we have a hard time keeping the right people. What can we bring to the table to help solve that? Is there some combination of what human resources (HR) used to do and IT maybe didn’t think about doing but has to do?

Enhance the employee experience 

Minahan: The concept of employee experience is working its way into the corporate parlance. The chief reason is that you want to be able to ensure employees have the right combination of physical space and an environment conducive to interacting and partnering with their project teams — and to getting work done.

Digital spaces, too. That is not just access to technology, but a digital space that is simplified and curated to ensure workers get the right information and insights to do their jobs. And then, obviously, cultural considerations, such as, “Who is my manager, what’s my career development path, am I continuing to move forward?”

Those three things are combining when we talk about employee experience.

Gardner: And you talked about the where, the physical environment. A lot of companies have experimented with at-home workers, remote workers, and branch offices. But many have not gotten the formula right. At the same time, we are seeing cities become very congested and very expensive. 

The traditional work models of old just aren’t working, especially in light of the talent crunch and skills gap we’re seeing. Traditional work models are fundamentally broken. 

Do we need to give people even more choice? And if we do, how can we securely support that? 

Minahan: The traditional work models of old just aren’t working, especially in light of the talent crunch and skills gap we are seeing. The high-profile example is Amazon, right? So over the past year in the US there was a big deal over Amazon selecting their second and third headquarters. Years ago Amazon realized they couldn’t hire all the talent they needed in Seattle or Silicon Valley or Austin. Now they have 17-odd tech centers around the US, with anywhere from 400 to several thousand people at each one. So you need to go where the talent is. 

When we think about traditional work models — where we would build a call center and hire a lot of people around that call center — they’re fundamentally broken. As evidence of this, we did a study recently where we surveyed 5,000 professional knowledge workers in the US. These were folks who moved to cities because they had opportunities and got paid more. Yet 70 percent of them said they would move out of the city if they could have more flexible work schedules and reliable connectivity.

Gardner: It’s pretty attractive when you can get twice the house for half the money, still make city wages, and have higher productivity. It’s a tough equation to beat. 

Minahan: Yes, there is that higher productivity piece, this whole concept of mindfulness that’s working its way into the lingo. People should be hired to do a core job, not spend their days doing things like expense report approvals, performance reviews, or purchase requisitions. Yet those are a big part of everyone’s job when they are in an office.

Compound that with two-hour commutes and all the distractions in the office. We often need to navigate multiple applications just to get a bit of information, and multiple applications to complete a single business process. And that’s not just dealing with all the different interfaces; it’s dealing with all the different authentications, and so on. All of that noise in your day really frustrates workers. They feel they were hired to do a job based on core skills they are really passionate about, but they spend all their time on task work.

Gardner: I feel like I spend way too much time in email. I think everybody knows and feels that problem. Now, how do we start to solve this? What can the technology side bring to the table, and how can that start to move into the culture, the methods, and the rethinking of how work gets done?

De-clutter intelligently

Minahan: The simple answer is you need to clear way the clutter. And you need to bring intelligence to bear. We believe that artificial intelligence (AI) and machine learning (ML) play a key role. And so Citrix has delivered a digital workspace that has three primary attributes. 

First, it’s unified. Users and employees gain everything they need to be productive in one unified experience. Via single sign-on they gain access to all of their Software as a service (SaaS) apps, web apps, mobile apps, virtualized apps, and all of their content in one place. That all travels consistently with them wherever they are — across their laptop, to a tablet, to a smartphone, or even if they need to log on from a distinct terminal. 

The second component, in addition to being unified, is being secure. When things are within the workspace, we can apply contextual security policies based on who you are. We know, for example, that Dana logs in every day from a specific network, using his device. If you were to act abnormally or outside of that pattern, we could apply an additional level of authentication, or some other rules like shutting off certain functionalities such as downloading. So your applications and content are far more secure inside of the workspace than outside. 

When things are within the workspace, we can apply contextual security policies based on who you are. Your applications and content are far more secure inside of the workspace than outside.
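A highly simplified sketch of that contextual, step-up-authentication logic might look like the following Python. The fields, known patterns, and policy outcomes are invented for illustration and are not Citrix’s actual policy engine.

```python
# Simplified contextual-access sketch: compare a login to the user's usual
# pattern and step up authentication (or restrict the session) on deviation.
from dataclasses import dataclass

@dataclass
class LoginContext:
    user: str
    network: str
    device_id: str
    country: str

KNOWN_PATTERNS = {
    "dana": {"network": "corp-vpn", "device_id": "laptop-01", "country": "US"},
}

def access_decision(ctx: LoginContext) -> str:
    usual = KNOWN_PATTERNS.get(ctx.user)
    if usual is None:
        return "deny"
    deviations = sum(
        getattr(ctx, field) != expected for field, expected in usual.items()
    )
    if deviations == 0:
        return "allow"
    if deviations == 1:
        return "step-up-auth"      # e.g., prompt for a second factor
    return "allow-restricted"      # e.g., disable downloads, limit functionality

# One deviation (unfamiliar network) triggers step-up authentication.
print(access_decision(LoginContext("dana", "hotel-wifi", "laptop-01", "US")))
```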

The third component, intelligence, gets to the frustration part for the employees. Infusing ML and simplified workflows — what we call micro apps — within the workspace brings in a lot of those consumer-like experiences, such as curating your information and news streams, like Facebook. Or, like Netflix, it provides recommendations on the content you would like to see.

We can bring that into the workspace so that when you show up you get presented in a very personalized way the insights and tasks that you need, when you need them, and remove that noise from your day so you can focus on your core job. 
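As a toy illustration of that curation idea, the sketch below ranks workspace items by a simple relevance score so the most pressing tasks surface first. The weights and fields are invented for the example, not how any particular workspace product scores its feed.

```python
# Toy relevance ranking for a curated workspace feed; weights are invented.
from datetime import datetime, timedelta

def relevance(item: dict, now: datetime) -> float:
    hours_until_due = max((item["due"] - now).total_seconds() / 3600, 1.0)
    urgency = 1.0 / hours_until_due                     # sooner deadline, higher urgency
    waiting = 1.0 if item["awaiting_user"] else 0.0     # items blocked on the user rank higher
    return 2.0 * item["importance"] + 5.0 * urgency + waiting

now = datetime.now()
items = [
    {"title": "Approve expense report", "importance": 0.3, "due": now + timedelta(days=5), "awaiting_user": True},
    {"title": "Review contract draft", "importance": 0.9, "due": now + timedelta(hours=4), "awaiting_user": True},
    {"title": "Company newsletter", "importance": 0.1, "due": now + timedelta(days=30), "awaiting_user": False},
]

for item in sorted(items, key=lambda i: relevance(i, now), reverse=True):
    print(item["title"])
```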

Gardner: Getting that triage based on context and that has a relevancy to other team processes sounds super important.

When it comes to IT, they may have been part of the problem. They have just layered on more apps. But IT is clearly going to be part of the solution, too. Who else needs to play a role here? How else can we re-architect work other than just using more technology?

To get the job done, ask employees how 

Minahan: If you are going to deliver improved employee experiences, one of the big mistakes a lot of companies make is they leave out the employee. They go off and craft the great employee experience and then present it to them. So definitely bring employees in. 

When we do research and engage with customers who prioritize the employee experience, it’s usually a union between IT and human resources, to best understand the work an employee needs to get done. What’s the preferred environment? How do they want to work? With that understanding, you can ensure you are adapting the digital workspaces — and the physical workplaces — to support that.

Gardner: It certainly makes sense in theory. Let’s learn how this works in practice. 

Marco, tell us about Bechtle, what you have been doing, and why you made solving employee productivity issues a priority.

Stalder: Bechtle AG is one of Europe’s leading IT providers. We currently have about 70 systems integrators (SIs) across Germany, Switzerland, and Austria, as well as e-commerce businesses in 14 different European countries. 

Stalder

We were founded in 1983 and our company headquarters is in Neckarsulm, a small town in the southern part of Germany. We currently have 10,300 employees spread across all of Europe.

As an IT company, one of our key priorities is to make IT as easy as possible for the end users. In the past, that wasn’t always the case because the priorities had been set in the wrong place. 

Gardner: And when you say the priorities were set in the wrong place, when you tried to create the right requirements and the right priorities, how did you go about that, what were the top issues you wanted to solve?

Stalder: The hard part is striking the right balance between security and user experience. In the past, the priorities were weighted toward security. We have tried to shift this through our Corporate Workspace Project to give users the right kind of experience back again, letting them focus on the work they actually have to do.

Gardner: And just to be clear, are we talking about the users that are just within your corporation or did this extend also to some of your clients and how you interact with them?

Stalder: The primary focus was our internal user base, but of course we also have external contractors who have to access our data and our applications.

Gardner: Tim, this is yet another issue companies are dealing with: contingent workforces, contractors that come and go, and creative people that are often on another continent. We have to think about supporting that mix of workers, too.

Synchronizing the talent pool 

Minahan: Absolutely. We are seeing a major shift in how companies think of the workforce: full-time employees, part-time contractors, and the like. Leading companies are looking around for pools of talent. They are asking, “How do I organize the right skills and resources I need? How do I bring them together in an environment, whether it’s physical or digital, to collaborate around a project, and then dissolve that team when the project is complete?”

And these new work models excite me when we talk about the workspace opportunity that technology can enable. A great example is a customer of ours, eBay, which people are familiar with. A long time ago, eBay recognized that they could not keep up with the old call center model; they kept training people, but the turnover was too fast. So they began using Citrix Workspace together with some of our networking technologies to go to where the employees are.